# Segmented GRAND: Combining Sub-patterns in Near-ML Order

Mohammad Rowshan, Jinhong Yuan

Published: 2023-05-24 · [arXiv:2305.14892v1](http://arxiv.org/abs/2305.14892v1)
###### Abstract
The recently introduced maximum-likelihood (ML) decoding scheme called guessing random additive noise decoding (GRAND) has demonstrated a remarkably low time complexity in high signal-to-noise ratio (SNR) regimes. However, the complexity is not as low at low SNR regimes and low code rates. To mitigate this concern, we propose a scheme for a near-ML variant of GRAND called ordered reliability bits GRAND (or ORBGRAND), which divides codewords into segments based on the properties of the underlying code, generates sub-patterns for each segment consistent with the syndrome (thus reducing the number of inconsistent error patterns generated), and combines them in a near-ML order using two-level integer partitions of logistic weight. The numerical evaluation demonstrates that the proposed scheme, called segmented ORBGRAND, significantly reduces the average number of queries at any SNR regime. Moreover, the segmented ORBGRAND with abandonment also improves the error correction performance.
Error pattern, segment, integer partition, guessing random additive noise decoding, GRAND, ORBGRAND, ordered statistics decoding, maximum likelihood decoding, complexity.
## I Introduction
Soft decision-based decoding algorithms can be classified into two major categories [1]: code structure-based algorithms and reliability-based algorithms, also called generic decoding algorithms since they usually do not depend on the code structure. In the generic (a.k.a. universal) algorithms, which are the focus of this paper, the goal is to find the modulated codeword closest to the received sequence using a metric such as the likelihood function. That is, we try to maximize the likelihood in the search for the transmitted sequence. Hence, this category of decoding algorithms is called maximum likelihood (ML) decoding, which is known to be optimal. ML decoding has been an attractive subject for decades among researchers. Error sequence generation is one of the central problems in any ML decoding scheme. The brute-force approach for ML decoding of a linear \((n,k)\) block code requires computing the likelihoods or Euclidean distances of \(2^{k}\) modulated codewords from the received sequence. In general, ML decoding is prohibitively complex for most codes, as it was shown to be NP-complete [2]. Hence, the main effort of researchers has been concentrated on reducing the complexity for short block-lengths. Although there are approaches in which the optimal performance is preserved, ML performance can also be traded off for a significant complexity reduction. Here, we review some of the notable efforts toward complexity reduction over the past decades.
Forney proposed the generalized minimum distance (GMD) decoding algorithm in 1966 [3], where a list of candidate codewords was produced by an algebraic decoder based on the reliability of the received symbols. In 1972, Chase proposed a method [4] in which the search was performed among a fixed number of error patterns corresponding to a particular number of least reliable bit positions, chosen with respect to the minimum distance \(d\) of the underlying code. Chase classified his algorithm into three types according to the error pattern generation. In another effort, Snyders and Be'ery in 1989 [5] proposed performing syndrome decoding on the received sequence and then using the syndrome information to modify and improve the original hard-decision decoding.
The best-known generic decoding algorithm is perhaps the information set decoding (ISD) algorithm proposed by Prange in 1962 [6], which was improved by Stern in 1989 [7] and Dumer in 1991 [8]. Following this approach, other generic decoding approaches were developed based on the most reliable basis (MRB), defined as the support of the most reliable independent positions (MRIPs) of the received sequence, hence forming an information set. In these approaches, each error pattern is subtracted from the hard decision of the MRIPs and the corresponding codeword is reconstructed by encoding the corresponding information sequence. In 1974, Dorsch [9] considered error patterns restricted to the MRB in increasing a priori likelihood. Following this approach, Fossorier and Lin in 1995 [10] proposed processing the error patterns in a deterministic order within families of increasing Hamming weight. This algorithm, referred to as ordered statistics decoding (OSD), is one of the most popular generic decoding algorithms nowadays. The OSD algorithm permutes the columns of the generator matrix with respect to the reliability of the symbols for every received vector and performs elementary row operations on the independent columns extracted from the permuted generator matrix, resulting in the systematic form. The testing error patterns in order-\(l\) OSD can have a Hamming weight of up to \(l\), \(0\leq l\leq k\), chosen from the \(k\) most reliable positions. The main drawback of OSD is the use of row operations to put either the generator matrix or the parity check matrix of the code into systematic form. The complexity of the row operations for an \((n,k)\) linear block code is \(O(n^{3}\min\{R,1-R\}^{2})\), where \(R\) is the code rate. However, since the overall complexity is an exponential function of the code length, this preprocessing complexity is negligible. Moreover, having the information set in systematic form is needed only to further simplify the decoding attempts; otherwise, the error patterns can be checked without this preprocessing. The OSD algorithm further evolved in 2004 into the box-and-match algorithm (BMA) [11] and the enhanced BMA [12], where a matching technique was used to reduce time complexity at the cost of space complexity. The matching techniques were employed for fast decoding of polar codes with Reed-Solomon kernel in [13]. It is worth noting that a similar algorithm to BMA, called the sort-and-match algorithm, was proposed by Dumer in 1991 [14, 15], which has the same asymptotic complexity as BMA.
In 2018, Duffy et al. [16] suggested a hard-decision scheme in which the error patterns are ordered from most likely to least likely based on a statistical channel model and then tested sequentially, querying until the first error pattern corresponding to a valid codeword is found. This original idea, which was later called _guessing random additive noise decoding_ (GRAND), was further developed into a soft-decision scheme, SGRAND, where the error patterns are generated based on the symbols' reliability through sequential insertion and removal of error patterns from an ordered stack until the first valid codeword is found. SGRAND was shown to be capacity-achieving [17] and an ML algorithm [18], though it comes at a significant computational complexity cost because the error patterns in the stack need to be re-sorted after the insertion of new patterns. The approach used in GRAND appears to align with a general optimum technique proposed in [19] to handle the pattern generation with monotonicity [20]. The next evolution in this approach employed a simple metric that gives the error patterns for testing in a near-ML order [21]. This step was a significant boost toward making GRAND practical for short, high-rate codes. The approximate scheduling of the error sequences is based on integer partitioning of positive integers into distinct parts, which is significantly less complex. Alternatively, a sequential algorithmic method to generate error sequences was suggested in [23] based on partial ordering and a modified logistic weight, which prioritizes low-weight error sequences and improves the performance, though its pattern generation process is not as simple as integer partitioning. Several hardware architectures have also been proposed for ORBGRAND in [24, 25, 26, 27].
The main advantage of ORBGRAND is the simplicity of generating error patterns in a near-ML order by a simple weight function, which makes it a hardware-friendly algorithm. Unlike some other schemes, it does not require any preprocessing or sorting (except for the reliability order), and it inherently has an early termination mechanism that stops the search once the most likely codeword, or one near it, is found. However, the number of invalid error patterns is significantly high. The aim of this work and our previous work in [28] is to reduce the invalid patterns and save computations and time. In constrained GRAND [28], by simply utilizing the structure of a binary linear code, we proposed an efficient pre-evaluation that constrains the error pattern generation. This approach saves the codebook checking operation. These syndrome-based constraints are extracted from the parity check matrix (with or without matrix manipulation) of the underlying code. We also showed that the size of the search space is deterministically reduced by a factor of \(2^{p}\), where \(p\) is the number of constraints. Note that the constrained error sequence generation does not degrade the error correction performance, as it only discards the error sequences that do not result in valid codewords. The proposed approach can also be applied to other GRAND variants such as SGRAND [18].
In this paper, different from [28], we propose an approach that generates sub-patterns for the segments corresponding to the defined constraints. We simultaneously generate sub-patterns for each segment with odd or even weight when guided by the available information from the syndrome, and with both weights otherwise. To address the challenging problem of combining the sub-patterns in an ML order, we propose a customized partition (a.k.a. composition [29]) of the logistic weight into segment-specific sub-weights. This composition involves partitioning the logistic weight into non-distinct positive integers, with the number of parts (a.k.a. the composition order) restricted to the number of segments. Furthermore, our approach allows zero to be included as an element of the composition. The numerical results show that by employing the proposed method, the average number of attempts is reduced significantly compared with the conventional ORBGRAND (i.e., ORBGRAND without segmentation). This reduction is justified by the expected reduction in the search space. Furthermore, we show how this approach can improve the block error rate when employing segmented ORBGRAND with abandonment.
## II Preliminaries
We denote by \(\mathbb{F}_{2}\) the binary finite field with two elements. The cardinality of a set is denoted by \(|\cdot|\). The interval \([a,b]\) represents the set of all integer numbers in \(\{x:a\leq x\leq b\}\). The _support_ of a vector \(\mathbf{e}=(e_{1},\ldots,e_{n})\in\mathbb{F}_{2}^{n}\) is the set of indices where \(\mathbf{e}\) has a nonzero coordinate, i.e. \(\operatorname{supp}(\mathbf{e})\triangleq\{i\in[1,n]\colon e_{i}\neq 0\}\). The _weight_ of a vector \(\mathbf{e}\in\mathbb{F}_{2}^{n}\) is \(w(\mathbf{e})\triangleq|\operatorname{supp}(\mathbf{e})|\). The all-one vector \(\mathbf{1}\) and all-zero vector \(\mathbf{0}\) are defined as vectors with all identical elements of 1 or 0, respectively. The summation in \(\mathbb{F}_{2}\) is denoted by \(\oplus\). The modulo operation (to get the remainder of a division) is denoted by \(\%\).
### _ML Decoding and ORBGRAND_
A binary code \(\mathcal{C}\) of length \(n\) and dimension \(k\) maps a message of \(k\) bits into a codeword \(\mathbf{c}\) of \(n\) bits to be transmitted over a noisy channel. We assume binary phase shift keying (BPSK) modulation. The channel alters the transmitted codeword such that the receiver obtains an \(n\)-symbol vector \(\mathbf{r}\). An ML decoder, in principle, compares \(\mathbf{r}\) with all the \(2^{k}\) modulated codewords in the codebook and selects the one closest to \(\mathbf{r}\). In other words, the ML decoder finds a modulated codeword \(\mathtt{x}(\mathbf{c})\) such that
\[\hat{\mathbf{c}}=\underset{\mathbf{c}\in\mathcal{C}}{\text{arg max}}\ p\big{(} \mathbf{r}|\mathtt{x}(\mathbf{c})\big{)}. \tag{1}\]
For additive white Gaussian noise (AWGN) channel with noise power of \(\sigma_{n}^{2}=N_{0}/2\) where \(N_{0}\) is the noise spectral density, the conditional probability \(p\big{(}\mathbf{r}|\mathtt{x}(\mathbf{c})\big{)}\) is given by
\[p\big{(}\mathbf{r}|\mathtt{x}(\mathbf{c})\big{)}=\frac{1}{(\sqrt{\pi N_{0}})^ {n}}\text{exp}\Bigg{(}-\sum_{i=1}^{n}(r_{i}-\mathtt{x}(c_{i}))^{2}/N_{0}\Bigg{)}. \tag{2}\]
Observe that maximizing \(p(\mathbf{r}|\mathtt{x}(\mathbf{c}))\) is equivalent to minimizing
\[d_{E}^{2}=\sum_{i=1}^{n}(r_{i}-\mathtt{x}(c_{i}))^{2}, \tag{3}\]
which is called _squared Euclidean distance_ (SED). Therefore, we have
\[\hat{\mathbf{c}}=\underset{\mathbf{c}\in\mathcal{C}}{\text{arg max}}\ p\big{(} \mathbf{r}|\mathtt{x}(\mathbf{c})\big{)}=\underset{\mathbf{c}\in\mathcal{C}}{ \text{arg min}}\ \big{\|}\mathbf{r}-\mathtt{x}(\mathbf{c})\big{\|}^{2}. \tag{4}\]
The process of finding \(\hat{\mathbf{c}}\), depending on the scheme we employ, may require checking a possibly large number of binary error sequences \(\hat{\mathbf{e}}\) to select the one that satisfies
\[\mathbf{H}\cdot(\theta(\mathbf{r})\oplus\hat{\mathbf{e}})=\mathbf{0} \tag{5}\]
where \(\theta(\mathbf{r})\) returns the hard-decision demodulation of the received vector \(\mathbf{r}\) and \(\mathbf{H}\) is the parity check matrix of code \(\mathcal{C}\),
\[\mathbf{H}=[\mathbf{h}_{1}\,\mathbf{h}_{2}\,\cdots\,\mathbf{h}_{n-k}]^{T} \tag{6}\]
and the \(n\)-element row vectors \(\mathbf{h}_{j}\) for \(j\in[1,n-k]\) are denoted by \(\mathbf{h}_{j}=[h_{j,1}\ h_{j,2}\ \cdots\ h_{j,n}]\). Note that any valid codeword \(\mathbf{c}=\theta(\mathbf{r})\oplus\hat{\mathbf{e}}\) gives \(\mathbf{H}\cdot\mathbf{c}=\mathbf{0}\). Here, \(\hat{\mathbf{e}}\) is the binary error sequence, which we refer to as an error pattern in the rest of the paper.
To get the error patterns in ML order, one can 1) generate all possible error patterns \(\hat{\mathbf{e}}\), that is, \(\sum_{j=1}^{n}\binom{n}{j}\) patterns, 2) sort them based on a likelihood measure such as the squared Euclidean distance \(\big{\|}\mathbf{r}-\mathtt{x}\big{(}\theta(\mathbf{r})\oplus\hat{\mathbf{e}}\big{)}\big{\|}^{2}\), and then 3) check them using (5) one by one from the smallest distance in ascending order. It was numerically shown in [21] that the error patterns generated by all the integer partitions of _logistic weights_ \(w_{L}=1,2,\ldots,n(n+1)/2\) can give an order close to the one described above. Obviously, the latter method, which is used in ORBGRAND, is more attractive as it does not need any sorting operation over a large set of metrics at every decoding step.
The logistic weight \(w_{L}\) of a length-\(n\) binary vector \(\mathbf{z}\) is defined as [21]
\[w_{L}(\mathbf{z})=\sum_{i=1}^{n}z_{i}\cdot i \tag{7}\]
where \(z_{i}\in\mathbb{F}_{2}\) is the \(i\)-th element of the error pattern \(\hat{\mathbf{e}}\) permuted in the ascending order of the received symbols' reliability \(|r_{i}|,i\in[1,n]\). That is, the error pattern is \(\hat{\mathbf{e}}=\pi(\mathbf{z})\), where \(\pi(\cdot)\) is the vector-wise permutation function which maps the binary vector \(\mathbf{z}\) to the error pattern \(\hat{\mathbf{e}}\). For the element-wise mapping of this permutation, we will use \(\hat{\pi}(\cdot)\) to map the index of any element in \(\mathbf{z}\) to the index of the corresponding element in \(\hat{\mathbf{e}}\), and \(\hat{\pi}^{-1}(\cdot)\) for the reverse mapping. For the sake of simplicity, we refer to \(w_{L}(\mathbf{z})\) by \(w_{L}\).
To get all binary vectors \(\mathbf{z}\) corresponding to a certain \(w_{L}\), there is a simple approach. All coordinates \(j\) in \(\mathbf{z}\) where \(z_{j}=1\) for a certain \(w_{L}\) can be obtained from the _integer partitions_ of \(w_{L}\) with distinct parts and no part larger than the code length \(n\). Let us define the integer partitions of \(w_{L}\) mathematically as follows:
**Definition 1**.: The integer partitions of \(w_{L}\) are the elements of any subset \(\mathcal{I}\subset[1,w_{L}]\) such that
\[w_{L}=\sum_{j\in\mathcal{I}\subset[1,w_{L}]}j. \tag{8}\]
Then, the binary vector \(\mathbf{z}\) corresponding to any \(\mathcal{I}\) consists of the elements \(z_{j}=1,j\in\mathcal{I}\) and \(z_{j}=0,j\not\in\mathcal{I}\).
In Definition 1, we abused the notion of integer partitions and considered a single part/partition as well to cover all the error patterns obtained from every \(w_{L}\). Observe that for every \(w_{L}\), there exists at least one \(\mathcal{I}\) with a single element \(w_{L}\). For instance, for \(w_{L}=1,2\), we have a single \(\mathcal{I}=\{w_{L}\}\). As \(w_{L}\) gets larger, the number of subsets \(\mathcal{I}\subset[1,w_{L}]\) increases.
**Example 1**.: Suppose we have the received sequence \(\mathbf{r}=[0.5,-1.2,0.8,1.8,-1,-0.2,0.7,-0.9]\).
We can get the following permutation based on \(|r_{i}|,i\in[1,8]\) in ascending order:
\[\hat{\pi}:[1,2,3,4,5,6,7,8]\rightarrow[6,1,7,3,8,5,2,4]\]
Assume we have attempted all the error patterns generated based on \(w_{L}=1,2,3,4,5\) so far. Then, we need to find the error patterns based on \(w_{L}=6\). The integer partitions of \(w_{L}=6\) are \(\mathcal{I}=\{6\},\{1,5\},\{2,4\}\), and \(\{1,2,3\}\), which satisfy \(w_{L}=\sum_{j\in\mathcal{I}}j,\ \mathcal{I}\subset[1,6]\). We call every element of \(\mathcal{I}\) a _part_. Then, the \(\mathbf{z}\) vectors and the corresponding error patterns after the vector permutation \(\pi\) are
\[\mathbf{z}=[0\ 0\ 0\ 0\ 0\ 1\ 0\ 0]\rightarrow\hat{\mathbf{e}}=[0\ 0\ 0\ 0\ 1\ 0\ 0\ 0],\] \[\mathbf{z}=[1\ 0\ 0\ 0\ 1\ 0\ 0\ 0]\rightarrow\hat{\mathbf{e}}=[0\ 0\ 0\ 0\ 0\ 1\ 0\ 1],\] \[\mathbf{z}=[0\ 1\ 0\ 1\ 0\ 0\ 0\ 0]\rightarrow\hat{\mathbf{e}}=[1\ 0\ 1\ 0\ 0\ 0\ 0\ 0],\] \[\mathbf{z}=[1\ 1\ 1\ 0\ 0\ 0\ 0\ 0]\rightarrow\hat{\mathbf{e}}=[1\ 0\ 0\ 0\ 0\ 1\ 1\ 0].\]
These error patterns can be checked using (5) in an arbitrary order. In the next section, we will see that any of these error patterns results in an identical increase in \(d_{E}^{2}\) in (3), i.e., they are all located at an identical distance from the received sequence, under some assumption about the distribution of \(|r_{i}|,i\in[1,n]\).
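To make the procedure concrete, the following Python sketch enumerates the distinct integer partitions of a given logistic weight and maps them through the permutation of Example 1; the recursive generator is one straightforward way to produce the partitions, not necessarily the scheduling of an optimized ORBGRAND implementation:

```python
def distinct_partitions(w, max_part):
    """Yield all sets of distinct positive integers <= max_part summing to w."""
    def rec(remaining, smallest, prefix):
        if remaining == 0:
            yield list(prefix)
            return
        for part in range(smallest, min(remaining, max_part) + 1):
            yield from rec(remaining - part, part + 1, prefix + [part])
    yield from rec(w, 1, [])

pi = [6, 1, 7, 3, 8, 5, 2, 4]           # reliability permutation of Example 1
for I in distinct_partitions(6, 8):     # integer partitions of w_L = 6
    e = [0] * 8
    for j in I:
        e[pi[j - 1] - 1] = 1            # set the bit at overall position pi(j)
    print(I, e)                         # reproduces the four patterns above
```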
**Remark 1**.: By statistically analyzing the reliability of the received sequence, or based on any other insight, one can prioritize error patterns \(\hat{\mathbf{e}}\) with low Hamming weight \(w_{H}(\hat{\mathbf{e}})\) over those with large Hamming weight, or vice versa. Alternatively, we can limit the scope of the attempts to \(\hat{\mathbf{e}}\)'s of small or large Hamming weight. Observe that as the logistic weight increases, error patterns with larger Hamming weight can be generated.
## III Near-ML Ordering of Error Patterns with Logistic Weight
In this section, we investigate analytically how the error patterns in ascending order of the logistic weight can closely follow the maximum likelihood order over the AWGN channel. The analysis is based on an assumption made for ORBGRAND [22], which is in disagreement with the Gaussian distribution in the AWGN channel. This assumption is also the basis for devising a similar approach for combining the sub-patterns in segmented GRAND in Section V.
**Assumption 1**.: We assume that the ordered sequence of \(|r_{i}|,i=1,2,\ldots,n\) as
\[|r_{1}|\leq|r_{2}|\leq|r_{3}|\leq\cdots\]
are placed equidistantly. That is,
\[\delta=|r_{i+1}|-|r_{i}|=|r_{i+2}|-|r_{i+1}|=\cdots.\]
Additionally, for some \(\rho\geq 0\), we define
\[|r_{i}|=\rho+i\cdot\delta.\]
Now, let us get back to the Euclidean distance. The squared Euclidean distance (SED) as a function of \(\mathbf{z}\) denoted by \(d_{E}^{2}(\mathbf{z})\) is
\[d_{E}^{2}(\mathbf{z})=\sum_{i=1}^{n}(r_{i}-\text{x}(\theta(r_{i})\oplus z_{i}) )^{2}, \tag{9}\]
and for \(\mathbf{z}=\mathbf{0}\), we have
\[d_{E}^{2}(\mathbf{0})=\sum_{i=1}^{n}(r_{i}-\text{x}(\theta(r_{i})))^{2}\]
which is the minimum SED that we can get. Hence,
\[d_{E}^{2}(\mathbf{z})>d_{E}^{2}(\mathbf{0})\]
and the increase of \(d_{E}^{2}(\mathbf{z})\) compared to \(d_{E}^{2}(\mathbf{0})\), denoted by \(d^{(+)}\), is formulated as
\[d_{E}^{2}(\mathbf{z})=d_{E}^{2}(\mathbf{0})+d^{(+)}(\mathbf{z}) \tag{10}\]
for any \(\mathbf{z}\neq\mathbf{0}\). For the sake of simplicity, we refer to \(d^{(+)}(\mathbf{z})\) by \(d^{(+)}\).
Observe that \(\text{x}(\theta(r_{i}))=\operatorname{sgn}(r_{i})\) and when we apply \(z_{i}=1\), the sign changes as follows
\[\text{x}(\theta(r_{i})\oplus z_{i})=\begin{cases}\operatorname{sgn}(r_{i})&z _{i}=0,\\ -\operatorname{sgn}(r_{i})&z_{i}=1.\end{cases} \tag{11}\]
Without loss of generality, to make the following discussion easier to follow, we assume \(r_{i}>0\), hence \(r_{i}=i\cdot\delta\) (taking \(\rho=0\)), for any \(i\) with \(z_{i}=1\). Since \(\operatorname{sgn}(r_{i})\in\{1,-1\}\) is a bipolar mapping, we have
\[(r_{i}-\text{x}(\theta(r_{i})\oplus z_{i}))^{2}=(i\cdot\delta-1)^{2} \tag{12}\]
for \(z_{i}=0\).
To begin with, we consider only the error patterns with a single error. For a pattern \(\mathbf{z}\) with \(w(\mathbf{z})=1\), we flip \(z_{i}=0\) to \(z_{i}=1\) and we get \((i\cdot\delta+1)^{2}\). Then, the increase in the SED is
\[d^{(+)}=(i\cdot\delta+1)^{2}-(i\cdot\delta-1)^{2}=i(4\delta)=i\Delta, \tag{13}\]
where we introduce the notation \(\Delta=4\delta\), which will be used in the rest of this section.
Now, let us take all the error patterns with identical logistic weights. As we know, these patterns can be obtained by integer partitioning with distinct parts. The following proposition discusses the increase in the SED for this case, where the logistic weight \(w_{L}\) is found to be proportional to the increase in distance from the received sequence, \(d^{(+)}=d_{E}^{2}(\mathbf{z})-d_{E}^{2}(\mathbf{0})\). In other words, given Assumption 1, our aim is to show that
\[d^{(+)}\propto w_{L}. \tag{14}\]
**Proposition 1**.: Given an arbitrary logistic weight \(w_{L}>0\) and Assumption 1, the increase in the squared Euclidean distance, i.e., the term \(d^{(+)}(\mathbf{z})\) in \(d_{E}^{2}(\mathbf{z})=d_{E}^{2}(\mathbf{0})+d^{(+)}(\mathbf{z})\), remains constant for all binary vectors \(\mathbf{z}\) with \(z_{j}=1,j\in\mathcal{I}\subset[1,w_{L}]\) such that \(w_{L}=\sum_{j\in\mathcal{I}}j\). That is, for some \(\Delta>0\), we have
\[d^{(+)}=\big{(}\sum_{j\in\mathcal{I}}j\big{)}\Delta\ \text{ for all }\mathcal{I}\subset[1,w_{L}]\ \text{s.t.}\ \ w_{L}=\sum_{j\in\mathcal{I}}j. \tag{15}\]
Proof.: Suppose \(w_{L}=i=i_{1}+i_{2}\). We first compare \(d^{(+)}\) for the error patterns corresponding to \(i\) alone and \(i_{1},i_{2}\) together. We observed the increase in the SED by an error pattern with \(w(\mathbf{z})=1\) in (13). Now, if we use an error pattern \(\mathbf{z}\) with weight \(w(\mathbf{z})=2\) by flipping \(z_{i_{1}}=z_{i_{2}}=0\) to \(z_{i_{1}}=z_{i_{2}}=1\) given \(w_{L}(\mathbf{z})=i=i_{1}+i_{2}\), we get
\[d^{(+)}=\Big{(}(i_{1}\delta+1)^{2}+(i_{2}\delta+1)^{2}\Big{)}- \Big{(}(i_{1}\delta-1)^{2}+(i_{2}\delta-1)^{2}\Big{)}\] \[=\Big{(}(i_{1}\delta+1)^{2}-(i_{1}\delta-1)^{2}\Big{)}+\Big{(}(i _{2}\delta+1)^{2}-(i_{2}\delta-1)^{2}\Big{)}\] \[\stackrel{{(\ref{eq:w_L})}}{{=}}i_{1}\Delta+i_{2} \Delta=(i_{1}+i_{2})\Delta\]
In general, if we use any error pattern \(\mathbf{z}\) with weight \(w(\mathbf{z})>1\) given \(w_{L}(\mathbf{z})=i\), we have
\[d^{(+)}=\sum_{j\in\mathcal{I}}\Big{(}(j\delta+1)^{2}-(j\delta-1)^{2}\Big{)}= \big{(}\sum_{j\in\mathcal{I}}j\big{)}\Delta. \tag{16}\]
Therefore, as \(i=\sum_{j\in\mathcal{I}}j\), any error pattern \(\mathbf{z}\) with \(z_{j}=1,j\in\mathcal{I}\) and \(\mathcal{I}\subset[1,i]\) gives the same \(d^{(+)}\). Note that all subsets \(\mathcal{I}\) can be obtained by integer partitioning with distinct parts.
Hence, error patterns with an identical logistic weight also have an identical squared Euclidean distance. That is why the order of checking these patterns is arbitrary, as suggested in [21].
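Proposition 1 can be verified numerically under Assumption 1; a minimal sketch with assumed values \(\rho=0\), \(\delta=0.1\), \(n=8\), and all-positive received symbols:

```python
import numpy as np

delta, n = 0.1, 8
r = np.array([(i + 1) * delta for i in range(n)])  # |r_i| = i * delta, i = 1..n
d0 = np.sum((r - 1.0) ** 2)                        # SED of the hard decision (z = 0)

for I in ([6], [1, 5], [2, 4], [1, 2, 3]):         # distinct partitions of w_L = 6
    x = np.ones(n)
    x[np.array(I) - 1] = -1.0                      # flip the modulated bits indexed by I
    print(round(np.sum((r - x) ** 2) - d0, 6))     # identical d^(+) = 6 * (4 * delta) = 2.4
```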
**Remark 2**.: Given two logistic weights \(w_{L}=i\) and \(w_{L}=i^{\prime}\) such that \(i^{\prime}>i\), we have \(i^{\prime}\Delta>i\Delta\), so the \(d^{(+)}\) corresponding to \(i^{\prime}\) is larger and \(d_{E}^{2}(\mathbf{z})<d_{E}^{2}(\mathbf{z}^{\prime})\), where \(\mathbf{z}\) and \(\mathbf{z}^{\prime}\) are the error patterns corresponding to \(w_{L}=i\) and \(i^{\prime}\), respectively. Hence, the error pattern(s) with \(w_{L}=i\) should be checked first in this case.
Recall that we considered Assumption 1 for the analysis in this section, which implies a uniform spacing of the received reliabilities. However, this assumption is not realistic, as the \(r_{i}\) values follow a Gaussian distribution. Therefore, the order of error patterns generated based on the logistic weight may not align precisely with the ML order. As a result, we refer to this order as a near-ML order.
## IV Segmented GRAND: Error Sub-patterns
As discussed in Section II, the ORBGRAND checks the error patterns \(\hat{\mathbf{e}}\) using (5). In this scheme, we have a single pattern generator that outputs \(\hat{\mathbf{e}}\) in a near-ML order. This may require generating a significant number of patterns to find a valid pattern that passes all the parity constraints in the parity check matrix \(\mathbf{H}\). In [28], we studied how to constrain this single error pattern generator to output the patterns satisfying one or multiple disjoint constraints. The aim was to avoid the computationally complex operation in (5) in the pattern checking stage and replace it with a computationally simple partial pre-evaluation in the pattern generation stage. Towards this goal, we extracted multiple constraints from the original or manipulated parity check matrix such that the constraints cover disjoint sets of indices in \([1,n]\).
In this section, we use the extracted constraints in [28] and we call the corresponding disjoint sets _segments_. Furthermore, we employ multiple error pattern generators associated with the segments to generate short patterns, named _sub-patterns_, satisfying the constraint corresponding to the segments. Hence, unlike in [28], all the generated sub-patterns and the patterns resulting from the combinations of sub-patterns will satisfy all the constraints and we do not discard any generated patterns. However, this advantage comes with the challenging problem of how to order error patterns resulting from the combinations of sub-patterns. We will tackle this problem in the next section. In the rest of this section, we define the segments and the notations needed for the rest of the paper.
Depending on the parity check matrix \(\mathbf{H}\) of the underlying code, we can have at least two segments. Let us denote the total number of segments by \(p\) and the set of coordinates (or indices) of coded symbols in segment \(j\) by \(\mathcal{S}_{j}\). Any row \(\mathbf{h}_{j},j\in[1,n-k]\), of matrix \(\mathbf{H}\) can partition the block code into two segments as follows:
\[\mathcal{S}_{j}=\operatorname{supp}(\mathbf{h}_{j}),\]
\[\mathcal{S}^{\prime}_{j}=[1,n]\backslash\operatorname{supp}(\mathbf{h}_{j}).\]
Before further discussion, let us explicitly define an error sub-pattern as follows:
**Definition 2**.: **Error Sub-pattern**: A subset of coordinates in the error pattern \(\hat{\mathbf{e}}\) corresponding to a segment is called an error sub-pattern. In other words, the error sub-pattern corresponding to segment \(j\), denoted by \(\mathcal{E}_{j}\), is defined as
\[\mathcal{E}_{j}=\mathcal{S}_{j}\cap\operatorname{supp}(\mathbf{e}). \tag{17}\]
The syndrome can give us some insight into the number of errors in each segment.
**Remark 3**.: The corresponding element \(s_{j}\) of the syndrome \(\mathbf{s}=[s_{1}\ s_{2}\ \cdots\ s_{n-k}]\) determines the parity of the weight of the corresponding error sub-pattern \(\mathcal{E}_{j}\) as [28]
\[|\mathcal{E}_{j}|=|\operatorname{supp}(\mathbf{h}_{j})\cap\operatorname{supp }(\mathbf{e})|=\begin{cases}\text{odd}&s_{j}=1,\\ \text{even}&s_{j}=0,\end{cases} \tag{18}\]
where the even number of errors includes no errors as well. However, the weight of the error sub-pattern corresponding to positions outside \(\operatorname{supp}(\mathbf{h}_{j})\), i.e.,
\[|\big{(}[1,n]\backslash\operatorname{supp}(\mathbf{h}_{j})\big{)}\cap \operatorname{supp}(\mathbf{e})|\rightarrow\text{unknown},\]
can be either even or odd as the positions in \([1,n]\backslash\operatorname{supp}(\mathbf{h}_{j})\) are not involved in the parity constraint \(\mathbf{h}_{j}\).
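Based on Remark 3, a segment's generator can be restricted to odd- or even-weight sub-patterns. A minimal sketch of one way to do this, filtering distinct partitions by their number of parts (which equals the sub-pattern's Hamming weight):

```python
def parity_partitions(w, max_part, parity):
    """Distinct partitions of w with parts <= max_part and an odd (parity=1)
    or even (parity=0) number of parts, i.e., sub-pattern Hamming weight."""
    def rec(remaining, smallest, prefix):
        if remaining == 0:
            if len(prefix) % 2 == parity:
                yield list(prefix)
            return
        for part in range(smallest, min(remaining, max_part) + 1):
            yield from rec(remaining - part, part + 1, prefix + [part])
    yield from rec(w, 1, [])

print(list(parity_partitions(4, 8, 0)))  # [[1, 3]]: only the even-weight sub-pattern
print(list(parity_partitions(4, 8, 1)))  # [[4]]:   only the odd-weight sub-pattern
```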
Depending on the parity check matrix \(\mathbf{H}\), we may be able to cover the positions in \([1,n]\backslash\operatorname{supp}(\mathbf{h}_{j})\) by one or more other rows in \(\mathbf{H}\) other than row \(j\). This can be achieved by matrix manipulation of \(\mathbf{H}\), i.e., row operation, because the row space is not affected by elementary row operations on \(\mathbf{H}\) (resulting in \(\mathbf{H}^{\prime}\)) as the new system of linear equations represented in the matrix form \(\mathbf{H}^{\prime}\cdot\mathbf{c}=\mathbf{0}\) will have an unchanged solution set \(\mathcal{C}\).
**Example 2**.: Suppose we have three rows of a parity check matrix and the associated syndrome bits as follows:
\[\mathbf{h}_{j_{1}}=[1\ 1\ 1\ 1\ 0\ 1\ 1\ 0],\ \ s_{j_{1}}=0,\]
\[\mathbf{h}_{j_{2}}=[0\ 1\ 0\ 1\ 0\ 0\ 1\ 0],\ \ s_{j_{2}}=1,\]
\[\mathbf{h}_{j_{3}}=[0\ 1\ 0\ 1\ 1\ 0\ 1\ 1],\ \ s_{j_{3}}=1.\]
From \(\mathbf{h}_{j_{2}}\), we can form two segments corresponding to the following disjoint index sets:
\[\mathcal{S}_{j_{2}}=\operatorname{supp}(\mathbf{h}_{j_{2}})=\{2,4,7\},\]
\[\mathcal{S}^{\prime}_{j_{2}}=[1,8]\backslash\operatorname{supp}(\mathbf{h}_{j_{2 }})=\{1,3,5,6,8\}.\]
From \(s_{j_{2}}=1\), we understand that
\[|\mathcal{S}_{j_{2}}\cap\operatorname{supp}(\mathbf{e})|\rightarrow\text{odd},\]
\[|\mathcal{S}^{\prime}_{j_{2}}\cap\operatorname{supp}(\mathbf{e})|\rightarrow\text{ unknown}.\]
Here, unknown means that the weight of the error sub-pattern \(\mathcal{E}^{\prime}_{j_{2}}=\mathcal{S}^{\prime}_{j_{2}}\cap\operatorname{supp}(\mathbf{e})\) can be either even or odd. Hence, we have to generate all the sub-patterns, not only the odd- or even-weight ones. Note that we can efficiently generate only odd or even sub-patterns, as illustrated in Section VIII; however, when we have no insight into the number of errors in a segment, we have to generate all possible sub-patterns for that specific segment.
Now, by row operations on \(\mathbf{h}_{j_{1}}\) and \(\mathbf{h}_{j_{3}}\), we can get
\[\mathbf{h}^{\prime}_{j_{1}}=\mathbf{h}_{j_{1}}\oplus\mathbf{h}_{j_{2}}=[1\ 0\ 1\ 0\ 0\ 1\ 0\ 0],\ \ s^{\prime}_{j_{1}}=1,\]
\[\mathbf{h}_{j_{2}}=[0\ 1\ 0\ 1\ 0\ 0\ 1\ 0],\ \ s_{j_{2}}=1,\]
\[\mathbf{h}^{\prime}_{j_{3}}=\mathbf{h}_{j_{3}}\oplus\mathbf{h}_{j_{2}}=[0\ 0\ 0\ 0\ 1\ 0\ 0\ 1],\ \ s^{\prime}_{j_{3}}=0,\]
where we can form three segments (\(p=3\)) with corresponding disjoint index sets
\[\mathcal{S}^{\prime}_{j_{1}}=\{1,3,6\},\;\mathcal{S}_{j_{2}}=\{2,4,7\},\; \mathcal{S}^{\prime}_{j_{3}}=\{5,8\},\]
from which we understand that the weight of error sub-patterns are as follows:
\[|\mathcal{S}^{\prime}_{j_{1}}\cap\mathrm{supp}(\mathbf{e})| \rightarrow\text{odd},\] \[|\mathcal{S}_{j_{2}}\cap\mathrm{supp}(\mathbf{e})| \rightarrow\text{odd},\] \[|\mathcal{S}^{\prime}_{j_{3}}\cap\mathrm{supp}(\mathbf{e})| \rightarrow\text{even}.\]
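A small numpy sketch reproducing Example 2; the row operations are plain XORs on the rows, and the syndrome bits transform in the same way:

```python
import numpy as np

h_j1 = np.array([1, 1, 1, 1, 0, 1, 1, 0]); s_j1 = 0
h_j2 = np.array([0, 1, 0, 1, 0, 0, 1, 0]); s_j2 = 1
h_j3 = np.array([0, 1, 0, 1, 1, 0, 1, 1]); s_j3 = 1

# Elementary row operations preserve the row space; syndromes XOR accordingly.
h1p, s1p = h_j1 ^ h_j2, s_j1 ^ s_j2        # h'_{j1}, s'_{j1}
h3p, s3p = h_j3 ^ h_j2, s_j3 ^ s_j2        # h'_{j3}, s'_{j3}

for h, s in ((h1p, s1p), (h_j2, s_j2), (h3p, s3p)):
    seg = sorted(np.flatnonzero(h) + 1)    # 1-based coordinates of the segment
    print(seg, "odd" if s else "even")
# [1, 3, 6] odd, [2, 4, 7] odd, [5, 8] even -- matching the three segments above
```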
As practical examples for segmentation, we can give the following codes:
* eBCH code (128, 106): The rows \(\mathbf{h}_{1}\) and \(\mathbf{h}_{2}\) of the \(\mathbf{H}\) matrix satisfy the relationship \(\operatorname{supp}(\mathbf{h}_{2})\subset\operatorname{supp}(\mathbf{h}_{1})\), where \(\mathbf{h}_{1}=\mathbf{1}\) and \(|\mathbf{h}_{2}|=|\mathbf{h}_{1}|/2=64\). Hence, we can modify \(\mathbf{h}_{1}\) by the row operation \(\mathbf{h}^{\prime}_{1}=\mathbf{h}_{1}\oplus\mathbf{h}_{2}\) to get the following two segments: \(\mathcal{S}_{2}=\operatorname{supp}(\mathbf{h}_{2})\) and \(\mathcal{S}^{\prime}_{1}=\operatorname{supp}(\mathbf{h}^{\prime}_{1})\), where \(\mathcal{S}_{2}\cup\mathcal{S}^{\prime}_{1}=[1,128]\).
* PAC code (64, 44): The rows \(\mathbf{h}_{1}\), \(\mathbf{h}_{4}\), and \(\mathbf{h}_{5}\) in \(\mathbf{H}\) matrix satisfy the relationship \(\mathrm{supp}(\mathbf{h}_{5})\subset\mathrm{supp}(\mathbf{h}_{4})\subset \mathrm{supp}(\mathbf{h}_{1})\) where \(\mathbf{h}_{1}=\mathbf{1}\) and \(|\mathbf{h}_{1}|/2=|\mathbf{h}_{4}|=2|\mathbf{h}_{5}|=32\). Hence, we can modify \(\mathbf{h}_{1}\) and \(\mathbf{h}_{4}\) by row operations \(\mathbf{h}^{\prime}_{1}=\mathbf{h}_{1}\oplus\mathbf{h}_{4}\) and \(\mathbf{h}^{\prime}_{4}=\mathbf{h}_{4}\oplus\mathbf{h}_{5}\) to get the following three segments: \(\mathcal{S}^{\prime}_{1}=\mathrm{supp}(\mathbf{h}^{\prime}_{1})\), \(\mathcal{S}^{\prime}_{4}=\mathrm{supp}(\mathbf{h}^{\prime}_{4})\), and \(\mathcal{S}_{5}=\mathrm{supp}(\mathbf{h}_{5})\) where \(\mathcal{S}^{\prime}_{1}\cup\mathcal{S}^{\prime}_{4}\cup\mathcal{S}_{5}=[1,64]\).
Now, we turn our focus to the possible complexity reduction that the segmentation can provide in terms of sorting complexity and membership checking complexity.
**Complexity of Sorting the Received Signals.** In all variants of GRAND, the received signals should be sorted in ascending order of their absolute values. Let us take the Bitonic sorting network, whose total number of stages, computed from the sum of an arithmetic progression, is [30, Section V]
\[\Psi=\sum_{\psi=1}^{\log_{2}n}\psi=\frac{1}{2}(\log_{2}n)(1+\log_{2}n). \tag{19}\]
Observe that a reduction in \(n\) can significantly reduce \(\Psi\) as a measure of complexity. For instance, a code of length \(n=64\) gives \(\Psi=21\); if it is segmented into two equal parts, we get \(\Psi=15\) per segment. Note that the total number of stages in (19), as a measure of time complexity (all nodes in every stage are processed simultaneously), is of order \(O(\log_{2}^{2}n)\). Clearly, segmentation reduces \(n\) and hence the time complexity.
**Average Number of Queries.** As the reduction in the number of queries depends on the number of parity constraints, let us first see how many segments we can have.
**Remark 4**.: The maximum number of segments depends on the underlying code. However, the minimum number of segments is two, as was shown in Example 2 by considering either one or two parity check constraints. The latter gives a lower complexity because we get insight into both segments. Codes that have a well-structured parity check matrix, such as polar codes, can easily form more than two segments.
The reduction in the average complexity is also proportional to the reduction in the size of the search space, as was shown numerically in [28]. The following lemma shows that the size of the search space reduces by a factor of two per constraint, and thus the reduction depends on the total number of parity constraints.
**Lemma 1**.: Suppose we have a parity check matrix \(\mathbf{H}\) in which there are \(p\) rows of \(\mathbf{h}_{j},j=j_{1},j_{2},...,j_{p}\) with mutually disjoint index sets \(\mathcal{S}_{j}=\mathrm{supp}(\mathbf{h}_{j})\) that define \(p\) segments, then the size of the search space by these \(p\) parity check equations is
\[\Omega(\mathbf{h}_{j_{1}},..,\mathbf{h}_{j_{p}})=2^{n-p}. \tag{20}\]
Proof.: Let us first take a row \(\mathbf{h}_{j}\) and \(\mathcal{S}_{j}=\mathrm{supp}(\mathbf{h}_{j})\). In this case, we only consider the error sequences satisfying \(|\mathcal{S}_{j}\cap\mathrm{supp}(\hat{\mathbf{e}})|\;\text{mod}\;2=s_{j}\) in the search space. Then, the size of the constrained search space will be
\[\Omega(\mathbf{h}_{j})=\sum_{\begin{subarray}{c}\ell\in[0,|\mathcal{S}_{j}|] :\\ \ell\;\text{mod}\;2=s_{j}\end{subarray}}\binom{|\mathcal{S}_{j}|}{\ell}\cdot 2^{n-| \mathcal{S}_{j}|}=\frac{2^{|\mathcal{S}_{j}|}}{2}\cdot 2^{n-|\mathcal{S}_{j}|}=2^{n-1}. \tag{21}\]
Generalizing (21) for \(p\) constraints, we have
\[\Big{(}\prod_{j=j_{1}}^{j_{p}}\sum_{\begin{subarray}{c}\ell\in[0,| \mathcal{S}_{j}|]:\\ \ell\;\text{mod}\;2=s_{j}\end{subarray}}\binom{|\mathcal{S}_{j}|}{\ell}\Big{)} \cdot 2^{n-\sum_{j=j_{1}}^{j_{p}}|\mathcal{S}_{j}|}=\] \[\big{(}\prod_{j=j_{1}}^{j_{p}}2^{|\mathcal{S}_{j}|-1}\big{)}\cdot 2^{n- \sum_{j=j_{1}}^{j_{p}}|\mathcal{S}_{j}|}=2^{n-p}.\]
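Lemma 1 can be sanity-checked by brute force over all \(2^{8}\) error sequences for the three disjoint segments of Example 2 (\(n=8\), \(p=3\)); a sketch:

```python
from itertools import product
import numpy as np

H = np.array([[1, 0, 1, 0, 0, 1, 0, 0],   # h'_{j1}
              [0, 1, 0, 1, 0, 0, 1, 0],   # h_{j2}
              [0, 0, 0, 0, 1, 0, 0, 1]])  # h'_{j3}
s = np.array([1, 1, 0])                   # segment parities [odd, odd, even]

count = sum(np.array_equal((H @ np.array(e)) % 2, s)
            for e in product((0, 1), repeat=8))
print(count, 2 ** (8 - 3))                # both 32: the search space shrinks by 2^p
```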
So far, we have defined the segments and the corresponding error sub-patterns. In [28], we provided an efficient scheme to evaluate the outputs of a single error pattern generator of the conventional ORBGRAND with respect to the segments' constraints in (18) before checking the codebook membership by (5). The drawback of that method is that we still have to generate invalid patterns, although we can discard them to avoid the computationally more complex operation of codebook membership checking.
In this paper, our goal is to entirely avoid generating invalid error patterns with respect to the parity constraints of the segments. To this end, we need multiple error pattern generators that only produce valid sub-patterns simultaneously for the associated segments. Note that we can have a single pattern generator that produces sub-patterns for the segments successively at the cost of longer latency. This sub-pattern-based approach is discussed in the next section in detail. Figs. 1 and 2 illustrate the difference between the approaches in this paper and in [28]. As can be seen, the pre-evaluation in Fig. 1 can save the codebook membership checking operation. Whereas the error patterns generated based on the valid sub-patterns in Fig. 2 do not need any pre-evaluation.
## V Combining Sub-patterns in Near-ML Order
A challenging problem in handling sub-patterns is combining them in an order near the ML order. In the conventional ORBGRAND, the logistic weight \(w_{L}\) is used as a guide to generate error patterns in a near-ML order; i.e., the logistic weight \(w_{L}\) is assumed proportional to the increase in distance \(d^{(+)}\) from the received sequence, as shown in (15). As discussed in the previous section, we eventually want to generate sub-patterns for the segments of the underlying code and then combine them. However, it is not obvious how to combine the sub-patterns from different segments so as to generate the entire pattern in a near-ML order. The trivial way would be to generate a set of entire patterns by considering all the possible combinations of the sub-patterns (probably in batches due to resource limitations), compute their SEDs, and then sort them in ascending order of SED. This method is not of interest to us because we would need to store many patterns and sort them frequently, similar to what we do in SGRAND.
Here, we propose an approach based on a logistic weight \(w_{L}\), to preserve the near-ML order in the conventional ORBGRAND, in which we assign sub-weights \(w_{L}^{(j)},j\in[1,p]\) to \(p\) segments such that
\[w_{L}=\sum_{j=1}^{p}w_{L}^{(j)}. \tag{22}\]
Observe that the combined sub-patterns will still have the same \(w_{L}\) for any set \([w_{L}^{(1)}\ w_{L}^{(2)}\cdots w_{L}^{(p)}]\) that satisfies (22). Now the question is how to get all such sub-weight vectors \([w_{L}^{(1)}\ w_{L}^{(2)}\cdots w_{L}^{(p)}]\). It turns out that by modifying the integer partitioning of Definition 1, we can obtain all such sub-weights. The differences between the integer partitions in Definition 1 and what we need for sub-weights are as follows: 1) the parts do not need to be distinct (repetition is allowed), that is, two or more segments can have identical sub-weights; 2) the permutation of parts is allowed; 3) the number of parts (a.k.a. the composition order) is fixed and equal to the number of segments; and 4) the integer zero is conditionally allowed, i.e., one or more parts can take the value zero provided the syndrome element corresponding to the segment is \(s_{j}=0\).
After obtaining the sub-weights, we can use the integer partitions in Definition 1 to get the sub-pattern(s). Hence, we have two levels of integer partitioning in the proposed approach. These two levels are illustrated in Fig. 3. The rest of this section is dedicated to giving the details of this approach starting with some examples for the first level of partitioning and then some definitions and a proposition on how to get all the valid sub-weights for the segments in an efficient way.
**Example 3**.: Suppose the current logistic weight is \(w_{L}=5\) and the codeword is divided into three segments, \(p=3\), with the corresponding syndrome elements \(s_{j_{1}}=0\), \(s_{j_{2}}=1\) and \(s_{j_{3}}=1\). That is, the weights of the sub-patterns corresponding to the segments are [even, odd, odd], respectively. To generate the sub-patterns for this \(w_{L}\), the logistic weights of the segments given by the first level of integer partitioning are
\[[0\ 1\ 4],[0\ 2\ 3],[0\ 3\ 2],[0\ 4\ 1],[3\ 1\ 1].\]
Observe that the sum of the segment weights is 5 while there are repetitions of weights in \([3\ 1\ 1]\), permutation of the weights in \([0\ 2\ 3]\) and \([0\ 3\ 2]\), and zero weight for the segment with \(s_{j_{1}}=0\) to allow considering no errors for segment \(j_{1}\), i.e., empty sub-pattern. We will discuss later the other details of the sub-pattern generation shown in this example.
Note that the sub-patterns are generated, based on the logistic weights of the segments provided in the example above, at the second level of integer partitioning, where the parts are distinct integers, in the manner employed in conventional ORBGRAND.

Fig. 1: The error pattern generation process with pre-evaluation in “constrained GRAND” [28] for two constraints.

Fig. 2: The proposed error pattern generation approach based on sub-patterns in “segmented GRAND” for two segments.

Fig. 3: Two-level integer partitioning to generate error patterns for \(p\) segments. Note that we have \(j\in[1,p]\), and \(t,t^{\prime}\) are the numbers of parts (odd, even, or arbitrary when we do not have \(s_{j}\) for the corresponding segment, such as segment \(\mathcal{S}^{\prime}_{j_{2}}\) in Example 2).
**Example 4**.: Suppose we have three segments and \([w_{L}^{(1)}\ w_{L}^{(2)}\ w_{L}^{(3)}]=[0\ 3\ 5]\) for \(w_{L}=8\). The integer partitioning with distinct parts results in \([3]\) and \([1\ 2]\) for \(w_{L}^{(2)}=3\), and \([5]\), \([1\ 4]\), and \([2\ 3]\) for \(w_{L}^{(3)}=5\). Therefore, there are \(1\times 2\times 3=6\) combined patterns as follows:
\[[\ ]+[3]+[5],\ \ \ [\ ]+[1\ 2]+[5],\]
\[[\ ]+[3]+[1\ 4],\ \ \ [\ ]+[1\ 2]+[1\ 4],\]
\[[\ ]+[3]+[2\ 3],\ \ \ [\ ]+[1\ 2]+[2\ 3].\]
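As a sketch, the combinations of Example 4 can be enumerated with the `distinct_partitions` helper from the earlier sketch in Section II (the segment lengths are assumed to be at least 5 here):

```python
from itertools import product

parts2 = list(distinct_partitions(3, 5))   # [[1, 2], [3]]
parts3 = list(distinct_partitions(5, 5))   # [[1, 4], [2, 3], [5]]

combos = [([], p2, p3) for p2, p3 in product(parts2, parts3)]
print(len(combos))                         # 1 x 2 x 3 = 6 combined patterns
```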
**Local permutation.** The integers in the aforementioned sub-patterns refer to the relative positions of the symbols in the segments, locally ordered with respect to their reliability. Hence, we need a local permutation \(\hat{\pi}^{(j)}(\cdot)\) for every segment \(j\), unlike the conventional ORBGRAND where we have only one permutation function \(\hat{\pi}(\cdot)\), as discussed in Section II. The operator “+” denotes the concatenation of sub-patterns. These patterns can be checked in an arbitrary order as long as they belong to the same \(w_{L}\). The local permutation \(\hat{\pi}^{(j)}(\cdot)\) maps a local index in \([1,|\mathcal{S}_{j}|]\) belonging to segment \(j\) to the overall index in \([1,n]\) as
\[\hat{\pi}^{(j)}:\{1,2,\ldots,|\mathcal{S}_{j}|\}\rightarrow\mathcal{S}_{j}. \tag{23}\]
From Definition 1, we can define \(\mathbf{z}_{j}\) as a binary vector of length \(|\mathcal{S}_{j}|\) in which \(z_{j,i}=1\) for \(i\in\mathcal{I}\subset[1,|\mathcal{S}_{j}|]\) with \(w_{L}^{(j)}=\sum_{i\in\mathcal{I}}i\). Then, the element-wise permutations \(\hat{\pi}^{(j)},j=1,\ldots,p\), can be used to flip the relevant positions in an all-zero binary vector of length \(n\) to obtain the error pattern vector \(\mathbf{e}\), as shown in Fig. 3.
**Example 5**.: Suppose we have the received sequence \(\mathbf{r}=[0.5,-1.2,0.8,1.8,-1,-0.2,0.7,-0.9]\) similar to Example 1. We use the segments defined in Example 2 as
\[\mathcal{S}_{j_{2}}=\{2,4,7\},\ \ \ \mathcal{S}^{\prime}_{j_{2}}=\{1,3,5,6,8\}.\]
Now, the local permutation function based on \(|r_{i}|,i\in[1,8]\) in ascending order can be obtained as follows:
\[\hat{\pi}^{(j_{2})}:[1,2,3]\rightarrow[7,2,4]\]
\[\hat{\pi}^{\prime(j_{2})}:[1,2,3,4,5]\rightarrow[6,1,3,8,5]\]
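The local permutations of Example 5 can be computed directly from the segment coordinates and the reliabilities; a sketch:

```python
import numpy as np

r = np.array([0.5, -1.2, 0.8, 1.8, -1, -0.2, 0.7, -0.9])
segments = {"S_j2": [2, 4, 7], "S'_j2": [1, 3, 5, 6, 8]}

for name, coords in segments.items():
    coords = np.array(coords)
    local = coords[np.argsort(np.abs(r[coords - 1]))]  # sort segment by reliability
    print(name, "->", list(local))  # [7, 2, 4] and [6, 1, 3, 8, 5], as in Example 5
```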
Now, let us define an efficient framework for error pattern generation based on sub-patterns, which serves as a guideline for generating only valid sub-patterns. This framework consists of bases for the formation of error patterns and a minimum logistic weight that each base can take. We begin by defining the bases with respect to the syndrome elements as follows:
**Definition 3**.: **Error Pattern Bases**: A base for the error patterns, denoted by \([f_{1}\ f_{2}\ \ldots\ f_{p}]\) for \(p\) segments, determines the segments contributing their sub-patterns to the error patterns given by logistic weight \(w_{L}\) as
\[w_{L}=\sum_{j=1}^{p}f_{j}\cdot w_{L}^{(j)} \tag{24}\]
where \(f_{j}\) can take the following values:
\[f_{j}=\begin{cases}\{0,1\}&s_{j}=0,\\ \{1\}&s_{j}=1.\end{cases} \tag{25}\]
The segments with \(f_{j}=0\) are called _frozen segments_, where the sub-pattern contributed by segment \(j\) is empty. The total number of bases is \(\prod_{j=1}^{p}2^{1-s_{j}}\), which can range from 1 to \(2^{p}\) depending on \(s_{j},j\in[1,p]\).
Note that when \(s_{j}=0\) for segment \(j\), the segment might be error-free. That is why we have error pattern bases excluding the sub-patterns of such segments by setting \(f_{j}=0\). Moreover, when we have \(s_{j}=0\) and \(f_{j}=1\), since segment \(j\) can only have sub-patterns of even Hamming weight and the smallest even number of distinct parts is 2, we need \(w_{L}^{(j)}\geq 3\), as \(3=1+2\) corresponds to the two most probable erroneous positions. That is, the first error pattern \(\mathbf{z}\) for this segment is \(z_{1}=z_{2}=1\) and \(z_{i}=0,i\geq 3\), or \(\mathbf{z}=[1\ 1\ 0\cdots 0]\). In contrast, we necessarily need \(f_{j}=1\) and \(w_{L}^{(j)}\geq 1\) when \(s_{j}=1\); that is, we cannot have an empty sub-pattern for such a segment.
**Proposition 2**.: Given the segments' syndrome \([s_{1}\,s_{2}\ \cdots\ s_{p}]\) and the pattern base \(\mathbf{f}=[f_{1}\,f_{2}\ \ldots\ f_{p}]\) for \(p\) segments, the minimum \(w_{L}\) that the pattern base can give is
\[\underline{w}_{L}(\mathbf{f})=\sum_{j=1}^{p}f_{j}\cdot\underline{w}_{L}^{(j)}(s _{j}), \tag{26}\]
where \(\underline{w}_{L}^{(j)}(s_{j})\) is
\[\underline{w}_{L}^{(j)}(s_{j})=3-2s_{j}. \tag{27}\]
Thus, the overall logistic weight \(w_{L}\) and the sub-weights \(w_{L}^{(j)},j\in[1,p]\), must satisfy
\[w_{L}\geq\underline{w}_{L}(\mathbf{f})\ \text{ and }\ w_{L}^{(j)}\geq\underline{w}_{L}^{(j)}(s_{j}). \tag{28}\]
Proof.: Equation (27) follows from the function \(\underline{w}_{L}^{(j)}(s_{j}):\{0,1\}\rightarrow\{3,1\}\), as discussed earlier, which maps the minimum non-zero \(w_{L}^{(j)}\) to 3 when \(s_{j}=0\) and to 1 when \(s_{j}=1\). Then, Equation (26) clearly holds for the minimum overall logistic weight, denoted by \(\underline{w}_{L}(\mathbf{f})\).
Observe that the base patterns are used to efficiently enforce the minimum weight constraints in (28). The importance of the base patterns is realized when we recall that the level-1 integer partitioning allows permutation and repetition of parts (here, sub-weights).
**Example 6**.: Given \(s_{1}=0,s_{2}=1\) and \(s_{3}=1\), we would have \(2\times 1\times 1=2\) error pattern bases \([f_{1}\ f_{2}\ f_{3}]\) and their minimum weights/sub-weights as follows:
\[[f_{1}\ f_{2}\ f_{3}]=[0\ 1\ 1],\underline{w}_{L}=2,[\underline{w}_{L}^{(1)}=0\ \ \underline{w}_{L}^{(2)}=1\ \ \underline{w}_{L}^{(3)}=1],\]
\([f_{1}\ f_{2}\ f_{3}]=[1\ 1\ 1],\underline{w}_{L}=5,[\underline{w}_{L}^{(1)}=3\ \ \underline{w}_{L}^{(2)}=1\ \ \underline{w}_{L}^{(3)}=1]\).
Now, for \(w_{L}=4\), the sub-weights \([w_{L}^{(1)}\ w_{L}^{(2)}\ w_{L}^{(3)}]\) are \([0\ 1\ 3],[0\ 2\ 2],\) and \([0\ 3\ 1]\). As can be seen, \(w_{L}^{(1)}=0\), i.e., segment 1 is frozen, and all the sub-weights were generated with the pattern base \([0\ 1\ 1]\). However, for \(w_{L}=5\), the sub-weights are \([0\ 1\ 4],[0\ 2\ 3],[0\ 3\ 2],[0\ 4\ 1],\) and \([3\ 1\ 1]\), where the last one is based on the pattern base \([1\ 1\ 1]\) (note that \(\underline{w}_{L}=5\) for this base). The analogy between the sub-weights and the overall weight is shown in Fig. 4.
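The level-1 partitioning implied by Definition 3 and Proposition 2 can be sketched as follows; this is a minimal enumerator of the valid sub-weight vectors, reproducing Examples 3 and 6, not an optimized scheduler:

```python
def sub_weight_vectors(w_L, syndrome):
    """All [w_L^(1), ..., w_L^(p)] summing to w_L, where segment j takes
    0 (frozen, only allowed if s_j = 0) or at least 3 - 2 * s_j, per (25)-(27)."""
    p = len(syndrome)
    def rec(j, remaining, prefix):
        if j == p:
            if remaining == 0:
                yield list(prefix)
            return
        w_min = 3 - 2 * syndrome[j]
        choices = ([0] if syndrome[j] == 0 else []) + list(range(w_min, remaining + 1))
        for w in choices:
            yield from rec(j + 1, remaining - w, prefix + [w])
    yield from rec(0, w_L, [])

print(list(sub_weight_vectors(4, [0, 1, 1])))  # [[0,1,3], [0,2,2], [0,3,1]]
print(list(sub_weight_vectors(5, [0, 1, 1])))  # [[0,1,4], ..., [3,1,1]], as in Example 3
```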
Following the example above, we define our tailored integer partitioning scheme for combining the sub-patterns.
**Definition 4**.: **Logistic Weight and Sub-weights**: Suppose we have a block code with \(p\) segments. The overall logistic weight \(w_{L}\) can be distributed among segments by sub-weights \(w_{L}^{(j)}=\kappa_{j}+c_{j}\) as
\[w_{L}=\sum_{j=1}^{p}f_{j}\cdot w_{L}^{(j)}=\sum_{j=1}^{p}f_{j}(\kappa_{j}+c_{j}), \tag{29}\]
where \(\kappa_{j}\geq\underline{w}_{L}^{(j)}\) is the initial value of \(w_{L}^{(j)}\) and \(c_{j}\geq 0\) is the increment used to reach larger \(w_{L}^{(j)}\).
**Example 7**.: Given \(s_{1}=1\) and \(s_{2}=0\), we would have \(1\times 2=2\) error pattern bases \([f_{1}\ f_{2}]\) as follows:
\[[f_{1}\ f_{2}]=[1\ 0],\underline{w}_{L}=1,[\underline{w}_{L}^{(1)}=1\ \ w_{L}^{(2)}=0],\]
\[[f_{1}\ f_{2}]=[1\ 1],\underline{w}_{L}=4,[\underline{w}_{L}^{(1)}=1\ \ \underline{w}_{L}^{(2)}=3].\]
As Fig. 5 shows, segment 2 is frozen up to \(w_{L}=3\), and all sub-weights are generated by base \([1\ 0]\) at level-1 partitioning. Hence, no error pattern is allocated to this segment for \(1\leq w_{L}\leq 3\). Note that the all-zero error pattern is not valid in this case, i.e., \(w_{L}>0\). Furthermore, Fig. 6 shows the two levels of partitioning specifically for \(w_{L}=6\) when the partition \([w_{L}^{(1)}\ w_{L}^{(2)}]=[2\ 4]\) is selected at the first level. Following the permutation functions in Example 5, the resulting error pattern vector \(\mathbf{e}\) is given as well.
The idea of splitting the logistic weight into sub-weights for the segments is based on the assumption that the least reliable symbols are distributed almost evenly among the segments. The statistical results for 15000 transmissions of eBCH(128,106) codewords over the AWGN channel show that this assumption is realistic. Fig. 7 shows the distribution of the 64 least reliable symbols between the two length-64 segments, obtained by locating and counting them in the segments for each transmitted codeword. The mean and standard deviation of the bell-shaped histogram for each segment are 32 and 2.85, respectively. Moreover, since the additive noise is Gaussian and independent and identically distributed across the symbols, these results are expected.
Now, let us look at a realistic example comparing the conventional ORBGRAND with segmented ORBGRAND in terms of searching for a valid error pattern.
**Example 8**.: Suppose a codeword of the eBCH code (64,45) is transmitted over an AWGN channel and the hard decision on the received sequence leads to three erroneous bits at coordinates \(\operatorname{supp}(\mathbf{e})=\{1, 15, 23\}\). One employs ORBGRAND to find these coordinates. This goal is achieved after 57 attempts, sweeping through logistic weights \(w_{L}=1\to 12\). Fig. 8 illustrates the squared Euclidean distance of all queries.
Fig. 4: Analogy of segment weights (or sub-weights) and the overall weight \(w_{L}=5\) on a scale. Note that (d) represents the pattern base \([1\ 1\ 1]\) and the rest are based on the pattern base \([0\ 1\ 1]\), where segment 1 is frozen.
Fig. 5: The sub-weights generated based on \(\mathbf{s}=[s_{1}=1\ s_{2}=0]\) for two-segment based GRAND. For \(w_{L}=1,2,3\), the base \(=[\mathbf{1}\ \mathbf{0}]\) is activated only because the base \(=[\mathbf{1}\ \mathbf{1}]\) has \(\underline{w}_{L}=4\). We have both bases activated for \(w_{L}\geq 4\).
Fig. 6: An example of two-level error pattern generation based on sub-patterns when \(\mathbf{s}=[s_{1}=1\ s_{2}=0]\). Note that \(n_{1}\) and \(n_{2}\) are the lengths of segments 1 and 2.
Now, if one divides the codeword into two equal-length segments with coordinates in \(\mathcal{S}_{1},\mathcal{S}_{2}\) based on two constraints, it turns out that segments 1 and 2 have odd and even numbers of errors, respectively, since \(s_{1}=1\) and \(s_{2}=0\). The proposed segmented ORBGRAND can find the error coordinates in only 7 queries, as illustrated in the table below.
The patterns found by Segmented ORBGRAND are circled in Fig. 8. As can be seen, by segmentation, we can avoid checking many invalid error patterns.
We can combine the sub-patterns generated by \(w_{L}^{(j)},j=1,\ldots,p\), in an arbitrary order as the sub-weights of all the combinations are summed up to \(w_{L}\).
## VI Tuning Sub-weights for Unequal Distribution of Errors among Segments
In the previous section, we suggested initializing the parameter \(\kappa_{j}\) to \(\underline{w}_{L}^{(j)}\in\{1,3\}\) depending on the value of \(s_{j}\in\{1,0\}\). Although we are considering the AWGN channel, where the random noise added to every symbol is independent and identically distributed (i.i.d.), there is a possibility that the distribution of errors is significantly unbalanced, that is, the weight of the error vector in one segment is considerably larger than in the other one(s). We can get statistical insight into this distribution by counting the low-reliability symbols (those with small \(|r_{i}|\)) in each segment, denoted by \(a_{j}\). Then, we adjust the initialization of the \(\kappa_{j}\)'s and make them proportional to the number of symbol positions in the segment with \(|r_{i}|<\epsilon\), where \(\epsilon\) is an arbitrary threshold for low-reliability symbols. This accounts for the unequal distribution of errors among the segments. Suppose the expected number of errors in a segment of length \(L\) is
\[\mu_{e}^{(j)}=L\cdot 2\Pr(|r|<\epsilon), \tag{30}\]
where the probability \(\Pr(\cdot)\) follows the Gaussian distribution with mean \(1\) and noise variance \(\sigma_{n}^{2}\). Then, we can adjust \(\kappa_{j}\) as
\[\kappa_{j}=\underline{w}_{L}^{(j)}+\Big{\lceil}\frac{\max_{j^{\prime}}\{a_{j^{\prime}}\}-a_{j}}{\rho\cdot\mu_{e}^{(j)}}\Big{\rceil}, \tag{31}\]
where \(\rho\cdot\mu_{e}^{(j)}\) normalizes the difference relative to the number of low-reliability symbols in the segments. The parameter \(\rho>0\) can be adjusted to get a better result. We denote the second term by \(\tau_{j}=\Big{\lceil}\frac{\max_{j^{\prime}}\{a_{j^{\prime}}\}-a_{j}}{\rho\cdot\mu_{e}^{(j)}}\Big{\rceil}\). Note that the offset \(\tau_{j}\) is only for the initialization stage, and we should account for it when conducting the integer partitioning of \(w_{L}^{(j)}\) by subtracting the offset from the segment weight, i.e., \(w_{L}^{(j)}-\tau_{j}\). Although these adjustments (addition and subtraction of \(\tau_{j}\)) may seem redundant, they postpone the generation of large-weight patterns for the segment(s) with small \(a_{j}\), and hence we get a different order of patterns that may result in fewer queries before finding a valid codeword. Let us have a look at an example.
**Example 9**.: Let us consider two segments with \(L=32\) elements, the corresponding syndrome element \(s_{1}=1,s_{2}=0\), threshold \(\epsilon=0.2\) and \(\mu_{e}=L\cdot 2Pr(|r|<\epsilon)=8\). We realize that there are 11 and 3 elements in segments 1 and 2, respectively, satisfying \(|r_{i}|<\epsilon\). Having fewer low-reliable positions than the expected number (i.e., \(3<\mu_{e}\)) implies that the possibility of facing no errors in segment 2 is larger than having at least 2 errors (recall that the Hamming weight of error sub-pattern for this segment should be even due to \(s=0\)). Therefore, in level-1 partitioning, we can increase
Fig. 8: The squared Euclidean distance of queries up to the first codeword in ORBGRAND. The red circles indicate the queries performed by Segmented ORBGRAND with an order different from ORBGRAND. Note that since the metric is not _monotonically increasing_, i.e., not always increasing or remaining constant, it doesn’t give the ML order.
Fig. 7: Distribution of the 64 least reliable coordinates between two segments for the 15000 independent transmissions of eBCH(128,106) codewords.
the initial sub-weight for this segment from \(\kappa_{2}=\underline{w}_{L}^{(2)}=3\) to \(\kappa_{2}=\underline{w}_{L}^{(2)}+\tau_{2}=5\) by \(\tau_{2}=2\), assuming \(\rho=1/2\). This increase will delay generating sub-patterns with base \([1\quad 1]\) from \(w_{L}=4\) to \(w_{L}=6\) in Example 7. This prioritizes checking all sub-patterns with sub-weights \([4\quad 0]\) and \([5\quad 0]\), in the hope of finding the correct error pattern faster by postponing the less likely error patterns to a later time. Nevertheless, in level-2 partitioning, when we want to generate the sub-patterns with sub-weight \(\kappa_{2}=5\) and base \([1\quad 1]\), we should subtract \(\tau_{2}\) from \(\kappa_{2}\geq 5\); otherwise, we will miss the error patterns with smaller sub-weights, i.e., \(w_{L}^{(2)}=3,4\).
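For concreteness, the offset computation of (31) for the numbers in Example 9 can be sketched in a few lines of Python (variable names are ours):

```python
import math

L, rho = 32, 0.5          # segment length and normalization parameter
mu_e = 8                  # expected errors per segment, L * 2Pr(|r| < eps)
a = [11, 3]               # counts of low-reliability positions per segment
w_min = [1, 3]            # minimum sub-weights from s_1 = 1 and s_2 = 0

tau = [math.ceil((max(a) - aj) / (rho * mu_e)) for aj in a]   # offsets of (31)
kappa = [w + t for w, t in zip(w_min, tau)]                   # initial sub-weights
print(tau, kappa)         # -> [0, 2] [1, 5], matching Example 9
```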
The numerical evaluation of this technique for eBCH(128,106) with two segments and \(\rho=0.3,\epsilon=0.2\), shown in the table below, reveals a slight reduction in the average queries while the BLER remains almost unchanged. The reduction in queries can be attributed to cases where the distribution of low-reliability symbols across the segments is unbalanced. However, a significant imbalance between segments does not necessarily imply a significant imbalance in the distribution of erroneous coordinates. Consequently, the tuning technique may in such scenarios require relatively more queries, leading to only a slight reduction overall. These results demonstrate that the original segmented ORBGRAND, without the tuning overhead, is good enough despite not considering the reliability imbalance between the segments. The reason lies in the imperfection of the reliability metric and in the averaging of the complexity over all received sequences.
| \(E_{b}/N_{0}\) (dB) | 3.5 | 4 | 4.5 | 5 | 5.5 |
|---|---|---|---|---|---|
| without tuning | 30685 | 8358 | 1750 | 315 | 54 |
| with tuning | 30492 | 8110 | 1661 | 291 | 48 |
## VII Complexity Analysis
In this section, we discuss the expected reduction in the complexity (the average number of queries) of the proposed scheme. The overall search space has size \(2^{n}\), of which \(2^{k}\) sequences are valid codewords. Note that the complexity of ML decoding with any GRAND algorithm is upper bounded by a function of the redundancy \(n-k\) (of order \(2^{n-k}\)) as a consequence of Theorem 2 in [17].
The search for the first valid codeword resembles the geometric distribution, where the random variable is defined as the number of failures until the first success; here, the first success is finding the first valid codeword. However, unlike the geometric distribution, the queries in ORBGRAND cannot be considered independent, as they are checked in the order given by the weight function. Fig. 9 shows the behaviour of the number of queries to the first valid codeword under the ORBGRAND order. This experiment was performed for decoding 80,000 eBCH(64,45) codewords at \(E_{b}/N_{0}=4\) dB, where 44% of the decodings required more than 100 queries. Note that zero queries corresponds to checking the hard decision of the received sequence. Due to the special order of queries in ORBGRAND, modelling such a distribution is not trivial. Nevertheless, the expected value is a measure of the central tendency of a probability distribution, calculated as the weighted average of all possible outcomes with the probabilities of the outcomes as weights. The probability of the outcome, here finding a valid codeword, changes with the SNR and with the size of the search space. If we consider \(X\) as a random variable representing the abscissa \(x_{i}\) in Fig. 9 and \(q_{i}\) as the probability of the outcome (estimated by the relative frequency), then \(E(X)\approx\sum_{i}x_{i}q_{i}\). The reduction in the sample space increases the probabilities of the outcomes, \(q_{i}\). Moreover, since the relative frequency of small \(x_{i}\) is considerably larger than that of large \(x_{i}\), the expected value is shifted towards a smaller value, i.e., the expected number of queries decreases. Furthermore, the probability of finding the first valid codeword after \(k\geq 1\) queries is \(P(X=k)\approx\prod_{i=1}^{k-1}(1-q_{i})q_{k}\). Note that unlike the geometric distribution, we do not assume \(q_{i}=q_{j}\) for any \(i\neq j\).
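As an illustration of this geometric-like model, the following sketch computes \(P(X=k)\) and the (truncated) expected number of queries for hypothetical per-query success probabilities \(q_{i}\):

```python
import numpy as np

def query_statistics(q):
    """E[X] and P(X = k) for per-query success probabilities q_1, q_2, ...
    Unlike the geometric distribution, the q_i may differ across queries.
    The expectation is truncated to the listed queries."""
    q = np.asarray(q, dtype=float)
    # prod_{i<k} (1 - q_i): probability of failing the first k-1 queries
    survive = np.concatenate(([1.0], np.cumprod(1.0 - q)[:-1]))
    p = survive * q                     # P(X = k), k = 1, 2, ...
    k = np.arange(1, len(q) + 1)
    return float(np.sum(k * p)), p

# Hypothetical decreasing success probabilities: a smaller search space
# (larger q_i) shifts the expected number of queries downwards.
print(query_statistics([0.4, 0.3, 0.2, 0.1])[0])
```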
Now, let us consider two scenarios:
* Queries without abandonment: In this scenario, the segmented ORBGRAND cannot improve the BLER as there is no query constraint for finding the first valid codeword. However, due to the smaller search space, the segmented ORBGRAND is expected to find the codeword with fewer queries, hence it has a lower average complexity. This also applies to queries with abandonment, provided the abandonment threshold \(b\) is large enough for a valid codeword to fall within it. According to Theorem 2 in [17], GRAND algorithms find the error pattern after approximately \(2^{n-k}\) queries. For instance, this approximate query bound for eBCH(128,106) is 4,194,304.
* Queries with abandonment: In this scenario, similar to the queries without abandonment, we have a reduction in complexity. Moreover, the abandonment threshold \(b\) limits the scope of the queries, which can lead to decoding failure in ORBGRAND. Fig. 10 (a) illustrates a failure due to the limited scope of the search. As can be seen, the reduction of the search space in (b) helps the valid codeword fall into the scope of queries with threshold \(b\). Observe that this scenario is equivalent to an increase in \(b\).
As the maximum number of queries could in practice be a bottleneck of the
Fig. 9: Relative frequency of the number of queries to the first valid codeword, up to 100 queries. As expected, the relative frequency gradually reduces for larger queries.
system, it is important to evaluate the decoding performance and complexity under the abandonment scenario; we therefore consider these two scenarios in the evaluation of segmented ORBGRAND in Section IX.
## VIII Implementation Considerations
In this section, we propose a procedure to efficiently perform the first and second levels of weight partitioning with the required number of parts. Recall that in the first level of integer partitioning, where we partition \(w_{L}\) into \([w_{L}^{(1)}\ \ w_{L}^{(2)}\cdots w_{L}^{(p)}]\), the \(w_{L}^{(j)},j\in[1,p]\), need not be distinct integers summing to \(w_{L}\) (i.e., repetition is allowed). However, in the second level of integer partitioning, sub-weight \(w_{L}^{(j)}\) should be partitioned into distinct parts with 1) either an even or an odd number of parts if there exists an \(s_{j}\) associated with the segment, such as segment \(\mathcal{S}_{j_{2}}\) in Example 2, or 2) both even and odd numbers of parts if there is no \(s_{j}\) associated with it, such as segment \(\mathcal{S}_{j_{2}}^{\prime}\) in Example 2. In [28], we assumed that we generate all partitions into distinct parts, regardless of having an even or odd number of parts, and then discard the unwanted ones by pre-evaluation. Consider the case where we have an all-one row in the parity check matrix \(\mathbf{H}\), i.e., an overall parity check: if we can generate only an odd or only an even number of distinct parts in level-2 partitioning, instead of discarding the unwanted ones, we can cut the number of patterns generated for checking by half. We have a similar requirement in the segmented GRAND.
Fortunately, there is a simple procedure, which is also hardware-compatible, to generate partitions into distinct integers (for level-2 partitioning) with only an even or only an odd number of parts. This procedure is illustrated in Algorithm 1. An example of integer partitioning of \(w=18\) into \(t=4\) distinct parts is illustrated in Fig. 11. We use this example along with Algorithm 1 to explain the procedure.
The procedure for every integer \(w=w_{L}^{(j)},j\in[1,p]\), starts with the initial sequence \(\mathbf{p}=[1,\,2,\,\ldots,\,t-1,\,w-\sum_{j=1}^{t-1}j]\) of \(t\) elements.
This initialization is performed in lines 2-3 of Algorithm 1. Before the generation of the next sequence of integer parts, we check to see which of the following two operations should be sought.
1. Increment-and-decrement: If we have \(p[t-2]+1<p[t-1]-1\), we keep the sub-sequence \(p[0],p[1],\ldots,p[t-3]\) while incrementing \(p[t-2]=p[t-2]+1\) and decrementing \(p[t-1]=p[t-1]-1\). These operations are performed on the last two parts, in the white cells circled by blue dashed lines in Fig. 11, except for the first sequence in each circle, which plays the role of the basis for these operations. As long as \(i=1\) in the for loop in lines 12-23 of Algorithm 1, this operation continues to generate new sequences; the loop is resumed via line 21. Note that the assignment in line 14 of Algorithm 1 is the general form for any \(i\); for instance, we can obtain line 12 by substituting \(i=1\) in line 14. Here, we show them separately because we predominantly have \(i=1\).
2. Re-initialization: If we have \(p[t-2]+1\geq p[t-1]-1\), we would obtain non-distinct parts in the case of equality, or a repeated sequence when the strict inequality holds. Hence, we need to change the other parts, i.e., \(p[t-1-i]\) for \(2\leq i\leq t-1\). The extent of the change is determined by the smallest \(i>1\) such that the condition in line 15 is met.
Fig. 11: An example showing integer partitioning procedure for \(w=18\) into four distinct integers, \(k=4\).
Fig. 10: A sketch showing a stack of candidate sequences sorted in descending order with respect to a likelihood metric (the codeword at the top has the highest likelihood). With no abandonment condition, removing the invalid sequences accelerates reaching the first valid codeword by fewer queries (7 queries versus 14 queries). With abandonment after \(b=8\) queries, case (a) will fail to reach the valid codeword.
The re-initialization for such an \(i\) will be as follows:
\[[p[t\!-\!1\!-\!i]\!+\!1,\ p[t\!-\!1\!-\!i]\!+\!2,\ \ldots,\ p[t\!-\!1\!-\!i]\!+\!i],\]
followed by the last part, which is set to the remainder \(w-\text{sum}(\mathbf{p})\) in line 17.
For instance, in Fig. 11, the sequences 6, 9, and 14 are re-initialized with \(i=2\), and the sequences 11 and 15 with \(i=3\). Note that when \(i=t-1\), i.e., all parts except for \(p[t-1]\) would be re-initialized, and the condition in line 15 is still not met, the process ends. This means all the possible options for the parts have been checked.
As mentioned in Section II, no part can be larger than the length of the code. Here, we need to enforce this for the length of the segment as well, denoted by \(p_{\max}\) in Algorithm 1, as can be observed in lines 6 and 18.
A similar procedure can be used for the first level of integer partitioning for the error pattern bases by lifting the constraint on the distinctness of the parts and allowing permutations. However, we need to consider the minimum sub-weight (1 or 3, depending on \(s_{j}\)) that each segment can take. Given these differences, the initialization of the segments can allow repetition of 1's or 3's, instead of distinct values \(1,2,3,\cdots\). For instance, for three segments with \(s_{1}=s_{2}=1,s_{3}=0\) and base \([1\;1\;1]\), we can start the above procedure for \(w_{L}=7\) with \(\mathbf{p}=[1\;1\;5]\), and then proceed with \(\mathbf{p}=[1\;2\;4]\) and \(\mathbf{p}=[1\;3\;3]\). The rest are \(\mathbf{p}=[2\;1\;4]\), \(\mathbf{p}=[2\;2\;3]\), and finally \(\mathbf{p}=[3\;1\;3]\). In this example, the order of re-initialization is the same as in Fig. 11.
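A minimal Python sketch of this level-1 enumeration, assuming lexicographic order and per-segment minimum sub-weights (function name is ours), reproduces the six bases listed above:

```python
def level1_compositions(w, mins):
    """Lexicographic compositions of w into len(mins) parts with part j >= mins[j]
    (repetition and permutation allowed, unlike level-2 partitioning)."""
    if len(mins) == 1:
        if w >= mins[0]:
            yield (w,)
        return
    for first in range(mins[0], w - sum(mins[1:]) + 1):
        for rest in level1_compositions(w - first, mins[1:]):
            yield (first,) + rest

# Example from the text: w_L = 7, s_1 = s_2 = 1, s_3 = 0 -> mins = [1, 1, 3]
print(list(level1_compositions(7, [1, 1, 3])))
# -> [(1, 1, 5), (1, 2, 4), (1, 3, 3), (2, 1, 4), (2, 2, 3), (3, 1, 3)]
```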
```
input : sub-weight w, part size t, largest part p_max = n
output : P
 1: P <- {}
 2: p <- [1, 2, ..., t-1]
 3: p <- p + [w - sum(p)]
 4: if p[t-1] <= p[t-2] then
 5:     return P
 6: if p[t-1] <= p_max then
 7:     P <- P U {p}
 8: incr_decr <- True                  // Operation: Increment-and-decrement
 9: while True do
10:     for i in [1, t-1] do
11:         if i = 1 then
12:             p* <- p[t-1] - 1
13:         else
14:             p* <- w - (i*p[t-1-i] + sum_{j=1..i} j) - sum_{j=0..t-2-i} p[j]
15:         if p[t-1-i] + i < p* then
16:             p <- p[0 : t-2-i] + [p[t-1-i]+1, p[t-1-i]+2, ..., p[t-1-i]+i]
17:             p <- p + [w - sum(p)]
18:             if p[t-1] <= p_max then
19:                 P <- P U {p}
20:             incr_decr <- True
21:             break
22:         else
23:             incr_decr <- False
24:     if incr_decr = False and i = t-1 then
25:         break
26: return P

// Note: p[0 : t-2-i] denotes the entries p[0], ..., p[t-2-i] (inclusive).
```
**Algorithm 1** Non-recursive integer partitioning into a fixed number of distinct parts
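For reference, a minimal Python rendition of Algorithm 1 (0-based indexing, assuming \(t\geq 2\); the function name is ours) reproduces the 15 sequences of Fig. 11 in order:

```python
def distinct_partitions(w, t, p_max):
    """Python sketch of Algorithm 1: partitions of w into t distinct parts,
    each at most p_max, generated in the order of Fig. 11."""
    p = list(range(1, t))                 # lines 2-3: initial sequence
    p.append(w - sum(p))
    parts = []
    if p[-1] <= p[-2]:                    # lines 4-5: no valid partition exists
        return parts
    if p[-1] <= p_max:                    # lines 6-7
        parts.append(p[:])
    while True:
        advanced = False
        for i in range(1, t):             # lines 10-23
            base, head = p[t - 1 - i], p[:t - 1 - i]
            p_star = w - sum(head) - i * base - i * (i + 1) // 2   # line 14
            if base + i < p_star:         # line 15
                p = head + [base + j for j in range(1, i + 1)] + [p_star]
                if p_star <= p_max:       # lines 18-19
                    parts.append(p[:])
                advanced = True
                break                     # line 21: resume increment-and-decrement
        if not advanced:                  # lines 24-25: all options exhausted
            return parts

# The example of Fig. 11: 15 partitions of w = 18 into t = 4 distinct parts.
print(len(distinct_partitions(18, 4, 18)))   # -> 15
```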
## IX Numerical Results
We consider two sample codes for the numerical evaluation of the proposed approach. The polarization-adjusted convolutional (PAC) code (64,44) [31] is constructed with a Reed-Muller-polar rate-profile with design-SNR=2 dB and convolutional generator polynomial \([1,0,1,1,0,1]\). The extended BCH code (128,106) is constructed with the primitive polynomial \(D^{7}+D^{3}+1\) and has error-correcting capability \(t=3\). Note that the rows \(\mathbf{h}_{1}\) and \(\mathbf{h}_{2}\) in the \(\mathbf{H}\) matrix of the eBCH code (128, 106) satisfy the relationship \(\operatorname{supp}(\mathbf{h}_{2})\subset\operatorname{supp}(\mathbf{h}_{1})\) where \(\mathbf{h}_{1}=\mathbf{1}\) and \(|\mathbf{h}_{1}|/2=|\mathbf{h}_{2}|=64\). Hence, for two constraints, we modify \(\mathbf{h}_{1}\) by \(\mathbf{h}_{1}=\mathbf{h}_{1}\oplus\mathbf{h}_{2}\). Similarly, the rows \(\mathbf{h}_{1}\), \(\mathbf{h}_{4}\), and \(\mathbf{h}_{5}\) in the \(\mathbf{H}\) matrix of the PAC code (64, 44) satisfy the relationship \(\operatorname{supp}(\mathbf{h}_{5})\subset\operatorname{supp}(\mathbf{h}_{4})\subset\operatorname{supp}(\mathbf{h}_{1})\) where \(\mathbf{h}_{1}=\mathbf{1}\) and \(|\mathbf{h}_{1}|/2=|\mathbf{h}_{4}|=2|\mathbf{h}_{5}|=32\).
Figs. 12 and 13 show the block error rates (BLER) of the PAC code (64, 44) and the extended BCH code (128, 106), respectively, under the conventional ORBGRAND with no constraints (NoC) and the segmented ORBGRAND with the maximum number of queries (a.k.a. abandonment threshold) based on (5), \(b=10^{5},10^{6}\). Note that the threshold \(b\) in GRAND algorithms should be approximately \(2^{n-k}\) queries [17, Theorem 2] to find the error pattern and obtain reasonable performance.
As expected, the average queries reduce significantly for both codes under segmented ORBGRAND. In the case of the PAC code (64,44), the average queries are roughly halved at high SNR regimes, while the reduction is larger at low SNR regimes. The reduction of average queries for eBCH(128,106) is more significant under the same abandonment thresholds as for the short PAC code. Note that the average queries for the short PAC code with different \(b\)'s converge at high SNR regimes, since a smaller \(b\) already suffices at this code length. Furthermore, there is a BLER improvement when \(b=10^{5}\); however, this improvement diminishes as \(b\) increases or under no abandonment, as we will observe later. Note that unlike the comparisons in [28], where the BLER was fixed and the impact of applying constraints on the average queries was studied, here we fix the maximum number of queries \(b\) for both conventional ORBGRAND and segmented ORBGRAND to have a fair comparison. As discussed in Section VII, in case of decoding failure by ORBGRAND, if we reduce the search space, we do not have to process many invalid error patterns. As a result, the first valid pattern may fall within the abandonment threshold \(b\) and the segmented ORBGRAND would succeed.
The table below shows the average queries of the two codes at \(E_{b}/N_{0}=5\) dB (with two/three constraints, denoted by 2C/3C, and with no constraints/segmentation, denoted by NoC). The average queries are roughly halved (in the case of two segments, the resulting average is slightly less than half of the original, while in the case of three segments, it is slightly more than half).
| | PAC(64,44), NoC | PAC(64,44), 3C | eBCH(128,106), NoC | eBCH(128,106), 2C |
|---|---|---|---|---|
| \(b=10^{5}\) | 95.1 | 49.0 | 460.7 | 208.9 |
| \(b=10^{6}\) | 103.3 | 53.2 | 872.7 | 314.9 |

Note that if we maintain the BLER, the average query reduction is expected to approximately follow Lemma 1, as was shown numerically in [28]. Note that here, with the abandonment threshold, further reduction toward the expectation of Lemma 1 is traded for BLER improvement.
Now, let us consider ORBGRAND without abandonment. Fig. 14 compares the BLER and the (average) complexity of eBCH(128,106) under various decoding algorithms. The main benchmark is naturally ORBGRAND. Compared to ORBGRAND, the segmented ORBGRAND reduces the average number of queries by a factor of three, while the BLER remains almost the same. We also compare it with the most popular MRB-based decoding algorithm, that is, ordered statistics decoding (OSD) with order \(i\), as its relationship with its variants, such as the box-and-match algorithm (BMA) [11] and enhanced BMA [12], is known. Moreover, the reduction in the computational complexity of these variants comes at the cost of an increase in space complexity, which makes a direct comparison unfair. For instance, the BMA reduces the computational complexity of OSD roughly by its square root at the expense of memory, as the BMA with order \(i\) considers all error patterns of weight at most \(2i\) over the \(s\) most reliable positions (\(s>k\)). The BLER of OSD(2) is remarkable compared to the other algorithms, while it provides a reasonable complexity at low SNR regimes. In contrast, ORBGRAND requires considerably fewer queries at high SNR regimes at the cost of degradation in BLER performance.
The other two algorithms used for comparison are the Berlekamp-Massey algorithm and the Chase-II algorithm. The Chase-II algorithm, denoted by Chase-II(\(t\)), for decoding a code with error-correcting capability \(t\), has a computational complexity of order \(2^{t}\cdot O(\text{HD})\), as it invokes a hard-decision (HD) decoder, such as the Berlekamp-Massey algorithm with complexity of order \(O(n^{2})\), \(2^{t}\) times: the decoder attempts all the error patterns with weight up to \(t=\lfloor\frac{d_{min}-1}{2}\rfloor\) over the \(t\) least reliable positions, hence \(\sum_{j=0}^{t}{t\choose j}=2^{t}\). In the case of eBCH(128,106), we have \(t=3=\lfloor\frac{d_{min}-1}{2}\rfloor\) where \(d_{min}=7\). As can be seen, the BLER of the Berlekamp-Massey and Chase-II algorithms is not comparable with that of OSD and ORBGRAND, though they have computational complexities of orders \(O(2^{14})\) and \(8\cdot O(2^{14})\), respectively. Furthermore, we observed that by increasing the total number of test patterns to \(2^{t}=2^{8}\), the Chase-II algorithm can approach the BLER of ORBGRAND, as shown in Fig. 14.
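A minimal sketch of the Chase-II test-pattern generation described above (the LLR vector is a hypothetical placeholder; the function name is ours):

```python
from itertools import product

def chase2_test_patterns(llr, t):
    """All 2^t test patterns of Chase-II: bit flips over the t least reliable
    positions; each pattern is then passed to a hard-decision decoder
    (e.g., Berlekamp-Massey)."""
    least = sorted(range(len(llr)), key=lambda i: abs(llr[i]))[:t]
    for flips in product((0, 1), repeat=t):
        e = [0] * len(llr)
        for pos, b in zip(least, flips):
            e[pos] = b
        yield tuple(e)

# With t = 3 there are 2^3 = 8 test patterns, matching Chase-II(3) above.
print(sum(1 for _ in chase2_test_patterns([0.9, -0.1, 0.4, 0.05, -1.2, 0.3], 3)))
```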
It is worth noting that we observed that OSD(3), despite its significantly higher complexity, does not meaningfully improve the performance for eBCH(128,106), as OSD(2) has almost reached the ML performance. Hence, we did not plot it. Moreover, as can be seen, soft GRAND (sGRAND), as an ML decoder, provides similar performance. Note that we did not plot the complexity of sGRAND as it requires sorting at each step, which makes it incomparable with the others, though OSD also needs preprocessing (row operations to obtain a systematic form), as mentioned in Section I.
## X Conclusion
In this paper, we propose a segmentation approach that divides the scope of the search for the error sequence induced by the channel noise. Every segment is defined based on the parity constraints extracted from the parity check matrix with or without manipulation (row operations).
Fig. 12: Performance comparison between three sub-pattern generators based on three constraints (3C) and a single generator with no constraints (NoC). The vertical axis is on the logarithmic scale for both queries and BLER.
Fig. 13: Performance comparison between two sub-pattern generators based on two constraints (2C) and a single generator with no constraints (NoC).
Then, we employ multiple error pattern generators, each for one segment. We propose a method to combine these sub-patterns in a near-ML order for checking. As this approach generates valid error patterns with respect to the selected parity constraint, both the average number of queries and the BLER performance improve remarkably.
|
2305.05747 | Persistent synchronization of heterogeneous networks with time-dependent
linear diffusive coupling | We study synchronization for linearly coupled temporal networks of
heterogeneous time-dependent nonlinear agents via the convergence of attracting
trajectories of each node. The results are obtained by constructing and
studying the stability of a suitable linear nonautonomous problem bounding the
evolution of the synchronization errors. Both, the case of the entire network
and only a cluster, are addressed and the persistence of the obtained
synchronization against perturbation is also discussed. Furthermore, a
sufficient condition for the existence of attracting trajectories of each node
is given. In all cases, the considered dependence on time requires only local
integrability, which is a very mild regularity assumption. Moreover, our
results mainly depend on the network structure and its properties, and achieve
synchronization up to a constant in finite time. Hence they are quite suitable
for applications. The applicability of the results is showcased via several
examples: coupled van-der-Pol/FitzHugh-Nagumo oscillators, weighted/signed
opinion dynamics, and coupled Lorenz systems. | Hildeberto Jardón-Kojakhmetov, Christian Kuehn, Iacopo P. Longo | 2023-05-09T19:59:48Z | http://arxiv.org/abs/2305.05747v1 | # Persistent synchronization of heterogeneous networks with time-dependent linear diffusive coupling
###### Abstract
We study synchronization for linearly coupled temporal networks of heterogeneous time-dependent nonlinear agents via the convergence of attracting trajectories of each node. The results are obtained by constructing and studying the stability of a suitable linear nonautonomous problem bounding the evolution of the synchronization errors. Both, the case of the entire network and only a cluster, are addressed and the persistence of the obtained synchronization against perturbation is also discussed. Furthermore, a sufficient condition for the existence of attracting trajectories of each node is given. In all cases, the considered dependence on time requires only local integrability, which is a very mild regularity assumption. Moreover, our results mainly depend on the network structure and its properties, and achieve synchronization up to a constant in finite time. Hence they are quite suitable for applications. The applicability of the results is showcased via several examples: coupled van-der-Pol/FitzHugh-Nagumo oscillators, weighted/signed opinion dynamics, and coupled Lorenz systems.
## 1 Introduction
Within the class of complex systems in nature, technology and society, an interconnected structure is a recurrent feature [51; 64]. Dynamical systems on networks go a long way toward successfully capturing and explaining these structures and their evolution [51]. For example, the available dynamical theory for static networks--dynamical systems on graphs with a static topology--allows the study of problems of opinion formation [32], collective motion [66], and epidemic dynamics [28]. In contrast to their static counterpart, temporal networks--sometimes also referred to as time-varying networks, dynamic networks and temporal graphs--feature a time-dependent variation of the graph structure. It has become increasingly evident that certain types of natural and societal interactions require a time-dependent framework [23; 24]. Examples include human proximity and communication networks [57; 63; 69], brain networks [3], travel and transportation networks [20], distributed computing [30], or ecological and biological networks [8] to name just a few. However, the analysis of temporal networks is generally much more complicated and the respective mathematical theory remains still substantially open [21; 24].
For example, the emergence of a synchronized state in a temporal network or in some of its parts--also referred to as clusters--has been investigated analytically and numerically mostly on a case-by-case basis [9; 27; 68]; typically under the assumption of switched dynamics--only a finite set of coupling structures is selected over time [1; 12; 31; 33; 67]--or of fast switching [5; 26; 50; 49; 53; 62].
More general results usually involve the study of the errors between nodes or with respect to a synchronizing trajectory, either through a global Lyapunov function [6; 13], a master stability function [44; 58; 45; 48; 42], linearization along a synchronizing trajectory [40; 41], or the Hajnal diameter [41].
In this work, we study synchronization for linearly coupled temporal networks of \(N\geq 2\) heterogeneous time-dependent agents of the form
\[\dot{x}_{i}=f_{i}(t,x_{i})+\sum_{k=1}^{N}a_{ik}(t)(x_{k}-x_{i}),\quad x_{i}=x_ {i}(t)\in\mathbb{R}^{M},\,i=1,\ldots,N, \tag{1.1}\]
where \(x_{i}\in\mathbb{R}^{M}\) is the state variable of node \(i\) and \(f_{i}:\mathbb{R}\times\mathbb{R}^{M}\rightarrow\mathbb{R}^{M}\) its internal dynamics, and \(A:\mathbb{R}\rightarrow\mathbb{R}^{N\times N}\), \(t\mapsto A(t)=\big{(}a_{ij}(t)\big{)}\) is the generalized (or weighted) adjacency matrix of
the network. We shall assume that \(A\) is a locally integrable function allowing, for example, for discontinuous switching in the network topology. Moreover, we emphasize that, in contrast to the majority of the literature, we allow \(a_{ij}\) to take values over the whole real line, which implies that signed networks [59] are covered by our theory. Furthermore, the assumptions of regularity in time required for our work are particularly mild: the matrix function \(A\) needs to be locally integrable, while the functions \(f_{i}(t,x_{i})\) are Lipschitz Caratheodory. In other words, continuity in time is not assumed.
As in other works on synchronization, our analysis is based on the study of the evolution of the errors, which, however, are considered in squared norm \(\xi_{ij}(t)=|x_{i}(t)-x_{j}(t)|^{2}\) for \(i,j=1,\ldots,N\) with \(i<j\). This choice allows us to construct a suitable bounding linear system whose stability, in terms of its dichotomy spectrum, is used to infer information on the finite-time and asymptotic synchronization of the nonlinear problem. The key advantages of our approach can be summarized as follows:
* Only very mild assumptions of regularity akin to local integrability in time are required.
* The sufficient conditions for synchronization depend almost completely (and increasingly so in case of global coupling) on the adjacency matrix \(A(t)=\big{(}a_{ij}(t)\big{)}\). Moreover, such conditions are quantitative and constructive allowing for a control-driven approach to synchronization via the modification of a suitable part of the network.
* The results are written for signed networks and for a considerably general class of agents. On the one hand, we consider time-dependent agents, a surprisingly rare case in the available theory for temporal networks; synchronization results for nonautonomous agents appear for example in [11, 47] although always with static coupling. On the other hand, the nodes can present heterogeneous dynamics and synchronization up to a constant is still achieved, provided that the theory applies. In case the network features identical nodes, a result of _exact synchrony_ is achieved.
* The ideas used to obtain synchronization can be exploited to investigate its occurrence in either the entire network or only some of its parts (clusters), and we provide sufficient conditions for both these cases.
* Being based on the roughness of the exponential dichotomy, the results of synchrony are robust against perturbation. This fact, which is important by itself, also has striking consequences in
the case of perturbation of static networks. For example, we show that strongly connected static networks with positive edges always satisfy our conditions of synchronization provided that a sufficiently large global coupling exists and therefore the achieved synchrony is also robust.
Our work is structured as follows. In Section 2 the notation, assumptions and some basic notions about nonautonomous linear systems are introduced. Section 3 deals with the synchronization (up to a constant) of networks of heterogeneous non-autonomous agents which are linearly coupled via a time-dependent network. Our approach entails the study of the synchronization errors \(\xi_{ij}(t)=|x_{i}(t)-x_{j}(t)|^{2}\) for \(i,j=1,\ldots,N\) with \(i<j\). Firstly, we show that the maps \(\xi_{ij}(t)\) for \(i,j=1,\ldots,N\) with \(i<j\) induce a dynamical system on the positive cone of \(\mathbb{R}^{N(N-1)/2}\). Then, we introduce our fundamental assumption (**H1**) on the one-sided affine upper-bound to the difference of the vector fields of any pair of uncoupled agents in our network. The assumption (**H1**) is firstly introduced pointwise and then generalized to pairs of continuous functions in Lemma 3.3, which allows us to upper bound the evolution of any pair of trajectories of two nodes \(i\) and \(j\) in Lemma 3.4. Theorem 3.6 is our main result of synchronization; it uses the results above to generate a nonautonomous linear inhomogeneous system controlling the evolution of the synchronization errors, and synchronization (up to a constant) is therefore achieved via the stability analysis of this problem: we check sufficient conditions for the existence of an exponential dichotomy with the identity as projector. Two additional assumptions are made: (**H2**) there is a region of the phase space such that the solutions of each node of the network with initial condition therein remain uniformly ultimately bounded near a globally defined bounded solution; (**H3**) the heterogeneity of the nodes can be uniformly bounded on compact intervals of time. Some corollaries address the simpler but important and possibly more common cases where the network is subjected to a global coupling coefficient, and where the assumption (**H3**) can be exchanged for a stronger essential boundedness on the whole real line. Two examples complete the section to showcase the applicability of the obtained results of synchronization: in the first example we consider a time-dependent network of heterogeneous van der Pol oscillators, while in the second one we show how Theorem 3.6 can induce a control-oriented
approach to achieve synchronization in a ring network featuring a contrarian node.
The ideas contained in the proof of Theorem 3.6 are further explored in Section 4 to guarantee the synchronization of just a cluster (see Theorem 4.1). This section is completed by an example of a neural network of heterogeneous FitzHugh-Nagumo neurons where two leading neurons induce recurrent patterns of synchronization on their immediate, but possibly shared, neighbors.
In Section 5 we briefly expand on the persistence of the synchronization achieved via the previous results by using the roughness of the exponential dichotomy. An application to the time-dependent perturbation of synchronized static networks is also presented. Incidentally, we also show that strongly connected static networks with positive edges satisfy the conditions of synchronization of our result provided that a sufficiently high global coupling exists. The example of a perturbed star network of Lorenz systems is presented at the end of the section to showcase the appearance and persistence of synchronization.
In Section 6, we investigate sufficient conditions for the existence of local attractors for both the uncoupled and the coupled problems. The case of solutions that remain only uniformly ultimately bounded near a bounded reference trajectory is also addressed. In other words, sufficient conditions guaranteeing (H2) are given.
## 2 Notation, assumptions and preliminary definitions
By the symbol \(\mathbb{R}^{d}\), with \(d\in\mathbb{N}\), we will denote the \(d\)-dimensional Euclidean space with the usual norm \(|\cdot|\). As a rule, a singleton \(\{\xi\}\) in \(\mathbb{R}^{d}\) will be identified with the element \(\xi\) itself. For every \(i=1,\ldots,d\) the \(i\)-th component of \(\xi\in\mathbb{R}^{d}\) will be denoted by \(\xi_{i}\). Moreover, the notation \(\xi\geq 0\) means that \(\xi_{i}\geq 0\) for all \(i=1,\ldots,d\), whereas \(\xi\gg 0\) means that \(\xi_{i}>0\) for every \(i=1,\ldots,d\). The space \(\mathbb{R}^{d}_{+}\) will denote the set of points \(\xi\in\mathbb{R}^{d}\) such that \(\xi\geq 0\). This notation naturally extends to vector-valued functions. When \(d=1\), we will simply write \(\mathbb{R}\) instead of \(\mathbb{R}^{1}\) and thus the symbol \(\mathbb{R}_{+}\) will denote the set of non-negative real numbers. Moreover, by \(B_{r}\) we denote the closed ball of radius \(r\) in \(\mathbb{R}^{d}\) centered at the origin.
The symbol \(\mathbb{R}^{M\times N}\), with \(N,M\in\mathbb{N}\), represents the set of matrices of dimension \(M\times N\), and given \(A\in\mathbb{R}^{M\times N}\), \(A^{\top}\) will denote its transpose. The space \(\mathbb{R}^{M\times N}\) will be endowed with the induced norm \(\|\cdot\|\) defined by \(\|A\|=\sup_{|x|=1}|Ax|\). Oftentimes, we shall write \((\mathbb{R}^{M})^{N}\) to denote the set of \(N\)-tuples of vectors in \(\mathbb{R}^{M}\). Although this set can evidently be identified with \(\mathbb{R}^{M\times N}\), the notation \((\mathbb{R}^{M})^{N}\) will be used when it is more convenient to treat each vector in \(\mathbb{R}^{M}\) separately--typically, when they
correspond to the state of a node in a network of size \(N\). In this case, for every \(i=1,\ldots,N\) the \(i\)-th vector of \(x\in(\mathbb{R}^{M})^{N}\) will be denoted by \(x_{i}\)--the \(i\)-th node of the network.
For any interval \(I\subseteq\mathbb{R}\) and any \(W\subset\mathbb{R}^{N}\), \(\mathcal{C}(I,W)\) will denote the space of continuous functions from \(I\) to \(W\) endowed with the sup norm \(\|\cdot\|_{\infty}\). Moreover, \(L^{\infty}\) and \(L^{1}_{loc}\) will denote the spaces of real functions which are, respectively, essentially bounded and locally integrable on \(\mathbb{R}\), i.e., belonging to \(L^{1}(I)\) for every bounded interval \(I\subset\mathbb{R}\). Each of these spaces is endowed with its usual topology.
We shall work under the most general assumptions guaranteeing existence and uniqueness of solutions for (1.1) (see for example [10]). Specifically, the functions \(t\mapsto a_{ij}(t)\) are assumed to be _locally integrable_, whereas the functions \(f_{i}(t,x_{i})\) are assumed to be Lipschitz Caratheodory.
**Definition 2.1** (Lipschitz Caratheodory functions).: A function \(f\colon\mathbb{R}\times\mathbb{R}^{M}\to\mathbb{R}^{M}\) is Lipschitz Caratheodory (in short, \(f\in\mathfrak{LC}\)) if it satisfies the Caratheodory conditions
* **(C1)**\(f\) is Borel measurable, and
* **(C2)** for every compact set \(K\subset\mathbb{R}^{M}\) there exists a real-valued function \(m^{K}\in L^{1}_{loc}\), called \(m\)_-bound_ in the following, such that for almost every \(t\in\mathbb{R}\) one has \(|f(t,x)|\leq m^{K}(t)\) for any \(x\in K\);
and also a Lipschitz condition
* **(L)** for every compact set \(K\subset\mathbb{R}^{M}\) there exists a real-valued function \(l^{K}\in L^{1}_{loc}\) such that for almost every \(t\in\mathbb{R}\) one has \(|f(t,x_{1})-f(t,x_{2})|\leq l^{K}(t)|x_{1}-x_{2}|\) for any \(x_{1},x_{2}\in K\).
Let us also recall the notion of \(L^{1}_{loc}\)-boundedness. A subset \(S\) of positive functions in \(L^{1}_{loc}\) is bounded if for every \(r>0\) the following inequality holds
\[\sup_{m\in S}\int_{-r}^{r}m(t)\,dt<\infty\,.\]
In such a case we will say that \(S\) is \(L^{1}_{loc}\)-bounded.
**Remark 2.2**.: In many cases in the applications, the functions \(m^{K}\) and \(l^{K}\) appearing in (**C2**) and (**L**) are in fact constant. This is a particular case of Lipschitz Caratheodory functions where the \(m\)- and \(l\)-bounds are in \(L^{\infty}\subset L^{1}_{loc}\). Consequently, the theory we present still applies. Note also that if a set of functions is uniformly bounded in \(L^{\infty}\), then it is also \(L^{1}_{loc}\)-bounded.
Furthermore, we will assume that the nodes are subjected to linear diffusive coupling through a time-dependent weighted adjacency matrix \(A:\mathbb{R}\to\mathbb{R}^{N\times N}\) so that each entry \(a_{ij}(\cdot)\in L^{1}_{loc}\). More in general, we could consider the problem
\[\dot{x}_{i}=f_{i}(t,x_{i})+\sum_{k=1}^{N}a_{ik}(\omega_{t})(x_{k}-x_{i}),\quad x _{i}\in\mathbb{R}^{M},\,\omega\in\Omega,\,i=1,\ldots,N,\]
where \(A:\Omega\to\mathbb{R}^{N\times N}\), \(\omega\mapsto A(\omega)=\big{(}a_{ij}(\omega)\big{)}\) is the generalized (or weighted) adjacency matrix of the network with respect to a fixed \(\omega\) in a complete metric space \(\Omega\). A continuous flow \(\theta:\mathbb{R}\times\Omega\to\Omega\), \((t,\omega)\mapsto\theta(t,\omega)=\omega_{t}\) on \(\Omega\) can be considered, that, depending on the assumptions, can be completely deterministic or random: if \(\Omega=\mathbb{R}\) and \((t,s)\mapsto\theta(t,s)=t+s\), we call the network deterministic; if \((\Omega,\mathcal{F},\mathbb{P})\) is a probability space, we call the network random. This formalism would allow the construction of a continuous skew-product flow [35, 34, 37] and the use of tools from topological dynamics to investigate, for example, the propagation of properties of synchronization. Given the complexity of the topic at hand, we prefer to restrict ourselves to a simpler--although less powerful--formalism, to privilege the understanding of the conditions of synchronization, the main focus of our work. In any case, we note that, fixed a certain \(\omega\in\Omega\), a path-wise approach through the base flow \(\theta\) is also covered by our theory by simply identifying \(A(\omega_{t})\) with \(A(t)\) and dropping the dependence on \(\omega\).
As a rule, we shall say that a locally absolutely continuous function \(\sigma:I\subset\mathbb{R}\to(\mathbb{R}^{M})^{N}\), \(t\mapsto\sigma(t)=(\sigma_{1}(t),\ldots,\sigma_{N}(t))\) solves (or is a solution of) (1.1) with initial conditions at \(t_{0}\in I\) given by \(\overline{x}=(\overline{x}_{1},\ldots,\overline{x}_{N})\in(\mathbb{R}^{M})^{N}\), if for every \(i=1,\ldots,N\), \(\sigma_{i}(\cdot)\) is a solution of the integral problem
\[x_{i}(t)=\overline{x}_{i}+\int_{t_{0}}^{t}\left(f_{i}(s,x_{i}(s))+\sum_{j=1}^{N}a_{ij}(s)(\sigma_{j}(s)-x_{i}(s))\right)ds,\quad t\in I.\]
We shall denote by \(x(\cdot,t_{0},\overline{x}):I\to(\mathbb{R}^{M})^{N}\) such solution.
Our objective is to identify conditions of synchronization for (1.1). Let us clarify what we mean by synchronization in this context.
**Definition 2.3**.: We say that (1.1) synchronizes up to a constant \(\mu>0\) in a synchronization region \(E\subset\mathbb{R}\times\mathbb{R}^{M}\) if all the following properties are satisfied
* there is a function \(\sigma:\mathbb{R}\to(\mathbb{R}^{M})^{N}\), that is absolutely continuous on any compact interval and maps \(t\in\mathbb{R}\) to \(\sigma(t)=(\sigma_{1}(t),\ldots,\sigma_{N}(t))\) such that \((t,\sigma_{i}(t))\in E\) for all \(t\in\mathbb{R}\) and all \(i=1,\ldots,N\), and \(\sigma(t)\) solves (1.1),
* for all \((t_{0},\overline{x})\in\mathbb{R}\times(E_{t_{0}})^{N}\), where \(E_{t_{0}}=\{x\in\mathbb{R}^{M}\mid(t_{0},x)\in E\}\), if the absolutely continuous function \(x(\cdot,t_{0},\overline{x}):=(x_{1}(\cdot,t_{0},\overline{x}_{1}),\ldots,x_{N} (\cdot,t_{0},\overline{x}_{N}))\) solves (1.1) with \(x_{i}(t_{0},t_{0},\overline{x}_{i})=\overline{x}_{i}\) for all \(i=1,\ldots,N\), then \(x(\cdot,t_{0},\overline{x})\) is defined for all \(t>t_{0}\) and \[\limsup_{t\to\infty}|x_{i}(t,t_{0},\overline{x}_{i})-\sigma_{i}(t)|\leq\frac{ \mu}{3},\quad\text{ for all }i=1,\ldots,N,\]
* for all \(i,j=1,\ldots,N\) \[\limsup_{t\to\infty}|\sigma_{i}(t)-\sigma_{j}(t)|\leq\frac{\mu}{3}.\]
If the previous conditions are satisfied then, the triangular inequality guarantees that any two trajectories of two nodes \(i,j\) with initial data \(\overline{x}_{i},\overline{x}_{j}\in E_{t_{0}}\) will satisfy
\[\limsup_{t\to\infty}|x_{i}(t,t_{0},\overline{x}_{i})-x_{j}(t,t_{0},\overline {x}_{j})|\leq\mu.\]
In particular, if \(f_{i}=f\) for all \(i=1,\ldots,N\), then we say that (1.1) synchronizes in a synchronization region \(E\subset\mathbb{R}\times\mathbb{R}^{M}\) if there is a solution \(s(t)\) of \(\dot{x}=f(t,x)\) defined over the whole real line and with graph in \(E\) such that for all \((t_{0},\overline{x})\in\mathbb{R}\times(E_{t_{0}})^{N}\), the solution \(x(\cdot,t_{0},\overline{x})\) of (1.1), with \(x(t_{0},t_{0},\overline{x})=\overline{x}\), is defined for all \(t>t_{0}\) and
\[\lim_{t\to\infty}|x_{i}(t,t_{0},\overline{x}_{i})-s(t)|=0,\quad\text{ for all }i=1,\ldots,N.\]
The most important results in our work require a notion of splitting of the extended phase space of a linear time-dependent problem \(\dot{y}=A(t)y\), \(y\in\mathbb{R}^{d}\), in invariant manifolds characterized by asymptotic exponential decay either in forward or in backward time. The notions of exponential dichotomy, dichotomy spectrum and associated splitting fulfill this requirement. We briefly recall them here and point the interested reader to Siegmund [60] for all the details of the locally integrable case.
**Definition 2.4** (Exponential dichotomy and dichotomy spectrum).: Let \(A:\mathbb{R}\to\mathbb{R}^{d\times d}\) be a locally integrable function and consider the linear system
\[\dot{y}=A(t)y. \tag{2.1}\]
An invariant projector of (2.1) is a function \(P:\mathbb{R}\to\mathbb{R}^{d\times d}\) of projections \(P(t)\), \(t\in\mathbb{R}\), such that
\[P(t)Y(t,s)=Y(t,s)P(s),\quad\text{for all }t,s\in\mathbb{R},\]
where \(Y:\mathbb{R}^{2}\to\mathbb{R}^{d\times d}\) is the principal matrix solution of (2.1) at \(s\in\mathbb{R}\), i.e. \(Y(\cdot,s)\) solves (2.1) with \(Y(s,s)=Id\).
The system (2.1) is said to have an _exponential dichotomy_ on \(\mathbb{R}\) if there are an invariant projector \(P(\cdot)\) and constants \(\gamma>0\), \(K\geq 1\) such that
\[\big{\|}Y(t,s)P(s)\big{\|}\leq Ke^{-\gamma(t-s)},\quad\text{for all $t \geq s$, and}\] \[\big{\|}Y(t,s)\big{(}Id-P(s)\big{)}\big{\|}\leq Ke^{\gamma(t-s)}, \quad\text{for all $t\leq s$,}\]
where \(Id\) is the identity matrix on \(\mathbb{R}^{d\times d}\). The _dichotomy spectrum_ of (2.1) is the set
\[\Sigma(A):=\{\alpha\in\mathbb{R}\mid\dot{y}=(A(t)-\alpha\,Id)y\ \text{ has no exponential dichotomy}\}.\]
**Remark 2.5**.: (i) Siegmund [60] showed that either \(\Sigma(A)\) is empty, or it is the whole \(\mathbb{R}\), or there exists \(k\in\mathbb{N}\), with \(1\leq k\leq d\), such that
\[\Sigma(A)=I_{1}\cup[a_{2},b_{2}]\cup\cdots\cup[a_{k-1},b_{k-1}]\cup I_{k}\,,\]
where \(I_{1}\) is either \([a_{1},b_{1}]\) or \((-\infty,b_{1}]\), \(I_{k}\) is either \([a_{k},b_{k}]\) or \([a_{k},\infty)\), and \(a_{1}\leq b_{1}<a_{2}\leq b_{2}<\cdots\leq a_{k}\leq b_{k}\). If \(A(\cdot)\) has constant entries, the dichotomy spectrum reduces to the real parts of the eigenvalues of \(A\). In addition, a decomposition of \(\mathbb{R}\times\mathbb{R}^{d}\) in spectral manifolds holds, i.e.
\[\mathbb{R}\times\mathbb{R}^{d}=\mathcal{W}_{0}\oplus\cdots\oplus\mathcal{W}_{ k+1}\,;\]
see [60] for details. If the dichotomy spectrum of (2.1) is contained in \((-\infty,0)\), then (2.1) admits an exponential dichotomy with invariant projector \(P(\cdot)\equiv Id\).
(ii) The notion of exponential dichotomy should be regarded as a natural extension to the nonautonomous framework of the tools for stability analysis of autonomous linear systems. Indeed, it is well-known that the time-dependent eigenvalues are of little help for the investigation of stability of nonautonomous linear problems [14, 18], unless the time-dependence is periodic (replacing eigenvalues with Floquet multipliers [22]) or slow [14, 52]. On the other hand, Lyapunov exponents successfully encapsulate the asymptotic stability but do not necessarily provide robustness against nonlinear perturbations, which is instead guaranteed by the roughness of the exponential dichotomy [14]--the Lyapunov spectrum is a subset of the dichotomy spectrum and if the latter is a point spectrum, then it reduces to the former [56, 16]. The downside of the notion of exponential dichotomy resides in the difficulty of verifying it. Some explicitly calculable sufficient criteria were provided
by Coppel in terms of the time-dependent eigenvalues for bounded matrix functions [14] and by Fink in terms of row- or column-dominance of the matrix \(A(t)\) [18], which we will use later in this work. Efficient numerical approximations have also been developed using QR and singular value decomposition [15; 16; 17; 19].
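As a rough illustration of the numerical approach mentioned above, the following sketch approximates the Lyapunov exponents of \(\dot{y}=A(t)y\) via the discrete QR method; forward-Euler propagation is used for brevity (a higher-order integrator would be used in practice), and the function name is ours. Since the Lyapunov spectrum is contained in the dichotomy spectrum, negative approximated exponents only suggest, and do not prove, that \(\Sigma(A)\subset(-\infty,0)\).

```python
import numpy as np

def lyapunov_qr(A, t0, t1, steps):
    """Discrete QR approximation of the Lyapunov exponents of y' = A(t) y."""
    d = A(t0).shape[0]
    h = (t1 - t0) / steps
    Q = np.eye(d)
    sums = np.zeros(d)
    t = t0
    for _ in range(steps):
        Z = Q + h * A(t) @ Q                 # one Euler step of the fundamental matrix
        Q, R = np.linalg.qr(Z)
        sums += np.log(np.abs(np.diag(R)))   # accumulate logarithmic growth rates
        t += h
    return sums / (t1 - t0)

# Example with a periodic, locally integrable matrix function.
A = lambda t: np.array([[-1.0 + 0.5 * np.sin(t), 0.2],
                        [0.1, -2.0]])
print(lyapunov_qr(A, 0.0, 200.0, 20000))
```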
## 3 Synchronization
In this section, we provide sufficient conditions for synchronization up to a constant via comparison arguments with a suitably constructed linear system bounding the growth of the "errors". Given a solution \(x:\mathbb{R}\to(\mathbb{R}^{M})^{N}\) of (1.1), we shall call _(synchronization) errors_, the vector \(\xi(t)\in\mathbb{R}^{N(N-1)/2}\) defined by
\[\xi_{ij}(t)=|x_{i}(t)-x_{j}(t)|^{2},\quad i,j=1,\ldots,N,\,i<j.\]
Our first observation is that given a network (1.1), the synchronization errors induce a nonautonomous dynamical system on the positive cone \(\mathbb{R}^{N(N-1)/2}_{+}\) of \(\mathbb{R}^{N(N-1)/2}\).
**Proposition 3.1**.: _Given a network of \(N\) nodes, \(N\geq 2\), in \(\mathbb{R}^{M}\) as in (1.1), the map_
\[x\in(\mathbb{R}^{M})^{N}\mapsto(|x_{i}-x_{j}|^{2})_{i,j=1,\ldots,N,\,i<j}\in \mathbb{R}^{N(N-1)/2} \tag{3.1}\]
_induces a nonautonomous dynamical system on \(\mathbb{R}^{N(N-1)/2}_{+}\)._
Proof.: Let \((t_{0},x_{0})\in\mathbb{R}\times(\mathbb{R}^{M})^{N}\) and consider the solution \(x(\cdot,t_{0},x_{0}):\mathbb{R}\to(\mathbb{R}^{M})^{N}\) of (1.1) with initial data \(x(t_{0})=x_{0}\). Note that, by definition of solution, the cocycle property \(x(t+s,t_{0},x_{0})=x\big{(}t,s,x(s,t_{0},x_{0})\big{)}\) is satisfied whenever all the involved terms are well-defined. Consequently, a local cocycle is induced also on \(\mathbb{R}^{N(N-1)/2}\) through the map in (3.1). In particular, note that the obtained trajectories verify
\[\begin{split}\frac{d}{dt}|x_{i}(t,t_{0},x_{0})-x_{j}(t,t_{0},x_{0})|^{2}&=2\big{\langle}x_{i}(t,t_{0},x_{0})-x_{j}(t,t_{0},x_{0}),\,f_{i}\big{(}t,x_{i}(t,t_{0},x_{0})\big{)}-f_{j}\big{(}t,x_{j}(t,t_{0},x_{0})\big{)}\big{\rangle}\\&\quad+2\big{\langle}x_{i}(t,t_{0},x_{0})-x_{j}(t,t_{0},x_{0}),\,\sum_{k=1}^{N}a_{ik}(t)\big{(}x_{k}(t,t_{0},x_{0})-x_{i}(t,t_{0},x_{0})\big{)}\big{\rangle}\\&\quad-2\big{\langle}x_{i}(t,t_{0},x_{0})-x_{j}(t,t_{0},x_{0}),\,\sum_{k=1}^{N}a_{jk}(t)\big{(}x_{k}(t,t_{0},x_{0})-x_{j}(t,t_{0},x_{0})\big{)}\big{\rangle}\end{split}\]
for \(i,j=1,\ldots,N,\,i<j\). Therefore, we have that if \(|x_{i}(t,t_{0},x_{0})-x_{j}(t,t_{0},x_{0})|^{2}=0\) for some \(i,j=1,\ldots,N,\,i<j\), then \(\frac{d}{dt}|x_{i}(t,t_{0},x_{0})-x_{j}(t,t_{0},x_{0})|^{2}=0\). Hence, the positive cone \(\mathbb{R}^{N(N-1)/2}_{+}\) is invariant for the induced flow.
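For illustration, the following sketch integrates a small, hypothetical instance of (1.1) (scalar nodes, \(M=1\), \(N=3\), static coupling; all concrete values are ours) and evaluates the induced error trajectories \(\xi_{ij}(t)\):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical heterogeneous scalar nodes f_i(t, x) = -x + c_i sin(t)
c = np.array([1.0, 1.1, 0.9])
A = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

def rhs(t, x):
    # f_i(t, x_i) + sum_k a_ik (x_k - x_i)
    return -x + c * np.sin(t) + A @ x - A.sum(axis=1) * x

sol = solve_ivp(rhs, (0.0, 30.0), np.array([1.0, -2.0, 0.5]), max_step=0.01)
x = sol.y
# synchronization errors xi_ij(t) = |x_i(t) - x_j(t)|^2 for i < j
xi = {(i, j): (x[i] - x[j]) ** 2 for i in range(3) for j in range(i + 1, 3)}
print({ij: round(float(v[-1]), 4) for ij, v in xi.items()})
```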
The following pair-wise property on the decoupled dynamics will be important for our argument.
* For every \(i,j=1,\ldots,N\) and \(r>0\) there are functions \(\alpha^{r}_{ij},\beta^{r}_{ij}\in L^{1}_{loc}(\mathbb{R},\mathbb{R})\), with \(\beta^{r}_{ij}\) non-negative, such that for almost every \(t\in\mathbb{R}\) \[\langle x-y,f_{i}(t,x)-f_{j}(t,y)\rangle\leq\alpha^{r}_{ij}(t)\,|x-y|^{2}+ \beta^{r}_{ij}(t),\quad\text{for all }x,y\in B_{r}.\] (3.2)
**Remark 3.2**.: Hypothesis (**H1**) seems somewhat technical but it is in fact not very restrictive. In particular, note that if the network is made of identical nodes, i.e. if \(f_{i}=:f\) for all \(i=1,\ldots,N\), then (**H1**) is trivially true with at least \(\alpha^{r}_{ij}(t)=l^{r}(t)\) being the Lipschitz coefficient of \(f\) on \(B_{r}\) and \(\beta^{r}_{ij}(t)=0\) for all \(r>0\) and \(t\in\mathbb{R}\). Indeed, using the Cauchy-Schwarz inequality and the Lipschitz continuity, we have that
\[\langle x-y,f(t,x)-f(t,y)\rangle\leq|x-y|\cdot|f(t,x)-f(t,y)|\leq l^{r}(t)|x-y| ^{2}.\]
Note also that, in practical cases, one can often choose the functions \(\alpha^{r}_{ij},\beta^{r}_{ij}\in L^{1}_{loc}(\mathbb{R},\mathbb{R})\) as constants. It goes without saying that the theory hereby developed still applies. In fact, this simplification allows sharper results as we highlight in some of the corollaries to our main theorems.
In any case, it is helpful to have a rough intuition of what \(\alpha_{ij}\) and \(\beta_{ij}\) in Hypothesis (**H1**) represent. The function \(\beta^{r}_{ij}(t)\) can be interpreted as a measure of how distinct the dynamics of nodes \(i\) and \(j\) is; after all, \(\beta^{r}_{ij}(t)\geq 0\) vanishes when \(f_{i}=f_{j}\). On the other hand \(\alpha^{r}_{ij}(t)\) tells us the "tendency of the decoupled solutions to synchronize". In the homogeneous scalar case, if we call \(e(t)=x(t)-y(t)\), we have that hypothesis (**H1**) would read as \(e(t)\dot{e}(t)\leq\alpha^{r}_{ij}(t)e(t)^{2}\). If \(\alpha^{r}_{ij}(t)<0\) for all \(t\in\mathbb{R}\), indeed the "decoupled error" \(e(t)\) vanishes as \(t\to\infty\). This interpretation becomes also evident when we present our main result, Theorem 3.6. If the nodes are identical, then \(\alpha^{r}_{ij}(t)=\alpha^{r}(t)<0\) for all \(t\in\mathbb{R}\) implies the dissipativity of the uncoupled problem \(\dot{x}=f(t,x)\) and therefore the existence of a forward attracting trajectory.
Next, we show that it is possible to extend (**H1**) to pairs of continuous functions obtaining the same inequality almost everywhere.
**Lemma 3.3**.: _Assume that_ (**H1**) _holds. Then, considered two continuous functions \(\phi,\psi:I\to B_{r}\), with \(I\subseteq\mathbb{R}\) and \(r>0\), we have that for almost every \(t\in I\),_
\[\big{\langle}\phi(t)-\psi(t),f_{i}\big{(}t,\phi(t)\big{)}-f_{j}\big{(}t,\psi(t) \big{)}\big{\rangle}\leq\alpha_{ij}^{r}(t)\left|\phi(t)-\psi(t)\right|^{2}+ \beta_{ij}^{r}(t). \tag{3.3}\]
Proof.: The proof of this statement is based on the one of Lemma 3.3 in [38]. Consider two functions \(\phi,\psi:I\to B_{r}\) and let \(D=\{s_{n}\mid n\in\mathbb{N}\}\) be a dense subset of \(I\). From (**H1**) we know that given \(n\in\mathbb{N}\) there is a subset \(J_{n}\subset I\) of full measure such that for every \(t\in J_{n}\),
\[\big{\langle}\phi(s_{n})-\psi(s_{n}),f_{i}\big{(}t,\phi(s_{n})\big{)}-f_{j} \big{(}t,\psi(s_{n})\big{)}\big{\rangle}\leq\alpha_{ij}^{r}(t)\left|\phi(s_{n })-\psi(s_{n})\right|^{2}+\beta_{ij}^{r}(t). \tag{3.4}\]
Next, we consider the subset \(J=\bigcap_{n=1}^{\infty}J_{n}\subset I\), also of full measure, and fix \(t\in J\). From the density of \(D\) we find a subsequence \((s_{n_{k}})_{k\in\mathbb{N}}\) such that \(\lim_{k\to\infty}s_{n_{k}}=t\), and from (3.4) we deduce that for each \(k\in\mathbb{N}\)
\[\begin{split}\big{\langle}\phi(s_{n_{k}})-\psi(s_{n_{k}}),f_{i} \big{(}t,\phi(s_{n_{k}})\big{)}&-f_{j}\big{(}t,\psi(s_{n_{k}}) \big{)}\big{\rangle}\\ &\leq\alpha_{ij}^{r}(t)\left|\phi(s_{n_{k}})-\psi(s_{n_{k}}) \right|^{2}+\beta_{ij}^{r}(t).\end{split} \tag{3.5}\]
Moreover, from (**L**) and the continuity of the functions \(\phi\) and \(\psi\) we obtain
\[\lim_{k\to\infty}f_{i}\big{(}t,\phi(s_{n_{k}})\big{)}=f_{i}\big{(}t,\phi(t) \big{)},\quad\text{and}\quad\lim_{k\to\infty}f_{j}\big{(}t,\psi(s_{n_{k}}) \big{)}=f_{j}\big{(}t,\psi(t)\big{)},\]
which together with (3.5) yields (3.3) for \(t\in J\), and finishes the proof.
Now, we proceed to proving a differential inequality that will be the fundamental block to subsequently construct our comparison argument.
**Lemma 3.4**.: _Assume that_ (**H1**) _holds and that for every \(i=1,\ldots,N\) there is a bounded absolutely continuous function \(\sigma_{i}:\mathbb{R}\to\mathbb{R}^{M}\), such that \(\sigma(t)=(\sigma_{1}(t),\ldots,\sigma_{N}(t))\) solves (1.1). Then, for every \(i,j=1,\ldots,N\), the following inequality holds,_
\[\frac{1}{2}\frac{d}{dt}\left|\sigma_{i}(t)-\sigma_{j}(t)\right|^{ 2}\leq\delta_{ij}(t)\left|\sigma_{i}(t)-\sigma_{j}(t)\right|^{2}+\beta_{ij}^{ \rho}(t)+\] \[\qquad\qquad\qquad+\frac{1}{2}\sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{(}a_{jk}(t)-a_{ik}(t)\big{)}\left(\left| \sigma_{i}(t)-\sigma_{k}(t)\right|^{2}-\left|\sigma_{j}(t)-\sigma_{k}(t) \right|^{2}\right)\]
_where_
\[\delta_{ij}(t)=\alpha_{ij}^{\rho}(t)-\Big{(}a_{ij}(t)+a_{ji}(t)+\frac{1}{2} \sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{(}a_{ik}(t)+a_{jk}(t)\big{)}\Big{)}.\]
Proof.: By definition, we have that
\[\frac{1}{2}\frac{d}{dt}|\sigma_{i}(t)-\sigma_{j}(t)|^{2}=\big{\langle} \sigma_{i}(t) -\sigma_{j}(t),f_{i}\big{(}t,\sigma_{i}(t)\big{)}-f_{j}\big{(}t, \sigma_{j}(t)\big{)}\big{\rangle}+\] \[+\big{\langle}\sigma_{i}(t)-\sigma_{j}(t),\sum_{k=1}^{N}a_{ik}(t) \big{(}\sigma_{k}(t)-\sigma_{i}(t)\big{)}\big{\rangle}+\] \[-\big{\langle}\sigma_{i}(t)-\sigma_{j}(t),\sum_{k=1}^{N}a_{jk}(t) \big{(}\sigma_{k}(t)-\sigma_{j}(t)\big{)}\big{\rangle}.\]
From Lemma 3.3 and the boundedness of \(\sigma(t)\), we immediately have that for almost every \(t\in\mathbb{R}\),
\[\big{\langle}\sigma_{i}(t)-\sigma_{j}(t),f_{i}\big{(}t,\sigma_{i}(t)\big{)}-f _{j}\big{(}t,\sigma_{j}(t)\big{)}\big{\rangle}\leq\alpha_{ij}^{\rho}(t)\,| \sigma_{i}(t)-\sigma_{j}(t)|^{2}+\beta_{ij}^{\rho}(t),\]
for some \(\rho>0\). On the other hand, recalling that \(2\langle x,y\rangle=|x|^{2}+|y|^{2}-|y-x|^{2}\), the following chain of inequalities holds true
\[\begin{split}&\big{\langle}\sigma_{i}(t)-\sigma_{j}(t),\,\sum_{k=1}^{N}\big{[}a_{ik}(t)\big{(}\sigma_{k}(t)-\sigma_{i}(t)\big{)}-a_{jk}(t)\big{(}\sigma_{k}(t)-\sigma_{j}(t)\big{)}\big{]}\big{\rangle}\\&=-\big{(}a_{ij}(t)+a_{ji}(t)\big{)}\,|\sigma_{i}(t)-\sigma_{j}(t)|^{2}+\\&\quad-\sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\frac{a_{ik}(t)}{2}\big{(}|\sigma_{i}(t)-\sigma_{j}(t)|^{2}+|\sigma_{i}(t)-\sigma_{k}(t)|^{2}-|\sigma_{k}(t)-\sigma_{j}(t)|^{2}\big{)}+\\&\quad-\sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\frac{a_{jk}(t)}{2}\big{(}|\sigma_{i}(t)-\sigma_{j}(t)|^{2}+|\sigma_{k}(t)-\sigma_{j}(t)|^{2}-|\sigma_{i}(t)-\sigma_{k}(t)|^{2}\big{)}\\&=-\Big{(}a_{ij}(t)+a_{ji}(t)+\frac{1}{2}\sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{(}a_{ik}(t)+a_{jk}(t)\big{)}\Big{)}\,|\sigma_{i}(t)-\sigma_{j}(t)|^{2}\\&\quad+\frac{1}{2}\sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{(}a_{jk}(t)-a_{ik}(t)\big{)}\left(|\sigma_{i}(t)-\sigma_{k}(t)|^{2}-|\sigma_{j}(t)-\sigma_{k}(t)|^{2}\right)\end{split}\]
Gathering all the previous formulas we obtain the sought-for inequality.
In order to discuss the question of synchronization for systems like (1.1), we still need two more properties besides (**H1**). First of all, we shall assume that for every \(i=1,\ldots,N\), there is a reference trajectory \(\sigma_{i}:\mathbb{R}\to\mathbb{R}^{M}\), and a tubular neighborhood around it, towards which the dynamics of the node \(i\) converges as time increases. If the trajectories \(\sigma_{i}(\cdot)\) are in fact (locally) attractive, then we shall talk of synchronization of attracting trajectories. In order to rigorously present our assumption, let us recall the notion of nonautonomous set.
**Definition 3.5**.: A nonautonomous set is a subset of the extended phase space \(\mathcal{U}\subset\mathbb{R}\times\mathbb{R}^{M}\). A \(t\)-fiber of \(\mathcal{U}\) is defined as \(\mathcal{U}_{t}=\{x\in\mathbb{R}^{M}\mid(t,x)\in\mathcal{U}\}\). A nonautonomous set is called forward invariant if \(x(t,t_{0},\mathcal{U}_{t_{0}})\subseteq\mathcal{U}_{t}\) for all \(t>t_{0}\). In general, \(\mathcal{U}\) is said to have a topological property (such as compactness or closedness) if each fiber of \(\mathcal{U}\) has this property. We shall say that this property is uniform if it is uniformly true for every fiber.
Our assumption on local uniform ultimate boundedness of solutions reads as follows.
* (**H2**) Consider (1.1) and assume that there are \(\mathcal{U}\subset\mathbb{R}\times\mathbb{R}^{M}\), \(\mu\geq 0\), and for every \(i=1,\ldots,N\) there is a bounded absolutely continuous function \(\sigma_{i}:\mathbb{R}\to\mathbb{R}^{M}\) such that \(\sigma(t)=(\sigma_{1}(t),\ldots,\sigma_{N}(t))^{\top}\in\mathcal{U}_{t}\) solves (1.1), \(\sup_{i=1,\ldots,N}\|\sigma_{i}(\cdot)\|_{\infty}<\rho\) for some \(\rho>0\), and if additionally \(x(\cdot,t_{0},\overline{x})=(x_{1}(\cdot,t_{0},\overline{x}_{1}),\ldots,x_{N}(\cdot,t_{0},\overline{x}_{N}))^{\top}\) solves (1.1) with initial conditions \(x_{i}(t_{0})=\overline{x}_{i}\in\mathcal{U}_{t_{0}}\), then \(x(\cdot,t_{0},\overline{x})\) is defined for all \(t\geq t_{0}\) and \[\limsup_{t\to\infty}|x_{i}(t,t_{0},\overline{x}_{i})-\sigma_{i}(t)|\leq\frac{\mu}{3},\quad\text{for all }i=1,\ldots,N.\] (3.6)
In Section 6, we provide sufficient conditions for (**H2**) to hold true. The assumption (**H2**) can be alternatively read as the existence of an inflowing invariant open ball of diameter \(\mu>0\) for the networked system. Indeed, if this is true a globally defined bounded solution of (1.1) can be obtained as pullback limit of any initial condition on the boundary of such ball.
Furthermore, we shall assume that the heterogeneity of the nodes can be uniformly bounded on compact intervals of time, i.e.
* (**H3**) The set \(\{\beta^{\rho}_{ij}(\cdot+t)\in L^{1}_{loc}\mid t\in\mathbb{R},\ i,j=1,\ldots,N\}\) is \(L^{1}_{loc}\)-bounded, and call \[0\leq\mu_{1}:=\sup_{\tau\in\mathbb{R}}\int_{\tau}^{\tau+1}|\beta(s)|\,ds<\infty,\] where \(\beta(t)=(2\beta^{\rho}_{ij}(t))^{\top}_{i,j=1,\ldots,N,\,i<j}\).
### Synchronization up to a constant of the entire network
The following theorem is our main result of synchronization up to a constant for heterogeneous time-dependent linearly coupled networks. Note that, due to assumption (**H2**), any function \(x(t)=(x_{1}(t),\ldots,x_{N}(t))\) which solves (1.1) with initial data in \(\mathcal{U}\) also satisfies (3.6). Hence, in order to investigate the synchronization of the system (1.1) for initial conditions of the nodes in \(\mathcal{U}\), it is sufficient to study the asymptotic behavior of \(|\sigma_{i}(t)-\sigma_{j}(t)|\) for all \(i,j=1,\ldots,N\).
**Theorem 3.6**.: _Assume that_ (**H1**)_, (**H2**) and_ (**H3**) _hold and fix any \(M>\mu_{1}\). If there is \(t_{0}\in\mathbb{R}\) such that for all \(i,j=1,\ldots,N\), with \(i<j\), and for almost every \(t>t_{0}\),_
\[\delta_{ij}(t):=\alpha_{ij}^{\rho}(t)-\Big{(}a_{ij}(t)+a_{ji}(t)+\frac{1}{2} \sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{(}a_{jk}(t)+a_{ik}(t)\big{)}\Big{)}<0 \tag{3.7}\]
_and additionally_
\[\overline{\gamma}:=\inf_{t\in\mathbb{R}}\min_{\begin{subarray}{c}i,j=1, \ldots,N,\\ i\neq j\end{subarray}}\Big{\{}\underbrace{2|\delta_{ij}(t)|-\sum_{ \begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{|}a_{jk}(t)-a_{ik}(t)\big{|}}_{=:\gamma_{ij}(t)} \Big{\}}>-\log\left(1-\frac{\mu_{1}}{M}\right), \tag{3.8}\]
_then, for every \(\varepsilon>0\) there is \(T(\varepsilon)=\frac{1}{\overline{\gamma}}\ln\big{(}4\rho^{2}/\varepsilon\big{)}>0\) such that for all \(i,j=1,\ldots,N\)_
\[|\sigma_{i}(t)-\sigma_{j}(t)|^{2}<\varepsilon+M,\qquad\text{for $t-t_{0}>T( \varepsilon)$.}\]
_In other words, (1.1) synchronizes up to a constant in finite time._
Proof.: Firstly, let us introduce the continuous function \(\eta:\mathbb{R}\to\mathbb{R}_{+}\) defined by
\[\eta(a)=\begin{cases}0&\text{if $a\leq 0$},\\ a&\text{if $a>0$},\end{cases}\]
which will be used multiple times within this proof. Consider the synchronization errors given by the vector of \(N(N-1)/2\) components
\[\xi(t)=\big{(}\xi_{ij}(t)\big{)}_{i,j=1,\ldots,N,\,i<j}^{\top},\qquad\text{ where}\qquad\xi_{ij}(t)=|\sigma_{i}(t)-\sigma_{j}(t)|^{2}.\]
Thanks to Lemma 3.4, we have that \(\xi(t)\) is an _under-function_ with respect to the initial value problem \(u^{\prime}=E(t)u+\beta(t)\), \(u(t_{0})=\xi(t_{0})\), i.e.,
\[\dot{\xi}(t)\leq E(t)\xi(t)+\beta(t),\quad\text{for all $t>t_{0}$, $t_{0}\in\mathbb{R}$}, \tag{3.9}\]
where \(\beta(t)=(2\beta_{ij}^{\rho}(t))_{i,j=1,\ldots,N,\,i<j}^{\top}\) (see (H1)) and \(E(t)\) is the time-dependent square matrix defined row-wise as follows: let us label each of the \(N(N-1)/2\) rows by \(e^{ij}\in\mathbb{R}^{N(N-1)/2}\), where \(i,j=1,\ldots,N,\,i<j\), i.e.
\[e^{ij}(t)=\big{(}e^{ij}_{lk}(t)\big{)}_{l,k=1,\ldots,N,\,l<k};\]
then, fixed \(i,j=1,\ldots,N\), \(i<j\), we have that for \(l,k=1,\ldots,N,\,l<k\),
\[e^{ij}_{lk}(t)=\begin{cases}2\delta_{ij}(t)&\text{if $l=i$ and $k=j$},\\ \eta\big{(}a_{jk}(t)-a_{ik}(t)\big{)}&\text{if $l=i$ and $k\neq j$},\\ \eta\big{(}a_{ik}(t)-a_{jk}(t)\big{)}&\text{if $l=j$},\\ 0&\text{otherwise}.\end{cases} \tag{3.10}\]
Note that the entries of \(E\) outside the diagonal are either positive or null. In other words, the coefficients obtained through the inequality in Lemma 3.4 have been further bounded from above in the sense that only the terms with positive sign are left. Moreover, due to the inequality in (3.7), the entries on the diagonal, that is \(e^{ij}_{ij}(t)=2\delta_{ij}(t)\), are strictly negative and the matrix is row-dominant [18, Definition 7.10] for \(t\geq t_{0}\). Consequently, the linear homogeneous problem \(\dot{u}=E(t)u\) has dichotomy spectrum contained in \((-\infty,0)\) for \(t\geq t_{0}\) (and thus it admits an exponential dichotomy with projector the identity on \([t_{0},\infty)\)) [18, Theorem 7.16]. In fact, a stronger property of exponential decay holds, i.e., denoting by \(U(t,s)\) the principal matrix solution of \(\dot{u}=E(t)u\) at \(s\in\mathbb{R}\), it holds that
\[\|U(t,s)\|\leq e^{-\overline{\gamma}(t-s)},\quad\text{for all $t\geq s\geq t_{0}$}. \tag{3.11}\]
Incidentally, this means that the solution of the non-homogeneous linear problem \(u^{\prime}=E(t)u+\beta(t)\), \(u(t_{0})=\xi(t_{0})\) which is given by the variation of constants formula
\[u\big{(}t,t_{0},\xi(t_{0})\big{)}=U(t,t_{0})\xi(t_{0})+\int_{t_{0}}^{t}U(t,s) \beta(s)\,ds, \tag{3.12}\]
where the integral is understood component-wise, is defined for all \(t\geq t_{0}\) thanks to (3.11).
We shall prove that (3.9) implies \(\xi(t)\leq u\big{(}t,t_{0},\xi(t_{0})\big{)}\) for all \(t\geq t_{0}\). Considering \(\varepsilon>0\) and \(\overline{\varepsilon}=\varepsilon(1,\ldots,1)\in\mathbb{R}^{N(N-1)/2}\), note that from the continuity with respect to initial data, it is enough to prove that \(\xi(t)\ll u\big{(}t,t_{0},\xi(t_{0})+\overline{\varepsilon}\big{)}=:u(t,\varepsilon)\) for all \(t\geq t_{0}\). Assume, on the contrary, that there is a first time \(t_{1}>t_{0}\) at which equality holds for some component. For simplicity of notation, let the first component, \(\xi_{12}\), be the one attaining equality. Then,
\[\xi_{ij}(t)<u_{ij}(t,\varepsilon)\quad\text{for }t\in[t_{0},t_{1})\qquad\text{ and}\qquad\xi_{12}(t_{1})=u_{12}(t_{1},\varepsilon). \tag{3.13}\]
Denote by \(g_{\xi},g_{u}:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) the Caratheodory functions defined by
\[g_{\xi}(t,v)=2\delta_{12}(t)v+\sum_{\begin{subarray}{c}k=1\\ k\neq 1,2\end{subarray}}^{N}\Big{[}\eta\big{(}a_{2k}(t)-a_{1k}(t)\big{)}\xi_{1k}(t)+ \eta\big{(}a_{1k}(t)-a_{2k}(t)\big{)}\xi_{2k}(t)\Big{]}\]
\[g_{u}(t,v)=2\delta_{12}(t)v+\sum_{\begin{subarray}{c}k=1\\ k\neq 1,2\end{subarray}}^{N}\Big{[}\eta\big{(}a_{2k}(t)-a_{1k}(t)\big{)}u_{1k}(t, \varepsilon)+\eta\big{(}a_{1k}(t)-a_{2k}(t)\big{)}u_{2k}(t,\varepsilon)\Big{]}\]
and consider the scalar Caratheodory differential problems \(\dot{v}=g_{\xi}(t,v)\), \(\dot{v}=g_{u}(t,v)\). Due to Lemma 3.4 and the assumptions in (3.13), the following scalar differential inequalities hold true,
\[\dot{\xi}_{12}(t)\leq g_{\xi}\big{(}t,\xi_{12}(t)\big{)}\leq g_{u}\big{(}t, \xi_{12}(t)\big{)},\quad\text{for all }t\in[t_{0},t_{1}],\]
and the comparison theorem for Caratheodory scalar differential equations (see Olech and Opial [43]) yields
\[\xi_{12}(t)\leq v\big{(}t,t_{0},\xi_{12}(t_{0})\big{)}<v\big{(}t,t_{0},\xi_{1 2}(t_{0})+\varepsilon\big{)}=u_{12}(t,\varepsilon)\]
for every \(t\in[t_{0},t_{1}]\) and in particular for \(t=t_{1}\). However, this contradicts (3.13). Hence, it must be that such \(t_{1}\in\mathbb{R}\) does not exist and \(\xi(t)\ll u\big{(}t,t_{0},\xi(t_{0})+\overline{\varepsilon}\big{)}\) for all \(t\geq t_{0}\). In turn, the arbitrariness of \(\varepsilon>0\) gives us the sought-for ordering of vector solutions \(\xi(t)\leq u\big{(}t,t_{0},\xi(t_{0})\big{)}\) for all \(t\geq t_{0}\). Then, from (3.12) we immediately obtain that
\[|\xi(t)|\leq\|U(t,t_{0})\|\,|\xi(t_{0})|+\int_{t_{0}}^{t}\|U(t,s)\|\,|\beta(s) |\,ds. \tag{3.14}\]
Concerning the first term, we have that \(\|U(t,t_{0})\|\,|\xi(t_{0})|\leq 4\rho^{2}e^{-\overline{\gamma}(t-t_{0})}\), thanks to (3.11) and (**H2**). Therefore, for any \(\varepsilon>0\), it holds
\[\|U(t,t_{0})\|\,|\xi(t_{0})|<\varepsilon,\qquad\text{whenever }t-t_{0}>\frac{1}{ \overline{\gamma}}\ln\left(\frac{4\rho^{2}}{\varepsilon}\right). \tag{3.15}\]
We shall, thus, analyze the second term, the integral \(\int_{t_{0}}^{t}\|U(t,s)\|\,|\beta(s)|\,ds\). Note that, thanks
to (**H3**), \(\mu_{1}:=\sup_{\tau\in\mathbb{R}}\int_{\tau}^{\tau+1}|\beta(s)|\,ds<\infty\). Then, we have that
\[\begin{split}\int_{t_{0}}^{t}\|U(t,s)\|\,|\beta(s)|\,ds&\leq\int_{t_{0}}^{t}|\beta(s)|e^{-\overline{\gamma}(t-s)}\,ds=\int_{0}^{t-t_{0}}|\beta(t-u)|e^{-\overline{\gamma}u}\,du\\ &\leq\int_{0}^{\infty}|\beta(t-u)|e^{-\overline{\gamma}u}\,du\leq\sum_{n=0}^{\infty}\int_{n}^{n+1}|\beta(t-u)|e^{-\overline{\gamma}u}\,du\\ &\leq\sum_{n=0}^{\infty}e^{-\overline{\gamma}n}\int_{n}^{n+1}|\beta(t-u)|\,du\leq\frac{\mu_{1}}{1-e^{-\overline{\gamma}}}.\end{split} \tag{3.16}\]
Furthermore, since by assumption \(\overline{\gamma}>-\log(1-\mu_{1}/M)\), then
\[\frac{\mu_{1}}{1-e^{-\overline{\gamma}}}<M.\]
This inequality, together with (3.14) and (3.15) gives the sought-for result.
**Remark 3.7**.: A closer look at the proof of Theorem 3.6 shows that the fundamental step was to show that, under the given assumptions, the homogeneous linear system \(\dot{u}=E(t)u\) has dichotomy spectrum contained in \((-\infty,0)\). Yet, the property of row-dominance is only a sufficient condition for the existence of an exponential dichotomy with projector the identity. Therefore, it is worth noting that, although we privileged row-dominance in order to give a set of easily computable inequalities, other sufficient conditions may be just as effective in guaranteeing synchronization. This also includes the weaker requirement that the Lyapunov spectrum is contained in \((-\infty,0)\). Numerical methods to approximate the Lyapunov and the dichotomy spectrum under the assumption of integral separation can be found in [15, 16, 17, 19]. See also Remark 2.5 for further details.
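Remark 3.7 also suggests a purely computational reading of Theorem 3.6: for a given adjacency matrix, (3.7) and (3.8) are a finite list of inequalities. The following Python sketch (our own illustration; the paper's numerics were done in Matlab) evaluates \(\delta_{ij}\) and \(\gamma_{ij}\) for a static adjacency snapshot \(A\) and a constant one-sided Lipschitz bound \(\alpha\); the function name and the sample graph are hypothetical.

```python
import numpy as np

def delta_gamma(A, alpha):
    """Evaluate delta_ij as in (3.7) and gamma_ij as in (3.8) for a
    static adjacency snapshot A (N x N, zero diagonal) and a constant
    one-sided Lipschitz bound alpha (scalar or N x N array)."""
    N = A.shape[0]
    alpha = np.broadcast_to(np.asarray(alpha, dtype=float), (N, N))
    delta = np.full((N, N), np.nan)
    gamma = np.full((N, N), np.nan)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            others = [k for k in range(N) if k != i and k != j]
            coupling = A[i, j] + A[j, i] + 0.5 * sum(A[i, k] + A[j, k] for k in others)
            delta[i, j] = alpha[i, j] - coupling
            gamma[i, j] = 2.0 * abs(delta[i, j]) - sum(abs(A[j, k] - A[i, k]) for k in others)
    return delta, gamma

# Example: complete graph on N = 4 nodes with unit weights and alpha = 1.
A = np.ones((4, 4)) - np.eye(4)
delta, gamma = delta_gamma(A, alpha=1.0)
print("max delta_ij:", np.nanmax(delta))  # (3.7) requires this to be < 0
print("gamma_bar   :", np.nanmin(gamma))  # compare with -log(1 - mu1/M) as in (3.8)
```

For identical nodes (\(\mu_{1}=0\)) the requirement reduces to \(\overline{\gamma}>0\), which the sample graph satisfies.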
We notice that in our main Theorem 3.6, the synchronization error might be largely influenced by the heterogeneity of the nodes. In the next corollary we address such an issue provided that the network has a global coupling strength.
**Corollary 3.8**.: _Suppose that Theorem 3.6 holds for (1.1). Then, the same Theorem holds for the system with global coupling strength_
\[\dot{x}_{i}=f_{i}(t,x_{i})+c\sum_{k=1}^{N}a_{ik}(t)(x_{k}-x_{i}),\qquad c\geq 1 \tag{3.17}\]
_but with a synchronization error_
\[|\sigma_{i}(t)-\sigma_{j}(t)|^{2}<\varepsilon+\frac{1}{c}M. \tag{3.18}\]
Proof.: The proof is a slight adaptation of the proof of Theorem 3.6; therefore, we refer to the notation used there. First, let \(\alpha_{ij}\leq 0\) in (**H1**). In such a case, it suffices to consider \(\alpha_{ij}=0\). It follows as in the proof of Theorem 3.6 that the error dynamics \(\xi(t)\) is an _under-function_ with respect to the initial value problem \(u^{\prime}=cE(t)u+\beta(t)\). In turn, this initial value problem is smoothly equivalent, via time rescaling, to \(u^{\prime}=E(t)u+\frac{1}{c}\beta(t)\). The rest of the proof follows by the same arguments as in the proof of Theorem 3.6 but with \(\beta(t)\) replaced by \(\frac{1}{c}\beta(t)\). Next, we consider the case when \(\alpha_{ij}>0\). Since we assume that Theorem 3.6 holds for \(c=1\), it necessarily follows that \(\Big{(}a_{ij}(t)+a_{ji}(t)+\frac{1}{2}\sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{(}a_{jk}(t)+a_{ik}(t)\big{)}\Big{)}>0\). Let
\[\begin{split}\bar{\delta}_{ij}&\coloneqq\alpha_{ij}-c\Big{(}a_{ij}(t)+a_{ji}(t)+\frac{1}{2}\sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{(}a_{jk}(t)+a_{ik}(t)\big{)}\Big{)}\\ &=c\bigg{(}\frac{\alpha_{ij}}{c}-\Big{(}a_{ij}(t)+a_{ji}(t)+\frac{1}{2}\sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{(}a_{jk}(t)+a_{ik}(t)\big{)}\Big{)}\bigg{)}\leq c\,\delta_{ij}.\end{split} \tag{3.19}\]
By the previous inequality, the same arguments as above now apply to the error dynamics \(\dot{\xi}_{ij}(t)\).
Analogous results to Corollary 3.8 are known, for example, for the static case with \(a_{ij}\geq 0\)[46] provided that the internal dynamics are dissipative. Nevertheless, our result also covers the time-varying case and allows for the weights to be negative.
Let us next present a corollary that might be of additional use in practical cases. Assume that the function \(\beta\) in (**H3**) is in fact essentially bounded. Then a stronger result is available that highlights the role of the coupling strength in the network. Particularly, a result of sharp synchronization appears when \(\|\beta(\cdot)\|_{L^{\infty}}=0\).
**Corollary 3.9**.: _Under the assumptions of Theorem 3.6, if additionally \(\beta(\cdot)\in L^{\infty}\), then, for every \(\varepsilon>0\) there is \(T(\varepsilon)=\frac{1}{\overline{\gamma}}\ln\big{(}4\rho^{2}/\varepsilon\big{)}>0\) such that for all \(i,j=1,\ldots,N\)_
\[|\sigma_{i}(t)-\sigma_{j}(t)|^{2}<\varepsilon+\frac{\|\beta(\cdot)\|_{L^{ \infty}}}{\overline{\gamma}},\qquad\text{for $t-t_{0}>T(\varepsilon)$}.\]
_Moreover, if the considered system has a global coupling coefficient \(c>0\) as in (3.17), then the previous inequality reads as_
\[|\sigma_{i}(t)-\sigma_{j}(t)|^{2}<\varepsilon+\frac{\|\beta(\cdot)\|_{L^{ \infty}}}{c\,\overline{\gamma}},\qquad\text{for $t-t_{0}>T(\varepsilon)$}.\]
_If \(f_{i}=f\) for all \(i=1,\ldots,N\), and (**H2**) holds with \(\mu=0\), then (1.1) synchronizes._
Proof.: The result is an easy consequence of Theorem 3.6 once (3.16) is changed for
\[\int_{t_{0}}^{t}\|U(t,s)\|\,|\beta(s)|\,ds\leq\|\beta(\cdot)\|_{L^{\infty}}\int_{ 0}^{t-t_{0}}e^{-\overline{\gamma}u}\,du=\frac{\|\beta(\cdot)\|_{L^{\infty}}}{ \overline{\gamma}}\big{(}1-e^{-\overline{\gamma}(t-t_{0})}\big{)}.\]
Then, the case of global coupling is inherited from the proof of Corollary 3.8. Finally, the result of sharp synchronization for a network of identical agents follows immediately by taking \(\beta_{ij}(t)=0\) for all \(t\in\mathbb{R}\) and all \(i,j=1,\ldots,N\) (see also Remark 3.2).
Our last corollary is concerned with the strengthening of (**H1**) from a local to a global property, i.e. when the functions \(\alpha_{ij}^{r},\beta_{ij}^{r}\in L^{1}_{loc}(\mathbb{R},\mathbb{R})\) do not depend on \(r>0\). Note that this is for example the case when identical nodes with a global Lipschitz constant are chosen.
* (**H1*) Assume that there are functions \(\alpha_{ij},\beta_{ij}\in L^{1}_{loc}(\mathbb{R},\mathbb{R})\), with \(\beta_{ij}\) non-negative, such that (**H1**) holds with \(\alpha_{ij}^{r}=\alpha_{ij}\) and \(\beta_{ij}^{r}=\beta_{ij}\) for every \(r>0\) and every \(i,j=1,\ldots,N\).
It is immediate to check that an analogous version of Lemma 3.3 holds true also for (**H1*).
If (**H1*) is in force, then the assumption (**H2**) can be weakened in the sense that boundedness for the solution \(\sigma\) in (**H2**) becomes dispensable.
* (**H2*) Assume that (**H2**) holds, except that the solution \(\sigma(t)=(\sigma_{1}(t),\ldots,\sigma_{N}(t))^{\top}\in\mathcal{U}_{t}\) of (1.1) is not necessarily bounded.
**Corollary 3.10**.: _Assume that (**H1*), (**H2*) and (**H3**) hold. Then, with the respective additional assumptions, the results of Theorem 3.6 and Corollaries 3.8 and 3.9 still hold true._
Proof.: The results are immediate once one notices that the boundedness of \(\sigma(t)\) in (**H2**) was only needed to fix \(\rho>0\) so that (**H1**) could be used in Lemma 3.4 for the suitable \(\alpha_{ij}^{\rho},\beta_{ij}^{\rho}\in L^{1}_{loc}(\mathbb{R},\mathbb{R})\). The same argument of Lemma 3.4 can now be repeated to construct a differential inequality holding for any (possibly not bounded) solution of (1.1).
To highlight some of the results of this section, let us present a couple of examples.
**Example 3.11** (Synchronization of heterogeneous van der Pol oscillators with time-dependent perturbation parameter).: Let us consider a network of \(N\geq 2\) heterogeneous van der Pol oscillators. Each oscillator has internal dynamics given by
\[\begin{split} u_{i}^{\prime}&=v_{i}+b_{i}u_{i}- \frac{u_{i}^{3}}{3}\\ v_{i}^{\prime}&=-\varepsilon_{i}(t)u_{i}.\end{split} \tag{3.20}\]
Notice that since the individual oscillators are heterogeneous, both in amplitude and phase, we expect accordingly that \(\alpha_{ij}>0\) and \(\beta_{ij}>0\) in (**H1**). Nevertheless, in this example, we shall show that a strong enough coupling allows us to synchronize the oscillators. This observation goes hand in hand with (**H2**) since for a weak enough coupling we indeed expect the existence of a large enough inflowing invariant ball for the coupled dynamics. Finally, since the individual nodes are all oscillators, they differ by bounded quantities, hence (**H3**) holds.
For this example, we randomly choose the parameters \(b_{i}\in(\frac{1}{2},1)\) from a uniform distribution, which influence the amplitude of each oscillator. Furthermore, we let \(\varepsilon_{i}(t)=\varepsilon_{0}\left(1+\frac{1}{2}\sin(\omega_{i}t)\right)\), with \(\omega_{i}>0\) also picked uniformly at random in the interval \(\omega_{i}\in(1,2)\), and \(0<\varepsilon_{0}\ll 1\). Naturally, we can choose any other time-varying behavior of \(\varepsilon(t)\) as long as it is positive but sufficiently small, i.e., \(0<\varepsilon(t)\ll 1\) for almost all \(t\geq t_{0}\). Regarding the network, we choose a piece-wise constant adjacency matrix \(A(t)=[a_{ij}(t)]\) where \(a_{ij}\in\{0,1\}\) updates randomly every \(\Delta t\) units of time (we do ensure that the underlying graph is always connected). Finally, we also assume that the interconnection occurs on the fast timescales, meaning that the model we consider reads as
\[\begin{split} u_{i}^{\prime}&=v_{i}+b_{i}u_{i}- \frac{u_{i}^{3}}{3}+\frac{c}{N}\sum_{j=1}^{N}a_{ij}(t)(u_{j}-u_{i})\\ v_{i}^{\prime}&=-\varepsilon_{i}(t)u_{i}+\frac{c}{ N}\sum_{j=1}^{N}a_{ij}(t)(v_{j}-v_{i}).\end{split} \tag{3.21}\]
We notice that since the weights \(a_{ij}\) are nonnegative and the dynamics of the decoupled nodes are ultimately bounded, one can guarantee that Theorem 3.6, and especially Corollary 3.8, hold for \(c\) large enough. In Figure 1 we show a corresponding simulation for \(N=5\) and \(\Delta t=50\), while in Figure 2 we show the results for a similar setup, but with \(N=100\) and \(\Delta t=5\) (for practicality we only show the error dynamics for the latter one). In both cases we compare the effect of the global coupling \(c\), as described in Corollary 3.8 and verify that the synchronization error decreases as \(c\) increases. The shown pictures are representative among ten different simulations, all of which show a similar qualitative behavior.
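The figures referenced here were produced in Matlab; as a self-contained companion, the following Python sketch integrates (3.21) with a piecewise-constant random adjacency matrix. The seed, the value of \(c\), the time horizon, and the solver settings are our own arbitrary choices, not those used for Figures 1 and 2.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
N, c, dt_switch, eps0 = 5, 8.0, 50.0, 0.1
b = rng.uniform(0.5, 1.0, N)        # amplitude parameters b_i
omega = rng.uniform(1.0, 2.0, N)    # frequencies omega_i in eps_i(t)

def random_connected_adjacency():
    # resample 0/1 matrices until the undirected skeleton is connected
    while True:
        A = rng.integers(0, 2, (N, N)).astype(float)
        np.fill_diagonal(A, 0.0)
        S = ((A + A.T) > 0).astype(float)
        if np.all(np.linalg.matrix_power(np.eye(N) + S, N - 1) > 0):
            return A

snapshots = {}  # piecewise-constant A(t): one snapshot per switching interval
def A_of_t(t):
    key = int(t // dt_switch)
    if key not in snapshots:
        snapshots[key] = random_connected_adjacency()
    return snapshots[key]

def rhs(t, s):
    u, v = s[:N], s[N:]
    A = A_of_t(t)
    eps = eps0 * (1.0 + 0.5 * np.sin(omega * t))
    lap = lambda w: A @ w - A.sum(axis=1) * w   # sum_j a_ij (w_j - w_i)
    du = v + b * u - u**3 / 3.0 + (c / N) * lap(u)
    dv = -eps * u + (c / N) * lap(v)
    return np.concatenate([du, dv])

sol = solve_ivp(rhs, (0.0, 200.0), rng.standard_normal(2 * N), max_step=0.05)
err_u = np.max(np.abs(sol.y[:N, None, :] - sol.y[None, :N, :]), axis=(0, 1))
print("final sup of pairwise u-errors:", err_u[-1])
```

Increasing \(c\) in this sketch shrinks the residual error, in line with Corollary 3.8.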
In the next example, we exploit Theorem 3.6 to achieve synchronization in the case when some weights in the network can be negative. We also briefly argue how our synchronization results on a static network may persist under small enough time-varying perturbations of the weights. The latter case is further detailed in section 5.1.
Figure 1: A temporal network of \(N=5\) heterogeneous van der Pol oscillators, and update time \(\Delta t=50\), as described above. Each column corresponds to a particular value of \(c\) in (3.21). In the first row, we plot the projection to the \((u,v)\)-plane of the solutions. Since (3.21) is nonautonomous, the apparent intersections are just due to the projection. The second and third rows show the maximum of pairwise errors for each component. According to Corollary 3.8, there is a large enough \(c\) leading to synchronization. This is verified as one goes from left to right in the plots. We notice that the attractor, for example in the rightmost column, does not appear regular due to the time-dependent random switching of the network topology.
**Example 3.12** (Compensating contrarians for consensus on a ring network).: Consensus dynamics is very important in several fields of science [4; 61; 65]. Let us consider a ring network with \(N\) nodes as sketched in Figure 3. In this example, each node is a scalar \(x_{i}\in\mathbb{R}\) and interacts with its nearest 2-neighbors. We assume that the network dynamics are governed by the well-known consensus protocol \(\dot{x}=-Lx\), where \(L\) denotes the (signed) Laplacian. Component-wise, the dynamics of each node is determined by \(\dot{x}_{i}=\sum_{j=i-2}^{i+2}a_{ij}(x_{j}-x_{i})\) with \(j\in\mathbb{Z}\bmod N\), \(a_{ii}=0\), and where \(a_{ij}\neq 0\) denotes a connection from node \(j\) towards node \(i\). From now on, we stick to the previously described 2-nearest neighbor topology.
Motivated by [25], we identify \(a_{ij}>0\) with a "conformist" influence, while \(a_{ij}<0\) is referred to as a "contrarian" influence. Let us consider that there is one contrarian node. Without loss of generality, let node \(x_{1}\) be the contrarian, represented as \(a_{i1}<0\) for \(i=2,3,N-1,N\). Moreover, we assume for now that neighboring nodes to \(x_{1}\) do not influence it, that is \(a_{1j}=0\) for \(j=2,3,N-1,N\), and that the rest of the nodes are conformists, i.e., the remaining nonzero weights \(a_{ij}\) are positive. We recall that in the case where all the \(a_{ij}\)'s are positive, the dynamics of the consensus protocol lead to convergence of the solutions to some finite value. In the case of a signed Laplacian, where negative weights are allowed, determining the stability of the protocol is considerably harder. In particular, in the above setting consensus does not hold. In this example we are going to use Theorem 3.6 to determine how strongly node \(x_{2}\) would need to influence node \(x_{1}\) (that is, choose
Figure 2: Error plots analogous to those in Figure 1, corresponding to a simulation with \(N=100\) and \(\Delta t=5\). Observe that, again, as \(c\) increases the synchronization error decreases, as guaranteed by Corollary 3.8.
\(a_{12}\)) to overcome \(x_{1}\)'s negative influence so that the protocol can reach consensus.
For simplicity, let \(a_{i1}=-a\), \(a>0\), and \(a_{ij}=1\) for \(i,j\geq 2\). With the notation of Theorem 3.6, we have that \(\alpha_{ij}=0\) and \(\mu_{1}=0\). Moreover, notice that for all \(i,j\in\{4,\ldots,N-2\}\) one has \(\delta_{ij}<0\). Accounting for the symmetry \(\delta_{ij}=\delta_{ji}\) we have
\[\delta_{12} =-\left(a_{12}+\frac{3}{2}-a\right) \tag{3.22}\] \[\delta_{13}=\delta_{1N}=\delta_{1(N-1)} =-\left(\frac{a_{12}}{2}+\frac{3}{2}-a\right)\] \[\delta_{23}=\delta_{2N} =-\left(\frac{5}{2}-a\right).\]
Since \(a_{12}\) has no influence on \(\delta_{23}\), we impose the further constraint that \(a<\frac{5}{2}\). On the other hand, regarding \(\gamma\) let
\[\gamma_{ij}=2|\delta_{ij}|-\sum_{k\neq i,j}|a_{jk}-a_{ik}|. \tag{3.23}\]
Figure 3: A ring network where, _except for the contrarian node_ \(x_{1}\), all nodes interact in a bidirectional way with their nearest 2-neighbors. The contrarian node \(x_{1}\) has a negative influence on its 2-neighbors, while the neighbors do not influence \(x_{1}\). In this example we assume that all the conformist weights (black arrows) are 1, while the contrarian weights (red arrows) are \(-a\), \(a>0\). In this setup, _consensus is not achieved_. We use the results of Theorem 3.6 to determine what influence \(x_{2}\) must have on \(x_{1}\) to reach consensus.
So, accounting for the symmetry mentioned above, we look to satisfy the inequalities
\[\gamma_{12} =2\left|a_{12}+\frac{3}{2}-a\right|-3>0 \tag{3.24}\] \[\gamma_{13} =2\left|\frac{a_{12}}{2}+\frac{3}{2}-a\right|-|1-a_{12}|-2>0\] \[\gamma_{23} =2\left|\frac{5}{2}-a\right|-2>0,\]
where, as for the \(\delta_{ij}\)'s above, the rest of the \(\gamma_{ij}\)'s are all nonnegative. We notice that \(\gamma_{12}>0\) can always be satisfied by an appropriate choice of \(a_{12}\) and that since \(a_{12}\) has no influence on \(\gamma_{23}\), we have the further restriction \(a<\frac{3}{2}\). However \(\gamma_{13}>0\) further imposes that \(a<1\). With this restriction, \(\delta_{ij}<0\) for any \(a_{12}>0\). So, from Theorem 3.6, we can conclude that for \(0<a<1\) the given consensus protocol achieves consensus provided that the contrarian influence is compensated by \(a_{12}>a\). This last inequality is obtained from the requirement \(\gamma_{12}>0\).
It is evident from the analysis performed in this example that the weights \(a_{ij}\) can be time-varying. For more general details see section 5.1. Moreover, the contrarian influence does not have to be homogeneous: the result above holds as long as all contrarian weights have modulus less than \(1\). In Figure 4 we show a couple of simulations that verify our arguments with the conformist weights fixed to \(1\), the contrarian weights given by \(a_{i1}=a_{i1}(t)=-\frac{1}{2}+\frac{1}{2}\sin(\omega_{i}t)\) with uniformly distributed random frequencies, and the conformist compensation \(a_{12}=0\), on the left, and \(a_{12}=1\) on the right.
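The inequalities (3.22)-(3.24) can also be checked mechanically. As a small numerical companion (our own, with an arbitrary grid), the following Python sketch sweeps the compensation \(a_{12}\) for a few contrarian strengths \(a\) and reports the first grid value for which all \(\delta_{ij}<0\) and \(\gamma_{ij}>0\), reproducing the threshold \(a_{12}>a\) derived above.

```python
import numpy as np

def ring_conditions(a, a12):
    """Check (3.22)-(3.24) for the ring of Figure 3 with contrarian
    weight -a and conformist compensation a12."""
    d12 = -(a12 + 1.5 - a)
    d13 = -(0.5 * a12 + 1.5 - a)
    d23 = -(2.5 - a)
    g12 = 2.0 * abs(d12) - 3.0
    g13 = 2.0 * abs(d13) - abs(1.0 - a12) - 2.0
    g23 = 2.0 * abs(d23) - 2.0
    return max(d12, d13, d23) < 0 and min(g12, g13, g23) > 0

for a in (0.25, 0.5, 0.9):
    ok = [a12 for a12 in np.linspace(0.0, 3.0, 301) if ring_conditions(a, a12)]
    print(f"a = {a}: consensus certified for a12 >= {ok[0]:.2f}" if ok else f"a = {a}: never")
```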
## 4 On the synchronization of a cluster
In practical cases, the synchronization of the whole network can sometimes be excessive and one is rather interested in the behaviour of a limited portion of the network. Hereby, we show how our arguments can be used to analyze such a case as well.
**Theorem 4.1**.: _Let \(2<n\leq N\) be an integer and consider an ordered set \(J\) of \(n\) nodes from (1.1) identified by their indices, i.e. \(J=\{j_{1},\ldots,j_{n}\}\subset\{1,\ldots,N\}\), with \(j_{m}<j_{m+1}\) for all \(m=1,\ldots,n-1\). Assume that (**H2**) holds for (1.1), and that (**H1**) and (**H3**) are true for all \(i,j\in J\)._
_Furthermore, assume that \(\{a_{jk}(\cdot+t)-a_{ik}(\cdot+t)\in L^{1}_{loc}\mid t\in\mathbb{R},\,i,j\in J,k\notin J\}\) is \(L^{1}_{loc}\)-bounded, and call_
\[0\leq\mu_{2}:=2\rho^{2}\max_{\begin{subarray}{c}i,j\in J,\\ k\notin J\end{subarray}}\sup_{\tau\in\mathbb{R}}\int_{\tau}^{\tau+1}\left|a_{ jk}(s)-a_{ik}(s)\right|ds<\infty.\]
Figure 4: Simulations of the consensus protocol \(\dot{x}=-Lx\), for a ring network as in Figure 3. For all simulations, the initial conditions are randomly chosen, the conformist weights are set to \(1\), and the contrarian weights are \(a_{i1}=-\frac{1}{2}+\frac{1}{2}\sin(\omega_{i}t)\), \(i=2,3,N-1,N\), for some randomly chosen frequency \(\omega_{i}>0\). On the left we show the dynamics where no compensation is implemented, i.e. \(a_{12}=0\). One can clearly notice that not only is the overall dynamics unstable (due to the negative weights), but there is no consensus at all. On the right we verify what we deduced from Theorem 3.6: a weight \(a_{12}=1\), for example, leads to consensus.
_Having fixed any \(M>\mu_{1}+\mu_{2}(N-n)\sqrt{2n(n-1)}\), if there is \(t_{0}\in\mathbb{R}\) such that for all \(i,j\in J\) with \(i<j\) and almost every \(t>t_{0}\),_
\[\delta_{ij}(t):=\alpha_{ij}^{\rho}(t)-\Big{(}a_{ij}(t)+a_{ji}(t)+\frac{1}{2} \sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{(}a_{jk}(t)+a_{ik}(t)\big{)}\Big{)}<0\]
_and_
\[\overline{\gamma}_{J}:=\inf_{t\in\mathbb{R}}\min_{\begin{subarray}{c}i,j\in J,\\ i\neq j\end{subarray}}\Big{\{}2|\delta_{ij}(t)|-\sum_{k\in J\setminus\{i,j\}}\big{|}a_{jk}(t)-a_{ik}(t)\big{|}\Big{\}}>-\log\left(1-\frac{\mu_{1}+\mu_{2}(N-n)\sqrt{2n(n-1)}}{M}\right),\]
_then for every \(\varepsilon>0\) there is \(T(J,\varepsilon)=\frac{1}{\overline{\gamma}_{J}}\ln\big{(}4\rho^{2}/ \varepsilon\big{)}>0\) such that_
\[|\sigma_{i}(t)-\sigma_{j}(t)|^{2}<\varepsilon+M,\qquad\text{for }t-t_{0}>T(J, \varepsilon).\]
_In other words, \(J\) synchronizes up to a constant in finite time._
Proof.: For the sake of simplicity, we shall assume that \(J=\{1,\ldots,n\}\). Then, we can proceed akin to the proof of Theorem 3.6 but now considering the vector \(\zeta(t):=\big{(}|\sigma_{i}(t)-\sigma_{j}(t)|^{2}\big{)}_{i,j\in J,\,i<j}\). Thanks to Lemma 3.4, we have that \(\zeta(t)\) is an _under-function_ with respect to the initial value problem \(u^{\prime}=C(t)u+\nu_{J}(t)\), \(u(t_{0})=\zeta(t_{0})\), i.e.,
\[\dot{\zeta}(t)\leq C(t)\zeta(t)+\nu_{J}(t),\quad\text{for all }t>t_{0},\,t_{0}\in \mathbb{R},\]
where \(\nu_{J}(t)=(\nu_{ij}(t))_{i,j\in J,\,i<j}^{\top}\) is defined by
\[\nu_{ij}(t)=\eta\left(2\beta_{ij}^{\rho}(t)+\sum_{k\notin J}\big{(}a_{jk}(t)-a _{ik}(t)\big{)}\,\big{(}\xi_{ik}(t)-\xi_{jk}(t)\big{)}\right), \tag{4.1}\]
and \(C(t)\) is the time-dependent matrix constructed exactly as the matrix \(E(t)\) in the proof of Theorem 3.6, where now \(n\) appears in place of \(N\) everywhere. Now, note that, under the given assumptions, \(\dot{u}=C(t)u\) admits an exponential dichotomy on \([t_{0},\infty)\) with projector the identity and exponential rate of convergence \(\overline{\gamma}_{J}>0\); as for Theorem 3.6, the linear homogeneous system \(\dot{u}=C(t)u\) is row-dominant for \(t\geq t_{0}\) [18, Theorem 7.16]. Then, with analogous reasoning, we obtain that for all \(t\geq t_{0}\),
\[\big{(}|\sigma_{i}(t)- \sigma_{j}(t)|^{2}\big{)}_{i,j=1,\ldots,n,\,i<j}^{\top}\] \[\leq U(t,t_{0})\big{(}|\sigma_{i}(t_{0})-\sigma_{j}(t_{0})|^{2}\big{)} _{i,j=1,\ldots,n,\,i<j}^{\top}+\int_{t_{0}}^{t}U(t,s)\nu(s)\,ds,\]
where \(U(t,t_{0})\) is the principal matrix solution at \(t_{0}\) of \(\dot{u}=C(t)u\). Then, fixing \(\varepsilon>0\) and reasoning as in the proof of Theorem 3.6, but recalling that \(\nu_{J}\) is defined by (4.1), we arrive at a chain of inequalities analogous to (3.16), that is,
\[\begin{split}\int_{t_{0}}^{t}&\|U(t,s)\|\left(|\beta(s)|+\sqrt{\frac{n(n-1)}{2}}\max_{i,j\in J}\sum_{k\notin J}|a_{jk}(s)-a_{ik}(s)|\,|\xi_{ik}(s)-\xi_{jk}(s)|\right)ds\\ &\leq\int_{0}^{t-t_{0}}\Bigl{(}|\beta(t-u)|+4\rho^{2}\sqrt{\frac{n(n-1)}{2}}\max_{i,j\in J}\sum_{k\notin J}|a_{jk}(t-u)-a_{ik}(t-u)|\Bigr{)}e^{-\overline{\gamma}_{J}u}\,du\\ &\leq\sum_{m=0}^{\infty}e^{-\overline{\gamma}_{J}m}\bigl{(}\mu_{1}+\mu_{2}(N-n)\sqrt{2n(n-1)}\bigr{)}=\frac{\mu_{1}+\mu_{2}(N-n)\sqrt{2n(n-1)}}{1-e^{-\overline{\gamma}_{J}}}.\end{split}\]
Therefore, one has for every \(i,j=1,\ldots,n\),
\[|\sigma_{i}(t)-\sigma_{j}(t)|^{2}\leq\varepsilon+\frac{\mu_{1}+\mu_{2}(N-n) \sqrt{2n(n-1)}}{1-e^{-\overline{\gamma}_{J}}}<\varepsilon+M,\]
whenever \(t-t_{0}>T(J,\varepsilon)=\frac{1}{\overline{\gamma}_{J}}\ln\left(4\rho^{2}/ \varepsilon\right)>0\), which concludes the proof.
Also in the case of cluster synchronization, a sharper result can be obtained by considering a stronger assumption than the uniform \(L^{1}_{loc}\)-boundedness in the statement of Theorem 4.1, i.e. boundedness in \(L^{\infty}\).
**Corollary 4.2**.: _Under the assumptions of Theorem 4.1, if additionally \(|\beta(\cdot)|\in L^{\infty}\), and also \(a_{jk}(\cdot),a_{ik}(\cdot)\in L^{\infty}\) for all \(i,j\in J\) and \(k\notin J\), then, for every \(\varepsilon>0\) there is \(T(\varepsilon)=\frac{1}{\overline{\gamma}_{J}}\ln\left(4\rho^{2}/\varepsilon\right)>0\) such that for all \(i,j\in J\)_
\[|\sigma_{i}(t)-\sigma_{j}(t)|^{2}<\varepsilon+\frac{1}{\overline{\gamma}_{J}} \Bigl{(}\|\beta(\cdot)\|_{L^{\infty}}+2\rho^{2}(N-n)\sqrt{2n(n-1)}\max_{ \begin{subarray}{c}i,j\in J,\\ k\notin J\end{subarray}}\|a_{jk}(\cdot)-a_{ik}(\cdot)\|_{L^{\infty}}\Bigr{)},\]
_for \(t-t_{0}>T(\varepsilon)\)._
Proof.: The result is an easy consequence of Theorem 4.1 reasoning as for Corollary 3.9.
The next example highlights how the up-to-a-constant synchronization achieved in finite time through Theorem 4.1 can be used to produce recurrent patterns of cluster synchronization alternating with intervals of no synchrony.
**Example 4.3** (Clustering in a FitzHugh-Nagumo network).: To showcase Theorem 4.1, let us consider a network of heterogeneous FitzHugh-Nagumo neurons. The \(i\)-th neuron has dynamics given by
\[\dot{x}_{i} =c_{i}x_{i}-x_{i}^{3}-y_{i}+I_{i} i=1,\ldots,N. \tag{4.2}\] \[\dot{y}_{i} =\varepsilon(x_{i}+a_{i}-b_{i}y_{i}),\]
In this model \(x_{i}\) represents the \(i\)-th membrane's voltage and \(y_{i}\) the \(i\)-th recovery variable [55, 54]. Regarding the parameters, we have that \(c_{i}>0\) modulates the amplitude of oscillations, \(I_{i}\) accounts for the stimulus current, and \(\varepsilon\) stands for the difference in timescales between the voltage and the recovery variables. The parameters \(a_{i}>0\) and \(b_{i}>0\), together with \(I_{i}\), determine whether the neuron is in excitatory or in refractory mode. For this example we let the neurons be slightly heterogeneous, and unless otherwise stated, we randomly assign parameter values according to the following: \(c_{i}\in[0.75,1]\), \(a_{i}\in[-0.3,\,0.3]\), \(b_{i}\in[0.1,2]\), \(I_{i}\in[0,0.01]\) and \(\varepsilon=0.05\). We notice that with these parameters, isolated neurons may or may not oscillate. We moreover have similar arguments as in Example 3.11 regarding (**H1**), (**H2**), and (**H3**).
We set up the network as follows: an underlying connected, unweighted, and directed graph of \(N\) nodes is randomly generated by selecting an adjacency matrix \(A=[a_{ij}]_{i,j=1,\ldots,N}\) with \(a_{ij}\in\{0,1\}\), from a (discrete) uniform distribution. Below we shall specify a time-varying change in the weights of this underlying network, but we emphasize that no new edges are created. Then, two nodes are chosen at random. Let \((x_{l},y_{l})\) and \((x_{k},y_{k})\) be such neurons; we set their parameters to \((c_{l},I_{l},a_{l},b_{l})=(0.5,0.1,0.3,1.4)\) and \((c_{k},I_{k},a_{k},b_{k})=(0.75,0.15,0.3,1.4)\), which ensures that, in isolation, the \(l\) and \(k\) neurons are oscillating. Furthermore, we call _the neighbors of neuron \(l\) (resp. \(k\))_ the nodes \(i\) for which there is a directed edge \(a_{il}\) from \(l\) to \(i\) (resp. \(a_{ik}\) from \(k\) to \(i\)). The set of neighbors of neuron \(l\) (resp. \(k\)) is denoted by \(\mathcal{N}_{l}\) (resp. \(\mathcal{N}_{k}\)).
Next, the (nonzero) weights of the network are set to \(a_{ij}=\frac{1}{100}\) (see footnote 4), and the weights of the outgoing edges of the \(l\) and \(k\) neurons are updated periodically as follows:
Footnote 4: where \(A=[a_{ij}]\) is the adjacency matrix of the previously randomly generated graph, and we identify a weighted edge from \(j\) to \(i\) with \(a_{ij}\). If there is no edge from node \(j\) to node \(i\), then \(a_{ij}=0\,\forall t\).
\[a_{il}(t)=\begin{cases}\bar{a},&\sin(\omega_{l}t)\geq 0 \\ \frac{1}{100},&\sin(\omega_{l}t)<0\end{cases},\qquad a_{ik}(t)=\begin{cases} \bar{a},&\sin(\omega_{k}t)\geq 0\\ \frac{1}{100},&\sin(\omega_{k}t)<0,\end{cases} \tag{4.3}\]
for some positive frequencies \(\omega_{l}\) and \(\omega_{k}\). This example has been set up so that whenever \(a_{il}=\bar{a}\) is sufficiently large and \(a_{ik}=\frac{1}{100}\) (or vice versa), Theorem 4.1 holds for \(J\) being the neuron \(l\) together
Figure 5: A representative simulation for \(N=15\) nodes. As described in the main text, within the black/red shaded time intervals, neighboring neurons synchronize with the dashed black/red neuron. This is because during such time-frames, the conditions of Theorem 4.1 hold. Indeed one can particularly observe that, since \(k=12\) is a neighbor of \(l=9\), the \(k\)-th neuron (dashed red) has larger amplitude according to its own parameters during the red time-frames, but synchronizes with the smaller amplitude oscillator \(l=9\) (dashed black) during the black time-frames. We further notice that during the overlap of the time-frames, some trajectories also seem to synchronize. This, however, is not characterized in the example, and may very well depend on further connectivity properties. Nevertheless, notice that all neighbors of \(k=12\), except for \(1\) and \(15\), are also neighbors of \(l\). During the overlap of the time-frames we hence see a common cluster that does not include the aforementioned neighbors.
with its neighbors \(\mathcal{N}_{l}\) (or vice versa).
In Figure 5 we present a representative simulation for \(N=15\) and \(\bar{a}=3\). The frequencies \(\omega_{l}\) and \(\omega_{k}\) have been chosen so that during the time interval with a black (resp. red) background, the cluster is formed by the \(l\) (resp. \(k\)) neuron, shown as the black (resp. red) dashed curve, and its neighbors. So, indeed notice that along the "black intervals" the \(l\) neuron and its neighbors form a cluster while outside such intervals no synchronization seems to ensue (the same for the red intervals and the \(k\) neuron). Outside the black and red intervals, the network is weakly connected with \(a_{ij}=\frac{1}{100}\), except for the first 50 time-units where, for comparison, the network is disconnected. The black and red intervals overlap differently because for this simulation we chose \(\omega_{l}\) and \(\omega_{k}\) incommensurate with each other. In Figure 6 we show the nodes that are not neighbors of either the \(l\) or the \(k\) neurons.
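As a rough numerical companion to this example (not the code behind Figures 5 and 6), the following Python sketch integrates the FitzHugh-Nagumo network with the switching weights (4.3) and monitors the error between neuron \(l\) and its neighbors. The seed, the switching frequencies, the edge density, and the coupling of both components (as prescribed by (1.1)) are our own choices, and the connectivity check of the paper's setup is omitted for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
N, eps, abar = 15, 0.05, 3.0
c_ = rng.uniform(0.75, 1.0, N); a_ = rng.uniform(-0.3, 0.3, N)
b_ = rng.uniform(0.1, 2.0, N);  I_ = rng.uniform(0.0, 0.01, N)
A0 = (rng.random((N, N)) < 0.3).astype(float)   # unweighted directed graph
np.fill_diagonal(A0, 0.0)

l, k = 9, 12                                    # the two driving neurons
c_[l], I_[l], a_[l], b_[l] = 0.50, 0.10, 0.3, 1.4
c_[k], I_[k], a_[k], b_[k] = 0.75, 0.15, 0.3, 1.4
om_l, om_k = 0.05, 0.05 * np.sqrt(2.0)          # incommensurate frequencies

def weights(t):
    A = A0 / 100.0                              # weak background coupling
    A[:, l] = A0[:, l] * (abar if np.sin(om_l * t) >= 0 else 0.01)  # (4.3)
    A[:, k] = A0[:, k] * (abar if np.sin(om_k * t) >= 0 else 0.01)
    return A

def rhs(t, s):
    x, y = s[:N], s[N:]
    A = weights(t)
    lap = lambda w: A @ w - A.sum(axis=1) * w   # sum_j a_ij (w_j - w_i)
    dx = c_ * x - x**3 - y + I_ + lap(x)        # FitzHugh-Nagumo (4.2)
    dy = eps * (x + a_ - b_ * y) + lap(y)
    return np.concatenate([dx, dy])

sol = solve_ivp(rhs, (0.0, 600.0), 0.1 * rng.standard_normal(2 * N), max_step=0.1)
nbrs = np.flatnonzero(A0[:, l])                 # neighbors N_l of neuron l
cluster_err = np.max(np.abs(sol.y[nbrs] - sol.y[l]), axis=0)
print("voltage error of the l-cluster at the end of the run:", cluster_err[-1])
```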
## 5 Persistence of synchronization
One of the interesting features of Theorems 3.6 and 4.1 is the inherent robustness of the achieved synchronization against perturbations of both the dynamics of the individual nodes and the adjacency matrix. The underlying reason is the roughness of the exponential dichotomy guaranteeing synchronization in Theorems 3.6 and 4.1. In this sense, the following result is reminiscent in
Figure 6: Neurons that are neighbors of neither the \(l\) nor the \(k\) neuron.
spirit of the one in [47], although only static networks are considered there. In this brief subsection, we aim to make these relations more explicit. For simplicity of notation, we shall treat the case of the entire network, although the same ideas can be applied also to the synchronization of clusters, as we briefly highlight in Example 5.4.
**Theorem 5.1**.: _Let \(f:\mathbb{R}\times\mathbb{R}^{M}\to\mathbb{R}^{M}\) be a \(\mathfrak{L}\mathfrak{C}\) function and \(A:\mathbb{R}\to\mathbb{R}^{N\times N}\) be locally integrable, and consider the network_
\[\dot{x}_{i}=f(t,x_{i})+\sum_{k=1}^{N}a_{ik}(t)(x_{k}-x_{i}),\quad x_{i}\in \mathbb{R}^{M},\,i=1,\ldots,N,\]
_Moreover, assume that the assumptions of_ Theorem 3.6 _are satisfied with_ \(\mu=0\) _in_ (**H2**)_, and thus sharp synchronization of the entire network is achieved. The following statements are true._
* _(i) For every \(\delta>0\), if \(f_{i}:\mathbb{R}\times\mathbb{R}^{M}\to\mathbb{R}^{M}\), for \(i=1,\ldots,N\), are \(\mathfrak{L}\mathfrak{C}\) functions with_ \[\sup_{x\in\mathbb{R}^{M},\,t\in\mathbb{R}}|f(t,x)-f_{i}(t,x)|<\delta,\] _then the perturbed network (1.1) synchronizes up to the constant \(4\rho\delta\sqrt{N(N-1)/2}/\overline{\gamma}\), provided that condition (**H2**) is still satisfied with \(\mu=0\)._
* _If_ \(B:\mathbb{R}\to\mathbb{R}^{N\times N}\)_, defined by_ \(B(t)=(b_{ij}(t))_{i,j=1,\ldots,N}\) _is locally integrable and_ \(\sup_{t\in\mathbb{R}_{+}}|B(t)|<\overline{\gamma}/4\)_, then the perturbed network_ \[\dot{x}_{i}=f(t,x_{i})+\sum_{k=1}^{N}\big{(}a_{ik}(t)+b_{ik}(t)\big{)}(x_{k}-x _{i}),\quad x_{i}\in\mathbb{R}^{M},\,i=1,\ldots,N,\]
_achieves sharp synchronization._
Proof.: In order to prove the first statement, note that for all \(t\in\mathbb{R}\),
\[\langle x-y, f_{i}(t,x)-f_{j}(t,y)\rangle=\langle x-y,f_{i}(t,x)-f(t,x)\rangle+\] \[+\langle x-y,f(t,x)-f(t,y)\rangle+\langle x-y,f(t,y)-f_{j}(t,y)\rangle\] \[\leq l^{r}(t)|x-y|^{2}+2r\delta,\qquad\text{for all $x,y\in B_{r}$.}\]
For the previous chain of inequalities we have used Remark 3.2, the Cauchy-Schwarz inequality and the assumption that \(f_{i},f_{j}\in\mathfrak{L}\mathfrak{C}\). Therefore (**H1**) and (**H3**) hold true with \(\beta_{ij}(t)=2r\delta\) for all \(t\in\mathbb{R}\) and all \(i,j=1,\ldots,N\), while (**H2**) is satisfied by assumption. Hence, Corollary 3.9 applies. Noting that \(\|\beta(\cdot)\|_{L^{\infty}}=4\rho\delta\sqrt{N(N-1)/2}\), one has that for all \(i,j=1,\ldots,N\),
\[\lim_{t\to\infty}|\sigma_{i}(t)-\sigma_{j}(t)|^{2}<\frac{4\rho\delta\sqrt{N(N -1)/2}}{\overline{\gamma}}.\]
The second statement is a direct consequence of the roughness of the exponential dichotomy and it is obtained applying [14, Proposition 4.1].
### An application to time-dependent perturbations of static networks
It is well-known that strongly diffusively coupled static networks of identical nodes locally synchronize provided that the global coupling overcomes a certain threshold, see e.g., [46; 47]. The case of nonidentical nodes, under further constraints, has also been considered, see for example [70]. In this section we discuss how our theory relates to this classic result both in terms of synchronization and time-dependent perturbation. Furthermore, we showcase such relations by means of an example on a star network at the end of the section.
Firstly, we show consistency of the two theories: the inequalities of Theorem 3.6 are always verified by static strongly connected networks of identical nodes (and as a matter of fact even more general static networks), provided that the global coupling overcomes a certain threshold.
**Corollary 5.2**.: _Consider a static connected network of identical nodes satisfying \(\mathbf{(H2)}\) and \(\mathbf{(H3)}\),_
\[\dot{x}_{i}=f(x_{i})+c\sum_{k=1}^{N}a_{ik}(x_{k}-x_{i}),x_{i}\in\mathbb{R}^{M},\,i=1,\ldots,N,\]
_with \(a_{ij}\in\mathbb{R}\) for all \(i,j=1,\ldots,N\) and global coupling strength \(c>0\). Assume, furthermore, that \(2(a_{ij}+a_{ji})+\sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}\Big{(}a_{jk}+a_{ik}-\big{|}a_{jk}-a_{ik}\big{|}\Big{)}>0\) for every \(i,j=1,\ldots,N\)--e.g. this is true if the graph is strongly connected and \(a_{ij}\geq 0\) for all \(i,j=1,\ldots,N\). Then, there is \(\overline{c}>0\) such that for \(c>\overline{c}\) the assumptions of Theorem 3.6 are satisfied and the network achieves synchronization._
Proof.: The assumptions on the edges of the network imply that \(a_{ij}+a_{ji}+\frac{1}{2}\sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{(}a_{jk}+a_{ik}\big{)}>0\) for all \(i,j=1,\ldots,N\). Hence, there is \(c_{1}>0\) such that for any pair \(i,j=1,\ldots,N\) with \(i<j\),
\[\delta_{ij}=l^{\rho}-c\Big{(}a_{ij}+a_{ji}+\frac{1}{2}\sum_{ \begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{(}a_{jk}+a_{ik}\big{)}\Big{)}<0,\qquad\text{ for }c>c_{1},\]
where \(l^{\rho}\) is the Lipschitz coefficient for \(f\) on \(B_{\rho}\) and the same notation of Theorem 3.6 has been used. Since \(\beta_{ij}\) can be taken equal to zero for identical nodes (see Remark 3.2), we also have that for \(c>c_{1}\),
\[2|\delta_{ij}|-c\sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{|}a_{jk}-a_{ik}\big{|}=2c(a_{ij}+a_{ji})-2l^{ \rho}+c\sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\big{(}a_{jk}+a_{ik}-\big{|}a_{jk}-a_{ik}\big{|} \big{)}.\]
Again, the assumptions on the network topology imply that a \(\overline{c}\geq c_{1}\) exists such that both inequalities of Theorem 3.6 are satisfied for \(c>\overline{c}\) and in such a case the system synchronizes.
Of further interest is the fact that the synchronization achieved beyond the coupling threshold presented in the previous corollary is robust against small perturbations of the type discussed in Theorem 5.1.
**Corollary 5.3**.: _Consider the static network of identical nodes,_
\[\dot{x}_{i}=f(x_{i})+c\sum_{k=1}^{N}a_{ik}(x_{k}-x_{i}),x_{i}\in\mathbb{R}^{M}, \,i=1,\ldots,N,\]
_with coupling coefficients \(a_{ik}\geq 0\) for all \(i,k=1,\ldots,N\) and global coupling strength \(c\) greater than the coupling threshold \(\overline{c}\) in Corollary 5.2. Then, any sufficiently small (possibly time-dependent) perturbation in the sense of Theorem 5.1 will also produce synchronization._
Proof.: The result is a direct consequence of Theorem 5.1 and Corollary 5.2.
**Example 5.4** (Synchronization of chaotic oscillators on star networks).: Although dynamics on undirected star networks can be studied, for example, by spectral methods at least in the static case, we can also use Theorem 3.6 and the Corollaries 5.2 and 5.3 to find conditions that lead to synchronization on time-varying directed ones.
We consider the following setup: the central node (the hub), we call it \(x_{1}\), has directed outgoing edges with weights \(a_{i1}=a\in\mathbb{R}\), \(i=2,\ldots,N\), with the rest of the network, while the remainder of the nodes (the leaves, or satellites) have corresponding weight \(a_{1j}=b\in\mathbb{R}\), \(j=2,\ldots,N\); every other weight \(a_{ij}\) is zero due to the considered graph structure--a schematic representation is shown in Figure 7. We shall hereby assume that all the nodes have identical dynamics (hence \(\beta_{ij}^{r}=0\) for all \(r>0\) and all \(i,j=1,\ldots,N\)), and that a global attractor exists and \(\alpha_{ij}^{r}(t)<\alpha\in\mathbb{R}\) for all \(i,j=1,\ldots,N\) in a sufficiently big ball of radius \(r>0\) containing the global attractor. Notice, however, that if \(\alpha_{ij}(t)\leq 0\) for all \(i,j=1,\ldots,N\) and all \(t\in\mathbb{R}\), then it suffices to let \(\alpha=0\).
Under the described setting, one finds that, for any pair of satellites, i.e. \(1<i<j\),
\[0<2(a_{ij}+a_{ji})+\sum_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\Big{(}a_{jk}+a_{ik}-\big{|}a_{jk}-a_{ik}\big{|} \Big{)}=2a\quad\Leftrightarrow\quad a>0, \tag{5.1}\]
whereas, for the hub (\(i=1\)) and any other satellite (\(j>1\)),
\[0<2(a_{1j}+a_{j1})+\sum_{\begin{subarray}{c}k=1\\ k\neq 1,j\end{subarray}}^{N}\Big{(}a_{jk}+a_{1k}-\big{|}a_{jk}-a_{1k}\big{|} \Big{)}=2(b+a)+(N-2)(b-|b|), \tag{5.2}\]
and the previous inequality is satisfied if and only if
\[\text{either}\quad\text{(A)}\;\begin{cases}b<0,\\ a>-b(N-1),\end{cases}\qquad\text{or}\qquad\text{(B)}\;\begin{cases}b\geq 0,\\ a\geq-b.\end{cases}\]
Since we assume \(a>0\) (due to (5.1)), then (B) always holds true, while in case (A) (\(b<0\)), some care needs to be taken in choosing \(a\) depending on the size of the network and the modulus of \(b\). It is also straightforward to extend the previous arguments to the time-varying case \(a_{ij}=a_{ij}(t)\), provided that the \(a_{ij}\)'s are bounded, since it would suffice that for almost all \(t\geq t_{0}\),
\[a\leq\min_{i=2,\ldots,N}\left\{a_{i1}(t)\right\}\qquad\text{and}\qquad b\leq \min_{j=2,\ldots,N}\left\{a_{1j}(t)\right\}. \tag{5.3}\]
In conclusion, on a star network and provided that \(\mathbf{(H1)}\)-\(\mathbf{(H3)}\) hold, no matter the (time-varying, possibly negative) influence of the satellites, the hub can always induce synchronization if \(a>0\)
Figure 7: A representation of the star network setup used in Example 5.4. Node \(x_{1}\) is set as the hub and it has directed outgoing edges with positive weight \(a_{i1}=a>0\), \(i=2,\ldots,N\). Every other node \(x_{j}\), for \(j=2,\ldots,N\) has only one directed outgoing edge towards \(x_{1}\), which has weight \(a_{1j}=b<0\).
is sufficiently big. Moreover the synchronization error can be made as small as desired if a global coupling \(c>0\) intervenes--see Corollaries 3.8 and 5.2.
To verify our arguments, we consider a directed star network where the nodes are Lorenz systems, that is, the internal dynamics of each node is given by
\[\dot{x}_{i} =\sigma(y_{i}-x_{i}) \tag{5.4}\] \[\dot{y}_{i} =x_{i}(\rho-z_{i})-y_{i}\] \[\dot{z}_{i} =x_{i}y_{i}-\beta z_{i},\]
where, unless otherwise stated, the parameters are \(\sigma=10\), \(\rho=28\), \(\beta=\frac{8}{3}\). We focus on the less immediate case of negative edges and set the influence of the leaves on the hub to \(a_{1j}=b=-1\), \(j=2,\ldots,N\). Whenever a simulation depends on a random choice of parameters, we always show a representative among dozens of simulations. All simulations have been performed in Matlab using ODE45 with initial conditions randomly chosen near the origin. For the simulations we shall use an error
\[\hat{e}=\sqrt{e_{x}^{2}+e_{y}^{2}+e_{z}^{2}} \tag{5.5}\]
defined by
\[e_{\zeta}=e_{\zeta}(t)=\max_{ij}|\zeta_{i}(t)-\zeta_{j}(t)|,\qquad\zeta=x,y,z,\;i,j=1,\ldots,N, \tag{5.6}\]
corresponding to the maximum of pairwise errors at each time \(t\).
First, in Figure 8 we show simulations for \(N=5\) nodes, for three different scenarios; see the description in the caption. The common feature is that, following the analysis presented above, synchronization of the network can be achieved.
Finally, we consider a large network of heterogeneous Lorenz systems. For this we let \(N=200\), and all the leaves have randomly picked parameters \(\sigma_{i}\in(\sigma-1,\sigma+1)\), \(\rho_{i}\in(\rho-1,\rho+1)\), \(\beta_{i}\in(\beta-1,\beta+1)\). Similar to the second experiment in Figure 8, we let \(a_{1j}=b\left(1+\frac{1}{10}\sin(\omega_{1j}t)\right)\), \(b=-1\), and \(a_{j1}=-2(N-1)b\left(1+\frac{1}{10}\sin(\omega_{j1}t)\right)\), for some randomly set frequencies \(\omega_{ij}\in(\pi,2\pi)\), and \(c=1\). For presentation purposes we prefer to show in Figure 9 only the corresponding error (5.5).
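For completeness, here is a Python sketch (the paper's figures were produced with Matlab's ODE45) of the directed star network of identical Lorenz systems in the regime of case (A), with \(a=5\), \(b=-1\) and global coupling \(c=2\) as in the first experiment of Figure 8, reporting the error \(\hat{e}\) of (5.5); the seed, horizon, and solver settings are arbitrary choices of ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
N, sigma, rho, beta = 5, 10.0, 28.0, 8.0 / 3.0
a, b, c = 5.0, -1.0, 2.0          # hub weight a > -b(N-1) = 4, cf. case (A)

A = np.zeros((N, N))              # star topology of Figure 7
A[1:, 0] = a                      # hub x_1 drives the leaves
A[0, 1:] = b                      # leaves feed back into the hub

def rhs(t, s):
    X = s.reshape(N, 3)           # row i holds (x_i, y_i, z_i)
    F = np.empty_like(X)
    F[:, 0] = sigma * (X[:, 1] - X[:, 0])
    F[:, 1] = X[:, 0] * (rho - X[:, 2]) - X[:, 1]
    F[:, 2] = X[:, 0] * X[:, 1] - beta * X[:, 2]
    F += c * (A @ X - A.sum(axis=1)[:, None] * X)   # c * sum_k a_ik (X_k - X_i)
    return F.ravel()

sol = solve_ivp(rhs, (0.0, 40.0), 0.1 * rng.standard_normal(3 * N), max_step=0.01)
X = sol.y.reshape(N, 3, -1)
e = np.array([np.max(np.abs(X[:, m, None, :] - X[None, :, m, :]), axis=(0, 1))
              for m in range(3)])                   # e_x, e_y, e_z of (5.6)
e_hat = np.sqrt((e**2).sum(axis=0))                 # error (5.5)
print("final e_hat:", e_hat[-1])
```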
## 6 Existence of local attractors
In this final section, we provide a sufficient condition for the existence of an attracting trajectory for each equation of (1.1) so that (**H2**) is satisfied. The treatment largely owes to the theory
Figure 8: Simulations of three distinct numerical experiments for a star network of \(N=5\) Lorenz oscillators. For convenience we show only the time series of the \(z\)-coordinate on the left and the corresponding error \(\hat{e}\) (5.5) on the right. In the first plot, we let the network be disconnected for the first \(10\) time units. At \(t=10\) we connect the network with \(a_{i1}=a=5\) and \(a_{1j}=b=-1\) (see case (A) above), and global coupling \(c=2\). With these parameters, Corollary 5.2 guarantees synchronization, as verified in the plot for \(t\geq 10\). To showcase the persistence of synchronization, the second plot shows a similar setting as the first, but with all weights perturbed as \(a_{ij}\mapsto a_{ij}(1+\frac{1}{10}\sin(\omega_{ij}t))\) for some randomly chosen frequencies \(\omega_{ij}\in(\pi,2\pi)\). Note that for the considered static example \(1/10<\min_{i,j=1,\ldots,N}\overline{\gamma}_{ij}/4\) for all \(t>10\). Therefore, Corollary 5.3 applies. Finally, the third plot shows the case where \(a=a(t)=4+3\tanh\left(\frac{1}{5}(t-10)\right)\). Hence this plot shows the transition from (5.2) not holding (\(a<4\)) to where it does. Notice that since (5.1) holds, we observe that the satellites first tend to form a cluster and later transition to full synchronization.
developed by Caraballo et al. [11]. A substantial generalization intervenes in the sense that now all the considered inequalities involve locally integrable functions in place of constants and more than two coupled systems are considered. This is in line with the rest of the treatment in our work. Our fundamental inspiration for this type of generalization is the work by Longo et al. [37].
### The uncoupled problem
Given \(f\in\mathfrak{LC}\), we shall consider the following assumptions,
* (**SL1**) (_local one-sided Lipschitz continuity_) There exists a nonempty forward invariant uniformly bounded nonautonomous set \(\mathcal{U}\subset\mathbb{R}\times\mathbb{R}^{M}\) and a function \(l\in L^{1}_{loc}\) such that for almost every \(t\in\mathbb{R}\), \(B_{r}\subset\mathcal{U}_{t}\) and, \[2\langle x_{1}-x_{2},f(t,x_{1})-f(t,x_{2})\rangle\leq l(t)|x_{1}-x_{2}|^{2},\ \ \text{for all}\ x_{1},x_{2}\in\mathcal{U}_{t}.\] (6.1)
* (**SL2**) (_dissipativity_) Given (**SL1**), there are constants \(K\geq 1\) and \(\gamma>0\) such that \[\exp\left(\int_{t_{0}}^{t}l(s)\,ds\right)\leq Ke^{-\gamma(t-t_{0})},\quad\text{for any }t_{0}\leq t.\]
For practical reasons, we shall assume that the origin, denoted by the vector \(\mathbf{0}\in\mathbb{R}^{M}\), belongs to \(\mathcal{U}_{t}\) for almost every \(t\in\mathbb{R}\). This hypothesis is not restrictive: let us assume that there is a set \(S\subset\mathbb{R}\)
Figure 9: Synchronization error (5.5) for a temporal star network with \(N=200\) heterogeneous Lorenz oscillators. In this figure, for the first 10 time units, the network is disconnected. Afterwards, for \(t\geq 10\) the network is connected as described above so that Theorem 3.6 holds, leading to synchronization.
with positive measure for which \(\mathbf{0}\notin\mathcal{U}_{t}\) for all \(t\in S\). Since \(\mathcal{U}\) is nonempty, uniformly bounded and forward invariant there is at least one entire solution \(\zeta(t)\) whose graph is contained in \(\mathcal{U}\). Then, the time dependent change of variables \((t,y)=(t,x-\zeta(t))\) returns a new vector field \(\widetilde{f}\) and a new forward invariant uniformly bounded nonautonomous set \(\widetilde{\mathcal{U}}\) for which \(\mathbf{0}\in\widetilde{\mathcal{U}}_{t}\) for almost all \(t\in\mathbb{R}\). This fact will be important in the proof of some of the following results.
Next, we show that condition (**SL1**) can be extended to pairs of continuous functions, with the inequality (6.1) still holding almost everywhere.
**Proposition 6.1**.: _Let \(f\in\mathfrak{L}\mathfrak{C}\) and assume that_ (**SL1**) _holds. If \(I\subset\mathbb{R}\) is an interval and \(\phi,\psi\in C(I,\mathbb{R}^{M})\) with \(\phi(t),\psi(t)\in\mathcal{U}_{t}\) for almost every \(t\in I\), then_
\[2\langle\phi(t)-\psi(t),f\big{(}t,\phi(t)\big{)}-f\big{(}t,\psi(t)\big{)} \rangle\leq l(t)|\phi(t)-\psi(t)|^{2}\]
_for almost every \(t\in I\)._
Proof.: A proof of this statement can be obtained reasoning as for Lemma 3.3.
Note that (**SL1**) guarantees that each pair of solutions \(x(t),y(t)\) of the uncoupled problem \(\dot{x}=f(t,x)\) with initial conditions in \(\mathcal{U}\) will converge to each other in forward time. Indeed, one immediately has that
\[\frac{d}{dt}|x(t)-y(t)|^{2}\leq l(t)|x(t)-y(t)|^{2}.\]
Hence, using (**SL2**) we have that for any \(t,t_{0}\in\mathbb{R}\), with \(t\geq t_{0}\), and \(x_{0},y_{0}\in\mathcal{U}_{t_{0}}\),
\[|x(t)-y(t)|^{2}\leq Ke^{-\gamma(t-t_{0})}|x_{0}-y_{0}|^{2}.\]
In order to understand towards what these solutions are converging, we firstly have to show that (**SL1**) implies a more standard dissipative condition.
**Proposition 6.2**.: _Let \(f\in\mathfrak{L}\mathfrak{C}\) and assume that_ (**SL1**) _holds. Then, \(f\) is asymptotically dissipative, that is, for almost every \(t\in\mathbb{R}\), it satisfies_
\[2\langle x,f(t,x)\rangle\leq\alpha(t)|x|^{2}+\beta(t),\quad\text{for all }x\in \mathcal{U}_{t}, \tag{6.2}\]
_where \(\alpha(\cdot),\beta(\cdot)\in L^{1}_{loc}\) and there is \(0<\overline{\gamma}<\gamma\) such that_
\[\exp\left(\int_{t_{0}}^{t}\alpha(s)\,ds\right)\leq Ke^{-\overline{\gamma}(t- t_{0})},\quad\text{for any }t_{0}\leq t. \tag{6.3}\]
Proof.: Firstly notice that
\[\langle x,y\rangle =\frac{1}{2}(|x+y|^{2}-|x|^{2}-|y|^{2})\leq\frac{1}{2}(|x|^{2}+|y|^{ 2}+2|x||y|-|x|^{2}-|y|^{2})\] \[=\frac{1}{2}\big{(}|x|^{2}+|y|^{2}-(|x|-|y|)^{2}\big{)}\leq|x|^{2} +|y|^{2}.\]
Fix \(0<\varepsilon<\gamma/2\) and \(x\in\mathcal{U}_{t}\). Since (**SL1**) holds, we have that
\[2\langle x,f(t,x)\rangle \leq 2\langle x,f(t,\mathbf{0})\rangle+l(t)|x|^{2}\leq 2 \varepsilon|x|^{2}+\frac{2}{\varepsilon}|f(t,\mathbf{0})|^{2}+l(t)|x|^{2}\] \[=\big{(}2\varepsilon+l(t)\big{)}|x|^{2}+\frac{2}{\varepsilon}|f( t,\mathbf{0})|^{2}.\]
Denoting \(\alpha(t):=2\varepsilon+l(t)\) and \(\beta(t):=(2/\varepsilon)|f(t,\mathbf{0})|^{2}\), note that we have
\[\exp\left(\int_{t_{0}}^{t}\alpha(s)\,ds\right)=e^{2\varepsilon(t-t_{0})}\exp \left(\int_{t_{0}}^{t}l(s)\,ds\right)\leq Ke^{(2\varepsilon-\gamma)(t-t_{0})},\quad\text{for any $t_{0}\leq t$},\]
which concludes the proof with \(\overline{\gamma}=\gamma-2\varepsilon\).
It is now possible to prove that any uncoupled system \(\dot{x}=f(t,x)\) satisfying (**SL1**) admits a bounded local pullback and forward attracting trajectory, provided that the set,
\[\{\beta_{t}(\cdot)\}_{t\in\mathbb{R}}=\{|f(t+\cdot,\mathbf{0})|^{2}\mid t\in \mathbb{R}\},\]
is \(L^{1}_{loc}\)-bounded.
**Proposition 6.3**.: _Let \(f\in\mathfrak{L}\mathfrak{C}\) satisfy (**SL1**) and (**SL2**). Moreover, let \(\alpha,\beta\in L^{1}_{loc}\) be the functions provided by Proposition 6.2. If \(\{\beta_{t}(\cdot)\}_{t\in\mathbb{R}}\) is \(L^{1}_{loc}\)-bounded, then \(\dot{x}=f(t,x)\) has a bounded local pullback attractor made of a single globally defined trajectory which is also forward attracting._
Proof.: Thanks to Proposition 6.2 we have that for any \(t\in\mathbb{R}\), \(s>0\) and \(x_{0}\in\mathcal{U}_{t-s}\),
\[|x(t,t-s,x_{0})|^{2}\leq Ke^{-\overline{\gamma}s}|x_{0}|^{2}+K\int_{t-s}^{t} \beta(u)e^{-\overline{\gamma}(t-u)}\,du.\]
Reasoning as for (3.16), and taking \(x_{0}\in\mathcal{U}_{t-s}\), one obtains that
\[|x(t,t-s,x_{0})|^{2}\leq Ke^{-\overline{\gamma}s}r^{2}+\frac{K\mu}{1-e^{- \overline{\gamma}}},\]
where \(\mu:=\sup_{\tau\in\mathbb{R}}\int_{\tau}^{\tau+1}|\beta(s)|\,ds<\infty\), and \(\mathcal{U}_{s}\subset B_{r}\) for some \(r>0\) by assumption. Therefore, the ball of radius \(\varepsilon+K\mu/(1-e^{-\overline{\gamma}})\) for \(\varepsilon>0\) chosen as small as desired, pullback absorbs in finite time the fiber \(\mathcal{U}_{t-s}\). This fact implies that there is a universe of attraction contained in \(B_{r}\) that is
pullback absorbed into the ball of radius \(\varepsilon+K\mu/(1-e^{-\overline{\gamma}})\). Therefore, a unique bounded pullback attractor exists (29, Theorem 3.27). On the other hand, Proposition 6.1 and the remark thereafter imply forward convergence of all the trajectories of \(\dot{x}=f(t,x)\). Therefore, one immediately has that all the sections of the pullback attractor are singleton sets. This fact concludes the proof.
If instead of (**SL1**) and (**SL2**) the weaker conditions (6.2) and (6.3) are assumed, a weaker result of uniform ultimate boundedness of solution is obtained.
**Corollary 6.4**.: _Let \(f\in\mathfrak{L}\mathfrak{C}\), satisfying (6.2) and (6.3). If \(\{\beta_{t}(\cdot)\}_{t\in\mathbb{R}}\) is \(L^{1}_{loc}\)-bounded, then \(\dot{x}=f(t,x)\) has a bounded local pullback attractor and the rest of trajectores starting in \(\mathcal{U}\) are uniformly ultimately bounded._
Proof.: The existence of a unique bounded local pullback attractor is achieved following the same initial steps in the proof of Proposition 6.3. Now consider \(t>t_{0}\). Thanks to Proposition 6.2 and taking \(x_{0}\in\mathcal{U}_{t_{0}}\), one obtains that
\[|x(t,t_{0},x_{0})|^{2}\leq Ke^{-\overline{\gamma}(t-t_{0})}|x_{0}|^{2}+K\int_{ t_{0}}^{t}\beta(u)e^{-\overline{\gamma}(t-u)}\,du.\]
Hence, reasoning as for (3.16), and taking \(x_{0}\in\mathcal{U}_{t_{0}}\), one obtains that
\[|x(t,t_{0},x_{0})|^{2}\leq Ke^{-\overline{\gamma}(t-t_{0})}r^{2}+\frac{K\mu}{ 1-e^{-\overline{\gamma}}},\]
and for any \(\varepsilon>0\) there is a time \(T(\varepsilon)>0\) such that if \(t-t_{0}>T(\varepsilon)\) then
\[|x(t,t_{0},x_{0})|^{2}\leq\varepsilon+\frac{K\mu}{1-e^{-\overline{\gamma}}},\]
which completes the proof.
### The coupled problem
Now we turn our attention to the coupled problem (1.1). The natural question is if, despite the coupling, each node still has a pullback and forward attracting trajectory. An important role in our argument shall be played by the \(L^{1}_{loc}\) linear operator defined for any \(t\in\mathbb{R}\) by
\[\mathbb{R}^{M}\ni u\mapsto(2A(t)-Id_{N\times N}L(t))u\]
where \(L(t)=(l^{\rho}_{i}(t))_{i=1,\dots,N}^{\top}\), and for each \(i=1,\dots,N\), \(l_{i}(\cdot)\) is the \(L^{1}_{loc}\) function provided by (**SL1**). As for the uncoupled case, we firstly deal with the forward attractivity.
**Lemma 6.5**.: _Consider \(f_{i}\in\mathfrak{EC}\) for \(i=1,\ldots,N\) satisfying (**SL1**), and the networked system (1.1) where, for almost every \(t\in\mathbb{R}\), \(a_{ij}(t)\geq 0\) for all \(i,j=1,\ldots,N\), with \(i\neq j\) and \(a_{ii}(t)=0\) for all \(i=1,\ldots,N\). Then, considered any pair of absolutely continuous functions \(y(t)\) and \(z(t)\) respectively solving (1.1) with initial data \(\overline{y},\overline{z}\in\mathcal{U}_{t_{0}}^{i}\) for some \(t_{0}\in\mathbb{R}\), it holds that_
\[\big{(}|y_{i}(t)-z_{i}(t)|^{2}\big{)}_{i=1,\ldots,N}^{\top}\leq U(t,t_{0})(| \overline{y}_{i}-\overline{z}_{i}|^{2})_{i=1,\ldots,N}^{\top}\]
_where \(U(t,t_{0})\) is the principal matrix solution of \(\dot{u}=(2A(t)-Id_{N\times N}L(t))u\) at \(t_{0}\in\mathbb{R}\)._
Proof.: Note that for every \(i=1,\ldots,N\) and using the Cauchy-Schwarz inequality,
\[\frac{d}{dt}|y_{i}(t)-z_{i}(t)|^{2} =2\big{\langle}y_{i}(t)-z_{i}(t),f_{i}\big{(}t,y_{i}(t)\big{)}-f _{i}\big{(}t,z_{i}(t)\big{)}\big{\rangle}+\] \[\leq l_{i}(t)|y_{i}(t)-z_{i}(t)|^{2}-2\sum_{k=1}^{N}a_{ik}(t) \big{\langle}y_{i}(t)-z_{i}(t),y_{i}(t)-z_{i}(t)\big{\rangle}+\] \[\leq l_{i}(t)|y_{i}(t)-z_{i}(t)|^{2}-2\sum_{k=1}^{N}a_{ik}(t)|y_{i }(t)-z_{i}(t)|^{2}+\] \[\qquad\qquad+2\sum_{k=1}^{N}a_{ik}(t)\big{[}|y_{i}(t)-z_{i}(t)|^{ 2}+|y_{k}(t)-z_{k}(t)|^{2}\big{]}\] \[=l_{i}(t)|y_{i}(t)-z_{i}(t)|^{2}+2\sum_{k=1}^{N}a_{ik}(t)|y_{k}(t) -z_{k}(t)|^{2}.\]
Therefore, we have that for \(t_{0}\leq t\) where it is well-defined, the vector \((|y_{i}(t)-z_{i}(t)|^{2})_{i=1,\ldots,N}^{\top}\) is an _under-function_ with respect to the initial value problem \(u^{\prime}=(2A(t)-Id_{N\times N}L(t))u\), \(u(s)=|y(s)-z(s)|^{2}\), where \(L(t)=(l_{i}^{\rho}(t))_{i=1,\ldots,N}^{\top}\). Therefore, we can reason as for the proof of Theorem 3.6 to obtain that for all \(t>t_{0}\),
\[\big{(}|y_{i}(t)-z_{i}(t)|^{2}\big{)}_{i=1,\ldots,N}^{\top}\leq U(t,t_{0})(|y_ {i}(t_{0})-z_{i}(t_{0})|^{2})_{i=1,\ldots,N}^{\top}\]
where \(U(t,s)\) is the principal matrix solution of \(\dot{u}=(2A(t)-Id_{N\times N}L(t))u\) at \(s\in\mathbb{R}\). Incidentally, this shows that \(y(t)\) and \(z(t)\) are defined for all \(t\geq t_{0}\)
Lemma 6.5 allows to immediately obtain a sufficient condition for forward convergence of the trajectories of each node. If the linear differential problem \(\dot{u}=2(A(t)-Id_{N\times N}L(t))u\) has dichotomy spectrum contained in \((-\infty,0)\), then each node has a bounded forward attracting trajectory.
**Theorem 6.6**.: _Consider \(f_{i}\in\mathfrak{AC}\) for \(i=1,\ldots,N\) and assume that they all satisfy \(\mathbf{(SL1)}\), each within a forward invariant uniformly bounded nonautonomous set \(\mathcal{U}^{i}\subset\mathbb{R}\times\mathbb{R}^{M}\). Moreover, assume that \(\dot{u}=2(A(t)-Id_{N\times N}L(t))u\) has dichotomy spectrum contained in \((-\infty,0)\). Then, there is an absolutely continuous function \(\sigma:\mathbb{R}\to\mathbb{R}^{M\times N}\), \(t\mapsto\sigma(t)=\big{(}\sigma_{i}(t)\big{)}_{i=1,\ldots,N}\) that solves (1.1) and such that, for every \(i=1,\ldots,N\), \(\sigma_{i}(\cdot)\) is pullback attracting for_
\[\dot{x}_{i}=f_{i}(t,x_{i})+\sum_{j=1}^{N}a_{ij}(t)(\sigma_{j}(t)-x_{i}),\]
_and if \(y(t)=\big{(}y_{i}(t)\big{)}_{i=1,\ldots,N}\) solves (1.1) with \(y_{i}(t_{0})\in\mathcal{U}^{i}_{t_{0}}\) for some \(t_{0}\in\mathbb{R}\), then for every \(i=1,\ldots,N\),_
\[\lim_{t\to\infty}|y_{i}(t)-\sigma_{i}(t)|=0.\]
Proof.: Since \(\dot{u}=2(A(t)-Id_{N\times N}L(t))u\) has dichotomy spectrum contained in \((-\infty,0)\), we have that there are \(K\geq 1\) and \(\gamma>0\), such that
\[|U(t,t_{0})|\leq Ke^{-\gamma(t-t_{0})}\quad\text{for $t_{0}\leq t$},\]
where \(U(t,t_{0})\) is the principal matrix solution of \(\dot{u}=(2A(t)-Id_{N\times N}L(t))u\) at \(t_{0}\in\mathbb{R}\). Therefore, thanks to Lemma 6.5, we have that for any \(t_{0}\in\mathbb{R}\) and \(\overline{y}=(\overline{y}_{i})_{i=1,\ldots,N},\overline{z}=(\overline{z}_{i })_{i=1,\ldots,N}\in\mathbb{R}^{M}\) with \(\overline{y}_{i},\overline{z}_{i}\in\mathcal{U}^{i}_{t_{0}}\) for all \(i=1,\ldots,N\), the solutions \(y(t,t_{0},\overline{y})=\big{(}y_{i}(t,t_{0},\overline{y}_{i})\big{)}_{i=1, \ldots,N}\) and \(z(t,t_{0},\overline{z})=\big{(}z_{i}(t,t_{0},\overline{z}_{i})\big{)}_{i=1, \ldots,N}\) of (1.1) with initial data \(y(t_{0})=\overline{y}\) and \(z(t_{0})=\overline{z}\), respectively, are defined for all \(t\geq t_{0}\) and in particular,
\[\big{(}|y_{i}(t,t_{0},\overline{y}_{i})-z_{i}(t,t_{0},\overline{z}_{i})|^{2} \big{)}_{i=1,\ldots,N}^{\top}\leq Ke^{-\gamma(t-t_{0})}(|\overline{y}_{i}- \overline{z}_{i}|^{2})_{i=1,\ldots,N}^{\top},\]
for any \(t>t_{0}\). Particularly, since each nonautonomous set \(\mathcal{U}^{i}\), \(i=1,\ldots,N\), is uniformly bounded, then there is \(r>0\) such that \(\mathcal{U}^{i}_{t}\subset B_{\sqrt{r}/2}\) for all \(t\in\mathbb{R}\) and \(i=1,\ldots,N\). Then, fixed \(\varepsilon>0\), and considered \(s\geq\log(rK/\varepsilon)/\gamma=:T(r,\varepsilon)\), one has that for any \(t\in\mathbb{R}\),
\[|y_{i}(t,t-s,\overline{y}_{i})-z_{i}(t,t-s,\overline{z}_{i})|^{2}\leq \varepsilon\quad\text{for $s\geq T(r,\varepsilon)$}.\]
Consequently, the function \(\zeta(t)=(0,\ldots,0)^{\top}\in\mathbb{R}^{M}\) pullback and forward attracts all the trajectories \(\big{(}|y_{i}(t,t-s,\overline{y}_{i})-z_{i}(t,t-s,\overline{z}_{i})|^{2}\big{)} _{i=1,\ldots,N}^{\top}\) of the dynamical system induced by Proposition 3.1. In particular, this implies that there is a globally defined and absolutely continuous function \(\sigma:\mathbb{R}\to\mathbb{R}^{M\times N}\) that solves (1.1) and satisfies the thesis.
As we have recalled in Remark 2.5, row-dominance is a sufficient condition for the existence of an exponential dichotomy. Therefore, the stronger dissipativity condition than (**SL1**) guarantees the persistence of an attractor also for the coupled network as we show in the next corollary. Particularly, we shall henceforth assume that
* for every \(i=1,\ldots,N\) there exists a function \(l_{i}\in L^{1}_{loc}\) such that, (6.1) is verified and \[\sup_{\begin{subarray}{c}t\in\mathbb{R},\\ i=1,\ldots,N\end{subarray}}l_{i}(t)<0\quad\text{and}\quad\gamma=\sup_{ \begin{subarray}{c}t\in\mathbb{R},\\ i=1,\ldots,N\end{subarray}}\left\{|l_{i}(t)|-2\sum_{\begin{subarray}{c}k=1\\ k\neq i\end{subarray}}^{N}a_{ik}(t)\right\}>0.\]
**Corollary 6.7**.: _Consider the networked system (1.1) where, for almost every \(t\in\mathbb{R}\), \(a_{ij}(t)\geq 0\) for all \(i,j=1,\ldots,N\), with \(i\neq j\) and \(a_{ii}(t)=0\) for all \(i=1,\ldots,N\), and assume further that \(f_{i}\in\mathfrak{AC}\) satisfies (**SL1***) for all \(i=1,\ldots,N\). Then, the thesis of Theorem 6.6 holds true._
Proof.: The assumption (**SL1***) implies that the linear problem \(\dot{u}=(2A(t)-Id_{N\times N}L(t))u\) is row dominant and that its dichotomy spectrum is contained in \((-\infty,0)\). Therefore the thesis is a direct consequence of Theorem 6.6.
## 7 Conclusion and Discussion
In this paper we have considered general linearly and diffusely coupled networks. First, the nodes themselves, with internal dynamics \(f_{i}(t,x_{i})\) are of the Lipschitz Caratheodoy class, meaning that, in particular, continuity in time is not even required. Second, the interaction between nodes is also quite general. In essence we consider temporal interconnections, described by \(a_{ij}(t)\in\mathbb{R}\), that are locally integrable. This includes (but it is not restricted to), for example, piece-wise continuous changes in the topology of the network. Our main goal has been to provide quantitative conditions under which synchronization of the nodes is achieved. One of the main highlights of the results we provide is that they mainly depend on the network structure, i.e., on the \(a_{ij}\)'s. This offers important advantages compared to some other criteria that can, for example, depend on spectral properties,
and at the same time allows for a control approach to synchronization. Among the results we have presented we emphasize the synchronization (up to a constant) and the synchronization of clusters, of temporal networks. We have presented some further results, for example for networks with global couplings, where the synchronization error can be made arbitrarily small. Another striking feature of the developed sufficient results of synchronization is their robustness against perturbation due to the roughness of the exponential dichotomy on which they are based.
A limitation of the presented theory is that we require that all the components of (higher dimensional) nodes are connected in a uniform way--no inner-coupling matrix is considered; although it might be possible to also include this case by using weighted inner products for our proofs in analogy with the idea of the Mahalanobis distance, which is often employed in statistics in such an inner coupling between variables. In summary, as we exemplified through the paper, several quite general and practically relevant situations for synchronization can be covered by our theory. As future work, one could attempt to improve the presented results by allowing that only some of the components of the nodes are to be connected. This is reminiscent of under-actuated control laws. Another interesting extension would include nonlinear interaction functions, as well as fully adaptive networks where the network topology depends on the node dynamics and vice versa node dynamics depends on the topology [7].
Finally, we notice that since a topological theory for the construction of continuous flows for Caratheodory differential equations exists for ordinary [2, 34, 35, 37], delay [36, 38] and parabolic [39] differential problems, not only the ideas here exposed could be extended to these contexts but also additional results of propagation of synchronization in the hull could be explored.
|
2308.05963 | Metallic Quantized Anomalous Hall Effect without Chiral Edge States | The quantum anomalous Hall effect (QAHE) is a topological state of matter
with a quantized Hall resistance. It has been observed in some two-dimensional
insulating materials such as magnetic topological insulator films and twisted
bilayer graphene. These materials are insulating in the bulk, but possess
chiral edge states carrying the edge current around the systems. Here we
discover a metallic QAHE in a topological insulator film with magnetic sandwich
heterostructure, in which the Hall conductance is quantized to $e^{2}/h$, but
the longitudinal conductance remains finite. This effect is attributed to the
existence of a pair of massless Dirac cones of surface fermions, with each
contributing half of the Hall conductance due to quantum anomaly. It is not
characterized by a Chern number and not associated to any chiral edge states.
Our study offers novel insights into topological transport phenomena and
topological metallic states of matter. | Kai-Zhi Bai, Bo Fu, Zhenyu Zhang, Shun-Qing Shen | 2023-08-11T06:42:06Z | http://arxiv.org/abs/2308.05963v1 | # Metallic Quantized Anomalous Hall Effect without Chiral Edge States
###### Abstract
The quantum anomalous Hall effect (QAHE) is a topological state of matter with a quantized Hall resistance. It has been observed in some two-dimensional insulating materials such as magnetic topological insulator films and twisted bilayer graphene. These materials are insulating in the bulk, but possess chiral edge states carrying the edge current around the systems. Here we discover a metallic QAHE in a topological insulator film with magnetic sandwich heterostructure, in which the Hall conductance is quantized to \(e^{2}/h\), but the longitudinal conductance remains finite. This effect is attributed to the existence of a pair of massless Dirac cones of surface fermions, with each contributing half of the Hall conductance due to quantum anomaly. It is not characterized by a Chern number and not associated to any chiral edge states. Our study offers novel insights into topological transport phenomena and topological metallic states of matter.
_Introduction-_ QAHE is a quantum transport phenomenon in two-dimensional ferromagnetic materials where the Hall resistance is quantized to the von Klitzing constant \(h/e^{2}\) while the longitudinal resistance disappears [1; 2; 3; 4; 5; 6; 7]. The materials are band insulators in the bulk, and possess chiral edge states carrying a dispersionless electric current around the system boundary [8; 9]. The electronic band structures of the materials are characterized by the Chern number [10; 11], which equals the number of chiral edge states [12]. Over the last decade the effect has been observed experimentally in a series of topological insulator (TI) films and two-dimensional materials [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. The picture of the chiral edge states are also confirmed experimentally [25; 26]. Recently the half-quantized Hall conductance was reported in a magnetic doped TI film [27]. The power-law decay of the Hall current indicates possible existence of distinct QAHE, which is not characterized by the Chern number or chiral edge state [28; 29; 30]. This provides a possible route to explore novel types of QAHE.
A TI film hosts a pair of massless Dirac cones of electrons near the two surfaces. The exchange interaction of magnetic ions or the ferromagnetic magnetization breaks time-reversal symmetry and may manipulate the nature of the surface states [31]. Here we propose a unique type of QAHE with no chiral edge states in a magnetically doped TI film in which the Hall conductance is quantized to be \(e^{2}/h\) while the longitudinal conductance is finite. The Hall resistivity is then not quantized. The magnetically doped layers are confined near the center to form a sandwich structure as illustrated in Fig 1. Based on numerical calculation and analytical analysis of the film, it is observed that increasing the concentration \(x\) of doped Cr atoms or increasing the Zeeman field may induce a transition of the Hall conductance from 0 to \(-e^{2}/h\) meanwhile the band structure shows that no energy gap opens as the magnetically doped layer is far away from the top and bottom surfaces. Further analysis shows that the TI film hosts a pair of massless Dirac fermions, one carries \(e^{2}/2h\), and another carries \(-e^{2}/2h\) of the Hall conductance in the absence of the Zeeman field. An increasing Zeeman field drives one of the gapless Dirac cones and an accompanying gapped Dirac cone to exchange their masses, and the sign of Hall conductance changes from \(e^{2}/2h\) to \(-e^{2}/2h\). Consequently, the total Hall conductance becomes \(-e^{2}/h\) (the sign is determined by the direction of the Zeeman field). The longitudinal conductance is finite as no gap opens in the surface states, and has a minimal value when the chemical potential sweeps the Dirac point of the surface electrons. Hence there do not exist chiral edge states localized near the system boundary.
_Magnetic sandwich TI film-_ We consider a symmetric TI film with a magnetic doped layer at the center \(m\)QLX\({}_{2}\)Te\({}_{3}\)/3QLX\({}_{2-x}\)Cr\({}_{x}\)Te\({}_{3}\)/\(m\)QLX\({}_{2}\)Te\({}_{3}\) with X = (Bi, Sb) and \(m\) = 4 as shown in Fig. 1. A larger integer \(m\) does not change the main result in this proposal. Bi\({}_{2}\)Te\({}_{3}\) and Sb\({}_{2}\)Te\({}_{3}\) are prototypes of strong TIs [32]. 1QL means a quintuple layer of X and Te atoms, and is about \(1nm\) in Bi\({}_{2}\)Te\({}_{3}\). The Dirac cone of surface states was observed explicitly by the ARPES [33; 34] and was also evidenced by a series of transport measurements. The exchange interaction between the p-orbital electron from Bi and Te and magnetic ions Cr may induce a finite magnetization in X\({}_{2-x}\)Cr\({}_{x}\)Te\({}_{5}\)[2; 31]. Tuning the concentration \(x\) of Cr can change the exchange interaction, and even makes it a ferromagnetic insulator [35]. The magnetic element Cr was modulation-doped only near the center layer. The non-doped layers are thick enough such that the top and bottom surface electrons do not open energy gap. The topological nature of the band structures of Bi\({}_{2}\)Se\({}_{3}\) and Bi\({}_{2}\)Te\({}_{3}\) can be well described by the tight-binding model for the electrons of P\({}_{z,\uparrow}\) and P\({}_{z,\downarrow}\) orbitals from (Bi and Te or Se atoms near the Fermi energy [32; 36].
\[H_{TI}=\sum_{l}\Psi_{l}^{\dagger}\mathcal{M}\Psi_{l}+\sum_{l,\alpha=x,y,z}\left( \Psi_{l}^{\dagger}\mathcal{T}_{\alpha}\Psi_{l+\alpha}+\Psi_{l+\alpha}^{\dagger} \mathcal{T}_{\alpha}^{\dagger}\Psi_{l}\right) \tag{1}\]
where \(\mathcal{M}=(m_{0}-2\sum_{\alpha}t_{\alpha})\sigma_{0}\tau_{z}\), \(\mathcal{T}_{\alpha}=t_{\alpha}\sigma_{0}\tau_{z}-i\frac{\lambda_{\alpha}}{2} \sigma_{\alpha}\tau_{x}\), \(\Psi_{l}^{\dagger}\) and \(\Psi_{l}\) are the four-component creation and annihilation operators at position \(l=(l_{x},l_{y},l_{z})\). The Pauli matrices \(\sigma_{\alpha}\) and \(\tau_{\alpha}\) act on the spin and orbital indices, respectively. Adapting a model homogeneous in \(x-y\) plane leads to \(t_{\parallel}=t_{x}=t_{y}\), \(t_{\perp}=t_{z}\), \(\lambda_{\parallel}=\lambda_{x}=\lambda_{y}\), \(\lambda_{\perp}=\lambda_{z}\). The magnetic effect induced by Cr is modeled by introducing the Zeeman field along the z direction, \(V_{Z}=\sum_{l}V_{z}(l_{z})\Psi_{l}^{\dagger}\sigma_{z}\tau_{0}\Psi_{l}\). \(V_{z}(l_{z})=\alpha t_{\perp}\) in the magnetic doped layers (using \(t_{\perp}\) as a unit) with \(l_{z}=\pm 1/2,\cdots,\pm(m_{z}-1)/2\) where film thickness \(L_{z}\) and the magnetic layer thickness \(m_{z}\) are assumed to be even, and equals zero in the non-doped layers. Here we ignore the possible change of the bulk gap \(m_{0}\) in X\({}_{2-x}\)Cr\({}_{x}\)Te\({}_{3}\) caused by doping.
We consider the periodic boundary condition in the x and y direction. The band structure of the film is calculated numerically by means of the exact diagonalization method as shown in Fig. 2(a) in the absence of magnetic layers (\(\alpha=0\)) and (b) in the presence of magnetic layers (\(\alpha=0.9\)). It is observed that there exists a pair of massless Dirac fermions in both cases. The dispersions are doubly degenerated near the crossing point at \(k=0\). The presence of the Zeeman field \(\alpha\) does not open energy gap in the surface states while \(\alpha\) varies from 0 to 0.9. It is reasonable that the massless surface electrons are mainly located near the top and bottom surfaces which are far away from the magnetic ions in the magnetic layers (see Fig. S4 in Ref. [37]). After having the numerical energy eigenvalues and eigenvectors, the Hall conductance can be calculated numerically by means of the Kubo formula for electric conductivity [38]. The Hall conductance becomes nonzero in the presence of \(\alpha\) when the Fermi level crosses the conduction and valence bands with \(n>1\). As shown in Fig. 2(c), a plateau of zero Hall conductance appears near \(\mu=0\) for a weak field, while for a strong Zeeman field \(\alpha\), a flat plateau of \(\sigma_{H}=-\frac{e^{2}}{h}\) appears. Detailed calculation presented in Fig. 2(d) shows the Hall conductance changes from zero to \(-\frac{e^{2}}{h}\) with increasing the Zeeman field \(\alpha\) for fixed chemical potentials. Considering that there is no band gap while \(\alpha\) changes from 0 to 0.9, the longitudinal conductivity must be finite. Thus the appearance of the Hall conductance indicates that it differs from the conventional QAHE in an insulating phase.
Equivalent Dirac-like fermions-To explore the physical origin of the quantized Hall conductance, we study the band structure of of the film in the presence of the Zeeman field. First we adopt the Fourier transformation \(\Psi_{l_{z},\mathbf{k}}=\sum_{l_{x},l_{y}}\exp[il_{x}k_{x}+il_{y}k_{y}]\Psi_{l _{x},l_{y},l_{z}}\). The tight binding model in 1 with the Zeeman field \(H_{tot}=H_{TI}+V_{Z}\) can be split into two parts \(H_{tot}=H_{\parallel}+H_{1D}(\alpha)\). The in-plane spin-orbital coupling \(H_{\parallel}=\sum_{l_{z},\mathbf{k}}\Psi_{l_{z},\mathbf{k}}^{\dagger}\lambda _{\parallel}(\sin k_{x}\sigma_{x}+\sin k_{y}\sigma_{y})\tau_{x}\Psi_{l_{z}, \mathbf{k}}\). The part \(H_{1D}(\alpha)\) for each \(\mathbf{k}\) is equivalent to a one-dimensional TI with the k-dependent band gap \(m(\mathbf{k})=m_{0}-4t_{\parallel}\left(\sin^{2}\frac{k_{x}}{2}+\sin^{2} \frac{k_{y}}{2}\right)\) in a Zeeman field. In the case, \([\sigma_{z},H_{1D}]=0\) such that \(H_{1D}\) can be diagonalized to have a series of energy eigenvalues \(\tilde{m}_{n,\chi}(k_{x},k_{y})\) and eigenvectors \(\tilde{\Phi}_{k,n,\chi}=\sum_{l_{z}}U_{n,\chi;l_{z}}\Psi_{l_{z},k}\) with \(n=1,...,L_{z}\) and \(\chi=\pm\). The double degeneracy is caused by time-reversal symmetry and inversion symmetry. Using the eigenvectors as a new basis, we find that \(H_{tot}\) is equivalently reduced to a series of two-dimensional Dirac-like models \(H_{tot}\equiv\sum_{\mathbf{k},n,\chi=\pm 1}\tilde{\Phi}_{\mathbf{k},n,\chi}^{ \dagger}h_{n,\chi}(\mathbf{k})\tilde{\Phi}_{\mathbf{k},n,\chi}\) with
\[h_{n,\chi}(\mathbf{k})=\lambda_{\parallel}(\sin k_{x}\sigma_{x}+\sin k_{y} \sigma_{y})+\tilde{m}_{n,\chi}(\mathbf{k},\alpha)\sigma_{z}. \tag{2}\]
Figure 1: (a) Schematic of the magnetic sandwich heterostructure of a \((\mathrm{Bi},\mathrm{Sb})_{2}\mathrm{Te}_{3}\) TI film with the concentration \(x\) of magnetically doped Cr atoms. (b) A transition from two pairs of massless and massive Dirac fermions with no net Hall conductance \(\sigma_{H}=0\) at low concentration \(x\) to that with a quantized Hall conductance \(\sigma_{H}=-\frac{e^{2}}{h}\) at higher concentration \(x\) (the sign depending on the direction of magnetization). \(C\) represents the Hall conductance in the unit of \(e^{2}/h\), while color represents the sign-value of the Berry curvature with blue for minus and red for positive. The masses of a pair of massless and massive Dirac fermions (at the upper horizontal row) at lower energy exchange by increasing the concentration \(x\) while the higher energy parts of the Dirac fermions remain almost unchanged. (c) Schematic of the quantized Hall conductance \(\sigma_{xy}\) and (d) the longitudinal conductivity \(\sigma_{xx}\) as function of the chemical potential \(\mu\) at a higher doping concentration \(x\).
The energy dispersions are \(E_{n,\chi,\pm}=\pm\sqrt{\lambda_{\parallel}^{2}(\sin^{2}k_{x}+\sin^{2}k_{y})+ \tilde{m}_{n,\chi}^{2}}\) in which \(\tilde{m}_{n,\chi}\) plays a role of momentum-dependent mass term for the Dirac fermions.
In the absence of magnetic doping, i.e., \(\alpha=0\), \(H_{1D}\) can be solved exactly. For details, the solutions of the energy and wave function can be seen in Ref. [37]. The masses have a relation \(\tilde{m}_{n,+}=-\tilde{m}_{n,-}=m_{n}\), which gives rise to double degeneracy in the band structure rooted in combination of the time-reversal symmetry and inversion symmetry. For \(m(\mathbf{k})>0\), \(H_{1D}\) is topologically nontrivial, and has zero energy modes \(m_{1}=0\); for \(m(\mathbf{k})<0\), \(H_{1D}\) is topologically trivial, and the lowest energy modes \(m_{1}=m(\mathbf{k})\). Here the film is thick enough such that the finite size effect can be ignored [39]. Therefore, in Eq. (2), \(n=1\) corresponds to the pair of gapless bands shown in Fig. 2. The spatial distribution of the wave function of \(m_{1}=0\) is mainly concentrated near the top and bottom surfaces as expected. The states of nonzero \(m_{1}\) or at large \(k\) are spatially distributed in the bulk, which represents that the surface states evolve into the bulk states with the variation of the wave vector \(\mathbf{k}\). Here the complete band structure of the gapless Dirac fermions in the entire Brillouin zone consists of the surface electrons for \(m(\mathbf{k})>0\) or small \(\mathbf{k}\) and those extended in the z direction for \(m(\mathbf{k})<0\) or large \(\mathbf{k}\) (see in Fig. 3a). For \(n\geq 2\), all \(m_{n}(\mathbf{k})\) at \(\mathbf{k}=0\) are not equal to zero, which means the energy bands \(E_{n,\chi}\) open an energy gap at the point (see Section SI in Ref. [37]). For a small \(\mathbf{k}\), \(h_{n,\chi}(\mathbf{k})\simeq\lambda_{\parallel}(k_{x}\sigma_{x}+k_{y}\sigma_{y })+\chi m_{n}(0)\sigma_{z}\). In other words, all the bands can be regarded as massive Dirac fermions.
In the presence of magnetic doping, the Zeeman field \(V_{Z}\) will change the band structures by altering effective mass \(\tilde{m}\), while linear part vertical to \(z\)-direction remains unchanged due to degrees of freedom decoupling. In the basis of the energy eigenstates of \(H_{1D}(\alpha)\) at \(\alpha=0\), the Zeeman term can be expressed as \(\alpha\mathbf{I}_{S}(\mathbf{k})\tau_{0}\sigma_{z}\), where \(\mathbf{I}_{S}(\mathbf{k})\) is a \(L_{z}\times L_{z}\) matrix (see Section SII in Ref. [37]) computable numerically. Thus \(H_{1D}\) is projected into the form \(\left(\bigoplus_{n=1}^{L_{z}}m_{n}\tau_{z}+\alpha\mathbf{I}_{S}(\mathbf{k}) \tau_{0}\right)\sigma_{z}\), and further diagonalizing this provides a bijection which maps the projected Hamiltonian form into the mass term \(\oplus_{n}\tilde{m}_{n,\chi}(\mathbf{k},\alpha)\sigma_{z}\). Confining to the subspace with \(\sigma_{z}=+\), we could then track the evolution and interaction of the mass terms \(\tilde{m}_{n,\chi}\) between \(n=1\) and \(n=2\) blocks with increasing \(\alpha\) for given \(\chi\). What stands out in the process is an exotic grafting behavior signed in Fig. 3: viewing from left to right, while the masses \(\tilde{m}_{n,+}(n=1,2)\) maintain their shapes, \(\tilde{m}_{n,-}(n=1,2)\), which represent one massless Dirac cone plus one massive Dirac cone, will fully exchange their low-energy parts with increasing \(\alpha\), i.e., _massless \(\longleftrightarrow\) massive_. By increasing \(\alpha\), \(\tilde{m}_{n=1,-}\) and \(\tilde{m}_{n=2,-}\) behave as if they cross around \(\alpha_{c}\approx 0.74\) and then separate, during which detailed dynamic exchange reveals (see Section SV in Ref. [37]). On the other hand, what essentially remains unchanged is the high-energy part of each cone. Then since \(\tilde{m}_{n,\chi}\) of \(n=1,2\) are naturally assigned with opposite signs for their high-energy parts, viewing from athelow-energy perspective, their high-energy masses exchange between massless and massive cones. The induced mass exchange of the massless and massive Dirac fermions is closely associated with the sign change of the Hall conductance.
Figure 2: The band structure near the \(\Gamma\) point with \(k_{y}=0\) (a) in the absence of magnetic doping (\(\alpha=0\)) and (b) in the the presence of magnetically doping (\(\alpha=0.9\)). The gapless dispersions for the surface states in (a) and (b) are doubly degenerated. (c) The calculated Hall conductance as a function of the chemical potential \(\mu\). (d) The Hall conductance as a function of \(\alpha\) at different chemical potentials. We set the model parameters as \(\lambda_{\parallel}=0.41\) eV, \(\lambda_{\perp}=0.44\) eV, \(t_{\parallel}=0.566\) eV, \(t_{\perp}=0.4\) eV, \(m_{0}=0.28\) eV, \(a=b=1\) nm and \(c=0.5\) nm if there is no specific indication [32]. The thickness \(L_{z}=22\) and the magnetic layers \(m_{z}=6\). 1QL is about \(2c=1nm\).
_Quantized Hall conductance-_ The Hamiltonian in Eq. 2 can be expressed in terms of the spin texture \(\mathbf{d}=(\lambda_{\parallel}\sin k_{x},\lambda_{\parallel}\sin k_{y},\tilde{m}_ {n,\chi}(k_{x},k_{y}))/E_{n,+}\), \(h_{n,\chi}=E_{n,+}\mathbf{d}(\mathbf{k})\cdot\mathbf{\sigma}\). Using the Kubo formula, the Hall conductance is given by
\[\sigma_{H}=-\frac{e^{2}}{h}\frac{1}{4\pi}\int\frac{dk_{x}dk_{y}}{4\pi}\mathbf{ d}\cdot\left[\partial_{k_{x}}\mathbf{d}\times\partial_{k_{y}}\mathbf{d}\right](f_{k,+}-f_{k,-}) \tag{3}\]
where \(f_{k,\pm}=\Theta\left(\mu-E_{n,\pm}\right)\) is the Heaviside step function for Fermi-Dirac distribution at zero temperature and \(\mu\) is the chemical potential [40; 7]. For the massive Dirac fermions, the values of \(\tilde{m}_{n,\chi}\) at \(\mathbf{k}=(0,0)\) and \(\mathbf{k}=(\pi,\pi)\) have the same sign, and there does not exist band inversion in the first Brillouin zone. The bands are always topologically trivial such that the fully filled bands, i.e., \(\mu=0\), always have no Hall conductance which is consistent with the TKNN theorem [10]. For massless Dirac fermions, \(\tilde{m}_{n,\chi}=0\) near \(\mathbf{k}=0\). In the regime, \(\mathbf{d}\cdot\left[\partial_{k_{x}}\mathbf{d}\times\partial_{k_{y}}\mathbf{ d}\right]=0\) which indicates that the Berry curvature of the band vanishes. Nonzero Berry curvature comes only from the part of nonzero \(\tilde{m}_{n,\chi}\) or the regime of large \(k\). The Hall conductance is half-quantized for \(\mu\) located within the regime of \(\tilde{m}_{n,\chi}=0\), \(\sigma_{H}=\frac{e^{2}}{2h}sgn[\tilde{m}_{n,\chi}(\pi,\pi)]\). The quantization is protected by the emergent parity symmetry near the Fermi surface [29; 30].
Based on the mass-exchange picture, we have a theoretical explanation of the change of the Hall conductance induced by the Zeeman field in Fig. 2(c), (d). The film hosts a series of massive and massless Dirac fermions. For our purpose, we focus on the bands of \(n=1\) and \(n=2\) as all other massive Dirac fermions (\(n\geq 3\)) have no contribution to the Hall conductance when they are fully filled for the chemical potential near \(\mu=0\). In the absence of the Zeeman field, the film hosts a pair of massless Dirac fermions, between which one has \(+\frac{e^{2}}{2h}\) and the other has \(-\frac{e^{2}}{2h}\) due to the sign difference of the mass terms at large \(k\). The total Hall conductance is zero as expected. The presence of a weak Zeeman field does not change this situation. Nevertheless, equipped with a holistic view, when increasing the Zeeman field further, one massless Dirac fermion and one massive Dirac fermion exchange their low-energy masses, meanwhile their higher energy parts remain unchanged, but have different signs. Equivalently, the massless Dirac fermion changes the sign of massive term at higher energy viewed from a low-energy perspective. Consequently, its Hall conductance changes from \(+\frac{e^{2}}{2h}\) from \(-\frac{e^{2}}{2h}\). During the process, the other massless Dirac Fermion remains its minus half-quantized Hall conductance unchanged, and the addition of two massless Dirac fermions gives a quantized Hall conductance \(-\frac{e^{2}}{2h}-\frac{e^{2}}{2h}=-\frac{e^{2}}{h}\).
_Absence of chiral edge states_ There are no chiral edge states around the system boundary in these paired gapless Dirac fermions. The quantum Hall conductance is not governed by the Chern number and does not satisfy the conventional bulk-edge correspondence [12]. We calculated the local density states at the \(y\)-front surface of a \(y\)-opened film in Fig. 4(a), where there is clearly no dispersion that connects the lateral surface valence and conduction bands, opposite to the conventional case. This illustrates explicitly that there do not exist chiral edge states along the system boundary. The asymmetric local density of states between \(k_{x}\) and \(-k_{x}\) reflects the fact that there exists chiral edge current for the filled bulk states. The states carrying chiral edge current gradually becomes prominent when immersing into middle of \(z\) from its top surface. Furthermore, it is found that there still exists a chiral edge current whose amplitude is proportional to the chemical potential due to the time-reversal symmetry breaking caused by the Zeeman coupling [28]. As the Zeeman field is parallel with the lateral surface, the lateral surface states remain gapless. We present the spatial distribution of the electric current density in Fig. 4(b). It shows that the current density is mainly distributed around the surface of the magnetic layers, and decays quickly into the bulk, which demonstrates that the electronic transport mainly occurs on the surface. The local current density on the surface in Fig. 4(c) shows that the current density on the surface decays slowly which obviously deviates the exponential law. We fit the numerical result by using the current formula \(j_{x}(x)\propto J_{1}(2k_{F}x)/x\) in Ref. [28]. \(J_{1}(x)\) is the first Bessel function. Small deviation appears within expectation as \(\alpha\) is finite, the overall shape which hints a power-law decay away from ends along \(y\) direction, however, is also clear, as indicated in Fig. 4(c). Also, it is worth of stressing that the current is induced by the Zeeman exchange interaction, and should be dispationless. Such behavior depends heavily on the metallic nature of surface Dirac cone.
_Discussion-_ In the field theory, the massless Dirac fermions possess the parity symmetry. When the Dirac fermions are coupled to electromagnetic field, its action fails to restore the symmetry in any regularization, and is characterized by a half-quantized Hall conductance. The discussion on parity anomaly in the condensed matter dated back to early 1980s [41; 42; 43]. It has attracted extensive interests since the discovery of TIs as the massless Dirac fermions can exist on the surface [44; 45; 46]. The film here provides a platform to explore the related physics of parity anomaly. The massless Dirac fermions on the surfaces accompany with presence of nonzero zero term \(\tilde{m}_{n,\chi}\) at large \(k\), which plays a role of the regulators of Dirac fermions in the field theory. Thus the nonzero Hall conductance is just determined by the sign of \(\tilde{m}_{n,\chi}\) at \(k=0\) and large \(k\), and independent of the specific form and the amplitude of \(\tilde{m}_{n,\chi}\). In this sense, the present work reflects the physics of quantum anomaly. However, we should keep in mind that the term has already broken the parity symmetry explicitly.
This work was supported by the Research Grants Council, University Grants Committee, Hong Kong under Grant Nos. C7012-21G and 17301823 and the National Key R&D Program of China under Grant No. 2019YFA0308603.
|
2306.13475 | Cosmological Simulations of Galaxy Groups and Clusters-III: Constraining
Quasar Feedback Models with the Atacama Large Millimeter Array | The thermal Sunyaev-Zeldovich (SZ) effect serves as a direct potential probe
of the energetic outflows from quasars that are responsible for heating the
intergalactic medium. In this work, we use the GIZMO meshless finite mass
hydrodynamic cosmological simulation SIMBA (Dave et al. 2019), which includes
different prescriptions for quasar feedback, to compute the SZ effect arising
from different feedback modes. From these theoretical simulations, we perform
mock observations of the Atacama Large Millimeter Array (ALMA) in four bands
(320 GHz, 135 GHZ, 100 GHz and 42 GHz) to characterize the feasibility of
direct detection of the quasar SZ signal. Our results show that for all the
systems we get an enhancement of the SZ signal, when there is radiative
feedback, while the signal gets suppressed when the jet mode of feedback is
introduced in the simulations. Our mock ALMA maps reveal that, with the current
prescription of jet feedback, the signal goes below the detection threshold of
ALMA. We also find that the signal is higher for high redshift systems, making
it possible for ALMA and cross SZ-X-ray studies to disentangle the varying
modes of quasar feedback and their relative importance in the cosmological
context. | Avinanda Chakraborty, Suchetana Chatterjee, Mark Lacy, Soumya Roy, Samrat Roy, Rudrani Kar Chowdhury | 2023-06-23T12:30:43Z | http://arxiv.org/abs/2306.13475v1 | Cosmological Simulations of Galaxy Groups and Clusters-III: Constraining Quasar Feedback Models with the Atacama Large Millimeter Array
###### Abstract
The thermal Sunyaev-Zeldovich (SZ) effect serves as a direct potential probe of the energetic outflows from quasars that are responsible for heating the intergalactic medium. In this work, we use the GIZMO meshless finite mass hydrodynamic cosmological simulation SIMBA (Dave et al., 2019), which includes different prescriptions for quasar feedback, to compute the SZ effect arising from different feedback modes. From these theoretical simulations, we perform mock observations of the Atacama Large Millimeter Array (ALMA) in four bands (320 GHz, 135 GHz, 100 GHz and 42 GHz) to characterize the feasibility of direct detection of the quasar SZ signal. Our results show that for all the systems we get an enhancement of the SZ signal, when there is radiative feedback, while the signal gets suppressed when the jet mode of feedback is introduced in the simulations. Our mock ALMA maps reveal that, with the current prescription of jet feedback, the signal goes below the detection threshold of ALMA. We also find that the signal is higher for high redshift systems, making it possible for ALMA and cross SZ-X-ray studies to disentangle the varying modes of quasar feedback and their relative importance in the cosmological context.
0000-0002-8800-7880]Avinanda Chakraborty
0000-0002-4880-7880]Suchetana Chatterjee
0000-0002-1888-7880]Mark Lacy
0000-0002-1881-7880]Soumya Roy
0000-0002-1883-2288]Samrat Roy
0000-0002-1883-0880]Rudrani Kar Chowdhury
## 1 Introduction
Through a series of observations in the last two decades, it has been established that supermassive black holes (SMBH) residing at the centers of galaxies play a significant role in cosmic evolution of structures in the Universe (e.g., Dressler and Richstone, 1988; Kormendy, 1993; Kauffmann and Haehnelt, 2000; Ferrarese and Merritt, 2000; Gebhardt et al., 2000; Graham et al., 2001; Haring and Rix, 2004; Cattaneo et al., 2009; Kormendy and Ho, 2013; Salviander et al., 2015; Fiore et al., 2017; Mutlu-Pakdil et al., 2018; Schutte et al., 2019; de Nicola et al., 2019; Marsden et al., 2020; Magorrian et al., 1998; Richstone et al., 1998; Ferrarese and Ford, 2005; Silk and Rees, 1998; Gebhardt et al., 2000; Merritt and Ferrarese, 2001; Tremaine et al., 2002; Haring and Rix, 2004; Di Matteo et al., 2005; Aller and Richstone, 2007; Gitti et al., 2012) and hence modeling the effect of SMBH on galaxy evolution has emerged as a frontier in studies involving structure formation (e.g., Di Matteo et al., 2005, 2008; Sijacki et al., 2007; Sijacki et al., 2015; Vogelsberger et al., 2014; Khandai et al., 2015; Liu et al., 2016; Dave et al., 2019; Kar Chowdhury et al., 2020).
Effects of SMBH feedback (or active galactic nuclei; AGN feedback) have been directly observed in galaxies and clusters using multi-wavelength datasets (Roychowdhury, 2007; Kormendy and Ho, 2013; Fiore et al., 2017; Schellenberger et al., 2017; Harrison, 2017; Schutte et al., 2019; Roy et al., 2021, 2021). The effect of SMBH feedback on several observables has been explored in the literature, including the \(L_{x}-T\) relation in galaxy clusters and groups (e.g. Puchwein et al., 2010; Andersson et al., 2009; Maughan et al., 2012; Molham et al., 2020), absence of cooling flow in galaxy clusters (e.g., David et al., 2001; Peterson et al., 2003), Sunyaev-Zeldovich (SZ; Sunyaev and Zeldovich, 1972) profiles (e.g., Chatterjee et al., 2008), SZ power spectrum (e.g., Chatterjee and Kosowsky, 2007; Scannapieco et al., 2008), and star-formation properties of galaxies (e.g., Hopkins et al., 2006; Vitale et al., 2013; Costa et al., 2015; Harrison, 2017).
In the literature, AGN feedback has been broadly divided into two main modes. The scenario where a very strong quasar outburst launches hot winds in short timescales, is generally classified as the "quasar" or the "radiative" mode while comparatively lower power outflows arising from jets or relativistic plasma which operate on much longer timescales, are classified as "radio" or "kinetic" mode. But the specific or distinct roles that these modes play in the evolution of the host galaxy, are still unclear. As proposed before, an effective way to detect hot outflows can be the SZ effect (Natarajan and Sigurdsson, 1999; Aghanim et al., 1999; Yamada et al., 1999; Lapi et al., 2003; Platania et al., 2002;
Roychowdhury et al., 2005; Chatterjee & Kosowsky, 2007; Scannapieco et al., 2008; Chatterjee et al., 2008; Zanni et al., 2005; Sijacki et al., 2007). The thermal Sunyaev-Zeldovich (tSZ) effect is the spectral distortion of the cosmic microwave background (CMB) radiation arising from the inverse Compton scattering of the CMB photons by the high-energy electrons present along the line of sight (Sunyaev & Zeldovich, 1972). It serves as a probe for accumulations of hot gas in the Universe (see Dutta Chowdhury & Chatterjee 2017 and references therein)
Previous studies predicted that the integrated SZ effect gets enhanced due to the presence of hot gas in the vicinity of a quasar owing to its feedback mechanism (Nataarajan & Sigurdsson, 1999; Chatterjee & Kosowsky, 2007; Scannapieco et al., 2008; Chatterjee et al., 2008). The effect was predicted to be directly detectable using mm wave interferometric experiments like the Atacama Large Millimeter Array (ALMA) and statistical techniques using CMB temperature maps (Chatterjee & Kosowsky, 2007). Chatterjee et al. (2010) reported a 1.5\(\sigma\) lower limit of the signal using the quasar catalog of SDSS and WMAP CMB maps. In the same study, it was predicted that, high-resolution CMB experiments will provide better constraints on this effect.
Following the first study, several other teams have tried to detect this signal using the Planck Surveyor Satellite (Ruan et al., 2015; Verdier et al., 2016), the Atacama Cosmology Telescope (ACT; Crichton et al., 2016) and the South Pole Telescope (SPT; Spacek et al., 2016). Moreover, in addition to these cross-correlation studies we now have the very first detection of SZ effect from quasar feedback using the ALMA compact configuration (Lacy et al., 2019). There are a number of upcoming proposals to employ this new technique in understanding the connection between SMBH and the gas distribution in their host galaxies (e.g., Mroczkowski et al., 2019). In this work, we employ high resolution cosmological simulations from Dave et al. (2019) to test for the feasibility of direct detection of AGN feedback using SZ observations.
Using the same set of simulations, in a companion paper Kar Chowdhury et al. (2022) report that the jet/kinetic mode of feedback plays the most significant role in suppressing the X-ray signal in groups and clusters, but the relative contribution from the radiative and the kinetic modes can not be fully determined through X-ray observations only. In this work, we construct the SZ distortion maps and provide a robust machinery to evaluate the relative contributions of radiative and kinetic modes through current and future observations. Our technique also provides a feasible tool to study signatures of AGN feedback in high redshift systems. The paper is organized as follows. In SS2 we briefly discuss the simulation and the methodology for constructing the tSZ maps and present our results in SS3. In SS4 we discuss the implications of our results in light of current and future observations.
## 2 Simulation
For this work, we have used the SIMBA simulation (Dave et al., 2019). SIMBA is one of the most updated cosmologi
Figure 1: The theoretical simulated tSZ maps for different feedback modes (left most: no feedback, second left: no jet feedback, second right: no X-ray feedback, rightmost: all feedback) around similar BHs at z\(\sim\)1 from Dávé et al. (2019). **Top Panel** Simulated tSZ map at 320 GHz of the most massive black hole for no feedback, no jet feedback, no X-ray feedback and all feedback modes respectively. **Bottom Panel** Same as the top panel but now for most active black hole. From the figure we see that addition of radiative feedback and jet feedback have opposing effects (enhancement versus decrement) in the SZ signal. Table 1 and 2 summarizes the feedback model nomenclature and the black hole properties respectively.
cal simulations which is the next generation of the MUFASA cosmological galaxy formation simulations that runs with GIZMO's meshless finite mass hydrodynamics. Cosmological parameters of the simulation are adopted from (Planck Collaboration et al., 2016). The simulation box is \(50h^{-1}\) Mpc
### Modeling AGN Feedback Modes
The black holes (BHs) are seeded and considered to be collisionless sink particles, which can grow by accreting the surrounding gas or by merging with other BHs. The accretion rate of the BHs is modelled via two processes, one is torque-limited accretion model (Angles-Alcazar et al., 2017) and the other is the Bondi accretion model (Bondi, 1952; Hoyle & Lyttleton, 1939). Hence the total accretion rate for a given black hole is given as,
\[\dot{M_{BH}}=(1-\eta)\times(\dot{M_{Torque}}+\dot{M_{Bondi}})\]
where \(\eta\) is the radiative efficiency of accretion (see Appendix B for more discussions on the accretion models). The accretion rate in the torque-limited model can be mildly super Eddington, but the value is never allowed to exceed 3 times the Eddington rate so that it remains consistent with other works (Martinez-Aldama et al., 2018; Jiang et al., 2014) but for the Bondi accretion model, black holes are not allowed to exceed the Eddington limit. See (Dave et al., 2019, D19 hereafter) and Kar Chowdhury et al. (2022) for more details.
A kinetic subgrid model has been incorporated for black hole feedback along with X-ray energy feedback (see Ap
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multicolumn{1}{c}{ Feedback mode} & Configuration \\ \hline All feedback & Radiative+jet+X-ray \\ No-jet feedback & Radiative \\ No X-ray feedback & Radiative+jet \\ \hline \end{tabular}
\end{table}
Table 1: Different Feedback Modes
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline & Feedback & z & BH Mass & Accretion & Halo Mass \\ & & & \((M_{\odot})\) & \((M_{\odot}/\mathrm{yr})\) & \((log_{10}(h^{-1}M_{\odot}))\) \\ \hline Massive & No & 0.99 & \(\sim 6\times 10^{9}\) & 0.07 & 13.6 \\ & & 0.016 & \(\sim 10^{10}\) & 0.032 & 13.9 \\ & No-jet & 0.99 & \(\sim 6\times 10^{9}\) & 0.01 & \\ & & 0.016 & \(\sim 10^{10}\) & 0.029 & \\ & No X-ray & 0.99 & \(\sim 6\times 10^{9}\) & 0.0003 & \\ & & 0.016 & \(\sim 10^{10}\) & 0.00036 & \\ & All & 0.99 & \(\sim 6\times 10^{9}\) & 0.002 & \\ & & 0.016 & \(\sim 10^{10}\) & 0.000024 & \\ \hline Active & No & 0.99 & \(\sim 10^{9}\) & 0.13 & 13.4 \\ & & 0.016 & \(\sim 6\times 10^{9}\) & 0.083 & 13.6 \\ & No-jet & 0.99 & \(\sim 10^{9}\) & 0.11 & \\ & & 0.016 & \(\sim 5\times 10^{9}\) & 0.12 & \\ & No X-ray & 0.99 & \(\sim 10^{9}\) & 0.07 & \\ & & 0.016 & \(\sim 10^{9}\) & 0.004 & \\ & All & 0.99 & \(\sim 3\times 10^{9}\) & 0.05 & \\ & & 0.016 & \(\sim 5\times 10^{9}\) & 0.0065 & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Black hole Parameters for Fig. 1 and Fig. 2
Figure 2: Same as in Fig. 1 but at z=0.016. **Top Panel** Simulated tSZ map at 320 GHz of the most massive black hole for no, no-jet no X-ray and all feedback modes respectively. **Bottom Panel** Same as the top panel but now for most active black hole. Table 1 and 2 summarizes the feedback model nomenclature and the black hole properties respectively. The results are same as we find in Fig. 1 for the high redshift blackholes. Inclusion of the radiative mode of feedback enhances the SZ signal while adding the jet mode decreases it. This is in stark contrast to the X-ray signal where both modes suppresses the X-ray flux (see Kar Chowdhury et al., 2022)
pendix B for more details). The motivation for the kinetic feedback model comes from the observed dichotomy which includes a "radiative mode" at high Eddington ratios and a "jet mode" at low Eddington ratios in black hole growth modes that is reflected in their outflow characteristics (e.g., Heckman & Best, 2014). The kinetic mass outflow rate is taken directly from the Feedback in Realistic Environments (FIRE; Hopkins et al., 2014, 2018; Muratov et al., 2015) high-resolution simulations which provides a synergy between cosmological-scale simulations of galaxy populations and the interstellar medium (ISM) resolving simulations of individual galaxies. Apart from kinetic feedback, energy input into surrounding gas for the photoionization heating from the X-ray photons coming from the accretion disk of the AGN is also included (Choi et al., 2012).
AGN feedback can be modelled and tested by various observations, but the origin of the seed SMBH remains unknown. Also, we are limited by the resolution of the simulation for probing the relevant length scales. Hence, to seed black holes in galaxies dynamically during the simulation a on-the-fly Friends-of-Friends (FoF) algorithm is used (e.g., Di Matteo et al., 2008; Angles-Alcazar et al., 2017). The star particle closest to the center of mass of a galaxy is converted into a black hole particle if the galaxy does not already contain a black hole and reaches a stellar mass \(M_{*}>\gamma_{BH}\times M_{seed}\). Here \(M_{seed}=10^{4}M_{\odot}/h\) and \(\gamma_{BH}=3\times 10^{5}\), and the threshold stellar mass is \(M_{*}\geq 10^{9.5}M_{\odot}\) for the fiducial simulations. The nomenclature for different feedback models is summarized in Table 1.
### Construction of Sunyaev-Zeldovich Maps
One of the main goals of our work is to simulate the SZ effect arising from different feedback modes and study their redshift evolution.
#### 2.2.1 Sample Selection
To perform our analysis we have made use of the SIMBA halo catalog. We have restricted our analysis to central black holes within the halos. The identification of central and satellite black holes are done based on the SIMBA selection criterion, where SMBHs residing in central galaxies are considered to be central black holes while SMBHs residing in satellite galaxies are considered to be satellite black holes. We have selected our central black holes to have mass \(\geq 10^{7}\)h\({}^{-1}\)M\({}_{\odot}\), residing in halos of mass \(\geq 10^{12}\)h\({}^{-1}\)M\({}_{\odot}\) at all redshifts (Kar Chowdhury et al., 2022). The cut-off mass are motivated from the mass resolutions in the simulation. The distribution of halo mass are shown in (Kar Chowdhury et al., 2022). To ensure that the same black hole is identified for different feedback mode runs, we used the method of tracking the host halo in the SIMBA catalog. The host halo mass of the black holes remain same for different feedback modes. In this work we selected two representative black holes from the simulation, the most massive (highest mass) and the most active (highest accretion rate) ones at z\(=0.016\) and z\(=1.0\). The properties of the black holes are summarized in Table 2.
To obtain the SZ map around the black holes, a projected direction is chosen and then the SZ signal arising from all the elements along the line of sight is integrated for all the pixels in the map. The change in the intensity of the CMB due to
Figure 3: The theoretical SZ radial profiles of the most massive and most active BHs corresponding to Figs. 1 and 2. **Top Left** Radial profile for the most massive BH for no, no-jet, no X-ray, and all feedback modes respectively at z\(\sim\)1 (see Table 1 for nomenclature and Table 2 for black hole properties). **Top Right** Same as the upper left panel but now at z=0.016. **Bottom Left** Same for the most active BH at z\(\sim\)1. **Bottom Right** Same as the lower left panel but now at z=0.016. We note that for all four cases we hardly see any significant difference between no feedback and no-jet feedback modes but no X-ray feedback and all feedback modes are separable. A significant suppression of the signal occurs when the jet mode of feedback is introduced in the model.
\begin{table}
\begin{tabular}{c c} \hline \hline Parameters & Values \\ \hline incell & 0.5 arcsec \\ incenter & 320 GHz/135 GHz/100 GHz/42 GHz \\ inwidth & 7.5 GHz \\ integration & 30s \\ mapsize & 10 arcsecs \\ antennalist & alma.cycle 8.1.cfg \\ totaltime & 3h \\ pwv & 0.5 \\ imsize & 300 \\ cell & 0.1 arcsec/0.23 arcsec/0.32 arcsec/0.76 arcsec \\ niter & 1000 \\ \hline \end{tabular}
\end{table}
Table 3: CASA Parameters for ‘simalma’
the tSZ effect is given by Sazonov & Sunyaev (1998) :
\[\Delta I(x)=\frac{2k_{B}T_{CMB}}{\lambda^{2}}\frac{x^{2}e^{x}}{(e^{x}-1)^{2}} \frac{k_{B}\Theta_{T}}{m_{e}c^{2}}f_{1}(x)\int\mathrm{d}ln_{e}(I)T_{e}(I) \tag{1}\]
where \(x=h\nu/(k_{B}T_{CMB})\), and the integral is along the line of sight direction specified by the direction of projection and \(f_{1}(x)=x\coth(x/2)-4\) stands for the frequency dependence of the tSZ effect. \(n_{e}\) and \(T_{e}\) are the electron number density and temperature along the line-of-sight. The flux density is given as
\[S_{\nu}=I_{\nu}\int\mathrm{d}\Omega,\]
where \(I_{\nu}\) is the intensity and \(\Omega\) is the solid angle subtended by the region. As flux density is in Jansky (Jy) we obtain the intensity in Jy/Steradian. In a region within the smoothing length, the energy due to feedback from the black hole is assumed to be distributed isotropically among the gas particles surrounding them and a common B spline kernel is used to
Figure 4: The mock ALMA tSZ maps at 320 GHz (Band 7) for different feedback modes around most massive and most active BHs at two different redshifts from Dave et al. (2019) using the observational parameters in Table 3. **Top Panel** The mock ALMA tSZ maps for no feedback (**left most column**), no-jet feedback, no X-ray feedback, and all feedback modes (**right most column**) respectively around the most massive BH at z\(\sim\)1. **Second Panel** The mock ALMA tSZ maps for no feedback, no-jet feedback, no X-ray feedback, and all feedback modes respectively around the most active BH at z\(\sim\)1. **Third Panel** Same as the top panel but now at z=0.016. **Fourth Panel** Same as the second panel but now at z=0.016. We note that ALMA has the capability to detect the SZ signal for the no feedback and no-jet feedback modes. The signal gets enhanced when, only the radiative mode of feedback is added. For no X-ray feedback and all feedback modes the signal gets suppressed below the noise threshold of ALMA. The different feedback models and the black hole properties are summarized in Tables 1 and 2 respectively. The ALMA results are summarized in Table 4.
compute the smoothed density and temperature (Kar Chowdhury et al., 2022).
To construct the synthetic observations, we use the Common Astronomy Software Application (CASA; Jaeger, 2008) which is a suite of tools for calibration, imaging and analysis in radio astronomy for both interferometric and single dish configurations. The package takes a model image for a patch of the sky as input and turn it into an observation from multiple viewing angles. For our purpose we employed the most compact configuration for ALMA cycle 8.1. Our observational specifications are summarized in Table 3.
The theoretical tSZ maps and the telescope configuration are convolved through'simalma' task in CASA.'simalma' is a combination of both'simobserve' and'simanalyze'.'simobserve' is used to simulate observations with ALMA and the Atacama Compact Array (ACA), and generate simulated visibility maps, whereas'simanalyze' is used to generate images from the simulated visibility results. We used central observing frequency at 320 GHz (ALMA band 7) with 7.5 GHz bandwidth. We also set a pointing position of the observation (Epoch J2000, RA 13:29:53.94, DEC -047:11:41.0) to prevent, the simulator from obtaining a mosaic to cover the value of the mapsize parameter. Precipitable water vapor (pwv) is set to 0.5 mm to represent observations in nominal weather. Based on these settings, the simulation added noise to the data. For both \(z\sim\)1 and \(z=0.016\), synthetic maps are constructed using a total observation time of 3 hours with an integration time of 30 seconds. The primary beam size for ALMA band 7 is \(\sim 15\) arcsec. The cell size was chosen to be 0.1 arcsec, about 20% of the synthesized beamsize (1-arcsec). Each image is constructed with \(300\times 300\) pixels.
## 3 Results
We now present the tSZ signals predicted from our simulations for different modes of feedback. We note that in our results 'all feedback' mode includes radiative, jet and X-ray feedback, no-jet feedback mode turns off the jet and X-ray feedback and includes only radiative feedback. For the no X-ray feedback case, only X-ray feedback mode is turned off and it includes jet and radiative feedback. The nomenclature is summarized in Table 1.
### Theoretical Maps
Figure 1 shows the simulated tSZ maps (\(500\times 500\) pixels) at 320 GHz for different feedback modes around the BHs (identified in the simulations) in a 100 square \(h^{-1}\)kpc region at z\(\sim\)1 for both most massive and most active cases. Here we compare tSZ signal for different feedback modes, namely no feedback, no-jet feedback, no X-ray feedback, and all feedback at z\(\sim\)1 for two different cases. Properties of the most massive and most active BHs are listed in Table 2. The maps reveal that, for both cases tSZ signal is the lowest for all feedback mode, and the suppression of the signal is driven by the jet mode of feedback. The SZ signal gets enhanced for the no-jet feedback (radiative feedback only) case compared to the no-feedback model.
Figure 2 shows the theoretical simulated tSZ maps (\(500\times 500\) pixels) at 320 GHz for different feedback modes around similar BHs in a 100 square kpc \(h^{-1}\) region at \(z=0.016\). Here we compare tSZ signal for the same feedback modes as Figure 1 but at \(z=0.016\). Properties of the BHs are listed in Table 2. We find that the lower redshift results exhibit similar trends to that of higher redshifts. From our results we observe that for both redshifts, 'jet' is the main driver of feedback and radiative and X-ray feedback have comparatively less effect on altering the SZ signal from the no-feedback case. The implications of these results are discussed in SS4. We also constructed theoretical tSZ maps for 135 GHz, 100 GHz, and 42 GHz respectively, where we observe decrement in the tSZ signal instead of increment since those frequencies are below the null frequency of the tSZ effect (Eq. 1).
Figure 3 shows the profiles of the theoretical tSZ signal computed from Figs. 1 and 2. From the figure we can observe that for all four cases, no feedback mode and no-jet feedback mode are almost indistinguishable but there is a significant suppression of the signal for all feedback and no X-ray feedback modes at both redshifts. The implications of these differences in the suppression and enhancement of signals will be evident in the observational feasibility.
### Mock Observational Maps
Figure 4 shows the simulated ALMA maps constructed using the observing parameters shown in Table 3 where the minimum flux of the maps is set to the rms value of the noise. The maps are for the same black holes represented in Figure 1 and Figure 2 respectively. The black hole parameters are listed in Table 2. Each map is constructed for a \(30^{{}^{\prime\prime}}\times 30^{{}^{\prime\prime}}\) region in the sky and the flux is reported in \(\mu\)Jy. The maps are made at 320 GHz (Band 7). The leftmost column refers to the case when there is no feedback while the rightmost column represents the all feedback case for all the black holes at the two representative redshifts (\(z\sim\)1 and \(z=0.016\)). The second and the third columns represent the cases where only the radiative feedback and the radiative and the jet modes of feedback are present (see Table 1 for the nomenclature). The corresponding signal-to-noise maps are presented in Figure 5. Figures 6, 7, 8 and 9 represent the simulated ALMA maps constructed using the same observing parameters as Fig. 4 but at 135 GHz (Band 4), 100 GHz (Band 3), and 42 GHz (Band 1). We note that for all cases, turning on the jet feedback suppresses the signal below the ALMA detection threshold and hence we have only included
the no-feedback and the radiative feedback simulations for the lower frequency bands.
From Figs. 4 and 5, we note that once the jet mode of feedback is turned on, the SZ signal drops significantly and goes below the detection threshold of ALMA. It is also seen that for all systems, the signal is most prominent when the radiative mode of feedback is turned on. The signal is more pronounced at higher redshifts as seen from the results presented in Fig. 5. It has been noted in other studies that AGN feedback generally enhances the temperature of the IGM/ICM gas while suppressing the gas density (e.g., Kar Chowdhury et al., 2022, 2021; Chatterjee et al., 2008). Since the SZ signal comes from the integrated line-of-sight gas pressure, there is always an optimization between temperature and density in the magnitude of the effect. The increase in SZ signal, as reported is predominantly, due to the presence of hot gas from AGN feedback (e.g., Scannapieco et al., 2008; Chatterjee et al., 2008). Our results show that such is the case for the radiative mode only, but the suppression of density is much higher in the jet mode resulting in a decrement of the SZ signal.
ALMA observations may serve as important probes in understanding feedback effects at higher redshifts as well as constraining theoretical models of feedback. In Figs. 6 through 9 we present the ALMA mock SZ simulations for the four black holes (listed in Table 2) for the lower frequency
Figure 5: The fidelity (signal to noise) maps of the mock ALMA tSZ simulations at 320 GHz for different feedback modes around the most massive and the most active BHs at two different redshifts from Davé et al. (2019) corresponding to Fig. 4. **Top Panel** The fidelity maps for the no feedback, no jet feedback, no X-ray feedback, and all feedback modes respectively around most massive BH at z\(\sim\)1. **Second Panel** The mock ALMA tSZ maps for no feedback, no jet feedback, no X-ray feedback, and all feedback modes respectively around the most active BH at z\(\sim\)1. **Third Panel** Same as the top panel but now at z=0.016. **Fourth Panel** Same as the second panel but now at z=0.016.
bands. Our results show that at lower frequencies we generally tend to resolve more structures in the low redshift case. The ALMA simulation results are summarized in Table 4. We discuss the implications of these results in the next section.
## 4 Discussion of Results
In recent years the SZ effect has become a powerful tool to probe the warm hot universe at multiple length scales (Mroczkowski et al., 2019). In the pioneering work of Chatterjee and Kosowsky (2007), it was suggested that AGN feedback can be probed using the SZ effect through future high resolution submm experiments. The first detection of the SZ effect from quasar feedback with the Atacama Large Millimeter Array (Lacy et al., 2019) served as a validation to the proposed studies (e.g Natarajan and Sigurdsson, 1999; Yamada et al., 1999; Chatterjee et al., 2008; Scannapieco et al., 2008). Following the first detection, other groups followed (Brownson et al., 2019; Hall et al., 2019).
In a series of recent works, SZ and X-ray studies have been suggested to constrain models of baryonic physics, and specifically the role of AGN feedback in determining the thermodynamic properties of the ICM (e.g., Eckert et al., 2021; Chadayamuri et al., 2022; Acharya et al., 2021; Kar Chowdhury et al., 2022; Yang et al., 2022; Kim et al., 2022). In a previous study Brownson et al. (2019) proposed using the FABLE cosmological simulations, that ALMA has the potential to constrain AGN feedback models through SZ ob
Figure 6: Simulated ALMA maps constructed using the same observing parameters as Fig. 4 but at 135 GHz (Band 4, **left most column**), 100 GHz (Band 3, **middle column**), and 42 GHz (Band 1, **right most column**) for the _most massive_ high redshift (\(z=1\)) BH. **Top Panel** The mock ALMA tSZ maps for no feedback **Second Panel** The corresponding signal-to-noise maps, **Third Panel** Same maps, but now for the no-jet feedback mode. **Fourth Panel** The signal-to-noise maps corresponding to the third panel. The signal is enhanced when we have radiative feedback from the black holes.
servations. The study proposed that the best observational band for detecting the SZ signal would be band 3 (\(\sim\) 100 GHz). Recently Kim et al. (2022) used the Illustris TNG, EAGLE and FIRE simulations to infer that the properties of the circumgalactic medium (CGM) and the ICM are highly sensitive to feedback prescriptions, and concluded that SZ measurements of the gas can potentially put constraints on theoretical models. A detection of the SZ effect in the CGM has been recently reported by Das et al. (2023).
In the current work we use the cosmological simulation SIMBA (D19), to constrain models of AGN feedback using ALMA compact array observations. As discussed before, our results show that certain models of AGN feedback go below the detection threshold of ALMA, and that provides an upper limit from SZ observables, on the amount of feedback that is allowed in cosmological simulations. Using the same set of simulations, in a companion paper, Kar Chowdhury et al. (2022) show that feedback suppresses the X-ray flux from the vicinity of the black hole and the jet mode of feedback plays the most significant role in evacuating the gas from the centers of groups and clusters, resulting in suppression of X-ray signal.
Kar Chowdhury et al. (2022) did a thorough study of the thermodynamic properties of the ICM for both individual objects as well as statistical samples. It is observed that once radiative feedback is introduced in the simulation, the temperature of the gas gets slightly enhanced, however the gas density is suppressed in the vicinity of the black hole. It is
Figure 7: Simulated ALMA maps constructed using the same observing parameters as Fig. 4 but at 135 GHz (Band 4), 100 GHz (Band 3), and 42 GHz (Band 1) for the _most active_ high redshift (\(z=1\)) BHs. **Top Panel** The mock ALMA tSZ maps for no feedback. **Second Panel** The corresponding signal-to-noise maps, **Third Panel** Same maps, but now for the no-jet feedback mode. **Fourth Panel** The signal-to-noise maps corresponding to the third panel. See Table 1 and 2 for feedback nomenclature and black hole properties.
noted that the scale of evacuation due to the radiative wind feedback is not high enough and hence a density enhancement is observed at distances further away from the black hole. The drop in gas density, closer to the black hole stays responsible for the drop in X-ray flux in the vicinity of the black hole, compared to the scenario where there is no feedback. Previously Kar Chowdhury et al. (2021) noted similar effects with the MassiveBlack-II simulation (Khandai et al., 2015) where, only the radiative mode of feedback was used in the modeling. However, in this work we compute the SZ flux which is an integrated line-of-sight pressure and thus the increase in gas temperature manifests more strongly in the SZ signal, and for the systems under consideration, we observe an enhancement of SZ effect compared to the no feedback case. Previously, enhancement of SZ signal due to the radiative mode of feedback was also reported by other groups using different simulations (e.g., Chatterjee et al., 2008; Scannapieco et al., 2008).
The situation is dramatically reversed, once the jet feedback is turned on. It has been noted that the temperature as well as the density of gas becomes significantly lower once the jet feedback is turned on (Kar Chowdhury et al., 2022; Robson & Dave, 2020; Robson & Dave, 2023). The powerful jet evacuates the gas to larger length scales resulting in drastic drop of the temperature as well as the density of hot gas near to the black hole. This effect results in strong suppres
Figure 8: Simulated ALMA maps constructed using the same observing parameters as Fig. 4 but at 135 GHz (Band 4), 100 GHz (Band 3), and 42 GHz (Band 1) for the _most massive_ lower redshift (\(z=.016\)) BH. **Top Panel** The mock ALMA tSZ maps for no feedback case. **Second Panel** The corresponding signal-to-noise maps, **Third Panel** Same maps, but now for the no-jet feedback mode. **Fourth Panel** The signal-to-noise maps corresponding to the third panel. We note that the lower frequency maps reveal more structure at lower redshift. See Table 1 and 2 for feedback nomenclature and black hole properties.
sion of both the X-ray and the SZ fluxes. Although in simulations it is possible to turn on and turn-off different modes of feedback and assess their impact on the ICM, it was noted in Kar Chowdhury et al. (2022) that detection of X-ray cavities due to feedback effects with current and future telescopes will only be useful in disentangling the combined effects of jet versus radiative modes. It is not possible to quantify their individual contributions with X-ray observations only.
Our SZ results reveal that unlike X-rays, SZ signals can get both enhanced or suppressed due to feedback effects, depending on the mode of feedback. This provides a unique route to combine X-ray and SZ measurements to disentangle the mode of feedback that is dominant in a given system. In a future work we wish to perform X-ray-SZ cross studies to address the effect of different modes of feedback. Recently Chadayammuri et al. (2022) concluded using eROSITA stacks of the circumgalactic medium (CGM), that numerical simulations include higher levels of feedback from AGN while quenching star formation in the CGM. We note that adding the SZ component will substantially improve the constraining power of such observations.
In addition to the unique feature in SZ signals where we can get both enhancement and decrement from AGN feedback there is an added advantage of SZ effect being redshift independent. Kar Chowdhury et al. (2022) reported that X-ray studies of feedback effects will only be restricted to low redshift systems. This is due to the fact that the X-ray dimming happens due to the distance effects and hence with sim
Figure 9: Simulated ALMA maps constructed using the same observing parameters as Fig. 4 but at 135 GHz (Band 4), 100 GHz (Band 3), and 42 GHz (Band 1) for the _most active_ lower redshift (\(z=.016\)) BH. **Top Panel** The mock ALMA tSZ maps for no feedback. **Second Panel** The corresponding signal-to-noise maps, **Third Panel** Same maps, but now for the no-jet feedback mode. **Fourth Panel** The signal-to-noise maps corresponding to the third panel. See Table 1 and 2 for feedback nomenclature and black hole properties. The ALMA results are summarized in Table 4.
ilar physical conditions it is harder to detect high redshift systems with X-rays. SZ effect, on the other hand is a distortion in the CMB and hence is a redshift independent observable (Carlstrom et al., 2002). Further, AGN are more active at higher redshifts, resulting in higher level of energy injection into the IGM/ICM. Thus the effect is generally seen to be more pronounced at high redshifts (Chatterjee et al., 2008). Our results show that the SZ signal is indeed enhanced at higher redshifts confirming previous work, and hence it is possible to detect the high redshift signal with current ALMA configuration (evident from Figs. 4, 5, 6 and 7).
Recently Wadekar et al. (2023) made use of extensive machine learning tools to study the effect of feedback parameters on the Y-M (integrated Y distortion and halo mass) relation of galaxy groups and clusters. They used the SIMBA and the Illustris-TNG simulations and demonstrated that, stronger the AGN feedback parameter (e.g., jet speed in the case of SIMBA) higher is the deviation of the Y-M scaling from self-similarity. They also demonstrate that with the SIMBA prescriptions the Y parameter for the entire halo gets substantially suppressed due to the inclusion of jet- feedback in the simulation. The statistical studies of Wadekar et al. (2023), along with the results obtained from the AGN feedback parameter space exploration, clearly indicate the strong role of jet feedback in suppressing the SZ signal in halos at all mass scales. This is completely consistent with our current results, where we focused on the thermodynamic parameters of individual systems. As discussed before, our analysis provide yet another robust tool, namely the SZ effect to observationally constrain the models used in current cosmological simulations.
In this study we use the most compact antenna configuration for ALMA and four frequency bands (320 GHz, 135 GHz, 100 GHz, and 42 GHz, corresponding to ALMA Bands 7, 4, 3 and 1) to investigate how the detection of the SZ signal depends on the frequency and spatial sampling of the array. We note that the lower frequency bands capture more of the signal on large angular scales, particularly important for our low redshift simulation (Fig. 8 and Fig. 9). Simulations can thus be a useful guide to optimizing observing strategies. We note that our study focuses on the thermal SZ distortions from quasar feedback and hence detection of the spectral distortion from the CMB in a single band is adequate. In future, we wish to include simulations of the kinetic SZ effect, and to distinguish the kinetic and thermal SZ components we need to consider multiband observations with similar uv-plane coverage. Finally, although visibility-plane model-fitting has been used for analysis of the SZ effect in ALMA data (e.g., Brownson et al., 2019), we note that the good uv-plane sampling of ALMA results in high fidelity images that are able to accurately represent the details of the SZ signal without resorting to model fitting of the visibility data.
The importance of AGN/quasar feedback in galaxy evolution has been very well studied in the literature. The prospect of detecting AGN/quasar feedback via the SZ effect serves as a potential probe to observe the signal to high redshifts. In this work we show that the compact configuration of the Atacama Large Millimeter Array not only provides an avenue to detect the signal at higher redshifts, it also potentially has the constraining power to observationally distinguish between current feedback models in numerical simulations. Our results also establish that the best route to constrain feedback
models lies in a combined X-ray-SZ analysis of the same system.
## 5 Acknowledgements
The authors would like to thank the referee for making very important suggestions which greatly helped in improving the draft. A.C. and S.C. thank Inter-University Centre for Astronomy and Astrophysics (IUCAA) for providing computational support through the Pegasus supercomputing facility. S.C. acknowledges support from the Department of Science and Technology, GOI, through the SERB- CRG-2020-002064 grant and from the Department of Atomic Energy, GOI, for the 57/14/10/2019-BRNS grant. RKC thanks National Natural Science Foundation of China (HKU12122309) for financial support. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
|
2307.04220 | All meromorphic traveling waves of cubic and quintic complex
Ginzburg-Landau equations | For both cubic and quintic nonlinearities of the one-dimensional complex
Ginzburg-Landau evolution equation, we prove by a theorem of Eremenko the
finiteness of the number of traveling waves whose squared modulus has only
poles in the complex plane, and we provide all their closed form expressions.
Among these eleven solutions, five are provided by the method used. This allows
us to complete the list of solutions previously obtained by other authors. | Robert Conte, Micheline Musette, Ng Tuen Wai, Wu Chengfa | 2023-07-09T16:25:51Z | http://arxiv.org/abs/2307.04220v1 | # All meromorphic traveling waves of cubic and quintic complex Ginzburg-Landau equations
###### Abstract
For both cubic and quintic nonlinearities of the one-dimensional complex Ginzburg-Landau evolution equation, we prove by a theorem of Eremenko the finiteness of the number of traveling waves whose squared modulus has only poles in the complex plane, and we provide all their closed form expressions. Among these eleven solutions, five are provided by the method used. This allows us to complete the list of solutions previously obtained by other authors.
keywords: complex cubic and quintic Ginzburg-Landau equation, closed-form solutions, Nevanlinna theory, nonlinear optics, turbulence, traveling waves, patterns, coherent structures, defect-mediated turbulence, dark solitons. Pacs: 02.30.Hq, 02.30.+g Msc: 34M04, 35Q99 An error in the style file elsarticle.cls prevents us to put the acute accent on Universite.
###### Contents
* 1 Introduction
* 2 Previous methods
* 3 Movable singularities of CGL
* 4 An exhaustive method
* 4.1 Theorem of Eremenko
* 4.2 Subequation method
* 4.3 A property of CGL3/5
* 4.4 The method
* 5 Application of the exhaustive method to CGL
* 5.1 CGL3 elliptic solution
* 5.2 CGL5 elliptic solution
* 5.3 CGL5 homoclinic defect
* 5.4 CGL5 homoclinic bound state of two dark solitons
* 5.5 CGL5 rational solution
* 6 Conclusion and perspectives
* A CGL3 source or propagating hole, pulse, front
* B CGL5 front, source or sink, pulse
## 1 Introduction
In 1950 Ginzburg and Landau [22] introduced a description of superconductivity, in the absence of an external magnetic field, as a second order phase transition in which the order parameter is a complex function \(A(x,t)\) (which they denoted \(\Psi\) for its connection with quantum mechanics) deriving from a free energy quartic in \(|A|\) ("\(\varphi^{4}\) theory"). This assumption naturally led them, in the one-dimensional case, to an evolution equation invariant under a translation of the phase \(\arg A\), now known as the one-dimensional cubic complex Ginzburg-Landau equation CGL3
\[(\text{CGL3})\;iA_{t}+pA_{xx}+q|A|^{2}A-i\gamma A=0,p\gamma\,\text{Im}(q/p) \neq 0, \tag{1}\]
in which \(p,q\) are complex constants and \(\gamma\) a real constant.
This CGL3 equation later turned out to be a generic equation arising from the approximation of a slowly varying amplitude, with applications to quite various physical phenomena, such as spatio-temporal turbulence, Bose-Einstein condensation, and more recently numerous fields of nonlinear optics, as detailed in several reviews [38][4][50][40][3][20].
CGL3 describes for instance the formation of patterns near a Hopf bifurcation, \(\gamma\) measuring the difference between the order parameter and its critical value. When the bifurcation is subcritical, the cubic term is insufficient to describe the system and one must take account of the next
nonlinearity compatible with the phase invariance, thus defining the complex quintic equation CGL5,
\[\text{(CGL5) }iA_{t}+pA_{xx}+q|A|^{2}A+r|A|^{4}A-i\gamma A=0,pr\gamma\operatorname{Im }(r/p)\neq 0, \tag{2}\]
in which \(r\) is a complex constant.
The phase diagrams of both CGL3 and CGL5 are quite rich [8, Fig. 1][23, Fig. 1a] and comprise a variety of chaotic and regular phases. Moreover, a remarkable feature is the existence, observed in both computer and real experiments, of a very small number of elementary patterns able to describe most regimes and, more importantly, to act as separators between the different regimes. When they are traveling waves, (\(c\) and \(\omega\) real constants, \(M\) and \(\varphi\) real functions, \(a\) complex function),
\[A=\sqrt{M(\xi)}e^{i(-\omega t+\varphi(\xi))}=a(\xi)e^{-i\omega t},\xi=x-ct, \tag{3}\]
these coherent structures have been classified [51, Fig. 1] according to their topology (pulses, fronts, shocks, holes, sinks, defects, etc) and the nature of their orbits: homoclinic (equal values of \(\lim_{x\to-\infty}|A|\) and \(\lim_{x\to+\infty}|A|\)) or heteroclinic (unequal values).
For instance, a CGL3 heteroclinic hole has been analytically found by Bekki and Nozaki [5], and the CGL3 homoclinic hole has only been observed in numerical experiments by van Hecke [23] but not found analytically.
We restrict here to the situation in which the ratio of the highest nonlinearity coefficient \(r\) or \(q\) by the dispersion coefficient \(p\) is not real, i.e. when the CGL equation is dissipative [3]. Indeed, when this ratio is real, the behaviour is no more dissipative but dispersive and, at least in the cubic case, the CGL equation then has the same singularity structure than the nonlinear Schrodinger equation (NLS). While the exact solutions of NLS are numerous, very few exact solutions are known in the dissipative case.
Before the introduction of the method described in section 4, only six analytic expressions for traveling waves were known: a heteroclinic front, a homoclinic pulse and a heteroclinic source, for both CGL3 and CGL5, they are recalled in the appendix.
In the present paper, we enumerate _all_ those traveling waves which belong to a rather natural class. Indeed, the six just mentioned exact traveling waves share a nice property: the only singularities of their squared modulus \(M\) which depend on the initial conditions (in short, _movable_ singularities) are poles, in the complex plane of course. Conversely, if one requires the movable singularities of \(M\) to be only poles (in short, \(M\) to be meromorphic on \(\mathbb{C}\)), there exists a mathematical method able to yield all the resulting values of \(M\) in closed form, and our main result is the following.
**Theorem 1**.: _The CGL3 and CGL5 equations admit exactly eleven different traveling wave solutions in which \(M\) is meromorphic on \(\mathbb{C}\), and their closed form is known._
Table 1 (the vocabulary of its legend will be defined below) displays the main features of these traveling waves, in which the real parameters \(d_{r}\), \(d_{i}\), \(e_{r}\), \(e_{i}\), \(\kappa_{r}\), \(\kappa_{i}\), \(g_{r}\), \(g_{i}\), equivalent to \(p,q,r,\gamma,c,\omega\), are defined by
\[\frac{q}{p}=d_{r}+id_{i},\frac{r}{p}=e_{r}+ie_{i},\frac{c}{p}=\kappa_{\rm r}- i\kappa_{\rm i},\frac{\gamma+i\omega}{p}=g_{r}+ig_{i}-\frac{1}{2}\kappa_{\rm r} \kappa_{\rm i}-\frac{i}{4}\kappa_{\rm r}^{2}. \tag{4}\]
This provides us with five more (and only five) analytic traveling wave solutions. Three of them are unbounded and therefore unphysical, and the two others [13] represent coherent structures already observed.
The first one is a CGL5 topological defect. The occurence of defects [17][24] is a major mechanism [52] of transition to a turbulent state. Although this "defect-mediated turbulence" has been mostly documented in two-dimensional CGL3 [36][37], there exists a range of parameters of CGL5, which includes the values of the exact defect, displaying a similar "hole-mediated turbulence" [48, Fig. 3b][49, Fig. 4][37, p 278]: for a destabilizing CGL5 term (negative \(\delta\) in the notation of Ref. [48]) one observes a succession of phase slips (every time \(M\) vanishes), which create hole-shock collisions, ending in a process of as many annihilations as creations.
The second bounded traveling wave is a bound state of two CGL5 dark solitons, which has been observed in numerical simulations [2, Fig. 4].
The paper is organized as follows. Section 2 reviews the methods which succeeded to find some solutions, and also unsuccessful methods, with the reasons for their failure. Section 3 recalls all the Laurent series of \(M\), whose knowledge is a prequisite for what follows. The method to find all meromorphic solutions is presented in section 4. Next section 5 enumerates the five such CGL solutions which were found by this method.
## 2 Previous methods
Quite different methods have been used to try and obtain closed form traveling wave solutions of CGL3/5. The succesful ones are the following.
1. The assumption \(A\) equal to the product of a function of \(t\) by a real function of \(x\) immediately yields the homoclinic pulse (A.2) of CGL3 [27].
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|l|} \hline CGL & Type & Codim & H(\(M\)) & H(\((\log a)^{\prime}\)) & Sol & Branches & Topology & Ref \\ \hline \hline
3 & Ellip & 2 & 2P2 & 4P1 & [12, (3.181)] & 1 & Unbounded, periodic & [12] \\ \hline \hline
3 & Trigo & 1 & 1P2 & 2P1 & (A.4)\({}_{1}\) & 2 & Heteroclinic source/hole & [5] \\ \hline
3 & Trigo & 2 & 1P2 & 1P1 & (A.4)\({}_{2}\) & 2 & Homoclinic pulse & [27] \\ \hline
3 & Trigo & 2 & 1P2 & 1P1 & (A.4)\({}_{3}\) & 2 & Heteroclinic front & [44] \\ \hline \hline
5 & Ellip & 4 & 4P1 & 3P1 & (36) & 1 & Unbounded, periodic & [14] \\ \hline \hline
5 & Trigo & 2 & 1P1 & 1P1 & (B.4)\({}_{1}\) & 4 & Heteroclinic front & [51] \\ \hline
5 & Trigo & 3 & 2P1 & 3P1 & (B.4)\({}_{2}\) & 2 & Heteroclinic source/sink & [39] \\ \hline
5 & Trigo & 3 & 2P1 & 2P1 & (B.4)\({}_{3}\) & 2 & Homoclinic pulse & [51] \\ \hline
5 & Trigo & 5 & 4P1 & 5P1 & [13, (11)] & 1 & Homoclinic defect & [13] \\ \hline
5 & Trigo & 5 & 4P1 & 6P1 & (48) & 2 & Homoclinic bound state & [13] \\ \hline \hline
5 & Rat.l & 5 & 4P1 & 6P1 & (51) & 2 & Unbounded & Here \\ \hline \hline \end{tabular}
\end{table}
Table 1: The 11 meromorphic solutions \(M\) of CGL5 and CGL3, ordered by type (elliptic, trigonometric, rational) and number of poles of \(M\). Columns display: codimension (number of real constraints on \(\kappa_{1},g_{r},g_{i},d_{r},d_{i},e_{r}\) (CGL5) or \(\kappa_{1},g_{r},g_{i},d_{r}\) (CGL3)), number of poles (\(n\)P\(p\) means \(n\) poles of order \(p\)) in the Hermite decomposition of \(M\) and \((\log a)^{\prime}\), solution \(A\) and its number of branches, topology, reference.
2. The Hirota method [26] consists in writing, when this is possible, the original partial differential equation (PDE) only with the so-called Hirota derivation operators \(D_{x}\) and \(D_{t}\), i.e. without the usual derivation operators \(\partial_{x}\), \(\partial_{t}\). Such a writing, which is indeed possible for CGL3 [44], implies _ipso facto_ the existence of various solitary waves. This allowed Bekki and Nozaki to obtain a CGL3 front (A.3) [44] and a CGL3 hole (A.1) with an arbitrary velocity [5].
3. The replacement of the third order ODE (8) for \(M(\xi)\) by an equivalent polynomial dynamical system in three real components \((M,L=M^{\prime}/M,\psi=\varphi^{\prime}-\kappa_{t}/2)\)[51] \[\frac{\mathrm{d}}{\mathrm{d}\xi}\left(\begin{array}{c}M\\ L\\ \psi\end{array}\right)=\left(\begin{array}{c}ML\\ -\frac{3}{4}L^{2}+\kappa_{i}L+2\psi^{2}-2e_{r}M^{2}-2d_{r}M-2g_{i}\\ -L\psi+\kappa_{i}\psi-e_{i}M^{2}-d_{i}M+g_{r}\end{array}\right),\] (5) followed by heuristic constraints on these three components. This allowed van Saarloos and Hohenberg to discover the CGL5 front (B.1) and the CGL5 pulse (B.3), with the respective constraints [51, Eqs. (3.38), (3.51)] (\(c_{j},d_{j}\) adjustable constants), \[(\text{front})\,\psi=c_{1}+c_{2}\frac{M^{\prime}}{M},\,\frac{M^{ \prime}}{M}=c_{3}+c_{4}M,\,\,^{\prime}=\frac{\mathrm{d}}{\mathrm{d}\xi},\] (6) \[(\text{pulse})\,\psi=d_{1}+d_{2}\frac{M^{\prime}}{M},\,\frac{{M^{ \prime}}^{2}}{M^{2}}=d_{3}+d_{4}M+d_{5}M^{2}.\] (7)
4. Truncation methods as initiated by Weiss _et al._[55]. By implementing an extension [42] of the WTC truncation method, Marcq _et al._[39] found the CGL5 source or sink (B.2).
5. The enforcement by Hone [30], for nondegenerate elliptic solutions, of the classical necessary condition that the sum of the residues of one or more Laurent series of \(M\) (or more generally of any rational function of derivatives of \(M\)) inside a period parallelogram vanishes. This method allowed Vernov [54] to find an elliptic solution of CGL5, however not the most general one for reasons explained in Section 5.2.
Since tremendous efforts [31][21][37] have been made to search for additional closed form solutions, it is also worth mentioning why other methods failed in the case of CGL.
1. The search for elliptic solutions can be made by assuming the squared modulus to be a polynomial of \(\mathrm{sn},\mathrm{cn},\mathrm{dn}\), or a polynomial of \(\wp,\wp^{\prime}\)[33]. Since elliptic solutions are generically not polynomials of such functions, this explains the failure for CGL of this too restrictive assumption.
2. Similarly, another search for elliptic solutions [32] by requiring the squared modulus \(M\) to obey the most general first order second degree binomial ODE of Briot and Bouquet \({M^{\prime}}^{2}=\sum_{j=0}^{4}c_{j}M^{j}\) fails to find any elliptic solution and only finds various known trigonometric degeneracies. Indeed, this assumption can only yield homographic functions of \(\wp,\wp^{2},\wp^{3},\wp^{\prime}\).
3. Innumerable "new methods" are regularly proposed, such as the "Exp-method", "\(G^{\prime}/G\) expansion method", "simplest equation method", "homogeneous balance method", etc, but they are just copies of the previously mentioned methods, see the criticisms in Refs [34] and [47].
## 3 Movable singularities of CGL
Traveling waves (3) are characterized by a third order nonlinear ordinary differential equation (ODE) for the squared modulus \(M(\xi)\)[32, p. 18][43],
\[\begin{split}&(G^{\prime}-2\kappa_{\rm i}G)^{2}-4GM^{2}(e_{i}M^{2}+d_{ i}M-g_{r})^{2}=0,\\ & G=\frac{MM^{\prime\prime}}{2}-\frac{M^{\prime 2}}{4}-\frac{ \kappa_{\rm i}}{2}MM^{\prime}+g_{i}M^{2}+d_{r}M^{3}+e_{r}M^{4},\\ &\varphi^{\prime}=\frac{\kappa_{\rm r}}{2}+\frac{G^{\prime}-2 \kappa_{\rm i}G}{2M^{2}(g_{r}-d_{i}M-e_{i}M^{2})}.\end{split} \tag{8}\]
After a solution \(M\) of (8)\({}_{1}\) has been obtained, the value of \(\varphi^{\prime}\) follows from (8)\({}_{3}\) and the complex amplitude \(A\) from (3).
Therefore, we do not need here the structure of singularities of the CGL PDE, established in [7] (CGL3) and [39] (CGL5), we only need the structure of singularities of the third order ODE (8). Let us first recall that its solution \(G=0\) must be discarded since it is not a solution of the original system. Indeed, the direct substitution of \(A=\sqrt{M(x-ct)}e^{-i\alpha t+iK(x-ct)}\) in CGL3/5 immediately yields \(e_{i}=0\) (CGL5 case) or \(d_{i}=0\) (CGL3 case), which we explicitly discard as said above. Under the assumption made in the Introduction (\(q/p\) not real if CGL3, \(r/p\) not real if CGL5), this third order ODE evidently fails the Painleve test [12] since CGL is chaotic, and, near a movable singularity \(\xi_{0}\) (which we set to zero because of the invariance under translation), it admits exactly two Laurent series for CGL3 [43],
\[M=A_{0}^{2}\xi^{-2}\left[1+\frac{\kappa_{\rm i}}{3}\xi+\frac{(5\alpha^{2}-1) \kappa_{\rm i}^{2}+12g_{i}+24\alpha g_{r}}{36(1+3\alpha^{2})}\xi^{2}+\mathcal{ O}(\xi^{3})\right], \tag{9}\]
and four Laurent series for CGL5 [14, Eq. (21)] [15, Eq. (18)]
\[M=A_{0}^{2}\xi^{-1}\left[1+\left(\frac{\kappa_{\rm i}}{4}+\frac{2d_{r}A_{0}^{ 2}-2e_{i}d_{i}A_{0}^{6}}{4(1+4\alpha^{2})}\right)\xi+\mathcal{O}(\xi^{2}) \right], \tag{10}\]
in which the pair \((A_{0}^{2},\alpha)\) of real constants takes two (CGL3) or four (CGL5) values [7; 39],
\[\text{(CGL3)}\left(-1+i\alpha\right)(-2+i\alpha)p+A_{0}^{2}q=0,\alpha^{2}-3 \frac{d_{r}}{d_{i}}\alpha-2=0,A_{0}^{2}=\frac{3\alpha}{d_{i}},d_{i}\neq 0, \tag{11}\]
\[\text{(CGL5)}\left(-\frac{1}{2}+i\alpha\right)\left(-\frac{3}{2}+i\alpha \right)p+A_{0}^{4}r=0,\ \alpha^{2}-2\frac{e_{r}}{e_{i}}\alpha-\frac{3}{4}=0,A_{0}^{4}=\frac{2\alpha}{e_ {i}},e_{i}\neq 0. \tag{12}\]
## 4 An exhaustive method
The necessary method previously presented in Refs. [43; 11] and recalled in the present section stems from a quite simple observation: for all solutions found by the methods of section 2, the only movable singularities of \(M(\xi)\) are poles, in the complex plane \(\mathbb{C}\) of course. Conversely, let us make the _single_ assumption that all the movable singularities of \(M(\xi)\) are poles (i.e. that \(M\) is meromorphic on \(\mathbb{C}\)).
Given this assumption, the method which allows one to find all traveling waves meromorphic on \(\mathbb{C}\) relies on the following past achievements.
1. The characterization, by Briot and Bouquet Briot and Bouquet (1991), of all first order autonomous ODEs having a singlevalued general solution by a privileged class of functions, made of elliptic functions and their successive degeneracies (rational functions of one exponential \(e^{k\xi}\), rational functions).
2. The generalization, by Hermite Hermite (1961), to elliptic functions and their degeneracies of the well known partial fraction decomposition of a rational function as the sum of a polar part and an entire part. See details in (H
which matches the first property (a single top degree term, \(u^{3}\)) but not the second since one of its Laurent series,
\[u=x^{-1}+c_{1}+(-c_{1}^{2}-a/3)x+O(x^{2}) \tag{16}\]
contains the arbitrary coefficient \(c_{1}\), making the number of Laurent series infinite. Indeed, the linearizing transformation \(u=\varphi^{\prime}/\varphi,\varphi^{{}^{\prime\prime\prime}}+a\varphi^{\prime} +b\varphi=0\) expresses the general solution as a rational function of two different exponential functions.
### Subequation method
It relies on a classical theorem of Briot and Bouquet.
**Theorem 3**: _[_6_, theorem XVII p. 277]__. Given two elliptic functions \(u,v\) with the same periods of respective elliptic orders \(m,n\) (i.e. numbers of poles in a period parallelogram, counting multiplicity), they are linked by an algebraic equation_
\[F(u,v)\equiv\sum_{k=0}^{m}\sum_{j=0}^{n}a_{j,k}u^{j}v^{k}=0,\ a_{j,k}\ \text{constant}, \tag{17}\]
_with \(\deg(F,u)=\text{order}(v)\), \(\deg(F,v)=\text{order}(u)\). If in particular \(v\) is the derivative of \(u\), the first order ODE obeyed by \(u\) takes the precise form_
\[F(u,u^{\prime})\equiv\sum_{k=0}^{m}\sum_{j=0}^{2m-2k}a_{j,k}u^{j}u^{\prime}u^ {\prime k}=0,\ a_{0,m}=1. \tag{18}\]
Then, given some algebraic autonomous ODE of any order \(N\), which may admit elliptic solutions, such as (13), the successive steps to obtain all such solutions are [43; 11] (we skip here some unessential details):
1. Enumerate all Laurent (not Taylor) series of the \(N\)-th order ODE, \[u=x^{p}\sum_{j=0}^{+\infty}u_{j}x^{j},\ -p\in\mathbb{N}.\] (19)
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline type & codimension & \(b^{2}/(\mu v)\) & \(\nu K/\mu^{3}\) \\ \hline \hline elliptic & \(1\) & \(16\) & arbitrary \\ \hline trigo & \(2\) & \(16\) & \(-18\) \\ \hline trigo & \(2\) & \(16\) & \(-8\) \\ \hline trigo & \(2\) & \(144/47\) & \(-1800/47^{3}\) \\ \hline trigo & \(2\) & \(256/73\) & \(-4050/73^{3}\) \\ \hline trigo & \(2\) & \(0\) & \(-4950/19^{3}\) \\ \hline trigo & \(2\) & \(0\) & \(450/19^{3}\) \\ \hline rational & \(3\) & \(0\) & \(0\) \\ \hline \end{tabular}
\end{table}
Table 2: All solutions of KS, Eq. (13), which are meromorphic on \(\mathbb{C}\). The second and third lines are degeneracies of the elliptic solution. In the last line, \(b=\mu=K=0\).
2. For all subsets (including the empty set) of the set of Laurent series, perform the remaining steps.
3. Compute the sums \(m=\sum p\) and \(n=\sum(p+1)\) of the pole orders of the Laurent series of \(u\) and \(u^{\prime}\) in the current subset, and define the first order equation \(F(u,u^{\prime})=0\) (the subequation), \[F(u,u^{\prime})\equiv\sum_{k=0}^{m}\sum_{j=0}^{n}a_{jk}u^{j}{u^{ \prime}}^{k}=0,\ a_{0,m}=1.\] (20)
4. Compute enough terms \(J\) in each Laurent series, with \(J\) slightly greater than the maximal number \((m+1)^{2}\) of coefficients \(a_{j,k}\) in (20).
5. Require each Laurent series (19) of the current subset to obey the subequation \(F(u,u^{\prime})=0\), \[F\equiv x^{m(p-1)}\left(\sum_{j=0}^{J}F_{j}x^{j}+\mathcal{O}(x^{J+1})\right), \ \forall j\ :\ F_{j}=0.\] (21) and solve this **linear overdetermined** system for \(a_{jk}\).
In the above example (13) (one triple pole for \(u\), one quadruple pole for \(u^{\prime}\)), only one subequation needs be considered,
\[{u^{\prime}}^{3}+(a_{02}+a_{12}u){u^{\prime}}^{2}+(a_{01}+a_{11}u+a_{21}u^{2} )u^{\prime}+(a_{00}+a_{10}u+a_{20}u^{2}+a_{30}u^{3}+a_{40}u^{4})=0, \tag{22}\]
and it is sufficient to stop the series at \(J=16\) to find all the 8 subequations.
Each subequation (a first order ODE) is then integrated by any method, such as: the Hermite decomposition, the computer algebra package algcurves of Maple [28; 29], or other [6; Chap. IV]).
### A property of CGL3/5
As seen in section 3, the third order ODE (8)\({}_{1}\) for \(M(\xi)\) possesses the second property of Eremenko's theorem (finiteness of the number of Laurent series, two for CGL3, four for CGL5). As to the first property (one top degree term), it is true at least for \(e_{r}\neq 0\), the single term being \(-4e_{r}e_{i}^{2}M^{10}\). In order to remove this restriction on \(e_{r}\), the strategy adopted in [14] is different and consists in proving that, if \(M(\xi)\) is meromorphic and not rational, firstly it must have infinitely many poles, secondly it must be periodic. The precise statement is then
Theorem 4: _[_14_, Theorem I p. 156]_. For all values of the CGL parameters \(p,q,r\) (complex), \(\gamma\) (real) and of the traveling waves parameters \(c,\omega\) (real), for both CGL3 (\(q/p\) not real) and CGL5 (\(r/p\) not real), all solutions \(M(\xi)\) meromorphic on \(\mathbb{C}\) are elliptic or degenerate elliptic and therefore obey a nonlinear ODE of first order whose degree is at most two (CGL3) or four (CGL5)._
### The method
Given the above mentioned preliminary results, the successive steps to build the complex amplitude \(A(x,t)\) of all meromorphic traveling waves of CGL3/5 are then the following.
1. Determine all first order subequations for \(M(\xi)\) of degree at most two (CGL3) and four (CGL5).
2. Integrate each subequation by any method (Hermite decomposition, Maple algcurves package [28; 29] or other [6; Chap. IV]).
3. For each such expression \(M(\xi)\), compute the logarithmic derivative \((\log a)^{\prime}\) by the formula \[\varphi^{\prime}=(8)_{3},(\log a)^{\prime}=\frac{M^{\prime}}{2M}+i\varphi^{ \prime},\] (23) and establish its Hermite decomposition.
4. Compute the logarithmic primitive \(a\) of this Hermite decomposition, and therefore \(A\), as a product of complex powers of entire functions \(\sigma(\xi-\xi_{j})\) of Weierstrass [1; Chap. 18] or its degeneracies \((2/k)\sinh(k((\xi-\xi_{j})/2)\) and \(\xi-\xi_{j}\).
Since every meromorphic solution \(M\) can be characterized either by its Hermite decomposition or by its first order equation (the "subequation"), it is advisable to combine these two representations in order to overcome the sometimes heavy computations involved. In particular, in the worst case (four simple poles of \(M\) for CGL5 and \(\kappa_{\rm i}=0\)), it proves technically more efficient to compute the subequation first.
Finally, using elliptic or trigonometric identities, the obtained mathematical expression \(A\) is displayed as a physically relevant formula, i.e. \(M(\xi)\) bounded for \(\xi\) real and \(A\) exhibiting the desired properties (homoclinic or heteroclinic, front or pulse or source/sink, defect, etc).
## 5 Application of the exhaustive method to CGL
The subequation is defined as the most general autonomous first order ODE with \(n\) poles of the same order \(s\) whose general solution can be elliptic or degenerate elliptic [6; theorem XVII p. 277],
\[F(M,M^{\prime})\equiv\sum_{k=0}^{ns}\sum_{j=0}^{n(s+1)-2k}a_{j,k}M^{j}{M^{ \prime}}^{k}=0,js+k(s+1)\leq ns(s+1),a_{0,n}=1. \tag{24}\]
The selection rule on \((j,k)\) states that no term can have a singularity degree higher than that of \({M^{\prime}}^{ns}\).
Only six such subequations need to be established: for CGL3, \(s=2\) and \(n=1,2\); for CGL5, \(s=1\) and \(n=1,2,3,4\). Moreover, the case \(n=2\) of CGL5 splits into two subcases when one requires the two residues \(A_{0}^{2}\) to solve the terms of highest singularity degree
\[M=A_{0}^{2}\xi^{-1}:\ a_{40}\left(A_{0}^{2}\xi^{-1}\right)^{4}+a_{21}\left(A_ {0}^{2}\xi^{-1}\right)^{2}\left(-A_{0}^{2}\xi^{-2}\right)+a_{02}\left(-A_{0}^ {2}\xi^{-2}\right)^{2}=0, \tag{25}\]
because the two values of \(A_{0}^{2}\) can be either opposite or nonopposite, see (12).
Practically, the computation splits into two successive phases. The first one is the resolution of a _linear_ algebraic system in the unknowns \(a_{j,k}\), this is quick and easy, see e.g. [43]. The second phase is the resolution of a _nonlinear_ algebraic system in the real parameters \(d_{r},d_{i},e_{r},e_{i},\kappa_{\rm i},g_{r},g_{i}\) (\(\kappa_{r}\) drops out) and the complex locations of the poles; since the Groebner package of Maple fails to solve most of these nonlinear systems, one has to do it "by hand", i.e. to choose which variables to eliminate in order to factorize some equations into smaller equations (see e.g. [12; SS3.3.9.3]), a process which is time, storage and effort consuming.
Finally, the method generates eleven solutions: four of CGL3 and seven of CGL5, see Table 1.
Among them, five had never been found by previous methods, they are presented in next sections. These are precisely all those solutions whose number of poles is maximal. For completeness, the six others are recalled in an Appendix. For each solution, the displayed information is: the first order autonomous ODE for \(M(\xi)\), its solution, two expressions for the complex amplitude. The first one, which arises from the algorithm, is a product of complex powers of entire functions \(\sigma(\xi-\xi_{j})\) or its degeneracies \((k/2)\sinh(k((\xi-\xi_{j})/2)\) and \(\xi-\xi_{j}\). The second one is the product of a positive modulus by a phase factor of modulus unity, written so as to display the physical nature of the solution (homoclinic or heteroclinic, pulse, front, sink, etc).
### CGL3 elliptic solution
Nongenerate elliptic solutions are easier to determine, for two reasons. As shown by Briot and Bouquet [6, SS181 p. 278], their first order ODE cannot contain the power one of \(M^{\prime}\), which excludes the value \(k=1\) in (18). The second reason is the necessary condition that the sum of the residues of the Laurent series (9) of \(M\) (or more generally of any power of a derivative of \(M\) of any order) inside a period parallelogram vanishes. Assuming \(M\) to have only one pole yields no elliptic solution, but, with two poles, \(M\) generates the condition \(d_{r}\kappa_{\rm i}=0\), \(M^{2}\) the condition \((\kappa_{\rm i}^{2}+6g_{i})\kappa_{\rm i}=0\). Indeed, the two constraints \(d_{r}=0,g_{i}=-(1/6)\kappa_{\rm i}^{2}\) do generate a unique elliptic solution. Its particular case \(\kappa_{\rm i}=0\) is also elliptic.
This solution occurs for the exponent \(\alpha=\pm\sqrt{2}\),
\[\left\{\begin{array}{l}d_{r}=0,\ g_{i}=-\frac{1}{6}\kappa_{\rm i }^{2}\,,\\ 3^{7}\,7^{6}(d_{i}M^{\prime})^{4}-2^{3}\ 3^{5}\ 7^{5}\ \kappa_{\rm i}(7d_{i}M-2g_{r}) (d_{i}M^{\prime})^{3}\\ +2\ 3^{2}\ 7^{4}\kappa_{\rm i}^{2}\left[18(7d_{i}M-g_{r})(35d_{i}M-17g_{r})-49 \kappa_{\rm i}^{4}\right](d_{i}M^{\prime})^{2}\\ -2^{3}\ 3^{4}\ ((7d_{i}M)^{2}-56g_{r}d_{i}M-2g_{r}^{2})^{3}\\ +7^{2}\kappa_{\rm i}^{4}\left[(-3(7d_{i}M)^{2}+2^{6}\ 7g_{r}d_{i}M+66g_{r}^{2}+ 49\kappa_{\rm i}^{4})^{2}\\ \qquad\qquad-7(147(d_{i}M)^{2}+28g_{r}d_{i}M+36g_{r}^{2})(441(d_{i}M)^{2}-308g_ {r}d_{i}M+24g_{r}^{2})\right]=0.\end{array}\right. \tag{26}\]
_Remark_. The restrictive assumption \(d_{r}=1/2\) made in Refs. [43][30] prevented this elliptic solution to be found earlier. The reason why it was also not detected in Ref. [53] is different: since the number of Laurent series of \(\psi\) is infinite because of the presence of an arbitrary coefficient (see e.g. [15, Eq. (21)]), the function \(\psi\) should not be used to build a subequation for \(\psi(\xi)\).
After scaling, the solution of (26) depends on a single parameter \(g_{r}/\kappa_{\rm i}^{2}\), unless \(\kappa_{\rm i}\) vanishes, in which case the subequation has the binomial type,
\[d_{r}=g_{i}=\kappa_{\rm i}=0:\ (d_{i}M^{\prime})^{4}-\frac{8}{9}\left((d_{i}M)^{ 2}-8\frac{g_{r}}{7}d_{i}M-2\left(\frac{g_{r}}{7}\right)^{2}\right)^{3}=0. \tag{27}\]
The simplest way to integrate (26) is to represent \(M\) by its Hermite decomposition in which one of the two poles is put at the origin,
\[M=\frac{3\sqrt{2}}{d_{i}}\left[\wp(\xi)-\wp(\xi-\xi_{a})+\frac{ \kappa_{\rm i}}{3}\left(\zeta(\xi)-\zeta(\xi-\xi_{a})-\zeta(\xi_{a})\right)+c_ {0}\right]. \tag{28}\]
Indeed, the coefficients of the two polar parts are known, these are those of the two Laurent series (9) for \(\alpha=\pm\sqrt{2}\). The technique [18] to determine the constants
is to identify the different Laurent series with the expansions of \(M\) near the various poles (here \(0,\xi_{a}\)). The result is
\[\left\{\begin{array}{ll} M&=\frac{3\sqrt{2}}{d_{i}}\left[\wp(\xi)-\wp(\xi-\xi_{a})+ \frac{\kappa_{\rm i}}{3}\left(\zeta(\xi)-\zeta(\xi-\xi_{a})-\zeta(\xi_{a}) \right)+\frac{2\sqrt{2}g_{r}}{21}\right]\\ &=\frac{1}{d_{i}}\left[3\sqrt{2}\wp(\xi)+\frac{4}{7}g_{r}+\frac{ \sqrt{2}}{12}\kappa_{\rm i}^{2}+\frac{27\sqrt{2}g_{r}-14\kappa_{\rm i}^{2}g_{r }-882\sqrt{2}\kappa_{\rm i}\wp^{\prime}(\xi)}{49(36\wp(\xi)+\kappa_{\rm i}^{2} )}\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad- \frac{12\kappa_{\rm i}g_{r}(\sqrt{2}g_{r}\kappa_{\rm i}+126\wp^{\prime}(\xi) )}{49(36\wp(\xi)+\kappa_{\rm i}^{2})^{2}}\right],\\ &\wp(\xi_{a})=-\left(\frac{\kappa_{\rm i}}{6}\right)^{2},\ \wp^{\prime}(\xi_{a})= \frac{\sqrt{2}}{126}\kappa_{\rm i}g_{r},\\ g_{2}=\frac{g_{r}^{2}}{49}+\frac{\kappa_{\rm i}^{4}}{108}\wp_{3}=\left(\frac{g_ {r}^{2}}{7}+\frac{\kappa_{\rm i}^{4}}{18}\right)\frac{\kappa_{\rm i}^{2}}{324},\\ g_{2}^{3}-27g_{3}^{2}=2^{-4}3^{-7}7^{-6}g_{r}^{2}(243g_{r}^{2}+98\kappa_{\rm i}^{ 4})(144g_{r}^{2}+49\kappa_{\rm i}^{4}).\end{array}\right. \tag{29}\]
The only degeneracy to a simply periodic solution occurs for \(g_{r}=0\), in which case the subequation is decomposable,
\[g_{r}=0:\ 3d_{i}^{2}(9{M^{\prime}}^{2}-12\kappa_{\rm i}MM^{\prime}+2\kappa_{ \rm i}^{2}M^{2})+\kappa_{\rm i}^{6}\pm 3\sqrt{2}d_{i}(2\kappa_{\rm i}^{3}M^{ \prime}-6d_{i}^{2}M^{3}-\kappa_{\rm i}^{4})=0, \tag{30}\]
and represents the propagating hole (A.1) for the particular values \(\alpha=\pm\sqrt{2},g_{r}=0\).
When \(g_{r}\) is nonzero, the remaining question is to determine whether the elliptic expression (29) represents a real bounded solution or not. This single complex expression depends on the (omitted) arbitrary complex origin \(\xi_{0}\) of \(\xi\), therefore it represents in fact two real solutions, one for \(\xi_{0}=0\) and one for \(\xi_{0}\) equal to the nonreal half-period (exactly like \(\tanh(\xi-\xi_{0})\) represents both the bounded front \(\tanh(\xi)\) and the unbounded real function \(\coth(\xi)\)). In the present case, none of the two real solutions is bounded on the real axis \(\xi\) (see Figures 3.4 and 3.5 in [12]), therefore the solution is physically meaningful only for its degeneracy \(g_{r}=0\) to the traveling hole.
The expression of the complex amplitude \(A\) as a product of powers of \(\sigma\) functions can be found in [12, Eq. (3.181)].
### CGL5 elliptic solution
In order to detect a CGL5 elliptic solution, the criterium of residues needs to be applied not only to \(M\) but also [15] to \(\left(M^{(k)}\right)^{j}\), \((j,k)=(0,2),(0,3),(0,4),(1,2)\), it shows that \(M\) must possess four poles and the parameters must obey four real constraints.
Only one subequation exists, for \(\alpha=\pm\sqrt{3}/2\)[14][15],
\[\left\{\begin{array}{ll} e_{r}=d_{r}=d_{i}=0,\ g_{i}=-\frac{3}{16}\kappa_{\rm i}^{2},\\ e_{i}^{2}{M^{\prime}}^{4}-2\kappa_{\rm i}e_{i}^{2}{M^{\prime}}^{3}+\frac{1}{2 }\kappa_{\rm i}^{2}(3e_{i}M^{2}-g_{r})e_{i}{M^{\prime}}^{2}-\frac{1}{3^{4}}e_{ i}{M^{2}}(3e_{i}M^{2}-4g_{r})^{3}\\ \qquad+\frac{1}{32}\kappa_{\rm i}^{4}(-9e_{i}^{2}M^{4}+6g_{r}e_{i}M^{2}+2g_{r} ^{2})+\frac{3^{4}}{2^{12}}\kappa_{\rm i}^{8}=0.\end{array}\right. \tag{31}\]
The particular case \(\kappa_{\rm i}=0\) of this solution was first found by Vernov [54], who obtained two binomial subequations of the type of Briot and Bouquet,
\[e_{r}=d_{r}=d_{i}=g_{i}=\kappa_{\rm i}=0,g_{r}\neq 0,\ \left\{\begin{array}{ll} e_{i}^{2}(3M^{\prime})^{4}-e_{i}M^{2}\left(3e_{i}M^{2}-4g_{r}\right)^{3}=0,\\ 9{\psi^{\prime}}^{2}-12\mu^{4}-g_{r}^{2}=0,\psi=\varphi^{\prime}-\kappa_{\rm r }/2.\end{array}\right. \tag{32}\]
The reason why he did not find the full subequation is his use of \(\psi\) instead of \(M\): since there exists a Laurent series of \(\psi\) with an arbitrary coefficient, the number of Laurent series of \(\psi\) is infinite and Eremenko's theorem does not apply to the third order ODE for \(\psi\).
Integrating (31) as a rational function of \(\wp\) and \(\wp^{\prime}\)[1, SS18.11], as done in [14, Eq. (46)], creates useless complications (Landen transformation, etc), so let us shortly describe the good procedure.
Let us for convenience define the notation
\[e_{1}=\frac{\kappa_{\rm i}^{2}}{48},e_{0}=\frac{g_{r}}{36}. \tag{33}\]
Assuming a Hermite decomposition with one of the four poles at the origin, another pole (\(\xi_{2}\)) is real and the two others (\(\xi_{1},\xi_{3}\)) are complex conjugate,
\[\left\{\begin{array}{l}\frac{M}{A_{0}^{2}}=\frac{\kappa_{\rm i }}{4}+\zeta(\xi,g_{2},g_{3})+\sum_{k=1}^{3}i^{k}\left(\zeta(\xi-\xi_{k},g_{2}, g_{3})+\zeta(\xi_{k},g_{2},g_{3})\right),i^{2}=-1,\\ \quad\quad=\frac{\kappa_{\rm i}}{4}-\kappa_{\rm i}\frac{\frac{3}{2}e_{1}\wp( \xi)+6e_{0}^{2}+3e_{1}^{2}}{\left(\wp(\xi)+2e_{1}\right)^{2}+12e_{0}^{2}}+ \wp^{\prime}\frac{\left[-\frac{1}{2}(\wp(\xi)+2e_{1}+2\sqrt{3}e_{0})^{2}+6 \sqrt{3}e_{1}e_{0}\right]}{\left(\wp(\xi)-e_{1}\right)\left[(\wp(\xi)+2e_{1} )^{2}+12e_{0}^{2}\right]},\\ \wp(\xi_{2},g_{2},g_{3})=e_{1},\wp^{\prime}(\xi_{2},g_{2},g_{3})=0,\\ \wp(\xi_{1},g_{2},g_{3})=-2e_{1}+2i\sqrt{3}e_{0},\wp^{\prime}(\xi_{1},g_{2},g _{3})=\left(\sqrt{3}e_{0}+i\frac{3}{2}e_{1}\right)\kappa_{\rm i},\\ \wp(\xi_{3},g_{2},g_{3})=-2e_{1}-2i\sqrt{3}e_{0},\wp^{\prime}(\xi_{3},g_{2},g _{3})=\left(\sqrt{3}e_{0}-i\frac{3}{2}e_{1}\right)\kappa_{\rm i},\\ g_{2}=-24(e_{1}^{2}+2e_{0}^{2}),\ g_{3}=4(7e_{1}^{2}+12e_{0}^{2})e_{1},\ {\wp^{ \prime}}^{2}=4(\wp-e_{1})(\wp^{2}+e_{1}\wp+7e_{1}^{2}+12e_{0}^{2}),\\ g_{2}^{3}-27g_{3}^{2}=-2^{4}\ 3^{3}(9e_{1}^{2}+16e_{0}^{2})(3e_{1}^{2}+4e_{0}^{2}) ^{2}=-\frac{(81\kappa_{\rm i}^{4}+256g_{r}^{2})(64g_{r}^{2}+27\kappa_{\rm i}^{ 4})^{2}}{2^{20}\ 3^{9}}.\end{array}\right. \tag{34}\]
Again, none of the two associated real solutions for the modulus \(M\) is bounded, therefore the solution is unphysical. Moreover, the only nonelliptic degeneracy is the rational solution defined by
\[e_{r}=d_{r}=d_{i}=g_{i}=\kappa_{\rm i}=g_{r}=0,3{M^{\prime}}^{4}-e_{i}^{2}M^{ 8}=0. \tag{35}\]
As explained in Section 5.1, the logarithmic derivative of \(Ae^{i\omega t}\) is the sum of functions with six simple poles only, made of the four poles \(0,\xi_{1},\xi_{2},\xi_{3}\) of \(M\) and of two poles \(\xi_{4},\xi_{5}\) out of the four zeroes of \(M\),
\[(\log a)^{\prime} =c_{0}+\frac{-1+i\sqrt{3}}{2}(\zeta(\xi)+\zeta(\xi-\xi_{2}))+ \frac{-1-i\sqrt{3}}{2}(\zeta(\xi-\xi_{1})+\zeta(\xi-\xi_{3}))\] \[+(\zeta(\xi-\xi_{4})+\zeta(\xi-\xi_{5})). \tag{36}\]
_Remark_. The expression (36) belongs to the class of assumptions
\[M=\text{regular}+\sum_{j=1}^{N}\mathcal{D}_{j}\log\psi(\xi-\xi_{j}), \tag{37}\]
in which \(N\) is at most equal to the number of poles inside a period parallelogram, \(\psi\) is some entire function (in the simplest cases the solution of a linear ODE with constant coefficients), and \(\mathcal{D}_{j}\) is the "singular part operator", i.e. the linear differential operator which represents the whole polar part of the Laurent series. Therefore the decomposition devised by Hermite in 1888 can be identified with an "\(N\)-family truncation method" for autonomous algebraic ODEs.
### CGL5 homoclinic defect
The most involved situation is CGL5 with at the same time four poles, a genus zero subequation (i.e. a degenerate elliptic solution) and \(\kappa_{\rm i}=0\). Indeed, among the nonlinear algebraic equations in the fixed parameters \(\kappa_{\rm i},e_{r},d_{r},d_{i},g_{r},g_{i}\) and the movable locations \(\xi_{j}\) of the poles, many of them contain the factor \(\kappa_{\rm i}\), which allows one to rapidly discard the case \(\kappa_{\rm i}\neq 0\), and the remaining nonzero equations for \(\kappa_{\rm i}=0\) are much bigger. After elimination of all variables but \(e_{r}/e_{i}\), one obtains ten values of \(e_{r}/e_{i}\), among them two complex ones (discarded) and
\[(2e_{r}-3e_{i})(1089e_{i}^{6}-81327e_{r}^{2}e_{i}^{4}+323512e_{r}^{4}e_{i}^{2}+4 56976e_{r}^{6})e_{r}=0. \tag{38}\]
These three factors yield the three solutions now presented.
For \(\alpha=3/2\pm\sqrt{3}\), there exists one four-pole subequation [13]
\[\left\{\begin{array}{l}\kappa_{\rm i}=0,\ \frac{e_{r}}{e_{i}}=\frac{3}{2},\ \frac{d_{r}}{d_{i}}=\frac{29}{15},\ \frac{g_{r}}{g_{i}}=-\frac{12}{35},\ g_{i}=\frac{7d_{i}^{2}}{12e_{i}},\\ \left(M^{\prime\,2}+e_{i}M\left(M+\frac{2d_{i}}{3e_{i}}\right)\left(M^{2}+\frac{6d_{i}}{5e_{i}}M+\frac{d_{i}^{2}}{3e_{i}^{2}}\right)\right)^{2}-\cdots=0.\end{array}\right. \tag{39}\]
### CGL5 homoclinic bound state of two dark solitons
Also at the price of five constraints among the fixed parameters,
\[\left\{\begin{array}{l}\kappa_{i}=0,\\ \dfrac{e_{r}}{e_{i}}=\lambda=\text{ one of the four real roots of }1089-81327\lambda^{2}+323512\lambda^{4}+456976\lambda^{6}=0,\\ d_{r}=d_{i}\lambda\dfrac{828038745921+7649070764998\lambda^{2}+9025535790856 \lambda^{4}}{1386644084775},\\ g_{r}=\dfrac{d_{i}^{2}}{e_{i}}\dfrac{-58513290148629717+11 13015753503375224\lambda^{2}+1243896610551884848\lambda^{4}}{178728931719095040} \\ g_{i}=\dfrac{d_{i}^{2}}{e_{i}}\lambda\dfrac{119473478956925997-1 651180178874084664\lambda^{2}-1567990451264571568\lambda^{4}}{1608560385471855 360},\end{array}\right. \tag{42}\]
there exists a subequation,
\[\begin{array}{l}\left(M^{\prime 2}+c_{6}P_{2a}(M)P_{2b}(M)\right)^{2}-c_{7}(M +c_{1})^{2}P_{2a}(M)^{3}=0,\\ P_{2a}(M)=M^{2}+c_{2}M+c_{3},P_{2b}(M)=M^{2}+c_{4}M+c_{5},\end{array} \tag{43}\]
where the \(P_{n}\)'s are polynomials of degree \(n\) and the constants \(c_{j}\) are polynomial in \(\lambda,e_{i},d_{i}\).
To each one of these four real roots \(e_{r}/e_{i}\),
\[\lambda=\pm 0.1192,\lambda=\pm 0.4300, \tag{44}\]
correspond two values \(\alpha_{1},\alpha_{2}\) of the exponent \(\alpha\) defined in (12), whose product is \(-3/4\),
\[\begin{array}{l}\lambda=\dfrac{e_{r}}{e_{i}}=\dfrac{\alpha}{2}- \dfrac{3}{8\alpha},\\ \lambda=\pm 0.1192,(\alpha_{1},\alpha_{2})=(\mp 0.7550,\pm 0.9934),\\ \lambda=\pm 0.4300,(\alpha_{1},\alpha_{2})=(\mp 0.5369,\pm 1.397),\end{array} \tag{45}\]
and one real bounded modulus,
\[M=M_{0}+\dfrac{d_{i}}{e_{i}}\dfrac{K_{1}-(K_{1}+K_{2})\cosh^{2}\dfrac{k\xi}{2} }{1-(2+D_{1})\cosh^{2}\dfrac{k\xi}{2}+(1+D_{1}+D_{0})\cosh^{4}\dfrac{k\xi}{2}}. \tag{46}\]
Its coefficients are algebraic expressions of \(\lambda=e_{r}/e_{i}\),
\[\left\{\begin{array}{l} k^{2}=\frac{d_{i}^{2}}{e_{i}}\lambda\frac{470354925826628997+15744055491100758536\lambda^{2}+16800138410952093392\lambda^{4}}{2010700481839819200},\\ M_{0}=\frac{d_{i}}{e_{i}}\frac{-344373082347+2958053216864\lambda^{2}+3382994698928\lambda^{4}}{493029007920},\\ \coth^{2}\frac{k(\xi_{B}-\xi_{A})}{2}+\coth^{2}\frac{k(\xi_{B}+\xi_{A})}{2}=2\frac{51161905779+18447053072\lambda^{2}-65496542176\lambda^{4}}{70239007575},\\ \coth^{2}\frac{k(\xi_{B}-\xi_{A})}{2}-\coth^{2}\frac{k(\xi_{B}+\xi_{A})}{2}=8i\sqrt{3}\lambda\frac{18414161641353-66747311650346\lambda^{2}-97033527316232\lambda^{4}}{25496759749725},\\ \coth^{2}\frac{k\xi_{A}}{2}+\coth^{2}\frac{k\xi_{B}}{2}\\ \coth\frac{k\xi_{A}}{2}\coth\frac{k\xi_{B}}{2}=2i\sqrt{3}\lambda\frac{16223643-41722436\lambda^{2}-37472032\lambda^{4}}{6800175},\\ D_{1}=-\coth^{2}\frac{k\xi_{A}}{2}-\coth^{2}\frac{k\xi_{B}}{2},\ D_{0}=\coth^{2}\frac{k\xi_{A}}{2}\coth^{2}\frac{k\xi_{B}}{2},\\ -\frac{K_{2}}{K_{1}}=\frac{2\alpha\coth_{B}+i\sqrt{3}\coth_{A}}{2\alpha\coth_{A}+i\sqrt{3}\coth_{B}}\coth_{A}\coth_{B},\quad\coth_{A}=\coth\frac{k\xi_{A}}{2},\ \coth_{B}=\coth\frac{k\xi_{B}}{2},\\ K_{1}=-\frac{e_{i}k(2\alpha\coth_{A}+i\sqrt{3}\coth_{B})}{2d_{i}\alpha}\sqrt{\frac{2\alpha}{e_{i}}},\end{array}\right. \tag{47}\]
whose numerical values are given in [13, Table I].
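The numeric content of (44)-(45) can be reproduced directly; the following check (ours, not part of the paper) solves the sextic of (42) for \(\lambda\) and recovers the exponents \(\alpha_{1},\alpha_{2}\) from \(\lambda=\alpha/2-3/(8\alpha)\).

```python
# Numeric check (ours) of (44)-(45): real roots lambda of the sextic in (42)
# and the corresponding exponents alpha, whose product must be -3/4.
import numpy as np

# 1089 - 81327 x + 323512 x^2 + 456976 x^3 = 0, with x = lambda^2
x_roots = np.roots([456976, 323512, -81327, 1089])
lams = sorted(float(np.sqrt(x.real)) for x in x_roots
              if abs(x.imag) < 1e-9 and x.real > 0)

for lam in lams:
    s = np.sqrt(lam**2 + 0.75)  # lambda = alpha/2 - 3/(8 alpha)  <=>  4a^2 - 8*lam*a - 3 = 0
    a1, a2 = lam - s, lam + s
    print(f"lambda = +/-{lam:.4f}  (alpha1, alpha2) = ({a1:.4f}, {a2:.4f})  product = {a1*a2:.4f}")
# prints lambda = +/-0.1192 and +/-0.4300, with product -0.7500, matching (44)-(45)
```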
These two homoclinic patterns have the shape of a double well (see [13, Figures 2 and 3]) and they move with an arbitrary velocity if \(p\) is real, otherwise they are stationary. They define two bound states of two CGL5 dark solitons, as reported in [2, Fig. 4].
The complex amplitude \(A\) is the product of powers of six sinh functions
\[\frac{A}{K_{0}}=\mbox{rhs of [13, Eq. (14)]}, \tag{48}\]
in which \(K_{0}\) is determined by the condition \(\lim_{\xi\to 0}A\bar{A}/M=1\).
### CGL5 rational solution
The subequation
\[\begin{array}{l}\kappa_{\rm i}=e_{r}=d_{r}=g_{i}=0,g_{r}=\frac{3}{128} \left(1-5\sqrt{5}\right)\frac{d_{i}^{2}}{e_{i}},\\ M^{\prime 4}-\frac{e_{i}^{2}}{3}\left[M+\left(1-\frac{5}{16}\left(1+\sqrt{5} \right)\right)\frac{d_{i}}{e_{i}}\right]^{3}\left[M+\frac{3}{16}\left(1+\sqrt {5}\right)\frac{d_{i}}{e_{i}}\right]^{5}=0,\end{array} \tag{49}\]
has two branches (one for each sign of \(\sqrt{5}\)); it is one of the binomial equations of Briot and Bouquet, and its solution is rational
\[M=\frac{d_{i}}{e_{i}}\left[-\frac{3(1+\sqrt{5})}{16}-\frac{768(2+\sqrt{5})}{(d _{i}(\xi-\xi_{0}))^{4}/e_{i}^{2}-3(8(3+\sqrt{5}))^{2}}\right]. \tag{50}\]
The limit \(d_{i}=0\), \(M=A_{0}^{2}/\xi\) is recovered after the translation \(\xi_{0}=2\sqrt{3}(1+\sqrt{5})/(A_{0}^{2}d_{i})\).
The amplitude \(A\) is the product of powers of six affine functions of \(\xi\),
\[\begin{split} A&=K_{0}\,e^{-i\omega t+i\frac{\kappa_{r}}{2}\xi}\left((d_{i}\xi)^{2}/e_{i}-8i(1-\sqrt{5})\right)\\ &\left((d_{i}\xi)^{2}/e_{i}-8\sqrt{3}(3+\sqrt{5})\right)^{(-1+i\sqrt{3})/2}\\ &\left((d_{i}\xi)^{2}/e_{i}+8\sqrt{3}(3+\sqrt{5})\right)^{(-1-i\sqrt{3})/2},\quad K_{0}^{2}=-3d_{i}\frac{1+\sqrt{5}}{16e_{i}}.\end{split} \tag{51}\]
The squared modulus is never bounded because two poles are real, therefore the solution is unphysical.
## 6 Conclusion and perspectives
In a previous version of the method [14], for each solution we performed the independent integration of two elliptic equations, namely the ODE for \(M\) and the ODE for \((\log a)^{\prime}\), with the result that the two elliptic functions \(\wp\) involved had different invariants \(g_{2},g_{3}\) linked by a Landen transformation. The present method avoids this useless complication.
Searching for additional meromorphic traveling waves of CGL is now proven to be hopeless.
The eleven meromorphic traveling waves of CGL require between one and five constraints on the parameters of the ODE, while the local representation of \(M(\xi)\) by a Laurent series near one of its movable singularities does not require any constraint. In order to fill this gap, i.e. to build new closed-form single-valued traveling wave solutions, necessarily nonmeromorphic, able to remove at least some of these constraints, the guidelines indicated by Painlevé in his "Théorème général" [45, pages 381-382] and in his proof of the theorem of addition of Weierstrass [46, §41 p 51] would certainly be quite useful.
This could provide an analytic description of the CGL3 homoclinic traveling hole, observed in spatiotemporal intermittency [23] but never found analytically.
At the PDE level, only one closed form solution is known: the CGL3 collision of two fronts [44],
\[\text{(CGL3)}\ \left\{\begin{array}{l}A=A_{0}e^{-i\omega t}\left[\frac{k} {2}\sinh\frac{k}{2}x\right]\left[\cosh\frac{k}{2}x+e^{-(3/2)\gamma t}\right]^{ -1+i\alpha},\\ c=0,p_{r}=0,k^{2}=-\frac{2\gamma}{p_{i}},\omega=-\frac{3\gamma}{2},\end{array}\right. \tag{52}\]
which involves only one double pole of \(M\). It would be worth extending the present method to PDEs and looking for solutions \(M(x,t)\) with two poles (CGL3) or at most four poles (CGL5).
## Acknowledgements
Constructive remarks of the referees greatly helped us to improve the manuscript.
The authors are pleased to thank Joceline Lega for sharing her expertise. The first author thanks the Department of mathematics and the Institute of mathematical research of HKU, and the Institute for Advanced Study of SZU for support and hospitality. The first two authors thank CIRM, Marseille (grant no. 2311) and IHES, Bures-sur-Yvette for their hospitality. The third author was partially supported by the RGC grant no. 17307420. The first and last authors were
partially supported by the National Natural Science Foundation of China (grant no. 11701382). The last author was partially supported by Guangdong Basic and Applied Basic Research Foundation, China (grant no. 2021A1515010054).
The authors have no competing interests to declare.
**List of all meromorphic solutions**
The five solutions obtained by the present method are listed in section 4. The six others are recalled in this Appendix for completeness.
## Appendix A CGL3 source or propagating hole, pulse, front
These three solutions (heteroclinic source or propagating hole [5][37, Fig. 5], homoclinic pulse [27], and heteroclinic front [44]) possess a subequation with only one double pole, for the respective parameter values,
\[\text{(source/hole)}\;g_{i}=\frac{2}{3\alpha}g_{r}-\frac{1+\alpha^{2}}{9\alpha^{2}}\kappa_{i}^{2}, \tag{A.1}\]
\[\text{(pulse)}\;\kappa_{i}=0,\;g_{i}=\frac{1-\alpha^{2}}{2\alpha}g_{r}, \tag{A.2}\]
\[\text{(front)}\;g_{r}=0,\ g_{i}=\frac{2}{9}\kappa_{i}^{2}. \tag{A.3}\]
The homoclinic or heteroclinic physical bounded expressions for the complex amplitudes are,
\[\frac{A}{K_{0}}=e^{i\left[\alpha\log\cosh\frac{k}{2}\xi-\omega t+\frac{\kappa_{r}}{2}\xi\right]}\times\left\{\begin{array}{l}\text{(source)}\;\left[\frac{k}{2}\tanh\frac{k}{2}\xi+(X+iY)c\right]e^{iKc\xi},\\ \text{(pulse)}\;(-ik\;\text{sech}\;kx),\ c=0,\\ \text{(front)}\;\frac{k}{2}\left[\tanh\frac{k}{2}\xi\pm 1\right]e^{iKc\xi},\end{array}\right. \tag{A.4}\]
in which the real constants \(X,Y,K\) only depend on \(p,q,\gamma\), see for instance [9].
When applied to a suitable complex variable [9], the truncation procedure [55] generates one short complex relation between the three real parameters \(\omega,c^{2},k^{2}\),
\[\frac{i\gamma-\omega}{p}=\left(\frac{c}{2p}\right)^{2}+\left\{\begin{array}{l}\text{(source)}\;(3i\alpha-2)\frac{k^{2}}{4},\\ \text{(pulse)}\;(1-i\alpha)^{2}k^{2},\\ \text{(front)}\;\frac{k^{2}}{4}.\end{array}\right. \tag{A.5}\]
## Appendix B CGL5 front, source or sink, pulse
These three solutions (a heteroclinic front [51], a homoclinic source/sink [41; 39], a homoclinic pulse [51]) share a common simplicity when they are described by the truncation [55] of the suitable complex variable already mentioned [9].
Under the following constraints and values of \(k^{2},\tau_{b}\),
\[\text{(front)}\ \left\{\begin{array}{l}g_{r}=-\dfrac{(\kappa_{ \text{i}}+2A_{0}^{2}d_{r}+4\tau_{1})(\kappa_{\text{i}}+2\tau_{1})}{4\alpha}, \tau_{1}=-\dfrac{\kappa_{\text{i}}}{4}+A_{0}^{2}\dfrac{2\alpha d_{i}-d_{r}}{2(1 +4\alpha^{2})},k^{2}=4\tau_{1}^{2},\\ g_{i}=\dfrac{(\kappa_{\text{i}}+2A_{0}^{2}d_{r}+4(1-2\alpha^{2})\tau_{1})^{2}}{( 4\alpha)^{2}}+\tau_{1}(2A_{0}^{2}d_{r}+(3-4\alpha^{2})\tau_{1}),\end{array}\right.\] (B.1)
\[\text{(source)}\ \left\{\begin{array}{l}\kappa_{\text{i}}=0,\dfrac{g_{i}} {(3+2\alpha^{2})d_{i}+5\alpha d_{r}}=\dfrac{g_{r}}{12\alpha(d_{i}+2\alpha d_{ r})}=\dfrac{A_{0}^{4}[(1-2\alpha^{2})d_{i}+3\alpha d_{r}]}{4\alpha^{2}(1+4 \alpha^{2})^{2}},\\ k^{2}=2\dfrac{g_{r}}{\alpha}-4g_{i},\coth\dfrac{k\xi_{b}}{2}=\tau_{b}=A_{0}^{2 }\dfrac{(3-2\alpha^{2})d_{i}+7\alpha d_{r}}{2\alpha(1+4\alpha^{2})},\end{array}\right.\] (B.2)
\[\text{(pulse)}\ \left\{\begin{array}{l}\kappa_{\text{i}}=0,\ d_{r}= \dfrac{(2\alpha^{2}-1)}{3\alpha}d_{i},\ g_{i}=\dfrac{(1-4\alpha^{2})}{4\alpha} g_{r},\\ k^{2}=-\dfrac{g_{r}}{\alpha},\coth\dfrac{k\xi_{b}}{2}=\tau_{b},k\left(\dfrac{k}{2 \tau_{b}}+\dfrac{2\tau_{b}}{k}\right)=\dfrac{2A_{0}^{2}d_{i}}{3\alpha},\end{array}\right.\] (B.3)
the bounded, physical expressions (heteroclinic front, homoclinic source and pulse) of the complex amplitudes are,
\[\dfrac{A}{A_{0}}=e^{i\left[-\omega t+\dfrac{\kappa_{\text{r}}}{2}\xi\right]}\times\left\{\begin{array}{l}\text{(front)}\ \left(\dfrac{k}{2}\left(\tanh\dfrac{k\xi}{2}+1\right)\right)^{1/2}e^{i\left[\alpha\log\cosh\dfrac{k\xi}{2}+\kappa\xi\right]},\\ \text{(source)}\ \left(\dfrac{k\sinh kb}{\cosh k\xi-\cosh kb}+k\tanh\dfrac{kb}{2}\right)^{1/2}e^{i\left[\alpha\log(\cosh k\xi-\cosh kb)\right]},\\ \text{(pulse)}\ \left(\dfrac{k\sinh(kb)}{\cosh k\xi-\cosh kb}\right)^{1/2}e^{i\left[\alpha\log(\cosh k\xi-\cosh kb)\right]}.\end{array}\right.\] (B.4)
The most compact expressions characterizing the parameters of these solutions are obtained in complex notation [10], these are: the definition of \(\alpha\) and \(A_{0}^{2}\),
\[r=-\dfrac{p}{A_{0}^{4}}\left(-\dfrac{1}{2}+i\alpha\right)\left(-\dfrac{3}{2}+ i\alpha\right),\] (B.5)
and two additional relations,
\[\text{(front)}\ q=\dfrac{2ip}{A_{0}^{2}}\left(-\dfrac{1}{2}+i\alpha\right) \left[\kappa+\dfrac{\kappa_{\text{r}}}{2}-\dfrac{c}{2p}+\dfrac{k}{4}(2\alpha+ 3i)\right],\ \dfrac{i\gamma-\omega}{p}=\left(\dfrac{c}{2p}\right)^{2}-\left(\kappa+ \dfrac{\kappa_{\text{r}}}{2}-\dfrac{c}{2p}+\dfrac{k}{4}(2\alpha+i)\right)^{2},\] (B.6)
\[\text{(source)}\ q=\dfrac{2kp}{A_{0}^{2}\sinh kb}\left(-\dfrac{1}{2}+i\alpha \right)\left(2-i\alpha-\cosh kb\right),\ \dfrac{i\gamma-\omega}{p}=\left(\dfrac{c}{2p}\right)^{2}+\left(\dfrac{k}{2} \right)^{2}+\frac{3}{2}\left(-\dfrac{1}{2}+i\alpha\right)\left(\dfrac{k}{\cosh \dfrac{kb}{2}}\right)^{2}.\] (B.7)
\[\text{(pulse)}\ q=-\dfrac{p}{A_{0}^{2}}\left(-\dfrac{1}{2}+i\alpha\right) \left(-1+i\alpha\right)\,2k\coth(kb),\ \dfrac{i\gamma-\omega}{p}=\left(\dfrac{c}{2p}\right)^{2}+\left(-\dfrac{1}{2}+ i\alpha\right)^{2}k^{2}.\] (B.8) |
2310.11584 | BasahaCorpus: An Expanded Linguistic Resource for Readability Assessment
in Central Philippine Languages | Current research on automatic readability assessment (ARA) has focused on
improving the performance of models in high-resource languages such as English.
In this work, we introduce and release BasahaCorpus as part of an initiative
aimed at expanding available corpora and baseline models for readability
assessment in lower resource languages in the Philippines. We compiled a corpus
of short fictional narratives written in Hiligaynon, Minasbate, Karay-a, and
Rinconada -- languages belonging to the Central Philippine family tree subgroup
-- to train ARA models using surface-level, syllable-pattern, and n-gram
overlap features. We also propose a new hierarchical cross-lingual modeling
approach that takes advantage of a language's placement in the family tree to
increase the amount of available training data. Our study yields encouraging
results that support previous work showcasing the efficacy of cross-lingual
models in low-resource settings, as well as similarities in highly informative
linguistic features for mutually intelligible languages. | Joseph Marvin Imperial, Ekaterina Kochmar | 2023-10-17T21:05:20Z | http://arxiv.org/abs/2310.11584v1 | BasahaCorpus: An Expanded Linguistic Resource for Readability Assessment in Central Philippine Languages
###### Abstract
Current research on automatic readability assessment (ARA) has focused on improving the performance of models in high-resource languages such as English. In this work, we introduce and release BasahaCorpus as part of an initiative aimed at expanding available corpora and baseline models for readability assessment in lower resource languages in the Philippines. We compiled a corpus of short fictional narratives written in Hiligaynon, Minasbate, Karaya, and Rinconada--languages belonging to the Central Philippine family tree subgroup--to train ARA models using surface-level, syllable-pattern, and n-gram overlap features. We also propose a new _hierarchical cross-lingual modeling_ approach that takes advantage of a language's placement in the family tree to increase the amount of available training data. Our study yields encouraging results that support previous work showcasing the efficacy of cross-lingual models in low-resource settings, as well as similarities in highly informative linguistic features for mutually intelligible languages.1
Footnote 1: [https://github.com/imperialite/BasahaCorpus-HierarchicalCrosslingualARA](https://github.com/imperialite/BasahaCorpus-HierarchicalCrosslingualARA)
## 1 Introduction
To ensure optimal reading comprehension, literary resources such as books need to be assigned to readers based on their reading level assessment (Carrell, 1987). Readability assessment can be tackled with a variety of methods ranging from the application of rule-based approaches using such widely accepted formulas as Flesch-Kincaid (Kincaid et al., 1975), to the use of software for linguistic feature extraction such as Coh-Metrix (Graesser et al., 2004) or LFTK (Lee et al., 2021), to the application of extensive machine learning models (Vajjala and Meurers, 2012; Xia et al., 2016). Recently, the latter have been the focus of research on ARA due to the availability of increasingly complex models, including deep learning architectures, that yield better text representations and outperform simpler (e.g., formula-based) approaches (Vajjala, 2022). These are, however, mostly practical if trained on high-resource data like English.
Beyond the limelight on high-resource languages like English, languages that do require extensive research efforts and initiatives are commonly considered _low-resource_. One such group of languages comprises over \(170\) living languages spoken in the Philippines, and one of the three concluding recommendations in the final report of the five-year (2013-2018) USAID Basa Pilipinas ("_Read Philippines_") Program2 targeting the im-
Figure 1: The central subgroup of the Philippine language family tree highlighting the origins of the target languages Hiligaynon, Minasbate, Karay-a, and Rinconada (all underlined and marked with *). Tagalog, Bikol, and Cebuano are also underlined as they are part of further experiments. The complete visualization of this language tree can be found in Zorc (1976).
provement of literacy in the Philippines was the _"provision of curriculum-based teaching and learning materials (TLMs) and quality supplementary reading materials"_. This entails the need for a variety of age-appropriate texts for young learners in the Philippines to be accessible at home and in school. Following this, the Department of Education further encouraged the development of reading materials for the Mother Tongue-Based Multilingual Education (MTB-MLE) scheme implemented at primary levels of school education in the Philippines.3 Thus, the Philippine education sector needs access _not only_ to sufficient age-appropriate learning and reading materials _but also_ to automated tools for assessing text difficulty that can work effectively for the variety of languages covered by MTB-MLE.
Footnote 3: [https://www.deped.gov.ph/2016/10/24/mother-tongue-based-learning-makes-lessonsmore-interactive-and-easier-for-students/](https://www.deped.gov.ph/2016/10/24/mother-tongue-based-learning-makes-lessonsmore-interactive-and-easier-for-students/)
This work contributes a number of resources to the Philippine language research landscape, catering to the challenges identified above, and aligns with other recent research efforts aimed at low-resource languages such as IndoNLP (Wilie et al., 2020; Winata et al., 2023). First, to increase accessibility and encourage more research endeavors in low-resource Philippine languages, we collect and release BasahaCorpus,4 a compilation of children's narratives and short stories from the Let's Read Asia online library in four Philippine languages (Hiligaynon, Minasbate, Karay-a, and Rinconada). Second, we train a number of baseline readability assessment models using various cross-lingual setups leveraging the hierarchy in the language family tree. Lastly, we apply a simple model interpretation technique to identify how different linguistic features act as predictors of text complexity for each language.
Footnote 4: _Basaha_ generally denotes the verbal form of _"read"_ in Bisayan languages.
## 2 A Closer Look into Central Philippine Languages
The Philippines is an archipelago with a rich linguistic background, as evidenced by over \(170\) languages at the backbone of its culture.5 In particular, the central Philippine language subtree is considered the largest among other subtrees as it is composed of a number of branches, including the national language _Tagalog_ (also known as _Filipino_), the _Bikol_ languages, the _Bisayan_ languages, and the _East Mindanao_ languages (McFarland, 2004). Focusing on the languages of interest for this study, we show where _Hiligaynon_, _Minasbate_, _Karay-a_, and _Rinconada_ are situated in this tree (see Figure 1) and how they are classified within specific subgroups (Zorc, 1976). In the following sections, we integrate the hierarchical relationships of these four languages with other languages in the family tree into our experiments.
Footnote 5: [https://www.ethnologue.com/country/PH/](https://www.ethnologue.com/country/PH/)
### Data from Let's Read Asia
Let's Read Asia6 is an online library of community-translated literary materials sponsored by The Asia Foundation. Using the data collected from the website, we built BasahaCorpus by compiling short stories written in Philippine languages, which were only available in Hiligaynon, Minasbate, Karay-a, and Rinconada, thus defining our choice to work on these languages. The compiled data are distributed across the first three years of elementary education (L1, L2, and L3). We provide information on the language family, origin subgroup, linguistic vitality, availability of language resources, and whether these languages are used as a medium of instruction in classes in Table 1.
Footnote 6: [https://www.letsreadasia.org/](https://www.letsreadasia.org/)
We also compute basic statistical information about the data per language and per level, including mean word and sentence count and vocabulary size, and report it in Table 2. All languages in BasahaCorpus exclusively use Latin script, as is the case for the majority of languages in the Philippines. In terms of the distribution of resources, we obtained explicit permission from Let's Read Asia to use this data in this research with the further goal of publicly sharing the resources, which are licensed under Creative Commons BY 4.0.
## 3 Linguistic Features for Low-Resource Languages
Due to the lack of available NLP tools to extract advanced linguistic information from texts that would work for all four target languages as well as enough data to use deep learning approaches, we derive the same linguistic features as used by the previous work on ARA in other low-resource Philippine languages such as Tagalog (Imperial et al., 2019; Imperial and Ong, 2020, 2021a) and Cebuano (Imperial et al., 2022). However, we note that this study is the first to explore baseline
modeling of text complexity in the four selected languages - Hiligaynon, Minasbate, Karay-a, and Rinconada. We list all extracted features below.
**Traditional Features**. We extract \(7\) frequency-based features using count-based text characteristics. Despite being often considered simplistic, these predictors have consistently proven to be effective in text readability detection (Pitler and Nenkova, 2008; Imperial and Ong, 2021). As in the previous work on a Cebuano dataset Imperial et al. (2022), our extracted text-level features include the _total number of unique words, total number of words per document, average word length per document, average number of syllables (per word), total sentence count per document, average sentence length per document_, and the total number of polysyllable words.
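A minimal sketch of these count-based predictors is given below. It is our illustration rather than the authors' released code; in particular, approximating syllable counts by vowel counts and the three-syllable threshold for polysyllabic words are assumptions, although they are reasonable for the near-phonemic Latin orthographies involved.

```python
import re

VOWELS = set("aeiou")

def traditional_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z-]+", text.lower())
    syllables = [sum(ch in VOWELS for ch in w) for w in words]  # vowel-count proxy
    return {
        "unique_words": len(set(words)),
        "word_count": len(words),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "avg_syll_per_word": sum(syllables) / max(len(words), 1),
        "sentence_count": len(sentences),
        "avg_sent_len": len(words) / max(len(sentences), 1),
        "polysyll_count": sum(s >= 3 for s in syllables),  # 3+ syllables (assumed cutoff)
    }
```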
**Syllable-pattern Orthographic Features**. We also extract \(11\) orthographic predictors taking into account all possible syllable patterns in the Philippine language space. These consonant (\(c\)) and vowel (\(v\)) patterns include _v, cv, vc, cvc, vcc, cccvc, cccvcc_. We also extract _consonant clusters_ and measure the average length of consonant groups in a word without an intervening vowel, which have been shown to influence reading difficulty (Chard and Osborn, 1999).
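In the same spirit, the syllable-pattern predictors can be sketched by mapping each word to a consonant/vowel (c/v) profile; the exact matching rules (for instance, whether overlapping occurrences are counted) are our assumptions.

```python
import re

PATTERNS = ("v", "cv", "vc", "cvc", "vcc", "cccvc", "cccvcc")  # patterns listed above

def cv_profile(word: str) -> str:
    # Map each letter to "v" (vowel) or "c" (consonant).
    return "".join("v" if ch in "aeiou" else "c" for ch in word.lower() if ch.isalpha())

def syllable_pattern_features(words):
    profiles = [cv_profile(w) for w in words]
    feats = {p: sum(prof.count(p) for prof in profiles) for p in PATTERNS}
    # Consonant clusters: runs of 2+ consonants with no intervening vowel.
    clusters = [len(c) for prof in profiles for c in re.findall(r"c{2,}", prof)]
    feats["avg_consonant_cluster_len"] = sum(clusters) / max(len(clusters), 1)
    return feats
```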
**Cross-lingual N-gram Overlap**. A new Cross-NGO feature has recently been introduced by Imperial and Kochmar (2023), which exploits mutual intelligibility or the degree of language relatedness via n-gram overlap between languages from the same family tree. Since the study reported a significant boost in performance for readability assessment in low-resource languages using this feature, we see a clear practical application for it in this work. We estimate the _bigram_ and _trigram overlap_ between the four target central Philippine languages (Hiligaynon, Minasbate, Karay-a, and Rinconada), as well as the three parent languages (Tagalog, Bikol, and Cebuano) for a total of \(14\) new features. As an additional reference, we add two
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Language** & **Family** & **Indigenous** & **Vitality** & **Instruction** &
\begin{tabular}{l} **Digital Language** \\ **Support** \\ \end{tabular} \\ \hline Hiligaynon (HIL) & Central & No & Institutional & Yes & Ascending \\ Minasbate (MSB) & Central & No & Institutional & As a subject & Still \\ Kinaray-a (KRJ) & West & Stable indigenous & Institutional & Yes & Emerging \\ Rinconada (BTO) & Inland Bikol & Stable indigenous & Stable & As a subject & Still \\ \hline \hline \end{tabular}
\end{table}
Table 1: Description of the language data for Hiligaynon, Minasbate, Karay-a, and Rinconada. We also include important information from Ethnologue about each language’s classification and status, including origin subgroup (identified as indigenous or not), linguistic vitality, whether a language is used as a medium of instruction or as a subject, and the availability of online digital resources (for terminology, please refer to the Ethnologue page).
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline
**Language** & **Total** & **Level** & \begin{tabular}{l} **Document** \\ **Count** \\ \end{tabular} & \begin{tabular}{l} **Mean Word** \\ **Count** \\ \end{tabular} & \begin{tabular}{l} **Mean Sent** \\ **Count** \\ \end{tabular} &
\begin{tabular}{l} **Vocabulary** \\ \end{tabular} \\ \hline \multirow{3}{*}{Hiligaynon (HIL)} & \multirow{3}{*}{133} & L1 & 65 & 198.8 & 20.7 & 2043 \\ & & L2 & 22 & 296.5 & 39.1 & 1539 \\ & & L3 & 46 & 610.0 & 57.3 & 4137 \\ \hline \multirow{3}{*}{Minasbate (MSB)} & \multirow{3}{*}{271} & L1 & 124 & 240.4 & 28.7 & 3836 \\ & & L2 & 77 & 360.5 & 43.0 & 4097 \\ & & L3 & 70 & 578.6 & 62.5 & 5520 \\ \hline \multirow{3}{*}{Karay-a (KRJ)} & \multirow{3}{*}{177} & L1 & 61 & 191.0 & 21.4 & 1937 \\ & & L2 & 34 & 410.9 & 47.7 & 2309 \\ \cline{1-1} & & L3 & 82 & 569.1 & 60.3 & 6264 \\ \hline \multirow{3}{*}{Rinconada (BTO)} & \multirow{3}{*}{195} & L1 & 117 & 261.0 & 30.0 & 4222 \\ & & L2 & 36 & 521.7 & 59.5 & 3313 \\ \cline{1-1} & & L3 & 42 & 505.2 & 55.1 & 3958 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overview and basic statistics of the data included in BasahaCorpus per language and per grade level with the total number of documents. The vocabulary denotes the total count of unique words per class and per language. An increasing linear relationship can be observed for both mean word count and mean sentence count per document as the level increases.
tables quantifying the similarities via cross-lingual n-gram overlap of the \(7\) Philippine languages used in this study in Tables 7 and 6 of Appendix A.
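A sketch (ours) of how such an overlap feature can be computed from raw corpora is given below; whether the original uses a Jaccard-style ratio or simple containment over the top-ranked n-grams is an implementation detail we assume here.

```python
from collections import Counter

def top_ngrams(corpus: str, n: int, frac: float = 0.25) -> set:
    # Character n-grams ranked by frequency; keep the top `frac` (top 25% by default).
    grams = Counter(corpus[i:i + n] for i in range(len(corpus) - n + 1))
    ranked = [g for g, _ in grams.most_common()]
    return set(ranked[:max(1, int(frac * len(ranked)))])

def ngram_overlap(corpus_a: str, corpus_b: str, n: int) -> float:
    a, b = top_ngrams(corpus_a, n), top_ngrams(corpus_b, n)
    return len(a & b) / max(len(a | b), 1)  # Jaccard-style ratio (assumed)
```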
## 4 Hierarchical Cross-lingual Modeling
Cross-lingual modeling for automatic readability assessment typically involves extracting language-independent features or experimenting with a classifier trained on the data in one language and applied to comparable data in another language (Madrazo Azpiazu and Pera, 2020; Weiss et al., 2021; Lee and Vajjala, 2022; Imperial and Kochmar, 2023). In this study, we introduce a new approach to training readability assessment models using close relationships between languages in the family tree, which we refer to as **hierarchy-based cross-lingual modeling**. For this setup, our cross-lingual training process is divided into iterations involving different combinations of language data in the training split with respect to the hierarchy the languages occupy in the Central Philippine language subtree, as illustrated in Figure 1. We list all such feature combinations below:
**Monolingual (L).** This setup uses the standard train-test split involving one language only.
**Lang + Parent Lang (L+P).** This setup combines the parent language or _lingua franca_ with respect to their subgroup. For Hiligaynon, Minasbate, and Karay-a, belonging to the Bisayan languages in the Visayas region, we add the Cebuano language, while for Rinconada, classified as a member of the Bikol languages belonging to the Luzon region, we add the Central Bikol language both from Imperial and Kochmar (2023).
**Lang + National Lang (L+N).** This setup combines the four target languages with Tagalog in the training data extracted from Imperial and Kochmar (2023), Imperial and Ong (2020) and Imperial et al. (2019). Tagalog is the recognized national language taught as a subject in all elementary and secondary schools in the Philippines.
**Lang + Parent Lang + National Lang (L+P+N).** This setup combines each target language with its corresponding parent language and Tagalog in the training data.
**All Langs (*L).** This setup combines all seven languages used in the previous iterations (Hiligaynon, Minasbate, Karay-a, Rinconada, Bikol, Cebuano, and Tagalog) in the training data.
We note that the Tagalog, Bikol, and Cebuano data from Imperial and Kochmar (2023) that we use for this study also follows the same category distribution (L1, L2, and L3) as our four languages from Table 2. For all model training in this study, we use Random Forest with default hyperparameters from Scikit-Learn (Pedregosa et al., 2011), as the efficacy of this algorithm was reported in previous studies on ARA in the Philippine languages (Imperial et al., 2022; Imperial and Ong, 2021). We perform stratified \(5\)-fold sampling so that all three classes are properly represented. Hyper-parameter values of the Random Forest model can also be found in Table 5 of Appendix A.
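The setups above can be summarized in a short sketch (ours, with assumed data layout and language codes); test folds are drawn from the target language only, while the auxiliary languages of each setup are added to the training side.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

# Parent (lingua franca) of each target language's subgroup; codes are assumptions.
PARENT = {"HIL": "CEB", "MSB": "CEB", "KRJ": "CEB", "BTO": "BCL"}

def aux_langs(lang, setup, all_langs):
    # Languages whose data is *added* to the training split under each setup.
    return {"L": [],
            "L+P": [PARENT[lang]],
            "L+N": ["TGL"],
            "L+P+N": [PARENT[lang], "TGL"],
            "*L": [l for l in all_langs if l != lang]}[setup]

def evaluate(lang, data, setup):
    """data maps a language code to (X, y); test folds come from `lang` only."""
    X_t, y_t = data[lang]
    aux = aux_langs(lang, setup, list(data))
    X_aux = np.vstack([data[l][0] for l in aux]) if aux else np.empty((0, X_t.shape[1]))
    y_aux = np.concatenate([data[l][1] for l in aux]) if aux else np.empty((0,), dtype=y_t.dtype)
    scores = []
    for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X_t, y_t):
        clf = RandomForestClassifier(random_state=0)  # default hyper-parameters
        clf.fit(np.vstack([X_t[tr], X_aux]), np.concatenate([y_t[tr], y_aux]))
        scores.append(clf.score(X_t[te], y_t[te]))
    return float(np.mean(scores))
```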
## 5 Results and Discussion
Below, we summarize and discuss the most notable insights obtained from the conducted experiments.
### Using extensive multilingual training data results in generally better performance.
Table 3 shows the results of the ablation experiments using the hierarchical cross-lingual modeling setup as described in Section 4. We observe that using the all languages setup (*L) for the training data helps obtain the best accuracy performance for the three Bisayan languages (HIL, MSB and KRJ) but not for Rinconada (BTO) from the Bikol subgroup. However, the score for Rinconada with the *L setup is still higher than for the monolingual model and even the L+P+N setup. A \(t\)-test confirms that the scores obtained with the *L setup are significantly higher than those obtained with the monolingual setup L for all languages at \(\alpha\) = \(0.05\) level (\(p\) = \(0.048\)). These results support the findings of Imperial and Kochmar (2023), who showed the importance of combining data from closely related
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Language** & **L** & **L+P** & **L+N** & **L+P+N** & ***L** \\ \hline
**HIL** & 0.697 & 0.666 & 0.629 & 0.592 & **0.814** \\
**MSB** & 0.641 & 0.555 & 0.611 & 0.574 & **0.648** \\
**KRJ** & 0.618 & 0.628 & 0.657 & 0.629 & **0.685** \\
**BTO** & 0.682 & **0.846** & 0.743 & 0.714 & 0.717 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracy across the experiments described in Section 4 using various combinations of language datasets and leveraging hierarchical relations between languages.
languages (or languages within the same family tree) for performance improvements in ARA models.
### Stronger mutual intelligibility improves model performance.
Following Imperial and Kochmar (2023), we compute the overlap of the top 25% of the trigrams and bigrams in order to estimate mutual intelligibility between languages from the Bisayan and Bikol subgroups and their respective parent languages, Cebuano and Bikol. We find that Rinconada (BTO) has the highest overlap (\(0.696\) for trigrams and \(0.887\) for bigrams) with its parent language (Bikol) - a fact that explains why the best results for this language are obtained with the L+P combination. For comparison, the other three languages (Hiligaynon, Minasbate, and Karay-a) show \(n\)-gram overlaps of \(0.609\), \(0.579\), and \(0.540\) for trigrams, and \(0.863\), \(0.789\), and \(0.809\) for bigrams with their parent language, respectively. See the trigram and bigram overlap results in Tables 6 and 7 in Appendix A.
### Support for traditional features in approximating text complexity for Philippine languages.
Finally, we identify the most informative linguistic features by calculating the mean decrease in impurity over splits in the Random Forest models trained with all languages (*L) and applied to the test splits for each of the four languages. Table 4 shows that the models for all languages exhibit the same ordering of top features, all of which are count-based predictors. We believe that this finding corroborates results from previous work showing that frequency-based features such as word and sentence counts are still viable measures of text complexity for Philippine languages (Macahilig, 2014; Gutierrez, 2015).
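As an illustration (ours), the ranking in Table 4 corresponds to reading `feature_importances_` off the fitted Random Forest, which sklearn computes as the mean decrease in impurity:

```python
import numpy as np

def top_features(clf, feature_names, k=5):
    imp = clf.feature_importances_          # mean decrease in impurity per feature
    order = np.argsort(imp)[::-1][:k]
    return [(feature_names[i], round(float(imp[i]), 3)) for i in order]
```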
## 6 Related Work
In the past, cross-lingual modeling was applied to classification-based ARA both for non-related (Vajjala and Rama, 2018; Madrazo Azpiazu and Pera, 2020; Weiss et al., 2021; Lee and Vajjala, 2022; Mollanorozy et al., 2023) and highly-related languages (Imperial and Kochmar, 2023), with the latter reporting favorable results under predefined language similarity constraints such as high n-gram overlap.7 Research efforts that greatly contribute to the development of low-resource and cross-lingual readability assessment systems often focus on corpus building and the development of baseline models. Previous efforts of this kind have covered a wide array of languages, including Bangla (Islam et al., 2012; Islam and Rahman, 2014), Tibetan (Wang et al., 2019), Arabic (Saddiki et al., 2018; Al Khalil et al., 2020), Vietnamese (Doan et al., 2022), Bengali (Chakraborty et al., 2021), and Bikol (Imperial and Kochmar, 2023). As compared to the previous works, ours is the first study in readability assessment that investigates the effects of different language hierarchies in modeling text complexity for languages belonging to different subgroups of a greater family tree, with the focus on central Philippine languages.
Footnote 7: Here, we interpret _language relatedness_ as a measure of similarity of linguistic characteristics such as n-gram overlap (Imperial and Kochmar, 2023).
## 7 Summary
We introduce BasahaCorpus, a compilation of language resources that includes a collected corpus of short stories and baseline readability assessment models for four central Philippine languages. We show that, through a _hierarchical cross-lingual modeling approach_, a readability model trained with all the available Philippine language data generally performs better compared to using single-language datasets. Through model interpretation, we also provide further support for the use of frequency-based features such as word and sentence counts as effective predictors of complexity in Philippine languages. This study serves as a response to the call for more research efforts, theoretically grounded baselines, and accessible data for low-resource languages.
\begin{table}
\begin{tabular}{l l l l} \hline \hline \multicolumn{1}{c}{**HIL**} & \multicolumn{2}{c}{**MSB**} \\ \hline _word count_ & 0.096 & _word count_ & 0.111 \\ _sentence count_ & 0.079 & _sentence count_ & 0.077 \\ _polysyll count_ & 0.053 & _polysyll count_ & 0.049 \\ _avg sent len_ & 0.045 & _avg sent len_ & 0.039 \\ _tag trigram sim_ & 0.038 & _tag trigram sim_ & 0.037 \\ \hline \hline \multicolumn{1}{c}{**KRJ**} & \multicolumn{2}{c}{**BTO**} \\ \hline _word count_ & 0.102 & _word count_ & 0.115 \\ _sentence count_ & 0.074 & _sentence count_ & 0.072 \\ _polysyll count_ & 0.052 & _polysyll count_ & 0.047 \\ _avg sent len_ & 0.042 & _avg sent len_ & 0.044 \\ _tag trigram sim_ & 0.036 & _tag trigram sim_ & 0.037 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Most informative features per language identified using mean decrease in impurity scores calculated for the Random Forest models.
## Limitations
**Limited feature sets for low-resource languages.** Due to severely limited existing NLP research efforts for the Philippine languages that this work addresses, we have to resort to a small number of feature extraction methods covering surface-level characteristics, syllable patterns, and n-gram overlap that have been previously applied to the related Philippine languages such as Tagalog and Cebuano (Imperial and Ong, 2020, 2021a; Imperial et al., 2022; Imperial and Kochmar, 2023). Nevertheless, we believe that our findings are valuable as this work provides the baseline for readability assessment in Hiligaynon, Minasbate, Karay-a, and Rinconada in addition to being the first one to address these languages. We hope that future research efforts will lead to a substantial expansion in the data available for these languages, which in turn will help researchers develop and apply more advanced ARA models to these languages and benchmark them against the results reported in our paper. We consider our work the first step in the direction of addressing such challenging tasks as ARA in low-resource Philippine languages.
**Low variety of the data.** This work uses only fictional short stories in specific grade levels (L1, L2, and L3) as training data for the readability assessment models, which may be considered a limitation in the application domain. While the same features can be extracted and applied to other text forms in various domains such as news articles or poems, we do not claim that the results will generalize or apply to such other datasets.
## Ethics Statement
We foresee no serious ethical implications from this study.
## Acknowledgements
We thank the anonymous reviewers for their constructive feedback and the ACs, SACs, and PCs for their appreciation of this work. We also thank the community translators and maintainers of the online library of Let's Read Asia for keeping the digital resources in the Philippine languages freely available for everyone. JMI is supported by the UKRI Centre for Doctoral Training in Accountable, Responsible, and Transparent AI (ART-AI) [EP/S023437/1] of the University of Bath and by the Study Grant Program of National University Philippines.
|
2304.02301 | MUFIN: Improving Neural Repair Models with Back-Translation | Automated program repair is the task of automatically repairing software
bugs. A promising direction in this field is self-supervised learning, a
learning paradigm in which repair models are trained without commits
representing pairs of bug/fix. In self-supervised neural program repair, those
bug/fix pairs are generated in some ways. The main problem is to generate
interesting and diverse pairs that maximize the effectiveness of training. As a
contribution to this problem, we propose to use back-translation, a technique
coming from neural machine translation. We devise and implement MUFIN, a
back-translation training technique for program repair, with specifically
designed code critics to select high-quality training samples. Our results show
that MUFIN's back-translation loop generates valuable training samples in a
fully automated, self-supervised manner, generating more than half-a-million
pairs of bug/fix. The code critic design is key because of a fundamental
trade-off between how restrictive a critic is and how many samples are
available for optimization during back-translation. | André Silva, João F. Ferreira, He Ye, Martin Monperrus | 2023-04-05T08:49:49Z | http://arxiv.org/abs/2304.02301v1 | # MUFIN: Improving Neural Repair Models with Back-Translation
###### Abstract.
Automated program repair is the task of automatically repairing software bugs. A promising direction in this field is self-supervised learning, a learning paradigm in which repair models are trained without commits representing pairs of bug/fix. In self-supervised neural program repair, those bug/fix pairs are generated in some ways. The main problem is to generate interesting and diverse pairs that maximize the effectiveness of training. As a contribution to this problem, we propose to use back-translation, a technique coming from neural machine translation. We devise and implement MUFIN, a back-translation training technique for program repair, with specifically designed code critics to select high-quality training samples. Our results show that MUFIN's back-translation loop generates valuable training samples in a fully automated, self-supervised manner, generating more than half-a-million pairs of bug/fix. The code critic design is key because of a fundamental trade-off between how restrictive a critic is and how many samples are available for optimization during back-translation.
automated program repair, self-supervised learning, back-translation
## 1. Introduction
Over the last decades, software systems have evolved to become some of the most complex human artifacts ever. Developing them is, typically, a multi-step effort realized by teams composed of differently skilled individuals. Debugging, the activity of finding and fixing software bugs, is one of the most demanding activities of software development (Becker et al., 2017). It requires developers to analyze and understand code and error logs, code they have not written themselves, and to find an edit to the program that fixes the bug, a.k.a. a patch.
Automated Program Repair (APR) (Becker et al., 2017) is an active research domain about automatically finding patches to software bugs without requiring human developer time. APR is foreseen to save valuable person-hours, reduce the costs of developing and maintaining software systems, and decrease the number of people required in such endeavors, ultimately leading to an ever-automated world.
Traditional APR approaches (e.g., (Becker et al., 2017; Becker et al., 2017; Becker et al.
To the best of our knowledge, we are the first to apply this concept to functional program repair.
We evaluate MUFIN on two widely-accepted benchmarks for program repair in Java: QuixBugs (QuixBugs, 2018) and Defects4J (Dafecs, 2018). Our results show that MUFIN outperforms the baseline models, correctly repairing 12 more bugs (Defects4J) and 4 more bugs (QuixBugs) than the second best-performing model. Furthermore, we study the impact of the critic design in MUFIN. Our experimental results confirm the importance of the critic choice by revealing a trade-off between critic restrictiveness and the quantity of training samples. The best performing critic is based solely on the compilation results: both more and less permissive critics achieve worse effectiveness.
To sum up, the main contributions of this paper are:
* We design and implement MUFIN: a novel self-supervised approach for functional program repair. As opposed to collecting commits for training, MUFIN generates valuable training samples.
* We conceive three families of code critics for self-supervised functional program repair. To the best of our knowledge, we are the first to study back-translation critics for program repair in Java.
* We perform a set of experiments to measure to what extent MUFIN improves over a strong baseline. Our results demonstrate the usefulness of MUFIN in improving the quality of the generated patches and in improving patch compilability.
* We make the code, datasets, and experimental results of our study, publicly available at [https://github.com/andre15silva/mufin](https://github.com/andre15silva/mufin).
## 2. The MUFIN approach
MUFIN is an original self-supervised approach for automated program repair. It is illustrated in Figure 1. The core novelty of MUFIN is the training procedure which combines three training stages, the latter two being part of a loop.
MUFIN requires an initialized neural breaker and neural fixer (_MUFIN Initialization_). The goal of the neural fixer is to generate correct code from buggy code, and that of the neural breaker is to generate buggy code from correct code. Both can come from previous work if available or can be trained from scratch, see subsection 2.1.
In the second stage, MUFIN uses a back-translation loop (_MUFIN Back-Translation_) to fine-tune both the breaker and the fixer models alternately. Back-translation involves using a model that translates from one domain to another to generate new training samples for a second model that translates in the opposite direction, and vice versa, iteratively improving both models (Levy et al., 2019).
The primary goal of MUFIN Back-Translation is to accumulate valuable training samples for both the breaker and fixer in a completely self-supervised manner. The generated training samples are stored and used collectively at each optimization step. This results in an increasing number of training samples being used for fine-tuning with each iteration, improving the performance of the models over time.
The outcome of the entire training process is a high-quality neural fixer. The automated generation of training samples is crucial in exposing the model to a diverse set of samples and thus improving its generalization ability. The core advantage of the training process is that it is entirely self-supervised and does not require the tedious and expensive collection of pairs of bug/fix as done in supervised program repair.
We now describe each component of MUFIN in detail.
### MUFIN Initialization
The MUFIN Initialization stage serves the purpose of initializing two neural models: a neural fixer and a neural breaker. Both models need to be available to bootstrap the subsequent MUFIN Back-Translation stage. They must exhibit reasonable performance, as their outputs are used as training samples during the back-translation loop.
#### Fixer Initialization
A neural fixer generates correct code from buggy code. A reasonable neural fixer can be obtained through various means: 1) from past research having produced a publicly-available, reusable, and fine-tunable fixer model, 2) through traditional supervised training with commits, or 3) through self-supervised training with mechanically-generated samples (i.e., from artificial buggy code to correct code).
#### Breaker Initialization
A neural breaker generates buggy code from correct code. While fixer models have been produced and shared in past research, breaker models are very rare (Shi et al., 2018). Thus, there is an asymmetry between the availability of breakers versus fixers, calling specific work for initializing a breaker. As a matter of fact, training a breaker from scratch is, at the moment, necessary for implementing MUFIN.
In MUFIN Initialization, the training data used for initializing the breaker is generated by a mechanical breaker: a mechanical breaker is composed of manually defined corruption rules that modify correct code such that it becomes buggy (it is not a neural network). A mutation testing tool is an example of such a mechanical breaker.
Given a mechanical-breaker, MUFIN starts with correct programs which have a passing test suite. For each correct program, the mechanical breaker generates multiple bugs by applying each corruption rule to multiple locations inside the correct sample.
Listing 1 gives an example of a bug generated by the mechanical breaker. In this example, the mechanical breaker swaps the first two parameters of the method call _serializeFields_. By modifying the program like this, the mechanical breaker modifies its behavior and introduces a bug.
### MUFIN Back-Translation
The goal of the MUFIN Back-Translation stage is to improve the initialized fixer and breaker models by iteratively using one's output to train the other. MUFIN accomplishes this by using unpaired data (i.e., correct samples which are not linked to a buggy version or buggy samples which are not linked to a fixing patch). A critic (Section 2.3) filters outputs that do not meet a quality criterion, with the intent of maximizing the quality of the samples used for training.
The process begins by either applying the initial breaker to seed correct programs or by applying the initial neural fixer to seed buggy programs. Assuming the latter, the tentative correct patches output by the fixer are then filtered according to a correct code critic: a correct code critic filters samples and keeps only the ones
it considers correct. Then, the neural breaker is fine-tuned with the self-supervised samples to translate from correct programs to buggy programs.
Given the improved neural breaker, tentative buggy patches are generated from correct programs. In turn, the tentative buggy patches are filtered according to a buggy code critic: a buggy code critic filters samples and keeps only the ones it considers buggy. Then, the neural fixer is fine-tuned with the self-supervised samples to translate from buggy programs to correct programs.
MUFIN Back-Translation can be configured to run for \(N\) iterations and to generate \(K\) tentative patches at each generation step using beam search. In each iteration, both neural models are improved with back-propagation. The data generated from one iteration is used in subsequent iterations as seed data.
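A schematic sketch (ours, not the authors' code) of this loop is given below; the `fixer` and `breaker` objects, their `generate` methods, and `fine_tune` are placeholders for the fine-tuning and beam-search generation steps described above.

```python
def back_translation(fixer, breaker, fine_tune, correct_seed, buggy_seed,
                     correct_critic, buggy_critic, n_iters=2, k=10):
    # Samples accumulate across iterations and are used collectively at each step.
    breaker_data, fixer_data = [], []
    for _ in range(n_iters):
        # Fixer outputs that pass the correct-code critic become
        # (correct -> buggy) training pairs for the breaker.
        for bug in buggy_seed:
            for fix in fixer.generate(bug, num_return_sequences=k):   # K = 10
                if correct_critic(fix):
                    breaker_data.append((fix, bug))
        breaker = fine_tune(breaker, breaker_data)
        # Breaker outputs that pass the buggy-code critic become
        # (buggy -> correct) training pairs for the fixer.
        for prog in correct_seed:
            for bug in breaker.generate(prog, num_return_sequences=1):  # K = 1
                if buggy_critic(bug):
                    fixer_data.append((bug, prog))
        fixer = fine_tune(fixer, fixer_data)
    return fixer, breaker
```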
At the end of the back-translation loop, one can throw away the breaker if we only do program repair, but the breaker may be reused in other tasks, see subsection 5.1. The most important outcome is the final neural fixer, which is subsequently used for inference.
### Critics In Back-Translation
In back-translation, the generation of high-quality training samples presents a significant challenge, as it can hamper the effectiveness of the process (K
_Compiler-based critics_. A critic may be based on compilation alone. If the compilation process fails, the program cannot be executed, and the presence of functional bugs cannot be evaluated. The behavior of each critic is as follows:
_compiler correct code critic_: Keeps all programs that compile successfully.
_compiler buggy code critic_: Keeps all programs that compile successfully. The idea is that we want the network to focus on functional bugs which compile but have failing test suites, and not on compiler bugs.
_Tests-based critics_. A critic may be based on test execution. Failing unit tests are existential proofs of the presence of a functional bug. The behavior of each critic is as follows:
_tests correct code critic_: Keeps all programs that compile and pass all tests successfully.
_tests buggy code critic_: Keeps all programs that compile successfully but have at least one failing unit test.
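The four critics just listed amount to a few lines once wrappers around the project's build and test tooling are assumed; `compiles` and `failing_tests` below are such assumed helpers, not MUFIN APIs.

```python
def make_critics(compiles, failing_tests):
    """compiles(p) -> bool; failing_tests(p) -> int (assumed build/test wrappers)."""
    return {
        # keep compilable programs only
        "compiler_correct": lambda p: compiles(p),
        # same test: discard compiler bugs so training targets functional bugs
        "compiler_buggy": lambda p: compiles(p),
        # compiles and passes the whole test suite
        "tests_correct": lambda p: compiles(p) and failing_tests(p) == 0,
        # compiles but has at least one failing unit test
        "tests_buggy": lambda p: compiles(p) and failing_tests(p) > 0,
    }
```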
### MUFIN Inference
MUFIN Inference corresponds to the application of MUFIN on real-world bugs. During this stage, MUFIN utilizes the final neural fixer to repair the buggy code. Once presented with the code to be fixed, per previous work (Friedman et al., 2017), MUFIN employs fault localization (e.g., GZoltar (Zoltar, 2017), Flacoco (Flaccoo, 2017)) to identify a list of suspicious locations. The neural fixer is then applied to these locations. For each location, MUFIN uses beam search to generate and rank a list of \(K\) patches.
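Schematically (our sketch, with placeholder components), inference is a loop over suspicious locations:

```python
def repair(fixer, program, fault_localizer, build_input, k=100):
    patches = []
    for location in fault_localizer(program):        # suspicious locations, ranked
        inp = build_input(program, location)         # representation of Section 2.6
        patches.extend(fixer.generate(inp, num_return_sequences=k))
    return patches                                   # beam scores order each list
```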
### Breaking Location Selection
In the back-translation loop, an important aspect is the selection of locations on which the breaker model will operate to generate bugs. With MUFIN handling projects consisting of multiple source code files, there is a challenge in choosing the right locations to corrupt.
A naive solution would be to select source code lines at random. However, the chosen locations may not be relevant; for example, a line outside the class declaration could be selected, introducing additional noise into the back-translation process.
To tackle this challenge, MUFIN iterates over each file of each program and uses AST-based analysis (Zoltar, 2017) to identify statements located inside code blocks. For each statement, a pair of line numbers indicating where the statement starts and ends is output. Given the identified locations, the breaker model receives their input representations and returns a number of tentatively buggy snippets of code.
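The paper does not name the AST library, so the following sketch (ours) uses tree-sitter purely for illustration; the parser setup is version-dependent, and the statement filter is deliberately rough (it misses, e.g., local variable declarations).

```python
import tree_sitter_java
from tree_sitter import Language, Parser

parser = Parser(Language(tree_sitter_java.language()))  # API of py-tree-sitter >= 0.23

def statement_spans(source: bytes):
    """Return 1-based (start_line, end_line) pairs for statements inside blocks."""
    spans = []
    def walk(node):
        inside_block = node.parent is not None and node.parent.type == "block"
        if inside_block and node.type.endswith("_statement"):  # rough statement filter
            spans.append((node.start_point[0] + 1, node.end_point[0] + 1))
        for child in node.children:
            walk(child)
    walk(parser.parse(source).root_node)
    return spans
```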
### Input Representation
The neural models are presented with a sequence of tokens. To create this input representation, we follow existing research (Friedman et al., 2017; Gori et al., 2017; Gori et al., 2017) and incorporate contextual information and fault localization information about the code snippet that requires modification. Our input representation comprises three parts that are concatenated. The first part contains the \(n\) lines (configurable) that precede the code section requiring modification. The second part is delimited by two special tokens, [START_BUGGY] and [END_BUGGY], which mark the beginning and end of the code portion to be modified. Lastly, the third part includes the lines that follow the code portion to be modified. The same representation is used for both the neural fixer and neural breaker models.
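A minimal sketch (ours) of this representation builder:

```python
def build_input(lines, start, end, n=3):
    """lines: file as a list of strings; [start, end]: 0-based region to modify."""
    before = lines[max(0, start - n):start]
    target = lines[start:end + 1]
    after = lines[end + 1:end + 1 + n]
    return " ".join(before + ["[START_BUGGY]"] + target + ["[END_BUGGY]"] + after)
```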
### Implementation
We reuse He et al's mechanical breakers used in SelfAPR SelfAPR (HuggingFace, 2017). We implement MUFIN's neural models with a state-of-the-art encoder-decoder Transformer architecture T5 (He et al., 2017), from HuggingFace. We set the model dimensions to follow t5-small, with: \(d_{model}=512\), \(d_{ff}=2048\), \(d_{kv}=64\), 8-headed attention mechanisms and 6 layers in each the encoder and decoder for a total of approximately 60M parameters. We use the PLBART (Pleaned et al., 2017) Sentence-Piece tokenizer from HuggingFace. We hold out 2% of the training data for validation. The models are optimized by AdamW (Kingmae et al., 2014; He et al., 2015), with a batch size of 16, a learning rate of 1e-4, and a gradient decay of 0.01, on a single GPU (NVIDIA RTX A4000). At each training stage, we use an early-stopping loop with validation loss.
We set \(K\) to 10 when generating correct code in MUFIN Back-Translation, following previous work (He et al., 2015), and to 1 when generating buggy code due to the number of code locations available. We set \(K\) to 100 during MUFIN Inference, a value within the range of related work (Friedman et al., 2017; He et al., 2015; He et al., 2015). We employ early stopping during patch generation, such that generation stops when all beam sequences have reached the EOS token.
## 3. Experimental Methodology
### Research Questions
* **RQ1 (Impact of Back-Translation)**: What is the impact of MUFIN's back-translation compared with a baseline model without back-translation?
* **RQ2 (Impact of Critics)**: To what extent does the critic design impact MUFIN's effectiveness?
### Dataset Construction
To evaluate MUFIN, we construct datasets for both training and testing purposes. We constrain our training and testing datasets to single-hunk bugs, due to the limitations of current neural models in repairing multi-location bugs (Krishnan et al., 2017).
For training, we construct two input datasets from the original Bears dataset (He et al., 2017): one comprising correct programs and another composed of buggy programs. Bears is a dataset of reproducible Java bugs. It contains 251 bugs collected from 72 different open-source projects. We choose Bears as its samples are accompanied by a compiler and test suite, which MUFIN's critics require.
For testing, we consider two widely adopted benchmarks: Defects4J (Friedman et al., 2017) and QuixBugs (Pleaned et al., 2017). Defects4J v2.0 contains 835 bugs collected from 17 different open-source projects. QuixBugs contains 40 programs collected from the Quixey Challenge.
We now explain in detail how each dataset is constructed.
#### 3.2.1. Training Datasets
_Correct Program Dataset_. We follow the approach of Ye et al. (Ye et al., 2017) to extract the earliest sample of each project. From each sample, we use the patched version as a correct program. This ensures that we do not over-sample from any project, given each project is represented by several samples. Programs that cannot be reproduced locally are filtered out to ensure data consistency.
From the original Bears dataset, we obtain a total of 56 correct programs. The discrepancy between the number of projects and the number of correct programs is due to failures in reproducing 16 correct programs, out of which 14 fail to compile and 2 fail to pass the tests. Overall, the dataset consists of 1,342,614 lines of code and 70,160 unit tests.
Buggy Program DatasetTo construct the buggy program dataset, we attempt to reproduce all available samples locally and remove the ones which we fail to reproduce. For each sample, we keep only the buggy version. Our final collection consists of 61 buggy programs from the Bears dataset.
#### 3.2.2. Testing Datasets
To create our testing datasets, we apply filters to the Defects4J and QuixBugs datasets, selecting only bugs that meet two criteria: (i) are reproducible, and (ii) are not included in the training datasets.
We collect a total of 428 test samples from Defects4J and 39 test samples from QuixBugs. Notably, we exclude all bugs from the _JacksonDatabind_ project to prevent data spillover. This decision is made because both Bears (used in training) and Defects4J (used in testing) contain samples collected from this project. We have taken precautions to prevent further cross-dataset contamination and are not aware of any other such instances.
### Baseline Model
MUFIN Back-Translation requires reasonable models as input: a fixer and a breaker. For our experiments, we need to control for model configuration (i.e., training dataset, input representation, architecture, hyper-parameters). To this end, we train neural fixers and breakers from scratch. This gives us a baseline model for which we can demonstrate that back-translation improves.
Our baseline models are trained with mechanically-generated samples produced by the corruption model of SelfAPR (Zhou et al., 2017) applied to the correct program dataset. The neural breaker is trained to translate correct programs to buggy programs, while the neural fixer is trained to translate buggy programs to correct programs. These models are subsequently used to initialize MUFIN Back-Translation in RQ1 (subsection 3.4) and RQ2 (subsection 3.5). For giving more perspective, we also use a baseline model trained according to related work BugLab (Zhou et al., 2017), named accordingly.
### Methodology for RQ1
In RQ1, we compare MUFIN with the baseline model described in subsection 3.3. To this end, we pick the best MUFIN model according to hyper-optimization on the configuration space, including the critic in MUFIN Back-Translation.
In order to evaluate the performance of the neural fixer models, we use the widely accepted Defects4J (Kang et al., 2017) and QuixBugs (Zhou et al., 2017) benchmarks. We compute 1) the number of plausibly repaired bugs (i.e., when the human-written tests pass) and 2) the number of correctly repaired bugs (i.e., when the generated patch is equivalent to the human-written patch, checked manually).
To eliminate biases inherited from the fault localization step, we consider perfect fault localization in our experimental setup, as done in related work (Zhou et al., 2017; Zhou et al., 2017; Zhou et al., 2017). This approach enables us to focus solely on the patch generation step of the APR pipeline and facilitates a fair comparison of performance across different approaches (Zhou et al., 2017).
### Methodology for RQ2
In RQ2, we study the extent to which the critic design impacts the overall effectiveness of MUFIN. To this end, we fine-tune the baseline model described in subsection 3.3 with MUFIN Back-Translation using three different strategies: 1) without a critic, 2) with the _complex_ critic family, and 3) with the _tests_ critic family. The critic families are described in subsection 2.3. The models are trained and evaluated following the methodology of RQ1. Additionally, we compute the percentage of compilable patches generated by each model on each test dataset to measure the syntactic quality of the generated patches.
## 4. Experimental Results
### RQ1 Results (Back-Translation)
In RQ1, we compare MUFIN's repair effectiveness with the baseline model described in subsection 3.3 and BugLab (Zhou et al., 2017). The results of the evaluation are presented in Table 1, which shows the performance of each approach on both test datasets. The table is structured as follows: the first column displays the approach used to train each model, while the second and third columns indicate the number of training samples used in each training stage, with column # 3 being the number of generated samples with self-supervision in MUFIN Back-Translation. The last two columns give the testing results on both QuixBugs and Defects4J. For each cell X/Y, X denotes the number of correctly repaired bugs (i.e., when the human-written tests pass), while Y indicates the number of plausibly repaired bugs (i.e., when the human-written tests pass).
BugLab (Zhou et al., 2017), which is optimized with 821,311 samples during the MUFIN Initialization stage, can repair 2 and 16 bugs from QuixBugs and Defects4J, respectively. Additionally, it can plausibly repair 4 and 31 bugs from QuixBugs and Defects4J, respectively.
The baseline model, which is optimized with 3,942,935 samples during the MUFIN Initialization stage, can repair 2 and 16 bugs from QuixBugs and Defects4J, respectively. Additionally, it can plausibly repair 4 and 43 bugs from QuixBugs and Defects4J, respectively.
MUFIN's golden model correctly repairs 28 bugs from Defects4J and 6 bugs from QuixBugs. MUFIN clearly outperforms the baseline model: MUFIN correctly repairs +12 Defects4J bugs and the same trend is observed in QuixBugs. Since MUFIN is based on the exact same baseline model, it means that the improvement comes from MUFIN's core contribution: the back-translation loop. By iteratively
\begin{table}
\begin{tabular}{c|c c|c c} \hline \hline Approach & \multicolumn{2}{c|}{\# Training Samples} & \multicolumn{2}{c}{Testing} \\ & Initialization & Self-Supervised & QuixBugs & D4J \\ \hline BugLab (Zhou et al., 2017) & 821,311 & - & 2/4 & 16/31 \\ Baseline Model & 3,942,935 & - & 2/4 & 16/43 \\
**MUFIN** & 3,942,935 & 197,959 & **6/7** & **28/62** \\ \hline \hline \end{tabular}
\end{table}
Table 1. MUFIN’s repair effectiveness w.r.t. state-of-the-part self-supervised functional program repair approaches. MUFIN correctly and plausibly fixes more bugs than any other approach over two benchmarks.
improving the neural fixer and breaker models using one's output to train the other, MUFIN generates valuable training samples which improve generalization over the unseen testing datasets. Similarly, better performance is observed for plausible patches: MUFIN plausibly repairs 62 bugs from Defects4J and 7 bugs from QuixBugs, which is better than the baseline.
Generalization ExampleLet us now consider a bug that only MUFIN's model can repair. For example, Listing 2 shows MUFIN's correct patch for bug Cli-5 from Defects4J. In this patch, the right-hand-side value of an assignment statement is modified from an arithmetic operation to a literal. Neither the baseline model nor BugLab are able to correctly repair this bug. This suggests that MUFIN generated valuable training samples modifying arithmetic expressions.
Patch Correctness over BeamFigure 2 plots the number of cumulatively correctly repaired bugs across the inference beam by each model on both test benchmarks. It is useful in perceiving wherein the beam the correct patches are located. As observed in both subfigures per benchmark, MUFIN identifies the correct patch earlier in the beam. This indicates that MUFIN improves patch prioritization inside the beam. This means that the underlying likelihood driving the beam better captures correctness thanks to MUFIN.
Comparison with State-of-the-Art PerformancePer our methodology, we use smaller models than related work (e.g., our models have 60M parameters vs. 220M from SelfAPR(Zhu et al., 2019)) because our goal is to precisely measure the improvement given by back-translation. We note that those experimental results do not reflect state-of-the-art repair effectiveness (Zhu et al., 2019). We believe that applying our original back-translation loop to large language models would be as beneficial as what our experiments demonstrate. However, this requires computation power that is outside of our lab capacity.
Answer to RQ1: RQ1 is based on a carefully designed protocol to measure the impact of back-translation. MUFIN's back-translation training enables the model to correctly repair +12 (Defects4J) and +4 (QuixBugs) bugs than the baseline model. Since the final model comes from the same baseline, the improvement in effectiveness is causally explained by back-translation training. Back-translation provides the models with more training samples and thus improves its generalization over the testing datasets.
### RQ2 Results (Critics)
In RQ2, we study the impact of the critic design in MUFIN. Table 2 shows the effectiveness of each critic across two datasets: QuixBugs and Defects4J. The table reads as follows. The first meta-column gives information regarding the approach and critic used to train the model, while the second meta-column gives the number of training samples used in each stage, with column # 3 being the number of generated samples with self-supervision in MUFIN Back-Translation. The third meta-column comprises the repair effectiveness results. For each cell X/Y, X denotes the number of correctly repaired bugs (i.e., when the human-written tests pass),
Figure 2. Number of cumulatively correctly repaired bugs across the beam by each model on both test benchmarks. MUFIN not only repairs more bugs in total but does so consistently across the beam.
while Y indicates the number of plausibly repaired bugs (i.e., when the human-written tests pass). The last meta-column presents the patch compilability results. Each cell X% denotes the percentage of compilable patches out of all generated patches for the given dataset.
Repair EffectivenessIn total, all MUFIN models show higher repair effectiveness than the baseline model. MUFIN models correctly repair +8 (_no critic_), +12 (critic = _compiler_), and +7 (critic = _tests_) Defects4J bugs than the baseline model, while also finding plausible patches for a higher number of bugs. The same is observed on QuixBugs, where MUFIN models correctly repair +3 (_no critic_), +4 (critic = _compiler_), and +3 (critic = _tests_) bugs than the initial model. Regardless of the critic, the fine-tuning process in MUFIN Back-Translation is useful. This is clearly evidenced that even with no critic, performance improves.
At the same time, we observe that the critic has an impact on the final repair effectiveness of the model. The most effective model is trained using the _compiler_ critic, correctly repairing a total of 28 Defects4J bugs.
Let us now look at the training sample reduction due to critics. As observed in Table 2, MUFIN with no critic generates 679,140 self-supervised samples for fine-tuning. The _compiler_ critic keeps only one-third of the data, resulting in 197,959 training samples. Finally, we see that the _tests_ critic is extremely selective, resulting in only 21,196 training samples. Since MUFIN with the _tests_ critic performs worse than with no critic, it means that the filtering is too extreme, and the resulting scarcity of self-supervised samples is ineffective. This is contrary to our initial expectations given that the _tests_ critic results in high-quality data: a bug is 100% sure a bug and a fix is most likely a fix given that we consider projects with strong test suites.
We note that despite more permissive critics such as _compiler_, nothing prevents the breaker from generating interesting bugs that are valuable for fine-tuning the models. The breaker trained with the _compiler_ critic does not simply introduce trivial bugs. For instance, consider the example of a bug introduced by MUFIN (_compiler_)'s breaker in Listing 3. In this case, the buggy patch introduces an entirely new statement composed of a method call that is not available in the input's context. This bug compiles. Even without being formally validated by a failing test case, this bug can be considered a functional bug as it introduces unintended behavior in the program.
Overall, there is a clear trade-off between critic restrictiveness (i.e., how restrictive a critic is in enforcing sample quality) and the number of training samples generated for back-translation. The _compiler_ critic is the best in that trade-off.
Patch CompilabilityThe last meta-column of Table 2 shows that all MUFIN models exhibit higher patch compilability rates than the initial baseline model. For example, both MUFIN (_compiler_) and MUFIN (_nocoritic_) produce 17% of compilable patches over all bugs and a beam size of 100. Clearly, MUFIN (_tests_) as the critic, despite improving repair effectiveness results w.r.t. to the baseline model, fails to do the same in patch compilability. In the case of QuixBugs, MUFIN (_tests_)'s patch compilability rate is even lower than the baseline.
These results further emphasize the trade-off between critic restrictiveness, as a proxy of sample quality, and the number of samples available for fine-tuning. Both MUFIN (_no critic_) and MUFIN (_compiler_) generate a higher number of samples that improve the syntactic quality of the generated patches. The same argument explains why no critic slightly outperforms in terms of compilability, because it yields 3x more training samples.
Critic Execution TimeDifferent critics require different amounts of time to filter samples. We note that using the _tests_ critic is far more computationally intensive than using the _compiler_ critic, as the former not only compiles but also executes test suites. To overcome this problem, future work might favor purely static critics that are lightweight and fast.
Answer to RQ2Our results show the relevance of critic design for the effectiveness of the final fixer model. The best critic for MUFIN is the _compiler_ critic, which filters the top-quality one-third of the generated data, resulting in a neural fixer capable of correctly repairing 28 Defects4J bugs, 12 more bugs than the non-fine-tuned baseline. We clearly observed and discussed in detail the trade-off between critic restrictiveness and the quantity of available training samples.
## 5. Discussion
### Breaker Repurposing
While the goal of this work is to repair buggy programs, bugs are a central component of several other software engineering tasks. Many approaches to such tasks require large amounts of executable bugs to either be built or executed. For example, experimental work on fault localization (Zhu et al., 2017) and bug detection (Zhu et al., 2017) require large amounts of bugs for achieving sound results.
Collecting such large amounts of executable bugs does not scale as it requires intense human effort. A neural model capable of generating bugs is, therefore, one possible solution to this issue: by applying a breaker model to several locations in a few correct projects we can obtain a very large number of executable bugs. To that extent, MUFIN's breaker model has inherent value. By design, MUFIN's breaker model is trained to generate quality bugs according to the critics.
The prolificacy of MUFIN's breaker to generate bugs is evidenced by our experimental results. With the most liberal setup, MUFIN (_no critic_), the breaker alone generates 678,540 bugs. That represents an average of approx. 12,117 bugs seeded per project. Furthermore, when using a critic that retains only those samples which successfully compile and have failing unit tests (MUFIN (_tests_), we obtain bugs which have the guarantee to be built and have at least one failing unit test. Our data is composed of 21,180 such bugs, which is an order of magnitude higher than the number of executable bugs available in manually curated datasets from the literature.
Listing 4 shows an example of a bug generated by a MUFIN breaker model. Here, the breaker model introduces a functional bug that triggers at least one failing unit test. More specifically, the breaker model closes an output stream before intended, releasing all
system resources associated with it, by introducing a _close()_ method call right before a _flush()_ method call. When the _flush()_ method call is executed, an _IOException_ is thrown due to the stream being closed.
Also, one can consider mutation testing (Zhu et al., 2017) as dependant on a breaker. To that extent, the breaker could be used as a plug-and-play component in a mutation testing infrastructure, in the spirit of related work on neural mutation (Zhu et al., 2017; Zhang et al., 2017).
For all these reasons, MUFIN's usefulness outreaches program repair, and the breaker which is initially a by-product can become a valuable asset for future research. We make our breaker models and generated bugs available to the scientific community. All bugs have the property of being buildable and runnable with at least one failing test for experiments in automated software engineering.
### Threats to Validity
We identify as internal threats to the validity of our results potential bugs in our implementation and errors in our manual patch analysis. To address these, we make our implementation and experimental artifacts publicly available.
As external threat, we identify the focus on a single programming language (Java). To mitigate external threats, we evaluate on two well-established Java program repair benchmarks. In principle, our experimental results should generalize to arbitrary programming languages and repair benchmarks.
### Future Work
In RQ2, we have studied the impact of the critic design on the effectiveness of the fixer models. However, some relevant questions remain unanswered. First, it remains unknown the extent to which the critic design depends on the effectiveness of the initial fixer/breaker models. Second, it also remains unanswered how the seed datasets in MUFIN Back-Translation impact the generalization of the fixer model. Our intuition is that the larger, but also more diverse and realistic, the seed datasets are, the higher the impact of fine-tuning with back-translation. Third, it remains to be studied if and how further iterations of MUFIN Back-Translation improve the neural fixer model, as well as when and how this process converges.
## 6. Related Work
### Automated Program Repair
_Heuristic-based_ approaches (Brock et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018) generate and validate patch candidates by first computing the search space of modifications based on the programmed heuristics, and then by running the tentatively patched program against the set of provided tests. The patch candidates that make the modified program pass all tests are considered correct.
_Constraint-based_ approaches (Chen et al., 2016; Chen et al., 2017; Chen et al., 2018) follow a different strategy. First, they identify constraints that must be met in order to repair the bug. Then, a program synthesis technique is guided by the identified constraints to generate patches.
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline Approach & \multicolumn{2}{c|}{\# Training Samples} & \multicolumn{3}{c|}{Repair} & Patch Compilability \\ & Initialization & Self-Supervised & QuixBugs & D4J & QuixBugs & D4J \\ Baseline Model & 3,942,935 & - & 2/4 & 16/43 & 17.64\% & 12.73\% \\ MUFIN (_no critic_) & 3,942,935 & 679,140 & 5/7 & 24/57 & **21.33\%** & **17.07\%** \\ MUFIN (_compiler_) & 3,942,935 & 197,959 & **6/7** & **28/62** & **21.33\%** & 17.00\% \\ MUFIN (_tests_) & 3,942,935 & 21,196 & 5/8 & 23/51 & 16.54\% & 13.32\% \\ \hline \hline \end{tabular}
\end{table}
Table 2. MUFIN’s effectiveness with three different critics across two testing datasets. Fine-tuning with MUFIN Back-Translation always improves the fixer’s effectiveness. There exists a trade-off between critic restrictiveness and effectiveness. The critic _compiler_ leads to the best results.
_Learning-based_ approaches (Levy et al., 2017; Liu et al., 2018; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019) leverage pairs of buggy and correct samples to learn models which can generate patches. Typically, deep learning techniques are employed to obtain the patch generation models. MUFIN differs from traditional supervised learning repair approaches (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019) in being self-supervised.
### Self-Supervised Learning on Code
Self-Supervised Learning is a paradigm of machine learning which consists in transforming unpaired data into paired data to train machine learning models in a supervised manner. It is a solution to the cost of obtaining supervised training data. In recent years, Self-Supervised Learning has become increasingly popular in the Natural Language Processing (NLP) world and has been used successfully in the task of learning word representations (Zhou et al., 2019; Liu et al., 2019), in pre-training language models (Zhou et al., 2019; Liu et al., 2019; Liu et al., 2019) and in pre-training code language models (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019).
Self-Supervised Learning has been little applied to coding tasks. Kommrusch et al. (Kommrusch et al., 2017) learn to prove program equivalence by self-supervision over complete and incomplete proofs in an iterative training procedure. Ni et al. (Ni et al., 2019) use a similar approach to generate and sample correct and partially-correct code solutions for math reasoning problems.
Self-Supervised Learning can be applied to APR. Loriot et al. (Loriot et al., 2019) learn to fix Checkstyle violations by training models on artificially injected violations. Yasunaga et al. (Yasunaga et al., 2019) propose a self-supervised procedure for compilation bugs based on artificial bugs. Both techniques' corruption procedures work at the character and token levels and do not aim at introducing functional bugs, a key difference from MUFIN's goal. Vasic et al. (Vasic et al., 2019) synthetically replace variable names to generate a training dataset for a joint localize and repair model for variable misuse bugs. Allamanis et al. (Allamanis et al., 2019) train a bug detection and repair model with artificially generated bugs of four different categories. Ye et al. (Yasunaga et al., 2019) propose SelfAPR: a self-supervised framework to train neural models with execution diagnostic information. The key difference between these works is that none of them use back-translation, as MUFIN does.
### Back-Translation on Code
Back-translation is a Self-Supervised Learning technique for neural machine translation that generates synthetic training data by translating a text from a source language to a target language and back to the source language (Zhou et al., 2019; Liu et al., 2019). Roziere et al. (Roziere et al., 2019, 2020) apply back-translation to the task of code translation, which is not program repair. Wang et al. (Wang et al., 2020) use back-translation in their data augmentation strategy to discover causal relations between the input source (code and comments) and corrected bugs. Their goal is to improve the prediction interpretability of APR models. Ahmad et al. (Ahmad et al., 2020) also use back-translation in the task of code translation with a target-to-NL-to-source objective instead of a target-to-source-to-target objective.
Drain et al. (Drain et al., 2020) use back-translation to train a repair model. The main differences between this work and MUFIN are that they do not use critics and do not consider complex functional bugs in Java.
Yasunaga and Liang (Yasunaga and Liang, 2019) propose Break-lt-Fix-lt (BIFI) for repairing compilation errors, augmenting it by introducing critics. The main differences between BIFI and MUFIN are the following. First and foremost, BIFI focuses on simple compilation bugs, where the patches mostly consist of single character changes, such as a missing semi-column. On the contrary, MUFIN repairs functional bugs with failing test suites, where patches change multiple tokens and change the AST structure. Our experiment is the first-ever proof of concept that back-translation can be used beyond few-character changes.
## 7. Conclusion
In this paper, we presented MUFIN, a novel self-supervised approach for automated program repair in Java. MUFIN introduces a novel back-translation loop with critics dedicated to functional bugs. To demonstrate the usefulness of MUFIN, we experiment across two widely accepted Java program repair benchmarks. Our results show that MUFIN improves the patch quality w.r.t. baseline models, both in terms of repair and patch compilability. Also, we demonstrate that not all critics are equal. The _compiler_ critic based on the compilation results, leads to the best and most consistent results, being able to provide neural networks with high-value training samples while not being too restrictive so that a good number of self-supervised training samples can be employed in the back-translation loop.
We note that self-supervised neural program repair is a relatively unexplored research direction. Yet, our results show that this paradigm has great potential for improving any model. To this end, we make our implementation and experimental artifacts publicly available to foster future research on this topic.
|
2302.11795 | Bridging Synthetic and Real Images: a Transferable and Multiple
Consistency aided Fundus Image Enhancement Framework | Deep learning based image enhancement models have largely improved the
readability of fundus images in order to decrease the uncertainty of clinical
observations and the risk of misdiagnosis. However, due to the difficulty of
acquiring paired real fundus images at different qualities, most existing
methods have to adopt synthetic image pairs as training data. The domain shift
between the synthetic and the real images inevitably hinders the generalization
of such models on clinical data. In this work, we propose an end-to-end
optimized teacher-student framework to simultaneously conduct image enhancement
and domain adaptation. The student network uses synthetic pairs for supervised
enhancement, and regularizes the enhancement model to reduce domain-shift by
enforcing teacher-student prediction consistency on the real fundus images
without relying on enhanced ground-truth. Moreover, we also propose a novel
multi-stage multi-attention guided enhancement network (MAGE-Net) as the
backbones of our teacher and student network. Our MAGE-Net utilizes multi-stage
enhancement module and retinal structure preservation module to progressively
integrate the multi-scale features and simultaneously preserve the retinal
structures for better fundus image quality enhancement. Comprehensive
experiments on both real and synthetic datasets demonstrate that our framework
outperforms the baseline approaches. Moreover, our method also benefits the
downstream clinical tasks. | Erjian Guo, Huazhu Fu, Luping Zhou, Dong Xu | 2023-02-23T06:16:15Z | http://arxiv.org/abs/2302.11795v1 | Bridging Synthetic and Real Images: a Transferable and Multiple Consistency aided Fundus Image Enhancement Framework
###### Abstract
Deep learning based image enhancement models have largely improved the readability of fundus images in order to decrease the uncertainty of clinical observations and the risk of misdiagnosis. However, due to the difficulty of acquiring paired real fundus images at different qualities, most existing methods have to adopt synthetic image pairs as training data. The domain shift between the synthetic and the real images inevitably hinders the generalization of such models on clinical data. In this work, we propose an end-to-end optimized teacher-student framework to simultaneously conduct image enhancement and domain adaptation. The student network uses synthetic pairs for supervised enhancement, and regularizes the enhancement model to reduce domain-shift by enforcing teacher-student prediction consistency on the real fundus images without relying on enhanced ground-truth. Moreover, we also propose a novel multi-stage multi-attention guided enhancement network (MAGGE-Net) as the backbones of our teacher and student network. Our MAGGE-Net utilizes multi-stage enhancement module and retinal structure preservation module to progressively integrate the multi-scale features and simultaneously preserve the retinal structures for better fundus image quality enhancement. Comprehensive experiments on both real and synthetic datasets demonstrate that our framework outperforms the baseline approaches. Moreover, our method also benefits the downstream clinical tasks.
Fundus image, teacher-student model, image enhancement
## I Introduction
Retinal images are widely used by ophthalmologists or automated image analyzing systems as a non-invasive way to detect and monitor various eye and body diseases [1], such as glaucoma, diabetic retinopathy, and hypertension. Unfortunately, a study of 5,575 patients found that about 12% of fundus images are not of adequate quality to be readable by ophthalmologists [2]. The quality of fundus images varies due to equipment limitations, ophthalmologists' experience, and patient eye movement, which could negatively affect clinical decision making. Image enhancement methods are therefore proposed as a remedy. Traditional fundus image enhancement methods [3, 4, 5, 6] were mainly based on hand-crafted priors, and they could not satisfactorily handle the complexity of varied low-quality cases. To solve this issue, the deep learning methods were proposed to learn more general priors from large amounts of paired low-quality and high-quality images [7, 8, 9, 10, 11, 12, 13, 14]. Therefore, the existing methods resort to either i) synthetic image pairs, such as synthesizing low-quality fundus images by degrading real high-quality ones [7], or ii) unpaired supervision models, such as CycleGAN-like ones [11, 15], for enhancement. However, both approaches have limitations. On one hand, due to the domain shift between the synthetic and the real fundus images, the models trained on synthetic image pairs have limited capability to generalize well to real clinical fundus images. On the other hand, the models trained with unpaired supervision mainly translate image styles and could not well preserve the local details of structures.
To bridge this gap, in this work, we propose a new end-to-end optimized method that simultaneously conducts image enhancement and domain adaptation in one-shot based on the well-known mean teacher framework [16]. By imitating self-supervised learning, mean teacher framework was proposed to be used for unsupervised domain adaptation task in [17]. The domain gap is naturally reduced by the consistency regularization in the mean teacher framework, which enforces the predictions of the teacher network and the student network to be consistent around each unlabeled (target domain) image. Mean teacher aims to learn a smoother domain-invariant function from unlabeled (target domain) images than the model purely trained on labeled (source domain) images. In this paper, we adapt the mean teacher framework to our cross-domain enhancement network through both multi-stage enhancement consistency and multi-level segmentation consistency. Specifically, our method consists of a student network and a teacher network with identical architecture, while the latter is an exponential moving average of the former. The |
2301.08794 | Robot Skill Learning Via Classical Robotics-Based Generated Datasets:
Advantages, Disadvantages, and Future Improvement | Why do we not profit from our long-existing classical robotics knowledge and
look for some alternative way for data collection? The situation ignoring all
existing methods might be such a waste. This article argues that a dataset
created using a classical robotics algorithm is a crucial part of future
development. This developed classic algorithm has a perfect domain adaptation
and generalization property, and most importantly, collecting datasets based on
them is quite easy. It is well known that current robot skill-learning
approaches perform exceptionally badly in the unseen domain, and their
performance against adversarial attacks is quite limited as long as they do not
have a very exclusive big dataset. Our experiment is the initial steps of using
a dataset created by classical robotics codes. Our experiment investigated
possible trajectory collection based on classical robotics. It addressed some
advantages and disadvantages and pointed out other future development ideas. | Batu Kaan Oezen | 2023-01-20T20:37:46Z | http://arxiv.org/abs/2301.08794v1 | Robot Skill Learning via Classical Robotics-Based Generated Datasets: Advantages, Disadvantages and Future Improvement
###### Abstract
_Why do we not profit from our long-existing classical robotics knowledge and look for some alternative way for data collection? The situation ignoring all existing methods might be such a waste. This article argues that a dataset created using a classical robotics algorithm is a crucial part of future development. This developed classic algorithm has a perfect domain adaptation and generalization property, and most importantly, collecting datasets based on them is quite easy. It is well known that current robot skill-learning approaches perform exceptionally badly in the unseen domain, and their performance against adversarial attacks is quite limited as long as they do not have a very exclusive big dataset. Our experiment is the initial steps of using a dataset created by classical robotics codes. Our experiment investigated possible trajectory collection based on classical robotics. It addressed some advantages and disadvantages and pointed out other future development ideas._
**Keywords: Robot skill learning, Imitation learning[17], Data collection strategies, MoveIt framework[6], Tiago robot[9], Ros Navigation Stack[7].**
## 1 Introduction
Current robot skill learning algorithms are either based on smaller and similar datasets or massive and diverse trajectories datasets. Each of them has its own advantages and disadvantages. For example, one basic robot skill can be learned by just overfitting on one small dataset, but this situation will cause bad generalization. Robot skill learning tasks based on similar small datasets are generally overfitted in literature, and their performance is limited because of not diverse motion variety. The extensive dataset for robot skill learning is also limited, and they do not have various environments, robots, and cases. This situation blocks possible real-life usage of learning-based robotics. Many experiments[11] show the importance of extensive datasets related to robot skill learning. But it is still an open question of the best way to collect robot skill learning datasets. Kinesthetic[5] teaching and Teleoperation[17] are trendy in literature, but their inefficiency might prohibit them from being the correct way to collect datasets. On the other hand, reinforcement learning-based[12] agents achieve amazing work, such as autonomous driving tasks. However, the convergence of reinforcement learning-based[24] agents is extremely long for robot skill learning tasks, and their real-life application might be dangerous.
Most importantly, researchers developed a lot of classical robotics algorithms over decades, and it is a big waste not using our solid robotics knowledge for dataset collection. In literature, this way of dataset collection is limited or rare. This situation brings us to using classical robotics for trajectory-based dataset collection, achieving robot skill learning tasks, and investigating our mentioned methods. During the experiment, robotics manipulation tasks are performed based on computer vision(object localization), slam(mapping and navigation), control theory, and robotics via using MoveIt[6] framework and ROS navigation stack[21]. It impresses the advantages and disadvantages of the mentioned method for robot skill learning and points the way for future robot learning development.
## 2 Related Work
Collected datasets, learning algorithms, and neural network structure might be the most important part of robot skill learning tasks. This part discusses possible learning algorithms and neural network structures.
When we come to the learning algorithm, there are two milestone learning algorithms for robot skill learning. One of them is imitation learning[17] gained a lot of attention because of its end-to-end learning success in robot skill learning, and the other one, Reinforcement learning[24], started later showing some incredible success in the area, but it currently often
overperforms imitation learning for robot skill learning. The imitation learning[17] approach performs similarly to classical deep neural network architecture. It only needs one robot manipulation task dataset and learns all possible actions using this dataset. On the other hand, some of the Reinforcement learning methods[24] are quite popular in robot skill learning. Classical reinforcement learning[24] based on designed rewards, inverse reinforcement learning[8], and Offline reinforcement learning methods[24] are some of the popular reinforcement learning algorithms[24] for acquiring abilities for the robot. A short summary of these mentioned methods can be defined as upcoming sentences. Inverse Reinforcement Learning (IRL)[8]is a method used in robotics to learn an agent's reward function from its observed behavior. It is an inverse problem of the traditional Reinforcement Learning[12] (RL) approach, where an agent learns a policy by maximizing a known reward function. In IRL [8], the goal is to infer the underlying reward function that generated a given set of expert demonstrations. Offline Reinforcement Learning[24] (ORL) is a method of training reinforcement learning[24] (RL) agents using previously collected data rather than online interactions with the environment. This approach is similar to imitation learning[17], the only difference is that instead of training deep neural networks like imitation learning[17], where the next action is predicted, there is a trained reinforcement learning[24] algorithm with collected trajectories. The experiment[24] defends that the offline reinforcement learning algorithms[24] performed better than the imitation learning[17] algorithm for robot skill learning tasks for some aspects. Handling partial observability and stochasticity, sub-optimal expert demonstration, generalization, and future improvement via online RL[24] are advantages over imitation learning[17]. However, the simplicity of imitation learning[17] made its usage more popular. It was our main reason for using imitation learning[17] in our task.
Representation learning[19] and predictive learning[19] are one of the other most crucial crucial part for robot skill learning. Representation learning[19] compresses the information coming from sensors and produces a latent and sparse representation of inputs. This produced lower dimensional representation cause the system to learn with lower computation power and fewer parameters in the predictive learning[19] part. Generally, variational convolutional encoders[23] and convolutional autoencoders[23] are the most popular mechanism. The variational encoder[23] has one advantage over classical encoders[23] because they produce generally normalized latent information, and it improves the performance of the predictive learning[19] part. Generally, LSTM[23], attention-LSTM [15], and transformers-based architectures[18] are preferred in the predictive part. The main goal of the predictive learning[19] part is developing one neural network that can guess robot state for one step later, and it can select appropriate action based on that knowledge.
Figure 1: Tiago Robot[9]
Figure 3: Example Transformer Neural network[18] based on predictive learning and representation learning
Figure 2: Example LSTM Neural network[23] based on predictive learning and representation learning
## 3 Dataset Collection
This part uses two different dataset collection methods. The first one is for the long horizon, and the second one is for the short horizon.
Our first designed system uses a point cloud-based based object localization and Movebase[21] based approaching to object. Movebase[21] is a ROS (Robot Operating System) package that provides an implementation of a navigation stack[21]. The navigation stack[21] is a collection of algorithms that are used to plan and execute safe and efficient trajectories for a robot to move from one location to another. The specific algorithms used in the Movebase[21] package may vary depending on the particular implementation, but they typically include global and local planners, a path controller, and collision avoidance mechanisms. The global planner algorithm is responsible for creating map based, while the local planner algorithm is responsible for generating detailed control commands for the robot to execute. The path controller is responsible for following the planned path, and the collision avoidance mechanisms are used to ensure that the robot does not collide with obstacles in its environment. For grasping the object, MoveIt[6] is used, which is based on Inverse Kinematics, Forward Kinematics, collision detection, and path planning and following the planned path. In figure 2, you can see the designed GUI for the robot manipulation task. This GUI is developed just for testing functions. Our object localization algorithm for point cloud is located in figure 7.
Figure 6 shows the action follow charts of our developed algorithm. Environmental mapping is completed via the navigation stack[21]; afterward, robot localization is completed using created map features. The robot starts to localize the object like in figure 7; it uses a voxel grid filter[20] to reduce the input dataset to increase computation efficiency, then remove noise features via a statistical outlier filter[10]. Lastly, the object is localized color-based segmentation[2]. The Center location of the point cloud is the object location.
Figure 4: Example attention based Neural network[15] based on predictive learning and representation learning
Figure 5: GUI for using Tiago Robot[9] Robot
Figure 6: Action flow chart
Figure 7: Object Localization Frame Work
The second dataset collection method for robot object grasping was created based on Tiago[9] pre-implemented by the producer company grasping code. This implementation uses ArUco marker[16] based object localization using OpenCV[1]/PCL[4] library and grasping using MoveIt[6] library to grasp the object. During the recording, 10 different grasping objects are recorded for both implementations. The recorded dataset consists of all Tiago[9] joints states, move base command, RGB image, and disparity image for the first described method. Another recording dataset has all [9] joint states, RGB images, and disparity images.
## 4 Skill Learning
Imitation learning[17] is selected for analyzing our collected trajectories because of its simplicity. It uses the principle of using current information and predicting the next state, and the robot decides the next state using this method in real-time application. Below part, two different robotics software are used to collect for trajectory. When the dataset is investigated, the one with Movebase[21]-MoveIt[6] has complexity and a long recording time. It is also detected that MoveIt[6] did not produce a similar grasping pattern for each grasping.
The learning part has two main part. In the first part, two different encoder-decoder[23] is trained for RGB and disparity images on the Cifar-10 dataset[13]. The second part, the architecture shown in figure 5, is trained end to end. Images and Disparities are standardized, and joint angles and move-base commands are normalized. The mean square loss function is defined as in equation 1. It uses a predictive learning[19] approach based on the current image frame, current robot state, and current disparity and predicts one step later robot state, based on this information, real-time robot application decides the next actions. The loss function calculates the MSE between next robot state and predicted next robot state.
\(\mathcal{L}(x_{t+1},\hat{x}t+1)=\frac{1}{N}\sum(x_{t+1}-\hat{x}_{t+1})^{2}\) (1)
During the learning, models have trained 10000 epochs (32 hours with Nvidia Gtx 1650 GPU/4GB Memory).
Possible transformer usage is also tested instead of using the LSTM[23] layer.
## 5 Experimental Result
The first experiment with Encoder-LSTM-Decoder[23] with long horizontal trajectories was unsuccessful. The system could not sufficiently decrease loss function, showing that the current neural network architecture cannot learn very long and diverse action patterns. Another problem related to this case is that the system has a huge output state(movebase[21] state and all joints).
Our neural network learning using a dataset not using movebase[21] with a short horizon successfully minimized the loss function. The experimental result is shown in figure 6. One problem related to default moving grasping makes different manipulating objects, such as approaching an object from its left or right side. Still, it does not produce extremely similar grasping patterns like teleoperation[17] and kinesthetic[5] teaching.
Last part of the experiment, transformer usage is investigated instead of using LSTM layers[23]. It is seen that transformers[18]' learning has a slower learning speed compared to LSTM[23]. Because of the slow learning nature of transformers[18], the experiment is stopped. The result could not be investigated.
## 6 Discussion
During the experiment, it is shown that the hardware system plays the most important role. Because of hardware system limitations, it is highly possible that our learning system for a long horizontal dataset failed.. Firstly, our current hardware did not support auto
Figure 8: Neural Network Architecture
Figure 9: Experimental Result
mated dataset production; moreover learning time of a very big dataset is extremely high for our current hardware. This situation hindered our experiment for long horizontal datasets and also big dataset collection case. In contrast to a problem faced during the first experiment, the robot could learn successfully touch the object for each training case for our limited scenario. It also shows that our system is able to learn dataset-generated classical robotics codes, and it is open to further development.
This experiment shows that robot learning tasks are generally based on similar trajectories and environments. This situation is mainly related to a limited dataset and hardware system limitations. At the same time, a not diverse dataset limits the generalistic quality of the trained neural network, and they perform overfitted tasks. This situation is undesirable for deep learning-based robotics applications in real life. Robots developed not generalistic datasets might easily fail to perform the basic task, and their domain adaptation characteristic is extremely limited.
Our experiment addressed one problem related to MoveIt[6]. Producing similar grasping patterns from MoveIt[6] is a bit complicated. Actually, having a big diverse dataset is something desirable, but in our case, limited computational power blocks our neural networks learning performance. Firstly, having a big dataset extremely extends our learning time, and diverse motions in datasets decrease our neural networks learning performance. It is highly expected that our mechanism might produce better results with strong high-performance computing system experiments showing the possibility of generating a dataset based on classical robotics algorithms. Our neural network's success on the second dataset shows the importance of classical robotics algorithms for dataset generation. Clearly, kinesthetic[5] teaching, teleoperation[17], and reinforcement learning [12] have massive efficiency problems. They are most likely not the optimal way for collecting robotics datasets.
## 7 Future Improvement
If the system will be developed with fewer computing units, even though it is undesirable to have a similar pattern, improving MoveIt[6] motion grasping and making their grasping pattern similar is a must for increasing the success of grasping actions. In this way, our system losses its generalization ability but increase its performance in the specific grasping task.
Another alternative might be using/ action-based robotics approaches[22]. The mixture of expert models[22] is one of the popular ways to develop an action-based robotics system [22]. Instead of developing end-to-end learning, developing multiple models for different actions might increase overall performance. As an example, for dataset-1, the user can collect four different action datasets for each action, such as: approaching the object, opening the Tiago[9] arm, grasping the object, and placing the object. Afterward can train four different models for each action, expertized on just one of them, and uses MoE[22] for action selection. It again may lose from generalization but increase its performance on specific tasks.
Moreover, model learning[3] might also increase overall performance. It could be a great idea to teach two neural networks, one for just controlling robot gripper position like a cartesian controller and another one just learning robots gripper trajectories based on their location. In this way, we can reduce our big unknown joint space states and motion states to a limited number. Instead of using a learning-based cartesian controller, it could be a good idea to use some classical controller systems.
Figure 10: The moment Tiago[9] touches the object
Lastly, human nature is a system that learns from its previous experience. It means that human nature learns to learn. This pattern brings the question of why not robot does not apply the same principle. Meta-learning[14] is a method for developing a system based on using train data to learn. Afterward, the user feeds task data to the model, and the system performs exceptionally well in many experiments. Our classical robotics-based dataset collection is an excellent way to collect the train data, and it can also collect vast task data. This way, the system can improve its overall performance.
Combining all mentioned mechanisms might also produce some big achievements.
## 8 Conclusion
The amount of implemented robotics code (not learning-based) is high. These codes are generally appropriate for automated dataset production in simulation environments and real life. Ignoring these advanced robot codes and focusing on kinesthetic[5] teaching-based or teleoperation[17]-based dataset collection could be more inefficient. This experiment first showed that dataset collection with classical robotics codes could contribute to the area of robot learning. It is known that some tasks are not solvable via classical robotics, but their trajectories can be recorded via teleoperations[17] and kinesthetic[5] teaching.
This experiment fingers the idea of classical robotics-based dataset collection for meta-learning[14]. Datasets created via classical robotics can be used as meta-train datasets[14]. It also addresses a lot of future development ideas based on action-based, model learning, and controller-based learning.
This experiment also emphasizes the importance of balancing dataset diversity. Due to limited computation, a big, diverse dataset generally damages the learning of robots.
|
2307.02651 | The geography of innovation dynamics | Cities and metropolitan areas are major drivers of creativity and innovation
in all possible sectors: scientific, technological, social, artistic, etc. The
critical concentration and proximity of diverse mindsets and opportunities,
supported by efficient infrastructures, enable new technologies and ideas to
emerge, thrive, and trigger further innovation. Though this pattern seems well
established, geography's role in the emergence and diffusion of new
technologies still needs to be clarified. An additional important question
concerns the identification of the innovation pathways of metropolitan areas.
Here, we explore the factors that influence the spread of technology among
metropolitan areas worldwide and how geography and political borders impact
this process. Our evidence suggests that political geography has been highly
important for the diffusion of innovation till around two decades ago, slowly
declining afterwards in favour of a more global innovation ecosystem. Further,
the visualisation of the evolution of countries and metropolitan areas in a 2d
space of competitiveness and diversification reveals the existence of two main
innovation pathways, discriminating between different strategies towards
progress. Our work provides insights for policymakers seeking to promote
economic growth and technological advancement through tailored investments in
prioritarian innovation areas. | Matteo Straccamore, Vittorio Loreto, Pietro Gravino | 2023-07-05T20:54:44Z | http://arxiv.org/abs/2307.02651v1 | # The geography of innovation dynamics
###### Abstract
Cities and metropolitan areas are major drivers of creativity and innovation in all possible sectors: scientific, technological, social, artistic, etc. The critical concentration and proximity of diverse mindsets and opportunities, supported by efficient infrastructures, enable new technologies and ideas to emerge, thrive, and trigger further innovation. Though this pattern seems well established, geography's role in the emergence and diffusion of new technologies still needs to be clarified. An additional important question concerns the identification of the innovation pathways of metropolitan areas. Here, we explore the factors that influence the spread of technology among metropolitan areas worldwide and how geography and political borders impact this process. Our evidence suggests that political geography has been highly important for the diffusion of innovation till around two decades ago, slowly declining afterwards in favour of a more global innovation ecosystem. Further, the visualisation of the evolution of countries and metropolitan areas in a 2d space of competitiveness and diversification reveals the existence of two main innovation pathways, discriminating between different strategies towards progress. Our work provides insights for policymakers seeking to promote economic growth and technological advancement through tailored investments in prioritiarian innovation areas.
## 1 Introduction
In our increasingly interconnected world, diffusion processes play a crucial role in determining the evolution of our societies. For this reason, a well-established and growing literature is focusing on studying the different instances of the phenomenon, from information diffusion in social networks [1, 2] to the spreading of diseases [3, 4, 5]. Particular attention converged on the diffusion of innovations [6, 7] and technologies [8, 9, 10]. The adoption of patent data to monitor technological innovation is well established [11, 12, 13]. For the past few decades, patent data have become a workhorse for the literature on technical change, due mainly to the growing availability of data about patent documents [14]. This ever-increasing data availability (e.g., PATSTAT, REGPAT and Google Patents [15]) has facilitated and prompted researchers worldwide to investigate various questions regarding the patenting activity. For example, on the nature of inventions, their network structure, and their role in explaining technological change [14, 16, 17]. One of the characteristics of patent documents is the presence of codes associated with the claims in patent applications. These codes mark the boundaries of the commercial exclusion rights demanded by inventors. Claims are classified based on the technological areas they impact, according to existing classifications (e.g., the IPC classification [18]), to allow the evaluation by patent offices. Mapping claims to classification codes allows localizing patents and patent applications within the technology "semantic" space [19].
In addition to the semantic space defined through technological codes, patents and innovations live in a physical space. The role that cities and metropolitan areas play in fostering creativity and innovation is well known, for instance. Thanks to a critical concentration and proximity of diverse mindsets and opportunities, urban infrastructures enable new technologies and ideas to emerge, thrive, and trigger further innovation. Still, much remains to be understood about the interplay between geography's role and the semantics of innovation processes. Technology and innovation diffusion processes take place, in fact, in a geographical layer that still needs to be studied, both from the physical and political points of view.
Cities and metropolitan areas (MAs) appear thus as the right level to investigate the role of geography in innovation processes. To date, approximately 55% of the global population lives in urban areas, which represent the core of innovation [20, 21], economy [22], science [23], and much more. According to a report by the World Bank [24], MAs generate about 80% of global GDP. They attract businesses and industries, creating jobs and driving innovation [25]; also, from an environmental perspective, MAs can be more sustainable than rural areas due to their greater efficiency in resource use and transportation [26]. For all these reasons, we focus on metropolitan areas as the smallest geographical entities, after countries and regions, essential for economic growth and development.
Many recent studies have relied on network-based techniques to unfold the complex interplay among patents, technological codes, and geographical reference areas. We decided to use the framework of bipartite networks [27], which are suitable whenever systems involve interactions between pairs of entities. For example, in ecology, interactions between two types of species can be described using bipartite networks, such as plant-pollinator networks [28] or seed-disperser networks [29]. Bipartite networks are also used in social [30], economic [31, 32], and biological [33] systems.
With the tools described above and a specific focus on metropolitan areas, this paper investigates the factors that influence the spread of technology among metropolitan areas worldwide and how geography and political borders impact this process. We reveal that the current innovation pathways can be effectively predicted if one considers a non-trivial interplay between, on the one hand, the similarity between the technological content of cities and, on the other hand, crucially, belonging to the same country. In particular, our evidence suggests that political geography has been highly important for the diffusion of innovation till around two decades ago, slowly declining afterwards in favour of a more global innovation ecosystem. To this end, we improved current similarity-based prediction algorithms, i.e., algorithms based on the principle that the more two MAs are technologically similar, the higher the probability that they will accomplish similar evolutionary technological paths. In particular, the improvement is substantial when forecasting the so-called MAs' technological "debut", i.e., the first-ever patent produced by a MA with a given technological code, for which current models cannot formulate predictions.
We further visualise the evolution of countries and metropolitan areas in a 2d space of competitiveness and diversification. To this end, we adopted the UMAP dimensionality reduction algorithm [34] to visualise the different technological paths of countries and MAs. We discover the existence of two main innovation pathways, discriminating between different strategies towards progress. For instance, "Western" countries and BRICS-like countries follow very different routes in this space, which we can define in terms of distinctive technological traits.
The paper is organised as follows. Section 2 describes the data used in this work. In Section 3, we introduce the methodologies used in our work, explaining the details of the similarity measures and testing procedures adopted. In Section 4, we present the results, discussing the relevance of political geography, i.e., belonging to the same country, for obtaining better predictive results, in particular for predicting the emergence of a brand-new technology in the portfolio of a given MA. We also display the innovation pathways of countries and MAs. Finally, in Section 5, we summarise the main results and discuss how the present work can inform future research on the questions arising from this study.
## 2 Data
### Technology Codes and Metropolitan Areas (MAs)
We adopt the PATSTAT database (www.epo.org/searching-for-patents/business/patstat) to provide information about patents and technology codes. The database contains approximately 100 million patents registered in about 100 Patent Offices. Each patent is associated with a code that uniquely identifies the patent and a certain number of associated technology codes. The WIPO (World Intellectual Property Organization) uses the IPC (International Patent Classification) standard [18] to assign technology codes to each patent. IPC codes make a hierarchical classification based on six levels called digits, which give progressively more details about the technology used. The first digit represents the macro category. For instance, the code Cxxxxx corresponds to the macro category "Chemistry; Metallurgy" and Hxxxxx to the macro category "Electricity". Considering the subsequent digits, we have, for instance, with C01xxx, the class "Inorganic Chemistry" and with C07xxx the class "Organic Chemistry".
For the metropolitan areas (MAs), we adopted a database (see next section) to match the unique patent identifier and its technology code to the corresponding MA. To geolocalise the patents, we adopted the De Rassenfosse et al. database [35] that contains entries on 18.9 million patents from 1980 to 2014. This is the first dataset about first filing applications from around the world, organised according to the location of applicants, i.e., companies or laboratories. This information helps study the geography of innovation and understand the spatial distribution of patented inventions. The geolocalisation is performed by linking the postal codes of applicant addresses to latitude and longitude and, as a result, to countries, regions, and MAs. The database contains information about the first application and assigns multiple technology codes to patents with more than one. The data is sourced from PATSTAT, WIPO, REGPAT, and the Japanese, Chinese, German, French, and British patent offices. Finally, each patent has unique identifiers, technology codes, and geographical coordinates (latitude and longitude). More information about the De Rassenfosse et al. and PATSTAT databases can be found in the Supplementary Information.
### Data Preparation
To clean the data, the first step consists of associating the technology codes of a patent with a specific MA by matching latitude and longitude information for each patent with the MA borders obtained from the Global Human Settlement Layer [36]. This way, we can select the patents within each MA's boundaries with their technology codes. Once this operation is completed, it is possible to build, year by year, the bipartite network that links MAs to technology codes. We represent the bipartite networks through bi-adjacency rectangular matrices \(\mathbf{V}^{y}\) whose elements \(V^{y}_{at}\) are integers indicating how many times a technology code \(t\) appeared in different patents in a given MA \(a\) in year \(y\).
Our network features 2865 MAs connected to 650 4-digit technology codes. We work with four digits because this resolution yields technologies that are neither too similar nor too far apart. With more digits, we would obtain trivial results: for example, the 4-digit code A01C (Planting; Sowing; Fertilising) contains the codes A01C-15 (Fertiliser distributors) and A01C-21 (Methods of fertilising). With fewer digits, we would have the opposite problem. In addition, using more digits would raise inherent problems with the PATSTAT database due to changes across database versions: over time, new codes are born and others are removed. The 4-digit choice appears to be the most stable.
Our networks are represented by a set of matrices \(\mathbf{V}^{y}\) for each year \(y\) from 1980 to 2010. Each year-\(y\) matrix element \(V_{at}^{y}\) counts how many times, in the year \(y\), the technology \(t\) appears in the MA \(a\). Finally, we binarise the matrices \(\mathbf{V}\) simply using 0 as a threshold, obtaining one binary matrix \(\mathbf{M}^{y}\) per year:
\[M_{at}^{y}=\begin{cases}1&\text{if}\quad V_{at}^{y}\neq 0\\ 0&\text{if}\quad V_{at}^{y}=0\end{cases}\]
We apply this binarisation procedure instead of standard approaches like the Revealed Comparative Advantage (RCA) [37] because we are interested in knowing which MAs adopt a given technology for the first time.
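As a minimal illustration of this step (assuming numpy and an illustrative count matrix; the variable names are ours, not the paper's):

```python
import numpy as np

# V[a, t]: how many patents of MA `a` carry 4-digit code `t` in one year.
V = np.array([[0, 3, 1],
              [2, 0, 0]])

# Binarise with threshold 0: M[a, t] = 1 iff MA `a` patented with code `t`.
M = (V > 0).astype(int)
print(M)  # [[0 1 1]
          #  [1 0 0]]
```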
## 3 Methods
### Similarity measures
By the term _Similarity_, we mean a measure of closeness between nodes in the same layer. In previous studies [38, 39, 40], the similarity in the layer of items was used to study how an element of the layer of users may evolve in the future. For example, in [31], the similarity between technologies was used to predict the future technology production of firms. In [39, 40], the similarity between products was used to predict countries' future product exportation competitiveness. We can apply the general similarity measure defined in literature [41] to our MA-technology networks as:
\[B_{tt^{\prime}}^{y}=\frac{1}{N_{1}}\sum_{a}\frac{M_{at}^{y}M_{at^{\prime}}^{ y}}{N_{2}}, \tag{1}\]
in the case of technology similarity (between items), or
\[B_{aa^{\prime}}^{y}=\frac{1}{N_{1}}\sum_{t}\frac{M_{at}^{y}M_{a^{\prime}t}^{ y}}{N_{2}}, \tag{2}\]
in the case of similarity of MAs (between users). Here \(N_{1}\) and \(N_{2}\) are two parameters through which it is possible to define several types of similarity.
The simplest type is called _co-occurrence_ [41], and it is defined by putting \(N_{1}=N_{2}=1\). Given two nodes of the same layer, this measure counts how many common neighbour nodes they have in the other layer. In our case, we measure how many MAs develop both technologies \(t\) and \(t^{\prime}\) in the same year, or how many technologies are developed by both MAs \(a\) and \(a^{\prime}\) in the same year. However, different similarity measures can be found in the literature based on the values given to \(N_{1}\) and \(N_{2}\). We denote by \(d_{a}=\sum_{t}M_{at}\) the diversification of the MA \(a\), i.e., the number of technologies developed by \(a\), and by \(u_{t}=\sum_{a}M_{at}\) the ubiquity of technology \(t\), i.e., the number of MAs active in that technology sector. Among the most widely used similarity measures are:
* Technology Space (TS). This similarity is based on the Product Space of [38] and it has \(N_{1}=\max(u_{t},u_{t^{\prime}})\) and \(N_{2}=1\) (or \(N_{1}=\max(d_{a},d_{a^{\prime}})\) and \(N_{2}=1\) in the MA layer). Using this type of normalisation, one gives a lower connection weight to those technologies done by many MAs;
* Resource Allocation (RA) [42]. This similarity is obtained with \(N_{1}=1\) and \(N_{2}=d_{a}\) (\(N_{1}=1\) and \(N_{2}=u_{t}\) for the MA layer). It is used to modulate the contributions of common neighbours with high degrees. If an MA has high diversification, RA will penalise the links between its technologies, given the triviality of those links: if the MA develops all the technologies, it is a given that each technology is linked with all the others;
* Taxonomy (TAX) [43]. For this similarity \(N_{1}=\max(u_{t},u_{t^{\prime}})\) and \(N_{2}=d_{a}\) (\(N_{1}=\max(d_{a},d_{a^{\prime}})\) and \(N_{2}=u_{t}\) for the MA layer). The Technology Space gives a higher similarity score to technologies with a low ubiquity (i.e., technologies developed by few MAs) and is consequently biased towards them. However, these complex technologies are typically developed by a small number of MAs that also develop approximately all the other technologies, so the Technology Space alone cannot justify a city's path from non-complex technologies to complex ones. By normalising also for the diversification, we avoid this problem: trivial links are penalised while complex technologies are weighted more. A code sketch of these similarity variants is given just below this list.
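The following sketch shows how Eq. (1) specialises to the four variants above for a binary MA-technology matrix; the function name and the guards against empty rows and columns are our own choices, not part of the paper. The MA-layer analogue of Eq. (2) follows by applying the same function to the transposed matrix.

```python
import numpy as np

def technology_similarity(M, kind="CO"):
    """Similarity B[t, t'] of Eq. (1) between technologies, for a binary
    MA-technology matrix M of shape (n_MAs, n_tech)."""
    d = M.sum(axis=1).astype(float)            # diversification d_a
    u = M.sum(axis=0).astype(float)            # ubiquity u_t
    # RA and TAX divide each MA's contribution by its diversification d_a.
    rows = M / np.maximum(d, 1)[:, None] if kind in ("RA", "TAX") else M
    B = rows.T @ M                              # sum over MAs
    if kind in ("TS", "TAX"):                   # divide by max(u_t, u_t')
        B = B / np.maximum(np.maximum.outer(u, u), 1)
    return B
```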
Following Hidalgo et al. [38], we define the quantities:
\[\omega_{at}^{tec}=\frac{\sum_{t^{\prime}}M_{at^{\prime}}B_{tt^{\prime}}}{\sum_{t^{\prime}}B_{tt^{\prime}}}\qquad\omega_{at}^{MA}=\frac{\sum_{a^{\prime}}M_{a^{\prime}t}B_{aa^{\prime}}}{\sum_{a^{\prime}}B_{aa^{\prime}}}. \tag{3}\]
\(\omega_{at}^{tec}\) measures how similar the technologies developed by the MA \(a\) are to the technology \(t\); it is thus high if MA \(a\) develops technologies close to the technology \(t\). \(\omega_{at}^{MA}\), instead, measures how much a given technology \(t\) is spread among MAs similar to the MA \(a\); it is thus high if technology \(t\) is spread among MAs resembling MA \(a\).
Given these definitions, we can use \(\omega_{at}^{tec}\) (\(\omega_{at}^{MA}\)) as a prediction score: the higher \(\omega_{at}^{tec}\) (\(\omega_{at}^{MA}\)), the higher the probability that the MA \(a\) will start developing the technology \(t\).
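A possible numpy implementation of the two densities of Eq. (3), assuming the similarity matrices are symmetric, as produced by measures of the kind above (the names are ours):

```python
import numpy as np

def density_scores(M, B_tec, B_ma):
    """Densities of Eq. (3): w_tec uses the technology-technology
    similarity B_tec, w_ma the MA-MA similarity B_ma; M is binary."""
    w_tec = (M @ B_tec) / np.maximum(B_tec.sum(axis=0), 1e-12)
    w_ma = (B_ma @ M) / np.maximum(B_ma.sum(axis=1), 1e-12)[:, None]
    return w_tec, w_ma
```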
### Testing Procedure
Given a matrix \(\mathbf{M}^{y}\), one of our purposes is to predict the same matrix \(\delta\) years later, \(\mathbf{M}^{y+\delta}\). The basic idea is that higher values of \(\omega_{at}^{tec}\) or \(\omega_{at}^{MA}\) will correspond to new technologies, i.e., more 1s, in \(\mathbf{M}^{y+\delta}\). To this end, we have to take two elements into account.
* Class Imbalance. We treat our problem as a classification one, i.e., we want to predict whether an MA will develop a given technology or not. Class labels, in our case, are 0s and 1s, but the number of elements equal to 1 is only approximately 5%. To treat this imbalance correctly, we adopted the Area Under the Precision-Recall Curve [44].
* Autocorrelation. By the term _autocorrelation_, we mean that if an MA does or does not develop a given technology in a specific year, with high probability it will continue its current behaviour in the future. To avoid this problem, the evaluation is performed only on activation events, i.e., events in which the technology is not developed in the year \(y\) and is developed at year \(y+\delta\). This strategy mitigates autocorrelation problems. Furthermore, it helps us study the diffusion of the technological process: we are more interested, in fact, in understanding where a new technology will be triggered than in knowing which ones will not. A sketch of this evaluation procedure follows the list.
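A sketch of the evaluation restricted to activation events; we use scikit-learn's average precision as a stand-in for the Area Under the Precision-Recall Curve, which is an implementation choice of ours:

```python
import numpy as np
from sklearn.metrics import average_precision_score

def evaluate_activations(M_y, M_future, scores):
    """Score predictions only on entries that are 0 in year y, i.e. on
    the candidate activation events, to avoid autocorrelation."""
    candidates = (M_y == 0)
    return average_precision_score(M_future[candidates],
                                   scores[candidates])
```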
## 4 Results
### Predictions
#### Geographic proximity and country diffusion
We analyse technology code diffusion timing to study the role of physical and political geography in innovation dynamics. Consider the MA where a specific technology code \(t\) first appears. We define the _Mean Time Distance_ as the average time distance between this first appearance of \(t\) and its first appearances in the other MAs. After averaging over all technologies, we aggregate this mean over different spatial distance ranges to analyse the relationship with physical geography. To account for political geography, instead, we calculate the average over the subsets of MA pairs belonging or not to the same country. In Fig. 1, we report our analysis of the Mean Time Distance.
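A minimal sketch of the Mean Time Distance computation, under our own assumptions about the inputs (a table of first-appearance years and a geodetic distance matrix); restricting the MA pairs to same-country or different-country subsets is analogous:

```python
import numpy as np

def mean_time_distance(first_year, dist_km, bins):
    """first_year: (n_tech, n_MAs), year of first appearance of each
    technology in each MA (np.nan if never); dist_km: (n_MAs, n_MAs)."""
    gaps = [[] for _ in range(len(bins) - 1)]
    for years in first_year:
        if np.all(np.isnan(years)):
            continue
        a0 = int(np.nanargmin(years))          # MA of first-ever appearance
        for a in np.flatnonzero(~np.isnan(years)):
            if a == a0:
                continue
            b = np.digitize(dist_km[a0, a], bins) - 1
            if 0 <= b < len(bins) - 1:
                gaps[b].append(years[a] - years[a0])
    return [np.mean(g) if g else np.nan for g in gaps]
```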
Two important observations are in order. First, for the overall set of MAs, the Mean Time Distance increases on average with the geographical distance, signalling an important role of geography in the diffusion of innovation. Second, the Mean Time Distance is always shorter for the subset of MA pairs belonging to the same country, and it does not show a strong dependence on the spatial distance up to the scale of \(10^{3}\) km. Beyond this scale, the dependence on the spatial distance becomes stronger but also more fluctuating (growing and then decreasing). This behaviour is probably due to the distribution of MAs' distances, which is affected by seas and oceans. In fact, up to the scale of \(10^{3}\) km, the distribution of distances (presented in the Supplementary Information) follows a power law with exponent \(\sim 2\), corresponding to an isotropic distribution in two dimensions. Beyond that scale, the seas and oceans break the isotropy assumption, making the distribution less predictable and ultimately affecting the Mean Time Distance. Also in this range, however, the MA pairs from the same country show a much lower Mean Time Distance. Therefore, we can consider political geography as predominant over physical geography in the dynamics of innovation.
#### Role of countries: an improved model
In works concerning similarity and forecasts on bipartite networks, it is common to compute the prediction using the links within the items layer (technology codes, in our case), i.e., using \(\omega_{at}^{tec}\). However, mathematically, we have seen that it is possible to calculate a similarity between the nodes of both layers, i.e., to also consider \(\omega_{at}^{MA}\). In the work of Albora et al. [45], the authors show how a mean between the two scores can outperform the standard method. They also propose a linear combination of item-based and user-based estimations, showing how this method outperforms the others. In our case, to obtain the prediction, we utilised this last method, computing a linear combination of technology and MA densities:
\[S_{at}^{y+\delta}=\alpha\omega_{at}^{tec}+\beta\omega_{at}^{MA}. \tag{4}\]
where \(S_{at}^{y+\delta}\) is the forecast for the year \(y+\delta\). If we consider MAs with no patents in year \(y\), regardless of the similarities used, the predictions obtained from \(\omega_{at}^{tec}\) and \(\omega_{at}^{MA}\) will always be zero by construction. This outcome is due to the presence of only 0s in the rows of the \(\mathbf{M}\) matrices related to those MAs. Given the relevance of belonging to a country unveiled by our previous results, we included that information to predict when a given MA will start patenting a specific technology for the first time. To this end, we define:
\[\omega_{at}^{C}=\sum_{a^{\prime}}M_{a^{\prime}t}^{y}\frac{C_{aa^{\prime}}}{\sum_{a^{\prime\prime}}C_{aa^{\prime\prime}}}, \tag{5}\]
where \(C_{aa^{\prime}}=1\) if \(a\) and \(a^{\prime}\) belong to the same country and 0 otherwise, and \(\sum_{a^{\prime\prime}}C_{aa^{\prime\prime}}\) is the number of MAs in the same country as \(a\), inserted to avoid size effects. \(\omega_{at}^{C}\) represents the average adoption of technology \(t\) by the MAs of a specific country. As explained in the Methods section, the higher the value of \(\omega_{at}^{C}\), the higher the probability that \(M_{at}^{y+\delta}=1\).
Our prediction model is thus a linear combination of the three previous contributions: technology similarity, MA similarity and information on belonging to the same country:
\[S_{at}^{y+\delta}=\alpha\omega_{at}^{tec}+\beta\omega_{at}^{MA}+(1-\alpha-\beta)\omega_{at}^{C}. \tag{6}\]
Also in this case, the higher the value of \(S_{at}^{y+\delta}\), the higher the probability of having \(M_{at}^{y+\delta}=1\). Because of the autocorrelation problem explained in the Methods section, we evaluate our predictions on the so-called _activation_ elements, i.e., the matrix elements with \(M_{at}^{y}=0\) that could become 1 at \(y+\delta\).
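The country term of Eq. (5) and the combined score of Eq. (6) admit a compact sketch (assuming numpy arrays; the names are ours):

```python
import numpy as np

def country_term(M, country):
    """w_C of Eq. (5): for each MA, the average technology row of the
    MAs in its country; `country` holds one label per MA."""
    C = (country[:, None] == country[None, :]).astype(float)
    return (C @ M) / C.sum(axis=1, keepdims=True)

def combined_score(w_tec, w_ma, w_c, alpha, beta):
    """Linear combination of Eq. (6)."""
    return alpha * w_tec + beta * w_ma + (1 - alpha - beta) * w_c
```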
In Fig. 2, we compare the predictions for \(\delta=10\) of the four similarity metrics defined above. We also compare our model (continuous curves) with the classic models, i.e., models using the item-item similarity \(\omega_{at}^{tec}\) (dotted lines). We can see how our model's curves outperform all the dotted ones. In the Supplementary Information, we also report the analysis done using \(\delta=1\) and \(\delta=5\).
If we consider MAs with no technologies in \(y\), both \(\omega_{at}^{tec}\) and \(\omega_{at}^{MA}\) are 0 by definition. In this case, the predictions of our model are due only to \(\omega_{at}^{C}\), which represents the influence of countries.
Figure 1: Mean Time Distance (see the text for the definition) aggregated over different spatial distance ranges. The error for each bin is determined by calculating the mean standard deviation. Due to the significant number of points per bin, the error is often not visually discernible. The blue curve corresponds to the aggregation of all MAs. We observe the overall increase of the Mean Time Distance, signalling an important role of geographical distances. Second, we split the set of MAs into two subsets of pairs of MAs belonging (orange curve) or not (green curve) to the same country. The second important observation is that belonging to the same country greatly reduces diffusion times.
In this specific case, we compared our results (Model) against a null model (Rand) and a model based on the spatial distance (Dist) to validate our findings. The null model prediction for each MA is a uniform redistribution of the predicted technologies over the whole vector of technological codes. If, for a given MA, we predict \((0,0,1,0)\), the null model would predict \((0.25,0.25,0.25,0.25)\). The spatial distance model, instead, uses geodetic distances between MAs as similarities. In Tab. 1, we compare, for different values of \(\delta\), the models' performances on technological debuts of MAs by summing the areas under the curves over all years. Our model, informed on country membership, is the most successful in estimating the future technologies of an MA with an empty technology portfolio.
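For completeness, the Rand baseline described above amounts to a uniform redistribution of each row of the prediction (a sketch under our naming):

```python
import numpy as np

def null_model(pred):
    """Rand baseline: e.g. (0,0,1,0) -> (0.25,0.25,0.25,0.25)."""
    pred = np.asarray(pred, dtype=float)
    return np.tile(pred.mean(axis=-1, keepdims=True), pred.shape[-1])
```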
### Model analysis
In this section, we analyse the behaviour of the best parameters \(\alpha\) and \(\beta\) over the years. For each metric, we show in Fig. 3**a** the optimal values of \(\alpha\) and \(\beta\) over the years considering \(\delta=10\). In the Supplementary Information, we report the same analysis for \(\delta=1\) and \(\delta=5\). In this figure, we can see a common trend: both \(\alpha\) and \(\beta\) tend to stay constant till the end of the 90s.
| | \(\delta=1\) | \(\delta=5\) | \(\delta=10\) |
| --- | --- | --- | --- |
| Model | 0.075 | 0.094 | 0.138 |
| Dist | 0.052 | 0.079 | 0.102 |
| Rand | 0.017 | 0.032 | 0.076 |
Table 1: **Models comparison**. In the table, we compare, for different values of \(\delta\), the areas under the curves of the predictions made on the MAs with zero technologies, using the information of belonging to the same country (Model), geographic distances (Dist), and the random case (Rand). Same-country membership is the information that most successfully estimates the future technologies of an MA with an empty technology portfolio.
Figure 2: **Performances of prediction models.** Continuous curves represent the prediction scores of our improved model (Eq. 6) for the four similarity metrics defined in the text: TS, RA, TAX and CO. For comparison, dotted curves report the same prediction scores for the classical model based on the item-item similarity \(\omega_{at}^{tec}\). Our improved model outperforms the classic approaches. Error ranges are obtained using a 5-fold cross-validation to select the best parameter values \(\alpha\) and \(\beta\) out-of-sample.
After that, their values tend to increase for all four similarity metrics. This analysis is confirmed by the descending behaviour, in Fig. 3**b**, of the term \(1-\alpha-\beta\), representing the importance of belonging to a country. These pieces of evidence suggest that political geography was highly important for the diffusion of innovation until around two decades ago. After that, the evidence indicates that the overall ecosystem of MAs became more global and based more on the similarities between technologies and MAs. In the early years of the data, the country term \(1-\alpha-\beta\) gives a positive contribution, but around the end of the 90s it tends to decrease and even becomes negative. We interpret this result as a change in the dynamics of innovation, in which the similarity between technologies and MAs starts to become more important than the country itself. This is likely because, instead of following national trends, many MAs may have begun to imitate MAs in other countries.
### The paths to innovation
In this last section, we focus on innovation paths, i.e., the paths followed by countries and metropolitan areas towards technological innovation. Though diversification is a good proxy for progress towards innovation, we need another metric to represent similarities between the countries' development strategies. We define, in particular, a metric that quantifies how competitive a country \(c\) is in a specific technology code \(t\) in year \(y\) relative to other countries, based on the number of MAs in \(c\) that patent with that technology code. Similarly, we can quantify how competitive an MA \(a\) is compared to other MAs. For each country, we define the following:
\[G_{ct}^{y}=\frac{C_{ct}/C_{c}}{C_{wt}/C_{w}}, \tag{7}\]
\(C_{ct}\) counts how many MAs in the country \(c\) develop the technology \(t\), and \(C_{c}\) is the number of MAs in the country \(c\). \(C_{wt}\) counts how many MAs in the entire database patent with the technology code \(t\), and \(C_{w}\) is the total number of MAs. Therefore, \(G_{ct}^{y}\) measures the fraction of MAs in \(c\) that develop the technology \(t\) compared to the entire world for the year \(y\). We denote by \(\tilde{G}_{c}^{y}\) the average of \(G_{ct}^{y}\) over all technologies \(t\), which summarises the competitive position of the country \(c\) for the year \(y\). Similarly, for each MA, we define the following:
\[G_{at}^{y}=\frac{M_{a\in c,t}}{C_{ct}/C_{c}}. \tag{8}\]
and, similarly, \(\tilde{G}_{a}^{y}\) is the average of \(G_{at}^{y}\) over all technologies \(t\), and it represents the competitive position of the MA \(a\) for the year \(y\). For every year, \(G_{ct}^{y}\) and \(G_{at}^{y}\) are vectors with 650 entries, corresponding to the total number of technologies. Using UMAP, we reduced their dimensionality to one and defined the similarity embedding. We found that this embedding is strongly anti-correlated with the moduli of \(G_{at}\) and \(G_{ct}\) (see the Supplementary Information for further details). This evidence implies that the lower the similarity embedding, the higher the competitiveness of countries or MAs.
Figure 3: **Analysis of model optimal parameters with \(\delta=10\).** **a**: Optimal \(\alpha\) and \(\beta\) over the years for different similarity metrics. We can see how both started to increase around 2000. **b**: The contribution of country information over the years, estimated as \(1-\alpha-\beta\). The contribution of country information is positive in the early years, but around the late 90s it tends to decrease and even becomes negative.
We can thus use the similarity embedding as a reverse measure of competitiveness and plot the time evolution of each country and each MA in a two-dimensional scatter plot determined by the two quantities: similarity embedding (a reverse proxy for competitiveness) and diversification. We report the results in Fig. 4 for countries and in Fig. 5 for metropolitan areas. Each point in the two plots is a country/year or MA/year pair.
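A sketch of Eq. (7) and of the 1D embedding step, assuming the umap-learn package and our own variable names:

```python
import numpy as np

def country_competitiveness(M, country):
    """G_ct of Eq. (7): a country's share of MAs active in technology t,
    normalised by the world-wide share C_wt / C_w."""
    labels = np.unique(country)
    C_ct = np.stack([M[country == c].sum(axis=0) for c in labels])
    C_c = np.array([(country == c).sum() for c in labels], dtype=float)
    world = M.sum(axis=0) / M.shape[0]          # C_wt / C_w
    return (C_ct / C_c[:, None]) / np.maximum(world, 1e-12)

# 1D similarity embedding of the stacked country/year vectors:
# import umap                                   # umap-learn package
# embedding = umap.UMAP(n_components=1).fit_transform(G_stacked)
```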
We have highlighted the paths over time followed by a selection of countries and MAs. Two typical patterns emerge, which we denote as the "upper" path and the "lower" path. This pattern is particularly evident for countries. A country or MA that moves from left to right increases its diversification but not its competitiveness in the technologies it develops. Instead, movements from the upper part to the bottom are associated with growth in terms of competitiveness at fixed diversification. The main difference between the two typical paths is the order of these movements. In the "upper" path, we first observe an increasing diversification and then an increase in competitiveness. In the "lower" path, the opposite occurs: first an increase in competitiveness, followed by a diversification increase. We coloured with different shades of the same colour the evolution of some countries belonging to the two typical paths.
Finally, to highlight the technology difference between the "upper" and the "lower" paths of both figures, we divided the diversification into ranges of size 100 (except the last one). For each range, we focus on the highest and lowest 25th percentiles and aggregate the technologies to the 1st digit, representing the general technological category. We compare the technological categories present in the two sets to highlight the most distinctive ones, i.e., those with the greatest difference in rank based on their frequency in the subsets.
Figure 4: **Country’s 1D Similarity embedding vs. diversification.** Each point represents a country in a given year. For some countries, we plotted the trajectory over time. We can see how countries follow different routes towards a point of accumulation where the most developed countries sit. In the lower part, we find the typical path of Western countries, and we report, for example, France, Canada, New Zealand and Israel. To highlight the technology difference between the “upper” and the “lower” paths, we divided the diversification into ranges of size 100 (except the last one). We focus on each range’s highest and lowest 25th percentiles, aggregate the technologies to the 1st digit, and identify the most distinctive of the two subsets. The corresponding icons are reported on the top and bottom of each diversification range. The “upper” part is dominated mainly by BRICS countries: Russia, India, China and Brazil. In technology code terms, we can highlight the differences between the two extreme paths: the “upper” part dominates mostly in manufacturing technologies such as Textiles and Paper. The leftmost part, i.e., the least diverse, particularly dominates in technologies devoted to Human necessities. The “lower” part dominates in more sophisticated technologies such as Electricity, Fixed construction and Mechanical engineering.
For instance, if a technological category \(X\) is the most common in the top 25% set and the least common in the bottom 25% set, \(X\) will be considered as distinctive of the top set, while if it had been the most common in both sets, it would not have been considered distinctive. See the Supplementary Information for more details.
In Fig. 5, we show the results for MAs. Unlike countries, we do not observe a point of accumulation among MAs. We observe how some MAs get closer to others, such as Moscow to Milan, Seoul to Tokyo, or Shanghai to New York. From a technological point of view, the results are consistent with those for countries. The upper part is dominated by manufacturing technologies, while in the lower part a dominance of Electricity technologies is more evident.
## 5 Discussion
This study provides valuable insights into technology diffusion among MAs worldwide and how geography impacts this process. Comparing with geographic proximity, we find that belonging to the same country is highly relevant in determining the likelihood of technology diffusion between metropolitan areas. Results indicate that, at equal geographical distances, technology diffusion occurs more readily across metropolitan areas belonging to the same country.
We develop a predictive model for future technology production of MAs that considers similarities between technologies and metropolitan areas and adds the contribution related to belonging to the same country. This last term allows for predictions even for metropolitan areas with empty technology portfolios. Our model outperforms traditional algorithms, particularly when one focuses on the case of technological debuts, i.e., when a metropolitan area starts developing a technology for the first time.
The study of the forecasts and the models' parameters highlights the increasing importance of similarities between technologies and metropolitan areas as years pass. In particular, around the end of the 90s, belonging to a country lost its significance as a predictor of innovation paths in favour of the similarity among technologies and metropolitan areas. This finding suggests a change in the dynamics of innovation. To get a deeper insight into this phenomenology, we represented the temporal paths of MAs and countries in the technological space of innovations. This space comprises two dimensions,
Figure 5: **MA’s 1D Similarity embedding vs. diversification.** Each point represents a MA in a given year. To highlight the technology difference between the “upper” and the “lower” paths in each diversification range, we follow the same procedure as in Fig. 4. The technology differences show that the lower path dominates in Electricity technologies, while the upper path dominates in Chemistry, Textiles and Paper technologies. We see how some MAs tend to chase others (Seoul vs Tokyo, and Moscow vs Milan), though, unlike the countries’ case, no single accumulation point emerges.
corresponding to technological competitiveness and the diversification of countries and metropolitan areas. We singled out two main paths, one followed by most Western countries and the other by the BRICS ones.
In Fig. 4 the presence of a main growth path (with countries such as New Zealand, Israel, France, etc.) is evident. The upper part is instead dominated mainly by the BRICS: Russia, India, China, Brazil and South Africa. We can highlight the differences between the two paths in technology code terms: the upper part dominates mostly in manufacturing technologies, such as Textiles and Paper. The leftmost part, i.e., the least diverse, particularly dominates in Human necessities technologies. The lower part dominates in more sophisticated technologies such as Electricity, Fixed construction and Mechanical engineering.
The model developed in this study can predict technology diffusion transparently and understandably, unlike the "black box" predictive models present in the literature. These features allow for informed decision-making regarding investment and innovation. From this perspective, our scheme could be a valuable tool for policymakers to guide investment decisions and prioritise innovation areas.
On a scientific level, this study opens the door to future work and questions. First, starting from the model presented in this work, which is focused on activations, i.e., first occurrences of a given technology, one could generalise to also predict "shutdowns", i.e., when a technological category is no longer patented. Furthermore, model simulations can be used to build green and sustainable pathways and highlight them at the level of MAs, regions, countries or companies. Finally, the relationship between forecasts and macroeconomic variables such as GDP can be explored to improve our understanding of innovation and economic dynamics.
Before concluding, it is essential to understand the limitations of the model. The exclusive use of patents as a proxy for innovation [46] represents one of the constraints: inventions do not represent all forms of knowledge production in the economy, nor do patents cover all generated knowledge [47]. Moreover, it has been argued that a disadvantage of using patents is that it is difficult to estimate their value [48]. Second, remote working and dispersed research teams can mitigate the concentration of innovation in urban areas [49, 50, 51, 52], and future studies in this direction should take this phenomenon into account.
|
2303.01877 | Quantum Merlin-Arthur proof systems for synthesizing quantum states | Complexity theory typically focuses on the difficulty of solving
computational problems using classical inputs and outputs, even with a quantum
computer. In the quantum world, it is natural to apply a different notion of
complexity, namely the complexity of synthesizing quantum states. We
investigate a state-synthesizing counterpart of the class NP, referred to as
stateQMA, which is concerned with preparing certain quantum states through a
polynomial-time quantum verifier with the aid of a single quantum message from
an all-powerful but untrusted prover. This is a subclass of the class stateQIP
recently introduced by Rosenthal and Yuen (ITCS 2022), which permits
polynomially many interactions between the prover and the verifier. Our main
result consists of error reduction of this class and its variants with an
exponentially small gap or a bounded space, as well as how this class relates
to other fundamental state synthesizing classes, i.e., states generated by
uniform polynomial-time quantum circuits (stateBQP) and space-uniform
polynomial-space quantum circuits (statePSPACE). Furthermore, we establish that
the family of UQMA witnesses, considered as one of the most natural candidates,
is in stateQMA. Additionally, we demonstrate that stateQCMA achieves perfect
completeness. | Hugo Delavenne, François Le Gall, Yupan Liu, Masayuki Miyamoto | 2023-03-03T12:14:07Z | http://arxiv.org/abs/2303.01877v3 | # Quantum Merlin-Arthur proof systems for synthesizing quantum states
###### Abstract
Complexity theory typically focuses on the difficulty of solving computational problems using classical inputs and outputs, even with a quantum computer. In the quantum world, it is natural to apply a different notion of complexity, namely the complexity of synthesizing quantum states. We investigate a state-synthesizing counterpart of the class NP, referred to as stateQMA, which is concerned with preparing certain quantum states through a polynomial-time quantum verifier with the aid of a single quantum message from an all-powerful but untrusted prover. This is a subclass of the class stateQIP recently introduced by Rosenthal and Yuen (ITCS 2022), which permits polynomially many interactions between the prover and the verifier. Our main result consists of error reduction of this class and its variants with an exponentially small gap or a bounded space, as well as how this class relates to other fundamental state synthesizing classes, i.e., states generated by uniform polynomial-time quantum circuits (stateBQP) and space-uniform polynomial-space quantum circuits (statePSPACE). Additionally, we demonstrate that stateQCMA achieves perfect completeness. Our proof techniques are based on the quantum singular value transformation introduced by Gilyen, Su, Low, and Wiebe (STOC 2019), and its adaption to achieve exponential precision with a bounded space.
## 1 Introduction
Classical and quantum complexity theory typically concentrates on the computational difficulty of solving problems with _classical_ inputs and outputs. However, quantum computers have the ability to handle not only classical problems, but also quantum tasks, such as synthesizing quantum states. The most famous example is preparing ground states of a physical system [10, 11], which even dates back to Feynman's original ideas [14]. Analogous tasks are also commonplace in quantum cryptography and generalized notions of pseudo-randomness, such as quantum money [1] and pseudorandom quantum states [13]. This motivates the study of the complexity of synthesizing quantum states.
In [1], Aaronson investigated the concept of quantum state complexity, leading to the _state synthesis problem_. This problem involves generating a quantum state \(\rho\) from the all-zero state based on a quantum circuit with a succinct description acting on \(n\) qubits with depth up to exponential. The resulting state \(\rho\) is supposed to be close to the designated _target state_ \(|\psi\rangle\)1. This problem is solvable in (quantum) polynomial space (PSPACE), i.e., a quantum computer running in exponential time but using polynomially many qubits can generate a state that well approximates the target state.
Quantum computers are seemingly not capable of solving any PSPACE problem in polynomial time, while interactive protocols with _polynomially many_ messages and the help of an all-powerful and _untrusted_ prover (known as _interactive proofs_, IP) capture the full computational power of polynomial-space computation, as stated by the celebrated IP = PSPACE theorem [14, 15]. A recent line of works [13, 15, 16] initiated the study of the _interactive_ state synthesis problem. Rosenthal and Yuen [13] denote the polynomial-space-preparable state families as statePSPACE2 and show that such state families are preparable by interactive synthesis protocols, which belong to the class stateQIP. Afterwards, in [15], the authors explore the state synthesis problem by taking advantage of fairly powerful and _trusted_ oracles. Recently, Metger and Yuen [11] managed to prove the equivalence between state families that are preparable using polynomial space and those generated by interactive synthesis protocols, that is, stateQIP = statePSPACE, which is the state-synthesizing counterpart of the IP = PSPACE theorem (and its quantum analogue [17]).
Footnote 2: The definition of statePSPACE is a bit subtle: although all quantum states can be well-approximated by an exponentially long gate sequence owing to the Solovay-Kitaev theorem [18], this exponential gate sequence is not necessarily _space-uniform_.
However, there is currently a lack of fine-grained characterizations of computationally easier state families, viz., state families that are efficiently preparable (e.g., efficiently simulating the view of quantum statistical zero-knowledge [20]), or state families that are synthesizable via simple one-message interactive protocols (e.g., efficient verification of pure quantum states in the adversarial scenario [16]). This opens up opportunities for our main results.
### Main results
In this work, we are particularly interested in state families that are preparable by a _one-message_ protocol, denoted as stateQMA, which is obviously a subclass of stateQIP. Let us first define stateQMA informally (see Section 3.2 for a formal definition). A state family in stateQMA is a family \(\{|\psi_{n}\rangle\}_{n\in\mathbb{N}}\) indexed by _natural numbers_ such that there is a verifier with the following properties, certifying that the state prepared on input \(1^{n}\) well-approximates the target state \(|\psi_{n}\rangle\). The verifier's computation, which is a polynomial-size _unitary_ quantum circuit3, takes a quantum proof state \(|w\rangle\) (with no limitations on its preparation method) and ancillary qubits in the state \(|0\rangle\) as input. After performing the verification circuit, a designated output qubit is measured in the computational basis, and the verifier accepts if the measurement outcome is \(1\). If the verifier accepts, the verification circuit has prepared on the remaining qubits a resulting state \(\rho_{n,w}\) that is a good approximation of the target state \(|\psi_{n}\rangle\) (if the verifier rejects, the resulting state could be anything). The acceptance probability is viewed as _the success probability_ of approximately preparing \(|\psi_{n}\rangle\).
Footnote 3: In particular, extending to general quantum circuits does not change the class stateQMA owing to the principle of deferred measurement. However, such extensions do not immediately work for space-bounded stateQMA, see Remark 3.3.
More precisely, the state family is in the class stateQMA\({}_{\delta}[c,s]\) for some \(0\leq s<c\leq 1\) and \(\delta\geq 0\) if the resulting state \(\rho_{n,w}\) is \(\delta\)-close to the target state \(|\psi_{n}\rangle\) whenever the verifier accepts with probability at least \(s\) (soundness condition); and, additionally, there exists a quantum witness that makes the verifier accept with probability at least \(c\) (completeness condition).
It is pretty evident that stateQMA\({}_{\delta}[c,s]\subseteq\textsf{stateQMA}_{\delta^{\prime}}[c^{\prime},s^{ \prime}]\) if \(c^{\prime}\leq c\), \(s^{\prime}\geq s\) and \(\delta^{\prime}\geq\delta\). However, how crucially does stateQMA\({}_{\delta}[c,s]\) depend on its parameters? For commonplace complexity classes, viz. BQP, QMA, QIP, etc., the dependence on such parameters is very weak: the class remains the same so long as the completeness \(c\) and soundness \(s\) differ by at least some inverse polynomial. This is known as _error reduction_, which typically involves performing the verification circuit in parallel and taking the majority vote.
However, error reduction for stateQMA requires a more nuanced approach. A simple parallel repetition of the verification circuit ends with a tensor product of the resulting state, which evidently differs from the original state family. Therefore, error reduction for stateQMA needs to preserve not only the quantum witness state \(|w\rangle\), but also the resulting state \(\rho_{n,w}\); this is referred to as the _doubly-preserving error reduction_ in Theorem 1.1.
**Theorem 1.1** (Doubly-preserving error reduction for stateQMA, informal of Theorem 5.1).: _For any \(c(n)-s(n)\geq 1/\mathrm{poly}(n)\) and \(0\leq c(n),s(n)\leq 1\), we have_
\[\mathsf{stateQMA}_{\delta}[c,s]\subseteq\mathsf{stateQMA}_{\delta}[1-2^{-l(n)},2^{-l(n)}].\]
Nevertheless, applying Theorem 1.1 to a _polynomial-space-bounded_ variant of \(\mathsf{stateQMA}_{\delta}[c,s]\), which we denote by \(\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}\)4, will result in _exponential_ space. To address this, we generalize Theorem 1.1 in a manner that preserves the polynomial space complexity. Here in the class \(\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}\), the verifier's computation stays polynomially space-bounded but may take _exponential time_ and the gap between the completeness \(c\) and the soundness \(s\) is at least some inverse-exponential.
Footnote 4: We emphasize that \(\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}\) is not a state-synthesizing counterpart of the class \(\mathsf{NPSPACE}\), see Remark 3.6.
**Theorem 1.2** (Doubly-preserving error reduction for \(\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}\), informal of Theorem 5.1).: _For any \(c(n)-s(n)\geq\exp(-\mathrm{poly}(n))\) and \(0\leq c(n),s(n)\leq 1\), we have_
\[\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}{}_{\delta}[c(n),s (n)]\subseteq\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}{}_{ \delta}\big{[}1-2^{-l(n)},2^{-l(n)}\big{]}.\]
We note that Theorem 1.1 is a state-synthesis counterpart of the witness-preserving error reduction for QMA[11, 12]. Likewise, Theorem 1.2 shares similarities with error reduction for unitary quantum computations [10] in the context of synthesizing states. Along the line of Marriott and Watrous [11], we demonstrate that logarithmic-size quantum witness states are useless for stateQMA, and this variant is referred to as \(\mathsf{stateQMA}[\mathrm{log}]\). Here \(\mathsf{stateBQP}\) is defined as a subclass of \(\mathsf{statePSPACE}\) with only polynomially many quantum gates.
**Corollary 1.3** (Informal of Theorem 5.5).: \(\mathsf{stateQMA}_{\delta}[\mathrm{log}]=\mathsf{stateBQP}_{\delta}\)_._
Resembling the approach of Fefferman and Lin [13], we demonstrate that a variant of \(\mathsf{stateQMA}\) that admits an exponentially small gap between completeness and soundness, known as \(\mathsf{statePreciseQMA}\), is contained in \(\mathsf{statePSPACE}\). Surprisingly, Corollary 1.4 shows that the distance parameter \(\delta\) remains _unchanged_, while a similar \(\mathsf{statePSPACE}\) containment following from [12] will worsen the distance parameter \(\delta\), namely \(\mathsf{stateQMA}_{\delta}\subseteq\mathsf{statePSPACE}_{\delta+1/\mathrm{poly}( n)}\).
**Corollary 1.4** (Informal of Theorem 5.7).: \(\mathsf{statePreciseQMA}_{\delta}\subseteq\mathsf{statePSPACE}_{\delta}\)_._
Furthermore, we prove that \(\mathsf{stateQCMA}\), which is a variant of \(\mathsf{stateQMA}\) in which the optimal quantum witness state is classical (i.e., a binary string) for the completeness condition, can achieve perfect completeness. This result is analogous to the \(\mathsf{QCMA}=\mathsf{QCMA}_{1}\) theorem [12, 13] for synthesizing quantum states.
**Theorem 1.5** (\(\mathsf{stateQCMA}\) achieves perfect completeness, informal of Theorem 6.1).: _For any \(c(n)-s(n)\geq 1/\mathrm{poly}(n)\) and \(0\leq c(n),s(n)\leq 1\), we have \(\mathsf{stateQCMA}_{\delta}[c,s]\subseteq\mathsf{stateQCMA}_{\delta}[1,s^{ \prime}]\) for some \(s^{\prime}\) such that \(1-s^{\prime}(n)\geq 1/\mathrm{poly}(n)\)._
In addition, it is worth noting that Theorem 1.5 also straightforwardly extends to \(\mathsf{statePreciseQCMA}\).
### Proof techniques
The proofs of Theorem 1.1 and Theorem 1.2 employ the quantum linear algebra techniques developed by Gilyen, Su, Low, and Wiebe [14], specifically the quantum singular value discrimination.
Error reduction for stateQMA by manipulating singular values. To elaborate on the intuition, we begin by briefly reviewing the witness-preserving error reduction for QMA [15, 16]. Consider a QMA verification circuit \(V_{x}\) that takes a quantum witness state \(|w\rangle\) (on the register \(\mathsf{W}\)) and ancillary qubits in the state \(|0\rangle\) as input. The corresponding acceptance probability is \(\||1\rangle\langle 1|_{\mathrm{out}}V_{x}|w\rangle|\bar{0}\rangle\|_{2}^{2}\), which is equal to a quadratic form \(\langle w|M_{x}|w\rangle\), where the matrix \(M_{x}:=\langle\bar{0}|V_{x}^{\dagger}|1\rangle\langle 1|_{\mathrm{out}}V_{x}|\bar{0}\rangle\). It is not hard to see that the maximum acceptance probability of \(V_{x}\) is the largest eigenvalue of \(M_{x}\). We then view \(M_{x}=\Pi_{\mathrm{in}}\Pi\Pi_{\mathrm{in}}\) as a product of Hermitian projectors \(\Pi_{\mathrm{in}}\) and \(\Pi\), where \(\Pi_{\mathrm{in}}=I_{\mathsf{W}}\otimes|\bar{0}\rangle\langle\bar{0}|\) and \(\Pi=V_{x}^{\dagger}|1\rangle\langle 1|_{\mathrm{out}}V_{x}\). Remarkably, there exists an orthogonal decomposition of the Hilbert space on which the projectors \(\Pi_{\mathrm{in}}\) and \(\Pi\) act into _one-dimensional_ and _two-dimensional_ common invariant subspaces. This elegant decomposition property is referred to as the Jordan lemma5 [17]. Marriott and Watrous [15] then take advantage of the Jordan lemma and present error reduction for QMA that preserves the quantum witness state.
Footnote 5: See [11] for the detailed statement of the Jordan lemma, as well as a simple proof.
However, this error reduction technique does not automatically preserve the resulting state, as required in stateQMA; we thus need a more sophisticated technique, namely the quantum singular value transformation [14]. This technique generalizes the qubitization technique introduced by Low and Chuang [13], which was inspired by the aforementioned decomposition property. Moving on to the maximum acceptance probability of a stateQMA verifier \(V_{n}\), its square root corresponds to the largest singular value of the matrix \(A_{n}=\Pi_{\mathrm{out}}V_{n}\Pi_{\mathrm{in}}\), where \(\Pi_{\mathrm{out}}:=|1\rangle\langle 1|_{\mathrm{out}}\) is the final measurement. In Section 3.2 of [14], the authors extend the Jordan lemma to the singular value scenario. In particular, \(\mathrm{Img}(\Pi_{\mathrm{in}})\) and \(\mathrm{Img}(\Pi_{\mathrm{out}})\) can be decomposed into one-dimensional or two-dimensional common invariant subspaces. Focusing on the specific case of stateQMA, we notice that the right singular vectors of \(A_{n}\) correspond to the quantum witness state \(|w\rangle\), while the left singular vectors correspond to the resulting state \(\rho_{n,w}\). Therefore, we obtain _doubly-preserving error reduction_ for stateQMA (Theorem 1.1) by manipulating the singular values accordingly6.
Footnote 6: Concretely speaking, the analysis of error reduction based on majority votes essentially corresponds to obtaining tail bounds for the Binomial distribution. By leveraging the central limit theorem, it becomes sufficient to estimate tail bounds for the normal distribution, referred to as the error function \(\mathrm{erf}(x)\). The approximation polynomials of the sign function in [13] then achieve this task.
It is noteworthy that Theorem 1.1 differs from Theorem 38 in [14], since our construction is based on the _projected unitary encoding_ (e.g., the matrix \(A_{n}\) presented above) instead of the block-encoding. Furthermore, for establishing Theorem 1.2, we make use of an _exponential-degree_ approximation polynomial of the sign function whose coefficients are all computable in PSPACE within _exponential precision_ [15]. We additionally observe that the proof techniques in [15] can be straightforwardly adapted to _projected unitary encodings_ instead of the block-encodings originally utilized in their work.
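The singular-value picture can be checked numerically on a toy verifier; the following sketch (our own construction, purely illustrative) verifies that the maximum acceptance probability equals the square of the largest singular value of the projected unitary encoding \(A=\Pi_{\mathrm{out}}V\Pi_{\mathrm{in}}\):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                    # one witness qubit, one ancilla

# Random "verifier" unitary from the QR decomposition of a complex matrix.
V, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

Pi_in = np.kron(np.eye(2), np.diag([1.0, 0.0]))   # ancilla fixed to |0>
Pi_out = np.kron(np.diag([0.0, 1.0]), np.eye(2))  # output qubit equals |1>

A = Pi_out @ V @ Pi_in                   # projected unitary encoding
sigma_max = np.linalg.svd(A, compute_uv=False)[0]

M = Pi_in @ V.conj().T @ Pi_out @ V @ Pi_in       # M = A^dagger A
lam_max = np.linalg.eigvalsh(M)[-1]

# Maximum acceptance probability = sigma_max**2 = largest eigenvalue of M.
assert np.isclose(sigma_max**2, lam_max)
```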
Applications of error reduction for stateQMA. Along the line of Theorem 3.13 in [15], Theorem 1.1 seems to straightforwardly lead to Corollary 1.3. Nevertheless, the resulting state raises some concern upon initial inspection. We fortunately circumvent this caveat by a careful analysis. Specifically, utilizing the error reduction for stateQMA, we begin with a verifier with completeness \(1-2^{-p(n)}\) and soundness \(2^{-p(n)}\), where \(p\) is a polynomial in \(n\). Then we
replace the short quantum witness state \(|w\rangle\) with the completely mixed state \(I_{\mathsf{W}}\), which gives us a computation meeting the soundness condition such that the soundness \(s\) is preserved and the gap between the completeness and the soundness shrinks to some inverse polynomial in \(n\). Although the new resulting state \(\rho_{I_{\mathsf{W}}}\) may greatly differ from \(\rho_{n,w}\), the definition of stateQMA guarantees that \(\rho_{I_{\mathsf{W}}}\) is also close to the target state, because the acceptance probability of the verifier with \(I_{\mathsf{W}}\) is greater than the soundness \(s\). This proof also easily extends to Corollary 1.4 by employing Theorem 1.2. In addition, it is noteworthy that stateBQP achieves perfect completeness with a worsened distance parameter \(\delta^{\prime}\). By incorporating error reduction techniques for both stateBQP and state\({}_{\mathsf{U}}\)PSPACE, the difference between the new distance parameter \(\delta^{\prime}\) and the original one can be made _exponentially small_. Furthermore, we remark that stateBQP is not trivially contained in stateQMA, but this containment is nevertheless effortless7. We therefore complete the other direction of Corollary 1.3.
Footnote 7: See Section 3.2 (in particular, Proposition 3.7) for a detailed elaboration.
stateQCMA achieves perfect completeness. Our proof of Theorem 1.5 takes inspiration from [11, 12], but it requires several modifications. Note that in stateQCMA our concern is not only the maximum acceptance probability but also the resulting state after performing the verification circuit and the final measurement. To meet these requirements, we must choose a specific universal gateset \(\mathcal{S}\) such that \(\mathcal{S}\) generates a dense subgroup of \(\mathrm{SU}(2^{n})\) and all quantum states generated by the gates in \(\mathcal{S}\) have rational entries. For this reason, we opt for the "Pythagorean gateset"8 [11, 10]. To ensure that the resulting state is indeed close to the target state, we slightly adjust the construction outlined in [12].
Footnote 8: See Remark 6.2 for the details.
### Related works
In addition to the state-synthesizing complexity classes explored in prior works [11, 12, 13] and the present study, there are other investigations focusing on the quantum state synthesis problem from diverse perspectives, including cryptography and distributed computing. From a cryptographic standpoint, a recent work [10] examines non-interactive zero-knowledge (NIZK) proofs for quantum states, with relevance to the setting of stateQCMA. Another concurrent study [11] addresses zero-knowledge proofs for quantum states, considering settings pertinent to both stateQMA and stateQIP. On the other hand, in the realm of distributed computing, another recent work [13] delves into quantum state synthesis through the utilization of distributed quantum Merlin-Arthur protocols.
### Discussion and open problems
Reduction and completeness in state-synthesizing complexity theory.In the context of state-synthesizing complexity theory, including prior works [11, 12, 13] and our own, the concepts of _reduction_ and _completeness_ have not been defined. However, these concepts hold significant importance in (quantum) complexity theory. The immediate challenge lies in appropriately defining these concepts, such as reduction, in a manner that ensures the resulting states exhibit reasonable behavior before and after the application of the reduction.
The computational power of statePreciseQMA.Although Corollary 1.4 establishes a statePSPACE containment of statePreciseQMA, the reverse inclusion, namely statePSPACE\(\subseteq\) statePreciseQMA, remains an open problem. The main challenge lies in adapting existing proof techniques that demonstrate PSPACE\(\subseteq\)PreciseQMA[11, 11, 12], as these techniques heavily rely on notions of _completeness_ or _reduction_ for the class PSPACE.
Preliminaries
In this paper we assume the reader is familiar with the basic concepts and notions of quantum computation (see, e.g., [10] for the reference). We begin by introducing some useful linear-algebraic notations. Let \(M\) be a matrix; we denote the largest eigenvalue of \(M\) by \(\lambda_{\max}(M)\), and the largest singular value of \(M\) by \(\sigma_{\max}(M)\). In addition, we will employ the following three Schatten norms of matrices in this paper (a brief numerical sanity check follows the list):
* Trace norm \(\|M\|_{1}:=\operatorname{Tr}\bigl{(}\sqrt{M^{\dagger}M}\bigr{)}\);
* Frobenius norm \(\|M\|_{2}:=\sqrt{\operatorname{Tr}(M^{\dagger}M)}\);
* Operator norm \(\|M\|_{\infty}:=\sigma_{\max}(M)=\sqrt{\lambda_{\max}(M^{\dagger}M)}\).
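The following minimal NumPy sketch (ours, for illustration only; the \(4\times 4\) random matrix is an arbitrary choice) evaluates the three Schatten norms via singular values and cross-checks them against NumPy's built-in matrix norms.

```python
# Illustrative check of the three Schatten norms; not part of the paper.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

s = np.linalg.svd(M, compute_uv=False)   # singular values of M
trace_norm = s.sum()                     # ||M||_1 = sum of singular values
frobenius  = np.sqrt((s ** 2).sum())     # ||M||_2
operator   = s.max()                     # ||M||_inf = sigma_max(M)

# Cross-check against NumPy's built-in matrix norms.
assert np.isclose(trace_norm, np.linalg.norm(M, 'nuc'))
assert np.isclose(frobenius,  np.linalg.norm(M, 'fro'))
assert np.isclose(operator,   np.linalg.norm(M, 2))
```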
### Distances for quantum states
For any (possibly mixed) quantum states \(\rho_{0}\) and \(\rho_{1}\), we measure the closeness between these states by the trace distance \(\operatorname{td}(\rho_{0},\rho_{1}):=\frac{1}{2}\|\rho_{0}-\rho_{1}\|_{1}\). Moreover, we will utilize the following properties of the trace distance, illustrated numerically after the list (see Section 9.2.1 in [10] for a detailed explanation):
**Contractivity**: For any quantum channel \(\Phi\), we have \(\operatorname{td}(\Phi(\rho_{0}),\Phi(\rho_{1}))\leq\operatorname{td}(\rho_{ 0},\rho_{1})\);
**Convexity**: For any non-negative coefficients \(\{p_{i}\}_{i}\) such that \(\sum_{i}p_{i}=1\), we have
\[\operatorname{td}\bigl{(}\rho_{0},\sum\nolimits_{i}p_{i}\rho_{1}^{(i)}\bigr{)} \leq\sum\nolimits_{i}p_{i}\operatorname{td}\bigl{(}\rho_{0},\rho_{1}^{(i)} \bigr{)}.\]
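As a quick illustration (our own sketch; the single-qubit states and the depolarizing channel are arbitrary stand-ins for a generic quantum channel), both properties can be checked numerically:

```python
# Numerical illustration of contractivity and convexity of the trace distance.
import numpy as np

rng = np.random.default_rng(1)

def rand_state(d=2):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def td(r0, r1):
    # ||X||_1 = sum of |eigenvalues| for the Hermitian matrix X = r0 - r1
    return 0.5 * np.abs(np.linalg.eigvalsh(r0 - r1)).sum()

def depolarize(rho, q=0.3):                 # an example quantum channel
    d = rho.shape[0]
    return (1 - q) * rho + q * np.eye(d) / d

r0, r1a, r1b = rand_state(), rand_state(), rand_state()
# contractivity under the channel
assert td(depolarize(r0), depolarize(r1a)) <= td(r0, r1a) + 1e-12
# convexity in the second argument
p = 0.4
assert td(r0, p * r1a + (1 - p) * r1b) <= p * td(r0, r1a) + (1 - p) * td(r0, r1b) + 1e-12
```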
### Gatesets matter for synthesizing quantum states
A _gateset_ is a finite set of unitary matrices each of which acts on a finite-dimensional quantum system. A gateset \(\mathcal{S}\) is _universal_ if the subgroup generated by \(\mathcal{S}\) is dense in \(\operatorname{SU}(2^{n})\) for large enough \(n\). We mainly use the common universal gateset \(\{\textsc{CNOT},\textsc{Hadamard},\textsc{T}\}\) for convenience, and further note that complexity classes remain unchanged for all reasonable choices of gatesets whose entries are all algebraic numbers9, owing to the Solovay-Kitaev theorem [11] and its space-efficient variant [14].
Footnote 9: For a comprehensive explanation of the conditions on choosing gatesets, see Theorem 2.10 in [13].
Additionally, to achieve perfect completeness, we require a particular _"Pythagorean" gateset_ in Section 6, introduced by Jordan and Nagaj [15], which consists of CNOT and "Pythagorean" gates10\(\frac{1}{5}\bigl{(}\begin{smallmatrix}4&-3\\ 3&4\end{smallmatrix}\bigr{)}\) and \(\frac{1}{5}\bigl{(}\begin{smallmatrix}4&3i\\ 3i&4\end{smallmatrix}\bigr{)}\).
Footnote 10: An analogous gateset is also utilized in [11] with a slightly different “Pythagorean” gate.
We end with Lemma 2.1, which states that changing the gateset of a family of quantum circuits that prepares a certain family of quantum states makes the resulting state only negligibly farther from the target state. Lemma 2.1 highlights the robustness of the state-synthesizing procedure with respect to the choice of gateset.
**Lemma 2.1** (Changing the gateset worsens the distance parameter).: _Consider a family of (verification) circuits \(\{Q_{n}\}_{n\in\mathbb{N}}\) that prepares the corresponding family of resulting states \(\{\rho_{n}\}_{n\in\mathbb{N}}\)11. Let \(\{|\psi_{n}\rangle\}_{n\in\mathbb{N}}\) be a certain family of target states, and let the distance between these two families be \(\delta(n):=\operatorname{td}(\rho_{n},\psi_{n})\). We then construct a circuit family \(\{Q^{\prime}_{n}\}_{n\in\mathbb{N}}\), using a designated gateset \(\mathcal{G}\) that is closed under adjoint, such that the new distance \(\delta^{\prime}(n)\leq\delta(n)+\exp(-\mathrm{poly}(n))\)._
Footnote 11: In particular, we first perform (verification) circuit \(Q_{n}\) on the input state, which is not necessarily an all-zero state, then we measure the designated output qubit. If the measurement outcome is \(1\), we obtain the resulting state corresponding to \(Q_{n}\) on the remained qubits.
Proof.: Let \(\{\tilde{\rho}_{n}\}_{n\in\mathbb{N}}\) be the resulting state corresponding to the new circuit family \(\{Q^{\prime}_{n}\}_{n\in\mathbb{N}}\). It suffices to show that \(\operatorname{td}(\tilde{\rho}_{n},\rho_{n})\leq\exp(-\mathrm{poly}(n))\) for all \(n\in\mathbb{N}\), since the triangle inequality of the trace distance yields \(\delta^{\prime}(n)=\operatorname{td}(\tilde{\rho}_{n},\psi_{n})\leq \operatorname{td}(\tilde{\rho}_{n},\rho_{n})+\operatorname{td}(\rho_{n},\psi_{n}) =\operatorname{td}(\tilde{\rho}_{n},\rho_{n})+\delta(n)\).
Consider a quantum channel \(\Phi(\rho)=\frac{\Pi_{\mathrm{out}}\rho\Pi_{\mathrm{out}}}{\mathrm{Tr}(\Pi_{ \mathrm{out}}\rho\Pi_{\mathrm{out}})}\) where \(\Pi_{\mathrm{out}}:=|1\rangle\langle 1|_{\mathrm{out}}\) that post-selects the output qubit to be \(1\), and let \(|w\rangle\) be a (quantum) witness12. Then we have derived that
Footnote 12: If \(Q_{n}\) and \(Q^{\prime}_{n}\) are not verification circuits, then this witness state is simply an all-zero state.
\[\operatorname{td}\left(\Phi\left(Q_{n}(|w\rangle\langle w|\otimes| \bar{0}\rangle\langle\bar{0}|)Q^{\dagger}_{n}\right),\Phi\left(Q^{\prime}_{n}(| w\rangle\langle w|\otimes|\bar{0}\rangle\langle\bar{0}|)(Q^{\prime}_{n})^{ \dagger}\right)\right)\] \[\leq \operatorname{td}\left(Q_{n}(|w\rangle\langle w|\otimes|\bar{0} \rangle\langle\bar{0}|)Q^{\dagger}_{n},Q^{\prime}_{n}(|w\rangle\langle w|\otimes| \bar{0}\rangle\langle\bar{0}|)(Q^{\prime}_{n})^{\dagger}\right)\] \[= \sqrt{1-\left|\langle w|\langle\bar{0}|Q^{\dagger}_{n}Q^{\prime}_{n}|w \rangle|\bar{0}\rangle\right|^{2}}\] \[\leq \|Q_{n}|w\rangle|\bar{0}\rangle-Q^{\prime}_{n}|w\rangle|\bar{0} \rangle\|_{2}\] \[\leq \|Q_{n}-Q^{\prime}_{n}\|_{\infty}\,\||w\rangle|\bar{0}\rangle\|_{2}=\|Q_{n}-Q^{\prime}_{n}\|_{\infty},\]
where the second line follows from the contractivity property, and the fourth line from the fact that \((\langle\psi|\phi\rangle-1)^{\dagger}(\langle\psi|\phi\rangle-1)\geq 0\) for any pure states \(|\psi\rangle\) and \(|\phi\rangle\).
Note that the gates of \(\mathcal{G}\) are closed under adjoint. By a space-efficient Solovay-Kitaev theorem (i.e., Theorem 4.3 in [12]), we can construct \(Q^{\prime}_{n}\) such that \(\|Q_{n}-Q^{\prime}_{n}\|_{\infty}\leq\epsilon\) by a deterministic algorithm running in time \(\mathrm{poly}\log(1/\epsilon)\) and space \(O(\log(1/\epsilon))\). We thus obtain \(\|Q_{n}-Q^{\prime}_{n}\|_{\infty}\leq\exp(-\mathrm{poly}(n))\) as desired.
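The chain of inequalities above can be sanity-checked numerically; the sketch below (ours; the dimension and the perturbation size are arbitrary) verifies the pure-state identity \(\operatorname{td}=\sqrt{1-|\langle\psi|\phi\rangle|^{2}}\) against the Euclidean bound, and the operator-norm bound on the last line.

```python
# Numerical sanity check of the inequalities used in the proof of Lemma 2.1.
import numpy as np

rng = np.random.default_rng(2)
d = 8

def rand_unit_vec(n):
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

psi, phi = rand_unit_vec(d), rand_unit_vec(d)
td_pure = np.sqrt(1 - abs(np.vdot(psi, phi)) ** 2)   # trace distance of pure states
assert td_pure <= np.linalg.norm(psi - phi) + 1e-12

Q,  _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
Qp, _ = np.linalg.qr(Q + 1e-3 * rng.normal(size=(d, d)))  # a nearby unitary
x = rand_unit_vec(d)
assert np.linalg.norm((Q - Qp) @ x) <= np.linalg.norm(Q - Qp, 2) + 1e-12
```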
## 3 Basic state-synthesizing complexity classes
In this section, we will define state-synthesizing complexity classes involved in this paper.
### State-synthesizing complexity classes with bounded time or space
We begin with two crucial notations on circuit families. A family of quantum circuits \(\{Q_{n}\}_{n\in\mathbb{N}}\) is _uniformly generated_ if there exists a deterministic polynomial-time Turing machine that on input \(1^{n}\) outputs the description of \(Q_{n}\), and the space is implicitly polynomially-bounded. Likewise, a family of quantum circuits \(\{Q_{n}\}_{n\in\mathbb{N}}\) is _space-uniformly generated_ if there exists a deterministic polynomial-space Turing machine that on input \(1^{n}\) outputs the description of \(Q_{n}\), which acts on polynomially many qubits. Then we move on to the class that captures efficiently preparable quantum state families.
**Definition 3.1** (\(\mathsf{stateBQP}_{\delta}[\gamma]\)).: _A family of quantum states \(\{|\psi_{n}\rangle\}_{n\in\mathbb{N}}\) is in \(\mathsf{stateBQP}_{\delta}[\gamma]\) if each \(|\psi_{n}\rangle\) is an \(n\)-qubit state, and there exists a uniformly generated quantum circuit family \(\{Q_{n}\}_{n\in\mathbb{N}}\) such that for all \(n\in\mathbb{N}\), the circuit \(Q_{n}\), which takes no inputs, outputs a density matrix \(\rho_{n}\) satisfying \(\operatorname{td}(\rho_{n},\psi_{n})\leq\delta(n)\) if \(Q_{n}\) succeeds. Here, we define the success of circuit \(Q_{n}\) as the measurement outcome of the designated output qubit being \(1\). Additionally, the success probability of \(Q_{n}\) is at least \(\gamma\)._
For convenience, we define \(\mathsf{stateBQP}_{\delta}:=\mathsf{stateBQP}_{\delta}[2/3]\). Similarly, we denote \(\mathsf{stateBQP}\) with an exponentially small success probability as \(\mathsf{statePreciseBQP}_{\delta}:=\cup_{\gamma\geq\exp(-\mathrm{poly}(n))} \mathsf{stateBQP}_{\delta}[\gamma]\). Afterwards, we consider a state-synthesizing counterpart of unitary quantum polynomial space \(\mathsf{BQ}_{\mathsf{U}}\mathsf{PSPACE}\) in [10]. In particular, we define \(\mathsf{statePSPACE}\) in terms of _unitary quantum computation_, denoted as \(\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}\).
**Definition 3.2** (\(\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}_{\delta}[\gamma]\)).: _A family of quantum states \(\{|\psi_{n}\rangle\}_{n\in\mathbb{N}}\) is in \(\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}_{\delta}[\gamma]\) if each \(|\psi_{n}\rangle\) is an \(n\)-qubit state, and there exists a space-uniform family of unitary quantum circuits \(\{Q_{n}\}_{n\in\mathbb{N}}\) such that for all \(n\in\mathbb{N}\), the circuit \(Q_{n}\), which takes no inputs, outputs a density matrix \(\rho_{n}\) satisfying \(\mathrm{td}(\rho_{n},\psi_{n})\leq\delta(n)\) if \(Q_{n}\) succeeds. Here, we define the success of circuit \(Q_{n}\) as the measurement outcome of the designated output qubit being \(1\). Additionally, the success probability of \(Q_{n}\) is at least \(\gamma\)._
We denote \(\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}_{\delta}:=\mathsf{state}_{\mathsf{U} }\mathsf{PSPACE}_{\delta}[2/3]\). It is worth noting that there are three differences between our definition \(\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}\) and \(\mathsf{statePSPACE}\) as defined in [13, 14]: 1) our definition only admits unitary quantum computation whereas \(\mathsf{statePSPACE}\) also permits general quantum circuits (see Remark 3.3); 2) our definition does not assume the perfect completeness, even though this property is always achievable for our definition with a worsening distance parameter; and 3) our definition merely deals with pure state families, while \(\mathsf{statePSPACE}\) in [14] also accommodates mixed state families.
_Remark 3.3_ (Synthesizing states by general quantum circuits).: General quantum circuits allow not only unitary quantum gates but also intermediate measurements and resetting qubits. Because of the principle of deferred measurement, admitting intermediate measurements does not make the time-efficient class more powerful. Recent developments on eliminating intermediate measurements in quantum logspace [15, 16, 17] improve this to the space-efficient scenario. Nevertheless, these new techniques do not automatically extend to state-synthesizing classes since they do not (even approximately) preserve the resulting state.
\(\mathsf{stateBQP}\) and \(\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}\) achieve perfect completeness.It is worth noting that the simulator used in quantum statistical zero-knowledge (QSZK) [21] is an instance of \(\mathsf{stateBQP}\). Interestingly, one can assume that the corresponding simulator achieves perfect completeness with a worsening distance parameter \(\delta^{\prime}\)13. To achieve this, one can simply replace the designated output qubit of the \(\mathsf{QSZK}\) simulator, or a \(\mathsf{stateBQP}\) circuit in general, with a qubit in state \(|1\rangle\). This technique applies to any state-synthesizing complexity class with bounded time or space, including \(\mathsf{stateBQP}\) and \(\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}\).
Footnote 13: To be specific, this fact is demonstrated in the \(\mathsf{QSZK}\)-hardness proof of the Quantum State Distinguishability Problem, as presented in Theorem 6 in [21].
**Proposition 3.4**.: _For any \(0\leq\delta(n),\gamma(n)\leq 1\), choosing \(\delta^{\prime}:=\gamma\delta+1-\gamma\), then we know that_
\[\mathsf{stateBQP}_{\delta}[\gamma]\subseteq\mathsf{stateBQP}_{\delta^{ \prime}}[1]\text{ and }\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}_{\delta}[\gamma]\subseteq \mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}_{\delta^{\prime}}[1].\]
Moreover, we note that achieving perfect completeness has little effect on the distance parameter when combined with the error reduction techniques employed by the aforementioned classes. Specifically, the new distance parameter \(\delta^{\prime}\) can be made _exponentially close_ to \(\delta\).
Proof.: Let \(\rho\) be the resulting state upon acceptance with respect to either a \(\mathsf{stateBQP}\) circuit or a \(\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}\) circuit, and similarly let \(\rho_{*}\) be the resulting state upon rejection. Also, let \(\hat{\rho}\) be the resulting state when we replace the output qubit with a qubit in state \(|1\rangle\). Then we obtain:
\[\mathrm{td}(|\psi_{n}\rangle\langle\psi_{n}|,\hat{\rho}) =\mathrm{td}(|\psi_{n}\rangle\langle\psi_{n}|,\gamma\cdot\rho+(1- \gamma)\cdot\rho_{*})\] \[\leq\gamma\cdot\mathrm{td}(|\psi_{n}\rangle\langle\psi_{n}|,\rho )+(1-\gamma)\cdot\mathrm{td}(|\psi_{n}\rangle\langle\psi_{n}|,\rho_{*})\] \[\leq\gamma\delta+1-\gamma.\]
Here, the second line follows from the convexity of the trace distance, and the third line holds because the trace distance is at most \(1\).
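The bound of Proposition 3.4 is a one-line application of convexity; the toy check below (ours; the dimension and \(\gamma=0.7\) are arbitrary choices) verifies it for random states.

```python
# Toy check of td(psi, gamma*rho_acc + (1-gamma)*rho_rej) <= gamma*delta + 1 - gamma.
import numpy as np

rng = np.random.default_rng(3)
d, gamma = 4, 0.7

def rand_state():
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def td(r0, r1):
    return 0.5 * np.abs(np.linalg.eigvalsh(r0 - r1)).sum()

v = rng.normal(size=d) + 1j * rng.normal(size=d)
v /= np.linalg.norm(v)
psi = np.outer(v, v.conj())                  # pure target state

rho_acc, rho_rej = rand_state(), rand_state()
delta = td(psi, rho_acc)
hatrho = gamma * rho_acc + (1 - gamma) * rho_rej
assert td(psi, hatrho) <= gamma * delta + (1 - gamma) + 1e-12
```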
### Quantum Merlin-Arthur proof systems for synthesizing quantum states
We will now discuss stateQMA, which is a state-synthesizing counterpart of the class \(\mathsf{NP}\). Moreover, it is a natural subclass of \(\mathsf{stateQIP}\), as defined in [14], since we merely admit one message from the prover to the verifier.
**Definition 3.5** (\(\mathsf{stateQMA}_{\delta}[c,s]\)).: _A family of quantum states \(\{|\psi_{n}\rangle\}_{n\in\mathbb{N}}\) is in \(\mathsf{stateQMA}_{\delta}[c,s]\) if each \(|\psi_{n}\rangle\) is an \(n\)-qubit state, and there is a uniformly generated family of polynomial-size unitary quantum circuits \(\{V_{n}\}_{n\in\mathbb{N}}\) acting on \(m+k\) qubits, where \(m\) is the number of working qubits and \(k\) is the number of ancillary qubits, such that both \(m(n)\) and \(k(n)\) are polynomials of \(n\). Let \(\rho_{n,w}\) be the resulting state of \(V_{n}\) on the input \(|w\rangle|0^{k}\rangle\) conditioned on the measurement outcome of the output qubit being \(1\). Moreover, we allow the verifier to trace out some qubits in the resulting state for convenience. Then the following conditions hold:_
* _Completeness._ _There is an_ \(m\)_-qubit state_ \(|w\rangle\) _such that_ \(\Pr\left[V_{n}\text{ accepts }|w\rangle\right]\geq c(n)\)_._
* _Soundness._ _For any_ \(m\)_-qubit state_ \(|w\rangle\) _such that_ \(\operatorname{td}(\tilde{\rho}_{n,w},|\psi_{n}\rangle\langle\psi_{n}|)\geq \delta(n)\)_, we have_ \(\Pr\left[V_{n}\text{ accepts }|w\rangle\right]\leq s(n)\)_,_
_where \(c(n),s(n),\delta(n)\) are efficiently computable functions satisfying \(c(n)-s(n)\geq 1/\mathrm{poly}(n)\), \(0\leq s(n)<c(n)\leq 1\), and \(0\leq\delta(n)\leq 1\)._
_In addition, we use the notation \(\mathsf{stateQMA}_{\delta}[l,c,s]\) to represent \(\mathsf{stateQMA}\) with witness states of bounded length, where \(l(n)\) is the number of qubits employed by the witness states. If \(l(n)\) is a polynomial function of \(n\), we will omit the parameter \(l\)._
We define \(\mathsf{stateQMA}_{\delta}:=\mathsf{stateQMA}_{\delta}[2/3,1/3]\). Additionally, we also need to define the variant of \(\mathsf{stateQMA}\) with an exponentially small promise gap14 and the variant of \(\mathsf{stateQMA}\) with a logarithmic-length witness:
Footnote 14: To be specific, the gap between the acceptance probability in completeness and soundness is inverse-exponential.
* \(\mathsf{statePreciseQMA}_{\delta}:=\cup_{c-s\geq\exp(-\mathrm{poly}(n))} \mathsf{stateQMA}_{\delta}[c,s]\);
* \(\mathsf{stateQMA}_{\delta}[\log,c,s]:=\cup_{l(n)\leq O(\log n)}\mathsf{ stateQMA}_{\delta}[l,c,s]\).
Furthermore, we need to define \(\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}\), a polynomial-space-bounded variant of \(\mathsf{stateQMA}\), which corresponds to state families preparable by a polynomial-space-uniform unitary quantum circuit family \(\{V_{n}\}_{n\in\mathbb{N}}\), where the size of the witness is a polynomial of \(n\). It is noteworthy that \(\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}\) is not a state-synthesizing counterpart of \(\mathsf{NPSPACE}\), as explained in Remark 3.6.
_Remark 3.6_ (Online vs. offline access to quantum witness state).: Online and offline access to a classical witness in the class \(\mathsf{NP}\) are equivalent, but the models significantly differ in space-bounded classical computation, and so do their quantum counterparts15. The class \(\mathsf{NPSPACE}\) has online access to an _exponential-length_ classical witness, whereas \(\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}\) has offline access to a _polynomial-length_ quantum witness, suggesting \(\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}\) is not a state-synthesizing analogue of \(\mathsf{NPSPACE}\).
Footnote 15: See Section 5.3.1 in [11] for elaborations on space-bounded classical computation. Regarding quantum scenarios, recent advancements in [10] signify that quantum analogues of \(\mathsf{NPSPACE}\) with online access to an exponential-length witness are more powerful than the offline-access variant \(\mathsf{QMA}\mathsf{PSPACE}\) defined in [14], implying that a quantum analogue of Savitch’s theorem is unlikely to hold.
By restricting the optimal witness for the completeness condition to be classical (i.e., binary strings), we result in the class \(\mathsf{stateQCMA}\), and so does the precise variant \(\mathsf{statePreciseQCMA}\) that has an exponentially small promise gap. Likewise, we define \(\mathsf{stateQCMA}_{\delta}:=\mathsf{stateQCMA}_{\delta}[2/3,1/3]\).
stateBQP is in stateQMA is non-trivial but effortless.Finally, we remark that stateBQP is not trivially contained in stateQMA. The issue is that for a circuit \(Q_{n}\) associated with a state family in stateBQP, _the soundness condition of_ stateQMA may not hold. In particular, if the resulting state (for any input quantum state of \(Q_{n}\)) is far apart from the target state, then there is _no guarantee_ that the corresponding acceptance probability is at most some threshold \(s\).
To resolve this issue, we measure the input quantum state of \(Q_{n}\) (also viewed as a quantum witness state for a stateQMA verifier) on the computational basis, and reject if the outcome is not an all-zero string. We now proceed with a formal statement with a proof.
**Proposition 3.7**.: _For any \(1/\mathrm{poly}(n)\leq\gamma\leq 1\) and \(0\leq\delta\leq 1\), \(\mathsf{stateBQP}_{\delta}[\gamma]\subseteq\mathsf{stateQMA}_{\delta}[\gamma, \gamma^{\prime}]\) where \(\gamma^{\prime}(n)>0\)._
Proof.: For any given \(\mathsf{stateBQP}_{\delta}[\gamma]\) circuit \(C_{n}\) with \(m(n)\) working qubits, we construct a new stateQMA verification circuit \(V_{n}\). We first introduce a new register \(\mathsf{W}\) for the \(m(n)\)-qubit witness state \(|w\rangle\), and the remainder is the given stateBQP circuit. Then the construction of \(V_{n}\) follows from Algorithm 1.
```
1. Perform the stateBQP circuit \(C_{n}\) without the final measurement;
2. Measure all qubits in \(\mathsf{W}\) (the witness state \(|w\rangle\)) on the computational basis. Reject if the measurement outcome is not \(0^{m}\);
3. Measure the designated output qubit in the stateBQP circuit. Accept and produce the resulting state if the measurement outcome is \(1\), otherwise reject.
```
**Algorithm 1**stateQMA verification circuit \(V_{n}\)
It suffices to analyze this protocol. For the completeness condition, it is evident that the optimal witness is \(0^{m}\). Then, guaranteed by the given stateBQP circuit, we obtain \(\Pr\left[V_{n}\text{ accepts }0^{m}\right]\geq\gamma\). For the soundness condition, since any witness \(|w\rangle\) that is orthogonal to \(|0^{m}\rangle\) will simply be rejected, we derive that
\[\Pr\left[V_{n}\text{ accepts }|w\rangle\right]=|\langle w|0^{m}\rangle|^{2} \cdot\Pr\left[C_{n}\text{ accepts}\right]\geq\gamma|\langle w|0^{m}\rangle|^{2}.\]
Therefore, for any witness state \(|w\rangle\), we conclude that \(\Pr\left[V_{n}\text{ accepts }|w\rangle\right]>0\), and the resulting state \(\rho_{n,w}\) satisfies \(\mathrm{td}(\rho_{n,w},|\psi_{n}\rangle\langle\psi_{n}|)=\mathrm{td}(\rho_{n,0 ^{m}},|\psi_{n}\rangle\langle\psi_{n}|)\leq\delta\) as desired.
## 4 Quantum singular value discrimination, revisited
In this section, we aim to prove Theorem 4.1, which serves as a "meta theorem" for singular value discrimination using different approximation polynomials with varying parameters. Our approach differs from Theorem 3.2.9 in [11] as we employ the _projected unitary encoding_ with an _odd polynomial_ to preserve the output of the verification circuit, as specified in Definition 3.5.
We will start by defining the _projected unitary encoding_. We say that \(U\) is a projected unitary encoding of a linear operator \(A\) if \(\|A-\tilde{\Pi}U\Pi\|_{\infty}\leq\epsilon\), where the orthogonal projectors \(\tilde{\Pi}\) and \(\Pi\) may also act on some ancillary qubits.
**Theorem 4.1** (Singular value discrimination with bounded time and space).: _Consider \(0\leq a<b\leq 1\) and a projected unitary encoding \(A:=\tilde{\Pi}U\Pi\), where \(U\) acts on \(s_{1}(n)\) qubits. Let \(|\psi\rangle\) be an unknown quantum state, a right singular vector of \(A\) with a singular value either below \(a\) or above \(b\). Suppose there exists a degree-\(d\) odd polynomial \(S\) satisfying \(|S(x)-\mathrm{sgn}(x)|\leq\epsilon\) for
all \(x\in[-1,1]\setminus(-\delta,\delta)\), where \(d=O\left(\delta^{-1}\log(1/\epsilon)\right)\). Additionally, assume that the coefficients of \(S\) and the description of quantum circuits implementing the QSVT with respect to \(S\) can be computed in deterministic time \(t(n)\) and space \(s_{2}(n)\). Using this QSVT implementation, one can distinguish between the two cases with error probability at most \(\epsilon\). The time complexity is \(t(n)\), and the space complexity is \(s_{1}(n)+1\) qubits and \(s_{2}(n)\) bits._
Our construction is heavily influenced by [10] (and Lemma 4.7 taken from [11]) and we utilize specific statements from [13] for ease of use. We will show that, for any \(\epsilon\) equal to \(1/\mathrm{poly}(n)\) or \(\exp(-\mathrm{poly}(n))\), when equipped with different approximation polynomials for the sign function, the space complexity parameters \(s_{1}(n)\) and \(s_{2}(n)\) remain \(\mathrm{poly}(n)\) whereas the time complexity parameter \(t(n)\) may differ greatly.
### A general framework
Let us now define the singular value decomposition for projected unitary encodings.
**Definition 4.2** (Singular value decomposition of a projected unitary, adapted from Definition 2.3.1 in [13]).: _Given a projected unitary encoding of \(A\), denoted by \(U\), associated with orthogonal projectors \(\Pi\) and \(\tilde{\Pi}\) on a finite-dimensional Hilbert space \(\mathcal{H}_{U}\), namely \(A=\tilde{\Pi}U\Pi\). Then there exist an orthonormal basis \(\{|\psi_{i}\rangle:i\in[d]\}\), where \(d:=\mathrm{rank}(\Pi)\), of the subspace \(\mathrm{Img}(\Pi)=\mathrm{span}\left\{|\psi_{i}\rangle\right\}\), and an orthonormal basis \(\big{\{}|\tilde{\psi}_{i}\rangle:i\in[\tilde{d}]\big{\}}\), where \(\tilde{d}:=\mathrm{rank}(\tilde{\Pi})\), of the subspace \(\mathrm{Img}(\tilde{\Pi})=\mathrm{span}\big{\{}|\tilde{\psi}_{i}\rangle\big{\}}\). These bases ensure the singular value decomposition \(A=\sum_{i=1}^{\min\{d,\tilde{d}\}}\sigma_{i}|\tilde{\psi}_{i}\rangle\langle \psi_{i}|\) with singular values \(\sigma_{i}\geq\sigma_{j}\) for any \(i<j\in[\min\{d,\tilde{d}\}]\)._
With these definitions in place, we present the alternating phase modulation as Lemma 4.3, which serves as the key ingredient for implementing quantum singular value transformations. It is worth noting that Lemma 4.3 deviates from Theorem 2.3.7 in [13] due to our assumption that the sequence of rotation angles is already provided. The rotation angles \(\Phi\) in this context correspond to the polynomial \(P\) and will be utilized in computing the descriptions of quantum circuits implementing QSVT corresponding to \(P\).
**Lemma 4.3** (QSVT by alternating phase modulation, adapted from Theorem 2.3.7 in [13]).: _Let \(P\in\mathbb{C}[x]\) be a polynomial and \(\Phi\in\mathbb{R}^{n}\) be the corresponding sequence of rotation angles. We can construct \(P^{\mathrm{(SV)}}(\tilde{\Pi}U\Pi)=\tilde{\Pi}U_{\Phi}\Pi\) with a single ancillary qubit when \(n\) is odd. Here, \(U_{\Phi}\) represents a quantum circuit that implements the QSVT associated with \(P\) using the corresponding rotation angles specified by \(\Phi\)._
We will now proceed with proving Theorem 4.1.
Proof of Theorem 4.1.: Given an exact projected unitary encoding \(\tilde{\Pi}U\Pi\) with a singular value decomposition \(W\Sigma V^{\dagger}=\tilde{\Pi}U\Pi\), equipped with Lemma 4.3, it suffices to construct an odd polynomial \(P\) associated with a sequence of angles \(\Phi\in\mathbb{R}^{m}\) where \(m=O(\log(1/\varepsilon)/\delta)\) such that \(\left\|\tilde{\Pi}_{\geq t+\delta}U_{\Phi}\Pi_{\geq t+\delta}-I\otimes\sum_{ i:\sigma_{i}\geq t+\delta}|\tilde{\psi}_{i}\rangle\langle\psi_{i}|\right\|\leq\varepsilon\) and \(\left\|\left(\left\langle+\right|\otimes\tilde{\Pi}_{\leq t-\delta}\right)U _{\Phi}\left(\left|+\right\rangle\otimes\Pi_{\leq t-\delta}\right)\right\|\leq\varepsilon\).
We here define singular value threshold projectors as \(\Pi_{\geq\delta}:=\Pi V\Sigma_{\geq\delta}V^{\dagger}\Pi\), and similarly \(\Pi_{\leq\delta}\); likewise, \(\tilde{\Pi}_{\geq\delta}:=\tilde{\Pi}W\Sigma_{\geq\delta}W^{\dagger}\tilde{\Pi}\), and similarly \(\tilde{\Pi}_{\leq\delta}\). With this construction in place, we then apply an \(\epsilon\)-approximate singular value projector by choosing \(t=(a+b)/2\) and \(\delta=(b-a)/2\). Then, we measure \(\left|+\right\rangle\left\langle+\right|\otimes\Pi\): if the final state is in \(\mathrm{Img}(\left|+\right\rangle\left\langle+\right|\otimes\Pi)\), there exists a singular value \(\sigma_{i}\) above \(b\); otherwise, all singular values \(\sigma_{i}\) must be below \(a\).
It remains to implement the singular value threshold projectors. In fact, \(U_{\Phi}\) can be achieved using _a single ancillary qubit_ with \(m\) uses of \(U\), \(U^{\dagger}\), \(\mathrm{C}_{\Pi}\)NOT, \(\mathrm{C}_{\tilde{\Pi}}\)NOT, and single-qubit gates.
Implementing singular value threshold projectors.We begin by constructing an odd polynomial \(P\in\mathbb{R}[x]\) of degree \(m=O(\log(1/\epsilon^{2})/\delta)\) that approximates the function \(\frac{1}{2}[(1-\epsilon)\cdot\operatorname{sgn}(x+t)+(1-\epsilon)\cdot \operatorname{sgn}(x-t)+2\epsilon\cdot\operatorname{sgn}(x)]\) with \(\epsilon^{2}/4\) precision on the interval \([-1,1]\setminus(-t-\delta,-t+\delta)\cup(t-\delta,t+\delta)\).
This construction of \(P\) is based on the degree-\(d\) approximation polynomial \(S(x)\) specified in the theorem statement. This polynomial \(S(x)\) satisfies the condition \(|S(x)-\operatorname{sgn}(x)|\leq\epsilon\) for all \(x\in[-1,1]\setminus(-\delta,\delta)\) where \(d=O(\delta^{-1}\log(1/\epsilon))\). Additionally, we have \(|P(x)|\leq 1\) for any \(-1\leq x\leq 1\), together with: \((-1)^{z}P(x)\in[0,\epsilon]\) if \((-1)^{z}x\in[0,t-\delta]\) and \((-1)^{z}P(x)\in[1-\epsilon,1]\) if \((-1)^{z}x\in[t+\delta,1]\) for \(z\in\{0,1\}\).
Now it suffices to construct a projected unitary encoding \(U\) such that \(\|\tilde{\Pi}U_{\Phi}\Pi-P^{(\mathrm{SV})}(\tilde{\Pi}U\Pi)\|\leq\epsilon\), which is achieved by Lemma 4.3 as long as the rotation angles \(\Phi\) corresponding to \(P(x)\) can be computed in deterministic \(\operatorname{poly}(d)\) time and \(s_{2}(n)\) space. Moreover, the gate complexity follows from Lemma 2.3.9 in [11].
### Time-efficient and space-bounded scenarios
Equipped with Theorem 4.1, utilizing a polynomial approximation of the sign function in Corollary 6 of [10], as well as the angle finding algorithms in Theorems 2.2.1-2.2.3 of [11], we conclude the time-efficient singular value discrimination.
**Theorem 4.4** (Time-efficient singular value discrimination).: _Consider \(0\leq a,b\leq 1\) and an exact projected unitary encoding \(A:=\tilde{\Pi}U\Pi\). Let \(|\psi\rangle\) be an unknown state, a right singular vector of \(A\) with a singular value either below \(a\) or above \(b\) such that \(b-a\geq 1/\mathrm{poly}(n)\). Employing the quantum singular value transform (QSVT) with a degree-\(O\big{(}\frac{\log 1/\epsilon}{b-a}\big{)}\) odd polynomial, one can distinguish the two cases with error probability at most \(\epsilon\). Moreover, the time complexity of implementing this QSVT is \(\mathrm{poly}\log 1/\epsilon\cdot\mathrm{poly}(1/(b-a))\), and we can compute the description of this quantum circuit implementation in \(\mathsf{P}\)._
Proof Sketch.: It suffices to construct a degree-\(d\) approximation polynomial of the sign function, as well as find rotation angles used in Lemma 4.3, in \(\mathsf{P}\). The approximation polynomial is achievable in [11, 11] equipped with Proposition 4.5.
**Proposition 4.5** (Polynomial approximation of the sign function, Corollary 6 in [10]).: _For all \(\delta>0\), \(\epsilon\in(0,1/2)\), there exists an efficiently computable odd polynomial \(P\in\mathbb{R}[x]\) of degree \(n=O(\log(1/\epsilon)/\delta)\) s.t. \(\forall x\in[-2,2],|P(x)|\leq 1\) and \(\forall x\in[-2,2]\setminus(-\delta,\delta):|P(x)-\operatorname{sgn}(x)|\leq\epsilon\)._
Regarding finding angles, this is achievable in time \(\tilde{O}(d^{3}\mathrm{poly}\log(1/\epsilon))\) using the recent developments [1, 12]. This results in a quantum circuit implementing the desired QSVT, and the description of this quantum circuit implementation can be computed in \(\mathsf{P}\).
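As an illustration of the kind of guarantee in Proposition 4.5, the sketch below is our own ad hoc construction (not the one in the cited works): we Chebyshev-interpolate the analytic proxy \(\operatorname{erf}(kx)\), which is close to \(\operatorname{sgn}(x)\) outside \((-\delta,\delta)\) for a suitable smoothing parameter \(k\), and then measure the resulting odd polynomial's error away from the origin. The degree and parameter choices here are illustrative, not optimized.

```python
# Illustrative construction of an odd polynomial approximating sgn(x)
# outside (-delta, delta); parameters are ad hoc, not those of Prop 4.5.
import math
import numpy as np
from numpy.polynomial import chebyshev as C

delta, eps = 0.1, 1e-3
k = math.sqrt(math.log(2 / eps)) / delta        # |erf(kx) - sgn(x)| small for |x| >= delta
f = np.vectorize(lambda x: math.erf(k * x))

deg = 201                                       # comfortably O(log(1/eps)/delta)
coef = C.chebinterpolate(f, deg)                # Chebyshev-basis coefficients
coef[::2] = 0.0                                 # enforce odd parity exactly

xs = np.linspace(delta, 1, 2000)
err = np.max(np.abs(C.chebval(xs, coef) - 1.0)) # sgn(x) = 1 on [delta, 1]
print(f"degree {deg}, max error on [delta, 1]: {err:.2e}")
assert err <= eps                               # oddness gives the same bound on [-1, -delta]
```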
Now we move to _the space-bounded scenario_, stated in Theorem 4.6.
**Theorem 4.6** (Space-bounded singular value discrimination).: _Consider \(0\leq a,b\leq 1\) and an exact projected unitary encoding \(A:=\tilde{\Pi}U\Pi\). Let \(|\psi\rangle\) be an unknown state, a right singular vector of \(A\) with a singular value either below \(a\) or above \(b\) such that \(b-a\geq\exp(-\mathrm{poly}(n))\). Employing the quantum singular value transform (QSVT) with a degree-\(O\big{(}\frac{\log 1/\epsilon}{b-a}\big{)}\) odd polynomial, one can distinguish the two cases with error probability at most \(\epsilon\). In addition, quantum circuits implementing this QSVT utilizes \(\mathrm{poly}(n)\) qubits, and we can compute the description of this quantum circuit implementation in \(\mathsf{PSPACE}\)._
The proof of Theorem 4.6 is closely derived from the _space-bounded_ quantum singular value transformation techniques in [13, Lemma 3.13]. This is because their proof techniques can be straightforwardly adapted to the context of projected unitary encodings.
Proof Sketch.: Analogous to the time-efficient scenario (e.g., Theorem 4.4), it is sufficient to implement the QSVT corresponding to the sign function, as long as we have an exponentially good (bounded) polynomial approximation of the sign function, as stated in Lemma 4.7.
**Lemma 4.7** (Exponentially good approximation to the sign function, adapted from Lemma 2.10 in [13]).: _For any \(\epsilon\geq 2^{-\mathrm{poly}(n)}\), there exists a degree \(d=O(\epsilon^{-1}\log(1/\epsilon))=O(2^{\mathrm{poly}(n)})\) and \(\mathsf{PSPACE}\)-computable coefficients \(c_{0},\cdots,c_{d}\) such that \(\forall x\in[-1,1]\setminus[-\epsilon,\epsilon]\), \(\left|\mathrm{sgn}(x)-P_{d}^{\mathrm{sgn}}(x)\right|\leq\epsilon\) where \(P_{d}^{\mathrm{sgn}}:=\sum_{i=0}^{d}c_{i}T_{i}\) and \(T_{i}(x)\) is the Chebyshev polynomial (of the first kind)16. Furthermore, the coefficient vector \(c=(c_{1},\cdots,c_{d})\) has norm bounded by \(\|c\|_{1}\leq O(\log d)\)._
Footnote 16: The Chebyshev polynomials (of the first kind) \(T_{k}(x)\) are defined via the following recurrence relation: \(T_{0}(x)=1\), \(T_{1}(x)=x\), and \(T_{k+1}(x)=2xT_{k}(x)-T_{k-1}(x)\). For \(x\in[-1,1]\), an equivalent definition is \(T_{k}(\cos\theta)=\cos(k\theta)\).
However, instead of directly constructing the sequence of rotation angles as in the time-efficient scenario, we employ the linear combination of unitaries (LCU) technique [1]. This technique can be readily adapted to space-bounded scenarios (as demonstrated in Lemma 3.6 in [13]) and is also applicable to projected unitary encodings. Next, we focus on implementing the quantum singular value transformation (QSVT) for the Chebyshev polynomials \(T_{i}(x)\) with odd \(i\in[1,d]\) utilized in Lemma 4.7. To achieve this, we examine the proof of Lemma 2.2.7 in [10] and straightforwardly adapt it to the context of projected unitary encodings. Consequently, implementing an exponential-degree QSVT corresponding to the sign function requires polynomially many qubits, and the description of this circuit implementation can be computed in \(\mathsf{PSPACE}\).
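For concreteness, the Chebyshev recurrence from Footnote 16 is easy to check numerically (a throwaway sketch of ours):

```python
# Check of the Chebyshev recurrence and the identity T_k(cos t) = cos(k t).
import numpy as np

def cheb(k, x):
    # T_0 = 1, T_1 = x, T_{k+1} = 2x T_k - T_{k-1}
    t0, t1 = np.ones_like(x), np.asarray(x, dtype=float)
    if k == 0:
        return t0
    for _ in range(k - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t1

theta = np.linspace(0.0, np.pi, 7)
for k in range(6):
    assert np.allclose(cheb(k, np.cos(theta)), np.cos(k * theta))
```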
## 5 Doubly-preserving error reduction for state-synthesizing classes
In this section, we will present error reduction for stateQMA and its variants, which preserves not only _the (quantum) witness state_ but also _the resulting state_ that well-approximates the target state.
### Doubly-preserving error reduction for stateQMA and more
We start by stating error reduction for stateQMA, which leads to
\[\mathsf{stateQMA}_{\delta}=\cup_{c-s\geq 1/\mathrm{poly}(n)}\mathsf{stateQMA}_{ \delta}[c,s].\]
**Theorem 5.1** (Error reduction for stateQMA).: _For any efficiently computable \(c(n),s(n),\delta(n)\) such that \(0\leq s(n)<c(n)\leq 1\), \(c(n)-s(n)\geq 1/\mathrm{poly}(n)\), we have that for any polynomial \(l(n)\),_
\[\mathsf{stateQMA}_{\delta}[c,s]\subseteq\mathsf{stateQMA}_{\delta}[1-2^{-l},2^ {-l}].\]
_Moreover, the number of repetitions is \(O(l(n)/(\sqrt{c}-\sqrt{s}))\)._
Proof.: Our proof closely follows Theorem 38 in [1], with an additional analysis of the resulting states.
Amplifying the promise gap by QSVT.Note that the acceptance probability of a stateQMA verifier \(V_{n}\) taking \(\left|w\right>\) as a quantum witness is \(\Pr\left[V_{n}\text{ accepts }\left|w\right>\right]=\||1\rangle\langle 1|_{\mathrm{out}}V_{n}|0^{k}\rangle|w\rangle\|_{2}^{2}\geq c\text{ or }\leq s\). Then consider a projected unitary encoding \(M_{n}:=\Pi_{\mathrm{out}}V_{n}\Pi_{\mathrm{in}}\) such that \(\|M_{n}\|\geq\sqrt{c}\) or \(\leq\sqrt{s}\), where \(\Pi_{\mathrm{in}}:=(\left|0\right>\left<0\right|^{\otimes k}\otimes I_{m})\) and \(\Pi_{\mathrm{out}}:=\left|1\right>\left<1\right|_{\mathrm{out}}\otimes I_{m+k-1}\). Since \(\|M_{n}\|=\sigma_{\max}(M_{n})\) where \(\sigma_{\max}(M_{n})\) is the largest singular value of \(M_{n}\), it suffices to distinguish whether the largest singular value of \(M_{n}\) is below \(\sqrt{s}\) or above \(\sqrt{c}\). By setting \(a=\sqrt{s},b=\sqrt{c}\), and \(\varepsilon=2^{-l(n)}\), this task is a straightforward corollary of the time-efficient singular value discrimination (Theorem 4.4).
QSVT preserves both the witness state and the resulting state.Utilizing the notation of Definition 4.2, we notice that the construction in the proof of Theorem 4.4 essentially maps \(M_{n}=\sum_{i}\sigma_{i}|\tilde{\psi}_{i}\rangle\langle\psi_{i}|\) to \(f(M_{n})=\sum_{i}f(\sigma_{i})|\tilde{\psi}_{i}\rangle\langle\psi_{i}|\), for an odd polynomial \(f\), such that \(f(x)\in[1-\varepsilon,1]\) if \(x\geq b\) and \(f(x)\in[0,\varepsilon]\) if \(x\leq a\) for any \(0\leq x\leq 1\). Since both left and right singular vectors are invariant in \(f(M_{n})\), the resulting state and the witness state are clearly preserved after reducing errors.
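A small numerical sketch (ours; toy qubit counts, a Haar-ish random verifier, and an arbitrary choice of output qubit) of the fact underpinning this step: the maximum acceptance probability over witnesses equals \(\sigma_{\max}(M_{n})^{2}\), attained at a right singular vector.

```python
# Toy check: max acceptance probability = sigma_max(M)^2 for M = Pi_out V Pi_in.
import numpy as np

rng = np.random.default_rng(4)
m, k = 3, 2                                     # witness / ancilla qubits (toy sizes)
d = 2 ** (m + k)
V, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

e0 = np.zeros(2 ** k); e0[0] = 1.0
Pi_in = np.kron(np.outer(e0, e0), np.eye(2 ** m))      # |0^k><0^k| (x) I_m
Pi_out = np.kron(np.diag([0.0, 1.0]), np.eye(d // 2))  # |1><1| on a designated qubit

M = Pi_out @ V @ Pi_in
sigma_max = np.linalg.norm(M, 2)

_, s, Vh = np.linalg.svd(M)
w_opt = Vh[0].conj()                             # optimal witness direction
assert np.isclose(np.linalg.norm(M @ w_opt) ** 2, sigma_max ** 2)
```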
Utilizing the space-bounded quantum singular value discrimination (Theorem 4.6), we will now improve Theorem 5.1 to \(\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}\), which may have _exponential precision_. This is partially analogous to [10] since Theorem 4.6 works merely for _polynomial space_.
**Theorem 5.2** (Error reduction for \(\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}\)).: _Given any efficiently computable \(c(n)\) and \(s(n)\), also \(\delta(n)\) such that \(0\leq s(n)<c(n)\leq 1\) and \(c(n)-s(n)\geq\exp(-\mathrm{poly}(n))\), then for any polynomial \(l(n)\)_
\[\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}{}_{\delta}[c,s] \subseteq\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}{}_{ \delta}[1-2^{-l(n)},2^{-l(n)}].\]
_Here \(t^{\prime}(n)=t(n)\cdot R(n)\) where the number of repetitions \(R(n)=O(l(n)/(\sqrt{c}-\sqrt{s}))\). Moreover, we need an additional ancillary qubit._
In addition, by forcing the input state of the "verification circuit" to be an all-zero state in the proof of Theorem 5.1, namely replacing the projector \(\Pi_{\mathrm{in}}:=\left(\left|0\right\rangle\left\langle 0\right|^{\otimes k} \otimes I_{m}\right)\) with \(\Pi^{\prime}_{\mathrm{in}}:=\left|0\right\rangle\left\langle 0\right|^{\otimes k+m}\), we straightforwardly obtain error reduction for \(\mathsf{stateBQP}\).
**Theorem 5.3** (Error reduction for \(\mathsf{stateBQP}\)).: _For any polynomials \(p(n)\) and \(q(n)\) such that \(1/p(n)\leq\gamma<1\), we have_
\[\mathsf{stateBQP}_{\delta}[\gamma]\subseteq\mathsf{stateBQP}_{\delta}[1-2^{-q(n)}].\]
_Moreover, the number of repetitions is \(O(q(n)/\sqrt{\gamma})\)._
Theorem 5.3 implies that \(\mathsf{stateBQP}_{\delta}=\cup_{1\leq\gamma^{-1}\leq\mathrm{poly}(n)} \mathsf{stateBQP}_{\delta}[\gamma]\). Also, combining with Proposition 3.4, \(\mathsf{stateBQP}\) achieves perfect completeness with a slightly modified distance parameter \(\delta^{\prime}\). Particularly, \(\mathsf{stateBQP}_{\delta}[\gamma]\subseteq\mathsf{stateBQP}_{\delta^{\prime}} [1]\) where \(\delta^{\prime}(n):=\delta(n)+\exp(-\mathrm{poly}(n))\).
Likewise, together with the projector \(\Pi^{\prime}_{\mathrm{in}}:=\left|0\right\rangle\left\langle 0\right|^{ \otimes k+m}\) and the space-bounded quantum singular value discrimination (Theorem 4.6), we will deduce error reduction for \(\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}\).
**Theorem 5.4** (Error reduction for \(\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}\)).: _For any efficiently computable \(\gamma(n),\delta(n)\) such that \(\exp(-\mathrm{poly}(n))\leq\gamma\leq 1\), we have that for any polynomial \(l(n)\),_
\[\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}_{\delta}[\gamma]\subseteq\mathsf{ state}_{\mathsf{U}}\mathsf{PSPACE}_{\delta}[1-2^{-l(n)}].\]
_Moreover, the number of repetitions is \(O\left(l(n)/\sqrt{\gamma}\right)\), and we need an additional ancillary qubit._
A direct consequence of Theorem 5.4 is that \(\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}_{\delta}=\cup_{\gamma\geq 2^{- \mathrm{poly}(n)}}\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}_{\delta}[\gamma]\). Similarly, by utilizing Proposition 3.4, we can also conclude that
\[\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}_{\delta}[\gamma]\subseteq\mathsf{ state}_{\mathsf{U}}\mathsf{PSPACE}_{\delta^{\prime}}[1]\text{ where }\delta^{\prime}=\delta+\exp(-\mathrm{poly}(n)).\]
### Application 1: \(\mathsf{stateQMA}\) with a short message is as weak as \(\mathsf{stateBQP}\)
Recall that \(\mathsf{stateQMA}_{\log}\) is a variant of \(\mathsf{stateQMA}\) where the witness state is logarithmic-size. Analogous to Theorem 3.10 in [11], a short message is also useless for \(\mathsf{stateQMA}\).
**Theorem 5.5** (A short message is useless for stateQMA).: _For any \(0\leq s(n)<c(n)\leq 1\) and \(c(n)-s(n)\geq 1/\mathrm{poly}(n)\), there exists a polynomial \(q(n)\) such that_
\[\mathsf{stateQMA}_{\delta}[\log,c,s]\subseteq\mathsf{stateBQP}_{\delta}[1/q(n)].\]
Proof.: Consider a state family \(\{|\psi_{n}\rangle\}_{n\in\mathbb{N}}\in\mathsf{stateQMA}_{\delta}[\log,c,s]\), we then notice that this state family is also in \(\mathsf{stateQMA}_{\delta}[\log,1-2^{-p(n)},2^{-p(n)}]\) where \(p(n)\) is a polynomial of \(n\) after performing error reduction (Theorem 5.1). Now let \(\{V_{n}\}_{n\in\mathbb{N}}\) be the family of quantum verifiers with negligible errors.
Removing the witness by a "random-guess" state.Now consider a \(\mathsf{stateBQP}\) algorithm that applies \(V_{n}\) with the witness being a completely mixed state \(\tilde{I}_{m}:=2^{-m}I_{m}\) on \(m=O(\log n)\) qubits. It accepts iff \(V_{n}\) accepts. For the analysis, we define \(M_{n}:=(|1\rangle\langle 1|_{\mathrm{out}}\otimes I_{m+k-1})V_{n}(I_{m}\otimes|0\rangle\,\langle 0|^{\otimes k})\). Then \(\Pr\left[V_{n}\text{ accepts}|w\rangle\right]=\|M_{n}|w\rangle\|_{2}^{2}\), which infers that the acceptance probability of the \(\mathsf{stateBQP}\) algorithm is
\[\Pr\bigl{[}V_{n}\text{ accepts }\tilde{I}_{m}\bigr{]}=\mathrm{Tr}(M_{n}^{ \dagger}M_{n}2^{-m}I_{m})=2^{-m}\mathrm{Tr}(M_{n}^{\dagger}M_{n})\geq 2^{-m} \lambda_{\max}(M_{n}^{\dagger}M_{n}), \tag{1}\]
where \(\lambda_{\max}(M_{n}^{\dagger}M_{n})\) is the largest eigenvalue of \(M_{n}^{\dagger}M_{n}\). For the completeness, there exists \(|w\rangle\) such that \(\lambda_{\max}(M_{n}^{\dagger}M_{n})\geq\Pr\left[V_{n}\text{ accepts }|w\rangle\right]\geq 1-2^{-p(n)}\). Plugging it into Equation (1), we have \(\Pr\left[V_{n}\text{ accepts }\tilde{I}_{m}\right]\geq 2^{-m(n)}(1-2^{-p(n)})\geq 1 /q(n)\), where \(q(n):=2^{O(m(n))}\) is a polynomial of \(n\). For the soundness, since \(\Pr\left[V_{n}\text{ accepts }\tilde{I}_{m}\right]\geq 1/q(n)>2^{-p(n)}\), the soundness condition of Definition 3.5 guarantees that \(\mathrm{td}(\rho_{\tilde{I}_{m}},\psi_{n})\leq\delta\).
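The "random-guess" step can be checked in the same toy setting as before (ours; small qubit counts, random verifier): the completely mixed witness accepts with probability \(2^{-m}\operatorname{Tr}(M_{n}^{\dagger}M_{n})\geq 2^{-m}\lambda_{\max}(M_{n}^{\dagger}M_{n})\).

```python
# Toy check of Equation (1): mixed-witness acceptance vs. best-witness acceptance.
import numpy as np

rng = np.random.default_rng(5)
m, ka = 2, 2                                    # witness / ancilla qubits (toy sizes)
d = 2 ** (m + ka)
V, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

e0 = np.zeros(2 ** ka); e0[0] = 1.0
Pi_in = np.kron(np.outer(e0, e0), np.eye(2 ** m))
Pi_out = np.kron(np.diag([0.0, 1.0]), np.eye(d // 2))
M = Pi_out @ V @ Pi_in

gram = M.conj().T @ M
p_mixed = np.trace(gram).real / 2 ** m          # Pr[accept completely mixed witness]
lam_max = np.linalg.eigvalsh(gram)[-1]          # best-witness acceptance probability
assert p_mixed >= lam_max / 2 ** m - 1e-12
```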
We remark that Theorem 5.5 straightforwardly adapts to any \(\mathsf{stateQMA}\) verifier with witness states of polynomial-size, which is a state-synthesizing counterpart of \(\mathsf{QMA}\subseteq\mathsf{PP}\)[23, 24].
**Proposition 5.6** ("\(\mathsf{stateQMA}\subseteq\mathsf{statePP}\)").: \(\mathsf{stateQMA}_{\delta}\subseteq\mathsf{statePreciseBQP}_{\delta}\)_._
Proof.: Consider a family of quantum states \(\{|\psi_{n}\rangle\}_{n\in\mathbb{N}}\in\mathsf{stateQMA}_{\delta}[m,c,s]\) where \(m(n)\) is the size of the witness state. Analogous to Theorem 5.5, we have derived that
\[\mathsf{stateQMA}_{\delta}[m,c,s]\subseteq\mathsf{stateQMA}_{\delta}[m,1-2^{- m^{\prime}},2^{-m^{\prime}}]\subseteq\mathsf{stateBQP}_{\delta}[2^{-m}],\]
where \(m^{\prime}(n)=m(n)\cdot n^{2}\).
### Application 2: \(\mathsf{statePreciseQMA}\) is in \(\mathsf{statePSPACE}\)
Finally, we provide a state-synthesizing analogue of \(\mathsf{PreciseQMA}\subseteq\mathsf{BQPSPACE}\)[19, 19].
**Theorem 5.7**.: \(\mathsf{statePreciseQMA}_{\delta}\subseteq\mathsf{state}_{\mathsf{U}}\mathsf{ PSPACE}_{\delta}\)_._
Proof.: Consider a family of quantum states \(\{|\psi_{n}\rangle\}_{n\in\mathbb{N}}\in\mathsf{statePreciseQMA}_{\delta}[m,c,s]\) where \(m(n)\) is the size of witness state. We begin by observing that
\[\mathsf{statePreciseQMA}_{\delta}[m,c,s]\subseteq\mathsf{stateQMA}_{\mathsf{U} }\mathsf{PSPACE}_{\delta}^{\mathsf{off}}[m,c,s].\]
Then we replace the witness with a "random-guess" state analogous to the proof of Theorem 5.5. Utilizing error reduction for \(\mathsf{stateQMA}_{\mathsf{U}}\mathsf{PSPACE}^{\mathsf{off}}\) (Theorem 5.2), we have derived that
\[\mathsf{statePreciseQMA}_{\delta}[m,c,s]\subseteq\mathsf{stateQMA}_{\mathsf{U} }\mathsf{PSPACE}_{\delta}^{\mathsf{off}}[m,1-2^{-m^{\prime}},2^{-m^{\prime}}] \subseteq\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}_{\delta}[2^{-m}],\]
where \(m^{\prime}(n)=m(n)\cdot n^{2}\). Employing error reduction for \(\mathsf{state}_{\mathsf{U}}\mathsf{PSPACE}\) (Theorem 5.4), we conclude that
\[\mathsf{statePreciseQMA}_{\delta}[m,c,s]\subseteq\mathsf{state}_{\mathsf{U}} \mathsf{PSPACE}_{\delta}[2^{-m}]\subseteq\mathsf{state}_{\mathsf{U}} \mathsf{PSPACE}_{\delta}[1-2^{-l}],\]
where \(l(n)\) is a polynomial of \(n\). This completes the proof.
stateQCMA achieves perfect completeness
In this section, we will present a state-synthesizing analogue of the \(\mathsf{QCMA}=\mathsf{QCMA}_{1}\) theorem [16].
**Theorem 6.1** (stateQCMA is closed under perfect completeness).: _For any efficiently computable functions \(c(n)\), \(s(n)\) and \(\delta(n)\) such that \(c(n)-s(n)\geq 1/\mathrm{poly}(n)\), we have that_
\[\mathsf{stateQCMA}_{\delta}[c,s]\subseteq\mathsf{stateQCMA}_{\delta^{\prime}}[ 1,s^{\prime}],\]
_where \(s^{\prime}=\frac{1}{2}\left(\frac{s}{c}\right)^{3}-2\left(\frac{s}{c}\right)^ {2}+\frac{5}{2}\left(\frac{s}{c}\right)\) and \(\delta^{\prime}=\delta+\exp(-\mathrm{poly}(n))\)._
_Furthermore, for any \(\mathsf{stateQCMA}\) verifier utilizing the "Pythagorean" gateset, we have \(\delta^{\prime}=\delta\)._
It is noteworthy that while error reduction for \(\mathsf{stateQCMA}\) (a corollary of Theorem 5.1) preserves the distance between the resulting state and the target state, this distance-preserving property in Theorem 6.1 is _gateset-dependent_ since changing gatesets will worsen the distance parameter (Lemma 2.1). This is because the key insight from [16], that the maximum acceptance probability of a \(\mathsf{QCMA}\) (or \(\mathsf{stateQCMA}\)) verifier can be expressed with polynomially many bits, only applies to certain gatesets, as elaborated in Remark 6.2.
_Remark 6.2_ (On choices of the gateset).: For state-synthesizing complexity classes, we need to deal with not only the maximum acceptance probability but also the resulting states after the computation. To well-approximate any quantum states with a designated gateset \(\mathcal{S}\), this gateset \(\mathcal{S}\) must generate a dense subgroup of \(\mathrm{SU}(2^{n})\) for large enough \(n\). However, this does not hold for the gateset used in [16]17. We then use the "Pythagorean" gateset in [16] where real and imaginary parts of all matrix entries are rational numbers.
Footnote 17: To be specific, the proof of Theorem 3.2 in [17] indicates that the gateset consists of Toffoli and Hadamard merely generates a dense subgroup of \(\mathrm{SO}(8)\).
Analogous to [16], to achieve perfect completeness, we also utilize the quantum rewinding lemma [20] with a single iteration. As stated in Lemma 6.4, it suffices to construct a new \(\mathsf{stateQCMA}\) verifier whose acceptance probability is exactly \(1/2\) under the completeness condition.
Our construction of the new \(\mathsf{stateQCMA}\) verifier with perfect completeness tightly follows the construction for \(\mathsf{QCMA}\) (i.e., Figure 1 in [16]). The only difference is the unitary transformation \(Q\), since the original construction in [16] cannot preserve the resulting states; this also leads to a slightly different analysis for the soundness condition.
We begin with two lemmas that are crucial for our analysis.
**Lemma 6.3** (Acceptance probabilities from the "Pythagorean" gateset are rational, adapted from [16]).: _For every unitary transformation \(U\) on an \(n\)-qubit system that consists of \(l(n)\) gates from the "Pythagorean" gateset, where \(l\) is a polynomial, the probability \(p_{\mathrm{acc}}\) that the first qubit of \(U|0^{n}\rangle\) is found in the state \(|1\rangle\) (i.e., by the measurement on the computational basis) is expressed as \(p_{\mathrm{acc}}=\frac{k}{5^{l(n)}}\) where \(k\) is an integer from the range \([0,5^{l(n)}]\)._
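A minimal exact-arithmetic sketch (ours; a tiny two-qubit circuit using only the real "Pythagorean" gate and CNOT, simulated with Python `Fraction`s) of why such probabilities are rational with power-of-five denominators. Note that the exact exponent convention — here amplitudes acquire denominator \(5^{l}\) and hence probabilities denominator \(5^{2l}\) — depends on how \(l\) counts gates, which is our assumption rather than the lemma's bookkeeping.

```python
# Exact simulation: amplitudes after l "Pythagorean" gates have denominator 5^l,
# so the acceptance probability is an integer over a power of 5 (here 5^{2l}).
from fractions import Fraction

P = [[Fraction(4, 5), Fraction(-3, 5)],
     [Fraction(3, 5), Fraction(4, 5)]]          # real "Pythagorean" gate

def apply_1q(state, gate, q):
    out = [Fraction(0)] * len(state)
    for b, amp in enumerate(state):
        bit = (b >> q) & 1
        for nb_bit in (0, 1):
            nb = (b & ~(1 << q)) | (nb_bit << q)
            out[nb] += gate[nb_bit][bit] * amp
    return out

def cnot(state, c, t):
    out = [Fraction(0)] * len(state)
    for b, amp in enumerate(state):
        out[b ^ (1 << t) if (b >> c) & 1 else b] += amp
    return out

state = [Fraction(1), Fraction(0), Fraction(0), Fraction(0)]  # |00>
state = apply_1q(state, P, 0)
state = cnot(state, 0, 1)
state = apply_1q(state, P, 1)
state = apply_1q(state, P, 0)
l = 3                                            # number of Pythagorean gates used
# Pr[qubit 0 reads 1]; qubit 0 plays the role of the "first qubit" (little-endian)
p_acc = sum(a * a for b, a in enumerate(state) if b & 1)
assert (p_acc * 5 ** (2 * l)).denominator == 1   # p_acc = k / 5^{2l} for an integer k
```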
**Lemma 6.4** (Success probability of the quantum rewinding, adapted from Lemma 8 in [20]).: _Let \(Q\) be a quantum circuit such that \(Q|\psi\rangle=\sqrt{p(\psi)}|0\rangle|\psi_{0}\rangle+\sqrt{1-p(\psi)}|1\rangle |\psi_{1}\rangle\), and \(p:=p(\psi)\in(0,1)\) is constant over all choices of the input \(|\psi\rangle\). Then the probability of producing \(|\psi_{0}\rangle\) within \(t\) iterations is \(1-(1-p)(1-2p)^{2t}\)._
_Particularly, the probability of producing \(|\psi_{0}\rangle\) by the single-iteration quantum rewinding of \(Q\) is \(p+4p(1-p)^{2}\), as well as this achieves the perfect success probability when \(p=1/2\)._
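Both claims in the last sentence are elementary algebra; a quick SymPy check (ours):

```python
# Symbolic check of Lemma 6.4's single-iteration formula.
import sympy as sp

p = sp.symbols('p')
general = 1 - (1 - p) * (1 - 2 * p) ** 2        # t = 1 iteration
single = p + 4 * p * (1 - p) ** 2
assert sp.expand(general - single) == 0
assert single.subs(p, sp.Rational(1, 2)) == 1   # perfect success at p = 1/2
```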
Now we proceed with the actual proof.
Proof of Theorem 6.1.: Let \(V_{n,w}\) be a \(\mathsf{stateQCMA}_{\mathcal{S}}[c,s]\) verification circuit with a classical witness \(w\) of size \(m(n)\), where \(c(n)\) is a sufficiently large constant18. Using Lemma 2.1, we first convert \(V_{n,w}\) to another circuit that utilizes a polynomial number of gates from the "Pythagorean" gateset \(\mathcal{G}\). Note that while we can assume that the completeness parameter \(c(n)\) does not increase by this conversion, the trace distance error increases to \(\delta^{\prime}(n):=\delta(n)+\exp(-\mathrm{poly}(n))\). In the rest of the proof, we use \(V_{n,w}\) to denote the converted circuit, so \(V_{n,w}\) is assumed to be a \(\mathsf{stateQCMA}_{\mathcal{G}}[c,s]\) circuit. Utilizing Lemma 6.3, the acceptance probability of each \(V_{n,w}\) is expressed as \(k_{n,w}/5^{l(n)}\) for some integer \(k_{n,w}\in\{0,1,\cdots,5^{l(n)}\}\) and polynomial \(l\) which represents the size of the circuit \(V_{n,w}\).
Footnote 18: This is achievable by first applying error reduction on the given \(\mathsf{stateQCMA}\) verifier.
Now we construct a \(\mathsf{stateQCMA}_{\mathcal{G}}[1,s^{\prime}]\) verifier such that the verification circuit \(\tilde{V}_{n,w}\) is the single-iteration quantum rewinding of \(Q_{w,k}\) with a pre-processing, where the unitary transformation \(Q_{w,k}\) is shown in Algorithm 2. To be precise, \(\tilde{V}_{n,w}\) will simply reject if \(k/5^{l(n)}<c(n)\)19, then \(\tilde{V}_{n,w}\) will perform the construction in Figure 1.
Footnote 19: The claimed maximum acceptance probability \(k/5^{l(n)}\) is supposed to correspond to _legal_ witnesses.
```
1. Prepare the uniform superposition \(\frac{1}{\sqrt{2k}}\sum_{z\in\{0,1,\cdots,2k-1\}}|z\rangle\) in \(\mathsf{S}\) ;
2. Apply \(V_{n,w}\) on \(\mathsf{R}\);
3. Apply a Toffoli gate that targets \(\mathsf{O}\), where the first control qubit is the designated output qubit in \(\mathsf{R}\), and the second control qubit is set according to whether the integer in \(\mathsf{S}\) is less than \(5^{l(n)}\).
```
**Algorithm 2**Unitary Transformation \(Q_{w,k}\)
In Algorithm 2, all registers \(\mathsf{O},\mathsf{R},\mathsf{S}\) are assumed to be initialized to all-zero state, where \(\mathsf{O}\) is a single-qubit register, and \(\mathsf{R}\) is the \((q(n)+n)\)-qubit register for some polynomial \(q\), on which the verification circuit \(V_{n,w}\) acts (and the last \(n\)-qubit of \(\mathsf{R}\) is assumed to contain the output), as well as \(\mathsf{S}\) is a \(t\)-qubit register where \(t\) is the minimum integer satisfying \(2^{t}\geq 5^{l(n)}\). Moreover, the first step in Algorithm 2 can be implemented by the exact amplitude amplification.20 We then show that the unitary transformation \(Q_{w,k}\) indeed preserves the resulting states of the verification circuit \(V_{n,w}\) as Proposition 6.5, whose proof is deferred to the last part of this section.
Figure 1: The new verification circuit \(\tilde{V}_{n,w}\)
**Proposition 6.5**.: _For the unitary transformation \(Q_{w,k}\) specified in Algorithm 2, the success probability is \(\Pr\left[Q_{w,k}\text{ accepts}\right]:=\|\Pi_{\mathrm{acc}}Q_{w,k}|\bar{0} \rangle_{(\mathsf{O},\mathsf{R},\mathsf{S})}\|^{2}=k_{n,w}/(2k)\) where \(\Pi_{\mathrm{acc}}:=|1\rangle\langle 1|_{\mathsf{O}}\otimes I_{(\mathsf{R}, \mathsf{S})}\). Moreover, \(Q_{w,k}\) preserves the state synthesized by \(V_{n,w}\)._
Plugging Proposition 6.5 into Lemma 6.4, we obtain that
\[\Pr\left[\tilde{V}_{n,w}\text{ accepts }\right]=\frac{1}{2}\left(\frac{k_{n,w}}{k} \right)^{3}-2\left(\frac{k_{n,w}}{k}\right)^{2}+\frac{5}{2}\left(\frac{k_{n,w }}{k}\right):=g\left(\frac{k_{n,w}}{k}\right), \tag{2}\]
as guaranteed that \(k\geq c(n)\cdot 5^{l(n)}\). We further notice the following fact21:
Footnote 21: Fact 6.6 follows from the facts that \(x=1\) and \(x=5/3\) are roots of \(g^{\prime}(x)=0\), as well as \(g^{\prime}(0)>0\).
**Fact 6.6**.: _For \(0\leq x\leq 1\), \(g(x)\) defined in Equation (2) is monotonically increasing._
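Fact 6.6 and Footnote 21 can be verified symbolically (our quick check):

```python
# Symbolic check: g'(x) vanishes only at x = 1 and x = 5/3, and g(1) = 1.
import sympy as sp

x = sp.symbols('x')
g = sp.Rational(1, 2) * x ** 3 - 2 * x ** 2 + sp.Rational(5, 2) * x
gp = sp.diff(g, x)
assert sorted(sp.solve(gp, x)) == [1, sp.Rational(5, 3)]
assert gp.subs(x, 0) > 0 and g.subs(x, 1) == 1  # increasing on [0, 1], with g(1) = 1
```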
It is thus left to analyze the maximum acceptance probability of \(\tilde{V}_{n,w}\).
* For the completeness condition, equipped with Fact 6.6, the maximum is achieved when \(x:=k_{n,w}/k=1\). Namely, the claimed maximum acceptance probability \(k\) of \(V_{n,w}\) indeed coincides with the true maximum acceptance probability \(k_{n,w}\).
* For the soundness condition, we observe the following for any illegal witness \(w\in\{0,1\}^{m(n)}\): \[\frac{k_{n,w}}{k}\leq\frac{s(n)\cdot 5^{l(n)}}{c(n)\cdot 5^{l(n)}}=\frac{s(n)}{ c(n)}.\] Since \(k_{n,w}/k\leq s(n)/c(n)\), this indicates that the acceptance probability of \(\tilde{V}_{n,w}\) is \(s^{\prime}(n)\leq g(s(n)/c(n))\). By Fact 6.6, \(\Pr\left[\tilde{V}_{n,w}\text{ accepts }\right]\leq s^{\prime}(n)\) holds for any choice of \(k\) as long as the witness \(w\) is illegal. Note that \(c\) is a constant and the promise gap satisfies \(c-s\geq 1/p(n)\) where \(p(n)\) is a polynomial of \(n\); this implies that \[s^{\prime}(n)\leq g\left(1-\frac{c}{p(n)}\right)=1-\frac{1}{2}\left[\left( \frac{c}{p(n)}\right)^{2}+\left(\frac{c}{p(n)}\right)^{3}\right]:=1-\frac{1}{ q(n)}.\] (3)
We complete the proof by noticing \(q(n)\) is a polynomial of \(n\).
In the remainder of this section, we complete the proof of Proposition 6.5.
Proof of Proposition 6.5.: Let \(V_{n,w}|\bar{0}\rangle_{\mathsf{R}}=|0\rangle|\phi_{0}\rangle+|1\rangle|\phi_{1}\rangle\) be the quantum state just before the final measurement of the verification circuit \(V_{n,w}\). This yields
\[\Pr\left[V_{n,w}\text{ accepts }\right]=\||1\rangle\langle 1|_{\mathrm{out}}V_{n,w} |\bar{0}\rangle_{\mathsf{R}}\|_{2}^{2}=\langle\phi_{1}|\phi_{1}\rangle=k_{n,w }/5^{l(n)}.\]
The output state conditioned on accepting is then defined as the state we obtain by tracing out all but the last \(n\)-qubit of the density matrix \(\frac{5^{l(n)}}{k_{n,w}}|\phi_{1}\rangle\langle\phi_{1}|\). The quantum state in \((\mathsf{O},\mathsf{R},\mathsf{S})\) before the last step in Algorithm 2 is then
\[|0\rangle_{\mathsf{O}}\otimes\left[|0\rangle|\phi_{0}\rangle+|1\rangle|\phi_ {1}\rangle\right]_{\mathsf{R}}\otimes\frac{1}{\sqrt{2k}}\sum_{z=0}^{2k-1}|z \rangle_{\mathsf{S}}\]
and therefore the state after applying Algorithm 2 is \(Q_{w,k}|\bar{0}\rangle_{(\mathsf{O},\mathsf{R},\mathsf{S})}\) as below:
\[|0\rangle_{\mathsf{O}}\otimes|0\rangle|\phi_{0}\rangle_{\mathsf{R}}\otimes \frac{1}{\sqrt{2k}}\sum_{z=0}^{2k-1}|z\rangle_{\mathsf{S}}+|0\rangle_{\mathsf{ O}}\otimes|1\rangle|\phi_{1}\rangle_{\mathsf{R}}\otimes\frac{1}{\sqrt{2k}}\sum_{z=5^{l(n )}}^{2k-1}|z\rangle_{\mathsf{S}}+|1\rangle_{\mathsf{O}}\otimes|1\rangle|\phi_ {1}\rangle_{\mathsf{R}}\otimes\frac{1}{\sqrt{2k}}\sum_{z=0}^{5^{l(n)}-1}|z \rangle_{\mathsf{S}}\]
We thus conclude that
\[\Pr\left[Q_{w,k}\text{ accepts}\right]=\langle\phi_{1}|\phi_{1}\rangle\cdot\left\| \frac{1}{\sqrt{2k}}\sum_{z=0}^{5^{l(n)}-1}|z\rangle_{\mathsf{S}}\right\|_{2}^{2 }=\frac{k_{n,w}}{5^{l(n)}}\cdot\frac{5^{l(n)}}{2k}=\frac{k_{n,w}}{2k}.\]
Furthermore, we notice that the resulting state of \(Q_{w,k}\) is
\[(|1\rangle\langle 1|)_{\mathsf{O}}\otimes(|\phi_{1}\rangle\langle\phi_{1}|)_{\mathsf{R}} \otimes\left(|0\rangle\langle 0|\otimes|+\rangle\langle+|^{\otimes l(n)} \right)_{\mathsf{S}}.\]
By tracing out \(\mathsf{S}\) and all but the last \(n\)-qubit of \(\mathsf{R}\), we get the resulting state of \(V_{n,w}\).
## Acknowledgments
The authors were supported by JSPS KAKENHI Grants Nos. JP19H04066, JP20H00579, JP20H05966, JP20H04139, JP21H04879 and MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grants Nos. JPMXS0118067394 and JPMXS0120319794. MM was also supported by JST, the establishment of University fellowships towards the creation of science technology innovation, Grant No. JPMJFS2120. We thank Henry Yuen and Chinmay Nirkhe for helpful discussions. YL thanks Naixu Guo for helpful discussions on quantum singular value transformations. Circuit diagrams were drawn by the Quantiz package [11].
|
2307.13984 | Particle realization of Bondi-Metzner-Sachs symmetry in 2+1 space-time | We construct a Lorentz invariant massive particle model in (2+1) space-time
with an enlarged set of symmetries which includes Bondi-Metzner-Sachs (BMS)
translations (supertranslations), using the non-linear realization framework.
The Hamiltonian formalism for the resulting Lagrangian is constructed, and the
infinite phase-space constraints and the set of gauge transformations are
analysed. We also compute the massless limit of the theory in phase-space.
After eliminating the gauge degrees of freedom, the physical reduced space is
left only with the degrees of freedom of a standard Poincar\'e particle but
with a residual set of symmetries that we prove to be BMS. A similar result for
the massless limit, including in this case superrotations, is pointed out. | Carles Batlle, VÃctor Campello, Joaquim Gomis | 2023-07-26T06:50:39Z | http://arxiv.org/abs/2307.13984v2 | # Particle realization of Bondi-Metzner-Sachs symmetry in \(2+1\) space-time
###### Abstract
We construct a Lorentz invariant massive particle model in (2+1) space-time with an enlarged set of symmetries which includes Bondi-Metzner-Sachs (BMS) translations (supertranslations), using the non-linear realization framework. The Hamiltonian formalism for the resulting Lagrangian is constructed, and the infinite phase-space constraints and the set of gauge transformations are analysed. We also compute the massless limit of the theory in phase-space. After eliminating the gauge degrees of freedom, the physical reduced space is left only with the degrees of freedom of a standard Poincare particle but with a residual set of symmetries that we prove to be BMS. A similar result for the massless limit, including in this case superrotations, is pointed out.
## 1 Introduction
Bondi-Metzner-Sachs symmetry [1; 2], which originated as an extension of Poincare symmetry for asymptotically flat space-times, has received a lot of renewed attention in the last 15 years. One of the interest is to deduce Weinberg's soft graviton theorems [3] as the Ward identities of BMS supertranslations [4; 5; 6; 7]. An overview of recent developments can be found in [8]. For the relation of BMS symmetry with celestial holography see, for example, [9; 10]
The BMS group, which is the semi-direct product of Lorentz and supertraslations (including ordinary space-time translations) was extended in [11; 12; 13] with the inclusion of superrotations, obtaining the extended BMS group, given by the semidirect product of Lorentz and superrotations and supertranslations.
A canonical realization of the BMS group was constructed using the Fourier modes of a free massive Klein-Gordon (KG) field in \(2+1\). In the case of a massless KG field, one can construct an extended BMS symmetry which includes superrotations [14]. This approach was pioneered in [15]. In this context, BMS symmetries are not associated with asymptotically flat space-times. Instead, they appear as a generalization of the components
\(p^{\mu}\) of ordinary momenta, by noting that the supertranslations obey the Beltrami differential equation of the hyperbolic space \(H_{2+1}\) or alternatively as the differential equation associated with one of the Lorentz Casimirs, \(C=J^{2}-\vec{K}^{2}\), where \(J\) is the angular momentum and \(\vec{K}\) are the two-dimensional boosts. The massless case is obtained from a suitable limit of the Beltrami equation or through the Lorentz Casimir \(C\) (see references above and also [16]).
With the idea of further exploring BMS symmetry, we propose a massive particle realization in \(2+1\) dimensions, based on non-linear realizations of symmetry algebras (see [17; 18] and references therein). The particle Lagrangian is constructed in an infinite-dimensional space, generalizing the Minkowski space, that we call BMS space. The infinite number of coordinates are associated with the supertranslations, and we also use two Goldstone coordinates associated with the two broken boost generators. The Hamiltonian formalism for the resulting Lagrangian is constructed, and the infinite phase-space first-class constraints and set of gauge transformations are analysed. We also compute the massless limit of the model. We obtain the physical reduced space after eliminating the gauge degrees of freedom by introducing an infinite set of gauge fixing constraints. This space is left only with the degrees of freedom of a standard Poincare particle. Since in the gauge fixing procedure the rigid symmetries are maintained we prove that the ordinary relativistic massive Poincare particle is invariant under a realization of the BMS symmetry that we construct using the compensating gauge transformations, which are necessary to preserve the gauge fixing under supertranslations. In the massless case, we further find an infinite set of superrotations that are symmetries of the massless relativistic particle. It turns out that the infinite set of BMS coordinates associated with the supertraslations are not physical because they can always be gauged away, and therefore the model does not have the so-called soft BMS modes. These results agree with the fact that the quadratic Casimir of BMS algebra coincides with the quadratic Casimir of the Poincare algebra.
A different approach to the definition of BMS particles in \(2+1\) dimensions is based on the coadjoint orbit approach [19; 20; 21; 22]. However, as far as we know, no particle action has been constructed in the literature using this approach. For the relation among the non-linear realization framework and the coadjoint orbit method, see for example [18].
The paper is organized as follows. Section 2 we derive the BMS particle Lagrangian in \(2+1\) space-time using the non-linear realization approach. The canonical analysis of this Lagrangian is given in Section 3, including the discussion of the constraints, the reduced phase space and the gauge transformations induced by the first-class constraints. Section 4 presents the generators that realize the Poincare symmetry in BMS coordinates, and shows that the theory is indeed invariant under them. The massless limit of the theory is computed in Section 5, and it is seen that the set of symmetry generators is extended to include superrotations. The gauge fixing of the theory is presented in Section 6, and the physical degrees of freedom of the BMS particle are determined. Our results are discussed in Section 7, and some possible extensions and connections with other approaches are also considered. Detailed proofs of some of the results are presented in Appendixes (A), (B) and (C), and Appendix (D) contains a discussion of the quadratic Casimir in BMS space.
Non-linear realization of the BMS algebra in \(2+1\) dimensions
The extended BMS algebra [12] without central extensions is given by \((m,n\in\mathbb{Z})\)
\[[L_{n},P_{m}] = i(n-m)P_{n+m},\] \[[P_{n},P_{m}] = 0,\] \[[L_{n},L_{m}] = i(n-m)L_{n+m}. \tag{1}\]
We are mainly interested in the subalgebra formed by the Lorentz generators \(L_{0}\), \(L_{\pm 1}\) and the supertranslations \(P_{n}\):
\[[L_{1},L_{-1}]=2iL_{0},\quad[L_{1},L_{0}]=iL_{1},\quad[L_{-1},L_{ 0}]=-iL_{-1}, \tag{2}\] \[[L_{-1},P_{m}]=-i(m+1)P_{m-1},\quad[L_{0},P_{m}]=-imP_{m},\quad[L_ {1},P_{m}]=-i(m-1)P_{m+1},\] (3) \[[P_{n},P_{m}]=0. \tag{4}\]
This subalgebra is \(BMS_{3}\). The BMS space is the homogeneous space \(BMS_{3}/SO(2,1)\), which locally is given by
\[g_{0}(\{x\})=\prod_{n\in\mathbb{Z}}e^{iP_{n}x^{n}}. \tag{5}\]
In order to construct a massive BMS particle we should consider the coset \(BMS_{3}/SO(2)\), locally given by
\[g(\{x\},u,v)=g_{0}(\{x\})e^{iL_{-1}v}e^{iL_{1}u}=g_{0}(\{x\})U(u,v), \tag{6}\]
where \(U(u,v)\) is a general Lorentz parametrized by the Goldstone coordinates \(u,v\). and \(\{x^{n}\}_{n\in\mathbb{Z}}\) are the BMS coordinates (with \(x^{0}\), \(x^{\pm 1}\) related to ordinary \(2+1\) space-time). The BMS coordinates are complex, with \(x^{n}\) and \(x^{-n}\) complex conjugate of each other.
The Maurer-Cartan form associated to \(g\) is
\[\Omega(g)=-ig^{-1}\mathrm{d}g=U^{-1}g_{0}^{-1}\mathrm{d}g_{0}\,U+U^{-1} \mathrm{d}U \tag{7}\]
that is
\[\Omega(g)=\sum_{n\in\mathbb{Z}}\mathrm{d}x^{n}\,U^{-1}P_{n}U-iU^{-1}\mathrm{ d}U. \tag{8}\]
In the spirit of obtaining spinless particle actions, see for example [17], we are only interested in the terms \(\Omega_{P_{0}}\) of \(\Omega\) proportional to \(P_{0}\)1, which can only come from \(U^{-1}P_{n}U\). The detailed computation is presented in Appendix A, and the result is
Footnote 1: In \(2+1\) dimensions one can also consider the term proportional to \(L_{0}\), which can be used to construct models of particles with spin (see, for example, [23]), but we are only interested in the spinless case.
\[\Omega_{P_{0}}=\mathrm{d}x^{0}(1-2uv)+\sum_{n=1}^{\infty}\mathrm{d}x^{n}(-1)^ {n}v^{n}(n+1-2uv)+\mathrm{d}x^{-1}2u+\sum_{n=2}^{\infty}\mathrm{d}x^{-n}u^{n} \frac{n+1-2uv}{(1-uv)^{n}}. \tag{9}\]
Following the standard procedure, we integrate the pullback of \(\Omega_{P_{0}}\) to the world-line of the particle
\[S[\{x\},u,v]=-\mu\int\mathrm{d}\tau(\dot{x}^{0}(1-2uv)-2\dot{x}^{1 }v(1-uv)+2\dot{x}^{-1}u\] \[+\,\sum_{n=2}^{\infty}\dot{x}^{n}(-1)^{n}v^{n}(n+1-2uv)+\sum_{n=2} ^{\infty}\dot{x}^{-n}u^{n}\frac{n+1-2uv}{(1-uv)^{n}})\] \[=\,-\mu\int\!\!\mathrm{d}\tau\!\left(\sum_{n=0}^{\infty}\dot{x}^{ n}(-1)^{n}v^{n}(n+1-2uv)+\sum_{n=1}^{\infty}\dot{x}^{-n}u^{n}\frac{n+1-2uv}{(1 -uv)^{n}}\right), \tag{10}\]
and define the BMS particle Lagrangian
\[\mathcal{L}=-\mu\left(\sum_{n=0}^{\infty}\dot{x}^{n}(-1)^{n}v^{n}(n+1-2uv)+ \sum_{n=1}^{\infty}\dot{x}^{-n}u^{n}\frac{n+1-2uv}{(1-uv)^{n}}\right). \tag{11}\]
The contribution to (10) of the ordinary space-time coordinates, _i.e._\(x^{0}\), \(x^{\pm 1}\), is
\[S_{0}=-\mu\int\mathrm{d}\tau\left(\dot{x}^{0}(1-2uv)+2\dot{x}^{1}(-v+v^{2}u)+ 2\dot{x}^{-1}u\right)\equiv\int\mathrm{d}\tau\mathcal{L}_{0}. \tag{12}\]
This action corresponds to an ordinary spinless relativistic particle in flat (2+1) Minkowski space-time, as can be seen by computing the momenta
\[p_{0} =\,\frac{\partial\mathcal{L}_{0}}{\partial\dot{x}^{0}}=-\mu(1-2 uv), \tag{13}\] \[p_{1} =\,\frac{\partial\mathcal{L}_{0}}{\partial\dot{x}^{1}}=-2\mu(-v+ v^{2}u),\] (14) \[p_{-1} =\,\frac{\partial\mathcal{L}_{0}}{\partial\dot{x}^{-1}}=-2\mu u, \tag{15}\]
and checking the mass-shell condition
\[-p_{0}^{2}+p_{1}p_{-1}=-\mu^{2}. \tag{16}\]
Actually, if one computes the EOM given by (12) for the boost variables,
\[-v\dot{x}^{0}+v^{2}\dot{x}^{1}+\dot{x}^{-1} =\,0, \tag{17}\] \[-u\dot{x}^{0}-\dot{x}^{1}+2uv\dot{x}^{1} =\,0, \tag{18}\]
solves them for \(u\), \(v\),
\[u=\frac{\dot{x}^{1}}{\pm\sqrt{(\dot{x}^{0})^{2}-4\dot{x}^{1}\dot{x}^{-1}}}, \quad v=\frac{\dot{x}^{0}\pm\sqrt{(\dot{x}^{0})^{2}-4\dot{x}^{1}\dot{x}^{-1}}} {2\dot{x}^{1}}, \tag{19}\]
makes the change of space variables from the complex ones \(x^{\pm 1}\) to the real ones \(x_{1}\), \(x_{2}\), given by
\[x^{\pm 1}=\frac{1}{2}(x_{1}\pm ix_{2}), \tag{20}\]
and substitutes the resulting expressions for \(u\), \(v\) in (12), one gets the ordinary space-time action with Lagrangian
\[\mathcal{L}_{0}^{*}=\mp\mu\sqrt{(\dot{x}^{0})^{2}-(\dot{x}_{1})^{2}-(\dot{x}_{ 2})^{2}}. \tag{21}\]
Canonical analysis of the BMS particle action
In order to understand the structure of the BMS Lagrangian (11) we perform the Hamiltonian analysis. The momenta are given by
\[\pi_{u} = \frac{\partial\mathcal{L}}{\partial\dot{u}}=0, \tag{13}\] \[\pi_{v} = \frac{\partial\mathcal{L}}{\partial\dot{v}}=0,\] (14) \[p_{n} = \frac{\partial\mathcal{L}}{\partial\dot{x}^{n}}=-\mu(-1)^{n}v^{n} (n+1-2uv),\ \ n=0,1,2,\ldots,\] (15) \[\bar{p}_{n} = \frac{\partial\mathcal{L}}{\partial\dot{x}^{-n}}=-\mu u^{n}\frac{ n+1-2uv}{(1-uv)^{n}},\ \ n=1,2,\ldots. \tag{16}\]
Notice that, since the \(x^{-n}\) are complex conjugates of the \(x^{n}\), then the fact that \(\mathcal{L}\) is real implies that \(p_{n}\) and \(\bar{p}_{n}\) are also complex conjugates of each other.
From the expressions of the momenta one gets the set of primary constraints \(\pi_{u}=0\), \(\pi_{v}=0\) and \(\phi_{n}=0\), \(\bar{\phi}_{n}=0\), with
\[\phi_{n} := p_{n}+\mu(-1)^{n}v^{n}(n+1-2uv),\ \ n=0,1,2,\ldots \tag{17}\] \[\bar{\phi}_{n} := \bar{p}_{n}+\mu u^{n}\frac{n+1-2uv}{(1-uv)^{n}},\ \ n=1,2,\ldots \tag{18}\]
The non-zero Poisson brackets between these constraints are
\[\{\phi_{n},\pi_{u}\} = -2\mu(-1)^{n}v^{n+1},\ n=0,1,2,\ldots \tag{19}\] \[\{\phi_{n},\pi_{v}\} = \mu(-1)^{n}v^{n-1}(n(n+1)-2(n+1)uv),\ n=0,1,2,\ldots\] (20) \[\{\bar{\phi}_{n},\pi_{u}\} = \frac{\mu u^{n-1}}{(1-uv)^{n+1}}(2u^{2}v^{2}-2nuv+n^{2}-2uv+n),\ n =1,2,\ldots\] (21) \[\{\bar{\phi}_{n},\pi_{v}\} = \frac{\mu u^{n+1}}{(1-uv)^{n+1}}(n-1)(-2uv+n+2),\ n=1,2,\ldots \tag{22}\]
Since the Lagrangian is homogeneous of degree one in the velocities, the canonical Hamiltonian is identically zero and one must consider the Dirac Hamiltonian
\[H_{D}=\lambda_{u}\pi_{u}+\lambda_{v}\pi_{v}+\sum_{n=0}^{\infty}\lambda_{n}\phi _{n}+\sum_{n=1}^{\infty}\bar{\lambda}_{n}\bar{\phi}_{n}, \tag{23}\]
where the \(\lambda\) are arbitrary functions.
If we order the constraints as \((\pi_{u},\pi_{v},\phi_{1},\bar{\phi}_{1},\phi_{2},\bar{\phi}_{2},\ldots)\), the infinite-dimensional matrix of Poisson brackets between them has the form
\[M_{\infty}=\left(\begin{array}{ccccc}0&A_{1}&A_{2}&A_{3}&\cdots\\ -A_{1}^{T}&0&0&0&\cdots\\ -A_{2}^{T}&0&0&0&\cdots\\ -A_{3}^{T}&0&0&0&\cdots\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right), \tag{24}\]
where all the entries are \(2\times 2\) blocks and
\[A_{i}=\left(\begin{array}{c}\{\pi_{u},\phi_{i}\}\ \{\pi_{u},\bar{\phi}_{i} \}\\ \{\pi_{v},\phi_{i}\}\ \{\pi_{v},\bar{\phi}_{i}\}\end{array}\right),\quad i=1,2,3,\ldots \tag{21}\]
Since, for instance,
\[A_{1}=2\mu\left(\begin{array}{cc}-v^{2}&-1\\ 1-2uv&0\end{array}\right), \tag{22}\]
has non-zero determinant, provided that \(1-2uv\neq 0\), it turns out that all the (infinite dimensional) columns to the right of \(A_{1}\) can be expressed as linear combinations of the columns which contain \(A_{1}\). Hence, considering also the first two columns of \(M_{\infty}\), one can show that \(M_{\infty}\) has a rank equal to 4. This means that, at most, we can select 4 second-class constraints, including necessarily \(\pi_{u}\) and \(\pi_{v}\), plus another two which allow us to eliminate \(u\) and \(v\) in terms of two of the momenta \(p_{i}\) and \(\bar{p}_{i}\). Also, notice that the number of first class constraints is infinite.
Notice that, although \(u\), \(v\) can be eliminated from any two of the \(\phi\), \(\bar{\phi}\), it is convenient to select \(\phi_{1}\), \(\bar{\phi}_{1}\), as it was done for the case of the pure Poincare particle. The four selected constraints
\[\pi_{u},\ \ \pi_{v},\ \ \phi_{1},\ \ \bar{\phi}_{1} \tag{23}\]
are second class, with the Poisson bracket matrix
\[M=\left(\begin{array}{cc}0&A_{1}\\ -A_{1}^{T}&0\end{array}\right)=2\mu\left(\begin{array}{cccc}0&0&-v^{2}&-1\\ 0&0&1-2uv&0\\ v^{2}&-1+2uv&0&0\\ 1&0&0&0\end{array}\right), \tag{24}\]
which has determinant \(16\mu^{4}(1-2uv)^{2}\) and inverse
\[M^{-1}=\frac{1}{2\mu}\left(\begin{array}{cccc}0&0&0&1\\ 0&0&-\frac{1}{1-2uv}&\frac{v^{2}}{1-2uv}\\ 0&\frac{1}{1-2uv}&0&0\\ -1&-\frac{v^{2}}{1-2uv}&0&0\end{array}\right). \tag{25}\]
From this, one can define the Dirac bracket
\[\{A,B\}_{D}=\{A,B\}-\{A,\Psi_{i}\}M_{ij}^{-1}\{\Psi_{j},B\}, \tag{26}\]
where \(\Psi_{j}\in\{\pi_{u},\pi_{v},\phi_{1},\bar{\phi}_{1}\}\). If one demands the stability of the primary second-class constraints, _i.e._
\[\dot{\Psi}_{i}=\{\Psi_{i},H_{D}\}\stackrel{{\cal M}}{{=}}0, \tag{27}\]
where \({\cal M}\) is the submanifold defined by the constraints \(\Psi_{i}\), one can determine the values of the four arbitrary functions \(\lambda_{u}\), \(\lambda_{v}\), \(\lambda_{1}\), \(\bar{\lambda}_{1}\). The result is that \(\lambda_{u}=\lambda_{v}=0\), while \(\lambda_{1}\) and \(\bar{\lambda}_{1}\) are quite involved functions of all the other \(\lambda\). However, in the reduced space \({\cal M}\) we can set \(\phi_{1}=\bar{\phi}_{1}=0\), and the reduced Dirac Hamiltonian is
\[H_{\cal M}=\lambda_{0}\phi_{0}+\sum_{n=2}^{\infty}(\lambda_{n}\phi_{n}+\bar{ \lambda}_{n}\bar{\phi_{n}}), \tag{28}\]
where all the constraints are assumed to be computed on \(\mathcal{M}\). Using \(\phi_{1}=0\) and \(\bar{\phi}_{1}=0\) to effectively eliminate \(u\) and \(v\) in terms of \(p_{1}\) and \(\bar{p}_{1}\) one has
\[u = -\frac{1}{2\mu}\bar{p}_{1}, \tag{3.21}\] \[v = -\frac{\mu}{\bar{p}_{1}}\pm\frac{1}{\bar{p}_{1}}\sqrt{\mu^{2}+p_ {1}\bar{p}_{1}}, \tag{3.22}\]
from which it also follows that
\[uv=\frac{1}{2}\mp\frac{1}{2\mu}\sqrt{\mu^{2}+p_{1}\bar{p}_{1}}. \tag{3.23}\]
Then, on the reduced space,
\[\phi_{0}=p_{0}\pm\sqrt{\mu^{2}+p_{1}\bar{p}_{1}}, \tag{3.24}\]
(the "square" of this constraint yields the ordinary quadratic mass-shell condition, with the two signs for the energy), and
\[\phi_{n} = p_{n}+\mu(-1)^{n}\left(-\frac{\mu}{\bar{p}_{1}}\pm\frac{1}{\bar {p}_{1}}\sqrt{\mu^{2}+p_{1}\bar{p}_{1}}\right)^{n}\left(n\pm\frac{1}{\mu} \sqrt{\mu^{2}+p_{1}\bar{p}_{1}}\right), \tag{3.25}\] \[\bar{\phi}_{n} = \bar{p}_{n}+\mu(-1)^{n}\bar{p}_{1}^{n}\frac{n\pm\frac{1}{\mu} \sqrt{\mu^{2}+p_{1}\bar{p}_{1}}}{\left(\mu\pm\sqrt{\mu^{2}+p_{1}\bar{p}_{1}} \right)^{n}}\] (3.26) \[= \bar{p}_{n}+\mu(-1)^{n}\left(\frac{\mu}{\bar{p}_{1}}\pm\frac{1}{ \bar{p}_{1}}\sqrt{\mu^{2}+p_{1}\bar{p}_{1}}\right)^{-n}\left(n\pm\frac{1}{\mu }\sqrt{\mu^{2}+p_{1}\bar{p}_{1}}\right),\]
for \(n\geq 2\) (as a check, these expressions become identities for \(n=1\)). The case \(n=0\) of (3.24) can also be included in either (3.25) or (3.26). The constraints can also be written as
\[\phi_{n} =p_{n}+\frac{\mu}{\bar{p}_{1}^{n}}f_{n}^{\pm}(p_{1}\bar{p}_{1}), \tag{3.27}\] \[\bar{\phi}_{n} =\bar{p}_{n}+\mu\bar{p}_{1}^{n}g_{n}^{\pm}(p_{1}\bar{p}_{1}), \tag{3.28}\]
for \(n=0,2,3,\ldots\), where
\[f_{n}^{\pm}(x) =\left(\mu\mp\sqrt{\mu^{2}+x}\right)^{n}\left(n\pm\frac{1}{\mu} \sqrt{\mu^{2}+x}\right), \tag{3.29}\] \[g_{n}^{\pm}(x) =\left(-\mu\mp\sqrt{\mu^{2}+x}\right)^{-n}\left(n\pm\frac{1}{\mu} \sqrt{\mu^{2}+x}\right), \tag{3.30}\]
which satisfy, for \(n\geq 1\),
\[\frac{\mathrm{d}}{\mathrm{d}x}f_{n}^{\pm}(x) =\mp\frac{n+1}{2\sqrt{\mu^{2}+x}}f_{n-1}^{\pm}(x), \tag{3.31}\] \[\frac{\mathrm{d}}{\mathrm{d}x}g_{n}^{\pm}(x) =\pm\frac{n-1}{2\sqrt{\mu^{2}+x}}g_{n+1}^{\pm}(x), \tag{3.32}\]
and also the second-order recurrence relation
\[(n-1)f_{n+1}^{\pm}(x)\pm 2n\sqrt{\mu^{2}+x}f_{n}^{\pm}(x)+(n+1)xf_{n-1}^ {\pm}(x)=0,\quad n\geq 1, \tag{3.33}\] \[(n+1)g_{n-1}^{\pm}(x)\pm 2n\sqrt{\mu^{2}+xg_{n}^{\pm}}(x)+(n-1)xg_{n+ 1}^{\pm}(x)=0,\quad n\geq 1. \tag{3.34}\]
The signs \(\pm\) which appear in the above expressions correspond to different sheets of the constraint manifold in reduced space, parametrized by \(p_{1}\) and \(\bar{p}_{1}\). Notice that constraints \(\phi_{n},\bar{\phi}_{n}\) are first class. Therefore we expect that the physical degrees of freedom of the BMS particle will be finite-dimensional.
In the reduced phase space, with the boost variables \(u\), \(v\) eliminated in terms of \(p_{1}\), \(\bar{p}_{1}\), the symmetry between the momenta corresponding to coordinates with positive and negative index is restored, and also that the constraint \(\bar{\phi}_{n}\) is the complex conjugate of \(\phi_{n}\), thereby justifying the notation.
It will also be convenient to introduce the functions of \(P_{1}\), \(\bar{P}_{1}\) defined by
\[P_{n}=-\frac{\mu}{\bar{p}_{1}^{n}}f_{n}^{\pm}(p_{1}\bar{p}_{1}), \tag{3.35}\] \[\bar{P}_{n}=-\mu\bar{p}_{1}^{n}g_{n}^{\pm}(p_{1}\bar{p}_{1}), \tag{3.36}\]
and, in particular,
\[P_{0}=\mp\sqrt{\mu^{2}+p_{1}\bar{p}_{1}}, \tag{3.37}\]
in terms of which the constraints are \(\phi_{n}=p_{n}-P_{n}\), \(\bar{\phi}_{n}=\bar{p}_{n}-\bar{P}_{n}\), \(\phi_{0}=p_{0}-P_{0}\). Notice also that \(P_{1}=p_{1}\), \(\bar{P}_{1}=\bar{p}_{1}\).
Due to the fact that \(M^{-1}\) does not have contributions in the lower-right square, the Dirac brackets of the variables \(x^{n}\), \(x^{-n}\), \(p_{n}\), \(\bar{p}_{n}\) do not change with respect to the Poisson ones,
\[\{x^{n},p_{m}\}_{D}=\{x^{n},p_{m}\}=\delta_{m}^{n}, \tag{3.38}\] \[\{x^{-n},\bar{p}_{m}\}_{D}=\{x^{-n},\bar{p}_{m}\}=\delta_{m}^{n}. \tag{3.39}\]
First of all, the \(x^{n}\), \(x^{-n}\), \(p_{n}\), \(\bar{p}_{n}\) have zero Poisson brackets with all the 4 second-class constraints for \(n\geq 2\), so only the cases for \(n=1\) need to be discussed. Since \(p_{1}\), \(\bar{p}_{1}\) have also zero brackets with all the constraints, all brackets involving either of them remain also unchanged. Finally,
\[\{x^{1},x^{-1}\}_{D}=0-\{x^{1},\Psi_{i}\}M_{ij}^{-1}\{\Psi_{j},x^{-1}\},\]
but the only non-zero brackets of the \(x^{\pm 1}\) are with \(\phi_{1}\) and \(\bar{\phi}_{1}\), and this selects the lower-right square of \(M^{-1}\), which is identically zero. The non-trivial Dirac brackets are those involving \(u\), \(v\), \(\pi_{u}\), \(\pi_{v}\) and \(x^{\pm 1}\):
\[\{u,x^{1}\}_{D}=0,\quad\{v,x^{1}\}_{D}=-\frac{1}{2\mu}\frac{1}{1- 2uv}, \tag{3.40}\] \[\{u,x^{-1}\}_{D}=\frac{1}{2\mu},\quad\{v,x^{-1}\}_{D}=\frac{1}{2 \mu}\frac{v^{2}}{1-2uv},\] (3.41) \[\{\pi_{u},u\}_{D}=\{\pi_{u},v\}_{D}=\{\pi_{v},u\}_{D}=\{\pi_{v},v \}_{D}=0,\] (3.42) \[\{\pi_{u},x^{1}\}_{D}=\{\pi_{u},x^{-1}\}_{D}=\{\pi_{v},x^{1}\}_{D }=\{\pi_{v},x^{-1}\}_{D}=0. \tag{3.43}\]
Summing up, in the reduced phase space one has coordinates
\[\{x^{n},\ x^{-m},\ p_{n},\ \bar{p}_{m}\}_{\begin{subarray}{c}n=0,1,2,\ldots,\\ m=1,2,\ldots\end{subarray}} \tag{43}\]
with Hamiltonian (30), where the constraints are given by (34), (35), (36), and with ordinary brackets. The first class constraints \(\phi_{n},\bar{\phi}_{n}\) will generate infinite gauge transformations given by the canonical generator
\[G=\epsilon(\tau)\phi_{0}+\sum_{m\geq 2}\left(\alpha_{m}(\tau)\phi_{m}+\beta_{ m}(\tau)\bar{\phi}_{m}\right). \tag{44}\]
For instance, the re-parametrization associated to \(\phi_{0}=p_{0}\pm\sqrt{\mu^{2}+p_{1}\bar{p}_{1}}\) acts only on the standard space-time coordinates, and is given by
\[\delta x^{0} = \epsilon(\tau)\{x^{0},\phi_{0}\}_{D}=\epsilon(\tau), \tag{45}\] \[\delta x^{1} = \epsilon(\tau)\{x^{1},\phi_{0}\}_{D}=\pm\epsilon(\tau)\frac{\bar {p}_{1}}{2\sqrt{\mu^{2}+p_{1}\bar{p}_{1}}}=-\epsilon\frac{\bar{p}_{1}}{2P_{0}},\] (46) \[\delta x^{-1} = \epsilon(\tau)\{x^{-1},\phi_{0}\}_{D}=\pm\epsilon(\tau)\frac{p_{ 1}}{2\sqrt{\mu^{2}+p_{1}\bar{p}_{1}}}=-\epsilon\frac{p_{1}}{2P_{0}}, \tag{47}\]
where the function \(P_{0}\) of \(p_{1},\bar{p}_{1}\) has been used to rewrite the final expression. Furthermore, \(\delta p_{0}=\delta p_{1}=\delta\bar{p}_{1}=0\) and hence also \(\delta u=\delta v=0\).
In order to check the action of this symmetry on the Lagrangian \(\mathcal{L}\) it suffices to consider the action on \(\mathcal{L}_{0}\), since the re-parametrization acts only on \(x^{0},x^{\pm 1}\). It is convenient to re-define the arbitrary function \(\epsilon(\tau)\) as \(\epsilon(\tau)/(2P_{0})\), so that
\[\delta x^{0}=2P_{0}\epsilon(\tau),\ \delta x^{1}=-\epsilon(\tau)\bar{p}_{1},\ \delta x^{-1}=-\epsilon(\tau)p_{1}, \tag{48}\]
and also to write the Lagrangian in terms of the momenta \(p_{1},\bar{p}_{1}\),2
Footnote 2: If one uses the equations of motion for \(p_{1},\bar{p}_{1}\) to eliminate these non-dynamical variables, the result is the standard Poincaré Lagrangian in the form (21).
\[\mathcal{L}_{0}=\dot{x}^{0}P_{0}+\dot{x}^{1}p_{1}+\dot{x}^{-1}\bar{p}_{1}. \tag{49}\]
One has then, taking into account that the \(p_{1}\), \(\bar{p}_{1}\) do not transform under this symmetry,
\[\delta\mathcal{L}_{0} =2\frac{\mathrm{d}}{\mathrm{d}\tau}(\epsilon P_{0})P_{0}-\frac{ \mathrm{d}}{\mathrm{d}\tau}(\epsilon\bar{p}_{1})p_{1}-\frac{\mathrm{d}}{ \mathrm{d}\tau}(\epsilon p_{1})\bar{p}_{1} \tag{50}\] \[=\epsilon(2P_{0}\dot{P}_{0}-\frac{\mathrm{d}}{\mathrm{d}\tau}(p_{ 1}\bar{p}_{1}))+\dot{\epsilon}(2P_{0}^{2}-2p_{1}\bar{p}_{1})=\epsilon\frac{ \mathrm{d}}{\mathrm{d}\tau}(P_{0}^{2}-p_{1}\bar{p}_{1})+2\dot{\epsilon}(P_{0} ^{2}-p_{1}\bar{p}_{1})\] (51) \[=\frac{\mathrm{d}}{\mathrm{d}\tau}(2\mu^{2}\epsilon), \tag{52}\]
where \(P_{0}^{2}-p_{1}\bar{p}_{1}=\mu^{2}\) has been used.
Similarly, one can study the transformation of the Lagrangian (11) under the transformation generated by \(\phi_{m}\) for \(m\geq 2\). We write the Lagrangian (11) in the notationally convenient form
\[\mathcal{L}=\sum_{n=0}^{\infty}\dot{x}^{n}P_{n}+\sum_{n=1}^{\infty}\dot{x}^{-n }P_{-n}, \tag{53}\]
where \(P_{-n}=\bar{P}_{n}\) and all of them are functions of \(p_{1}\), \(p_{-1}=\bar{p}_{1}\) (or of \(u\) and \(v\)).
As shown in Appendix B, if \(G=\alpha_{m}(\tau)\phi_{m}\) one has
\[\delta x^{n}=\{x^{n},G\}_{D}=\begin{cases}\alpha_{m}&\text{if $n=m$},\\ -\alpha_{m}(m+1)\frac{P_{m-1}}{2P_{0}}&\text{if $n=1$},\\ \alpha_{m}(m-1)\frac{P_{m+1}}{2P_{0}}&\text{if $n=-1$},\\ 0&\text{otherwise},\end{cases} \tag{3.55}\]
and then
\[\delta\mathcal{L}=\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\epsilon_{m}(2P_{0}P_ {m}-(m+1)p_{1}P_{m-1}+(m-1)\bar{p}_{1}P_{m+1})\right), \tag{3.56}\]
where the parameter of the transformation has been written as \(\alpha_{m}=2P_{0}\epsilon_{m}\). Notice that the function inside the total derivative depends on \(p_{1}\), \(\bar{p}_{1}\) and hence, through (2.14), (2.15), on the original Lagrangian variables \(u,v\). Similarly, for \(G=\beta_{m}\bar{\phi}_{m}\), one has
\[\delta x^{n}=\{x^{n},G\}_{D}=\begin{cases}\beta_{m}&\text{if $n=-m$},\\ \beta_{m}(m-1)\frac{\bar{P}_{m+1}}{2P_{0}}&\text{if $n=1$},\\ -\beta_{m}(m+1)\frac{\bar{P}_{m-1}}{2P_{0}}&\text{if $n=-1$},\\ 0&\text{otherwise},\end{cases} \tag{3.57}\]
and then,
\[\delta\mathcal{L}=\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\epsilon_{m}(2P_{0} \bar{P}_{m}-(m+1)\bar{p}_{1}\bar{P}_{m-1}+(m-1)p_{1}\bar{P}_{m+1})\right), \tag{3.58}\]
with \(\beta_{m}=2P_{0}\epsilon_{m}\).
Notice that, in fact, these generators do not yield real transformations of \(\mathcal{L}\), since they do not respect the fact that \(x^{n}\) and \(x^{-n}\) are complex conjugates, but this can be easily solved by working with the real generators
\[G_{m}=\epsilon_{m}(\phi_{m}+\bar{\phi}_{m}),\quad\epsilon_{m}^{ *}=\epsilon_{m}, \tag{3.59}\] \[\bar{G}_{m}=\epsilon_{m}(\phi_{m}-\bar{\phi}_{m}),\quad\epsilon_ {m}^{*}=-\epsilon_{m}, \tag{3.60}\]
for \(m=2,3,\ldots\), and using (3.56) and (3.58) to obtain the corresponding variations of the Lagrangian.
## 4 Realization of Lorentz symmetry in BMS space
From now on we will use the notation \(p_{-n}=\bar{p}_{n}\) in order to obtain more compact expressions.
The generators of the Lorentz group in physical \(2+1\) space-time with coordinates \(x^{0}\), \(x_{1}\), \(x_{2}\) and corresponding canonical momenta \(p_{0}\), \(P_{1}\), \(P_{2}\) are given by \(K_{0}=x_{1}P_{2}-x_{2}P_{1}\) for rotations and \(K_{i}=x^{0}P_{i}+x_{i}p_{0}\), \(i=1,2\), for boosts. The relation with the \(x^{\pm 1}\) coordinates is given by \(x^{\pm 1}=\frac{1}{2}(x_{1}\pm ix_{2})\), which induces for the momenta the relation \(p_{\pm 1}=P_{1}\mp iP_{2}\).
Defining \(J=-iK_{0}\), \(K_{\pm}=K_{1}\mp iK_{2}\), one has, in terms of the coordinates \(x^{0}\), \(x^{\pm 1}\) and their associated canonical momenta \(p_{0}\), \(p_{\pm 1}\),
\[J =x^{1}p_{1}-x^{-1}p_{-1}, \tag{10}\] \[K_{+} =x^{0}p_{1}+2p_{0}x^{-1},\] (11) \[K_{-} =x^{0}p_{-1}+2p_{0}x^{1}, \tag{12}\]
which obey the Lorentz \(SO(2,1)\) algebra \(\{K_{+},K_{-}\}=2J\), \(\{J,K_{+}\}=K_{+}\), \(\{J,K_{-}\}=-K_{-}\). These generators extended to the BMS space with coordinates \(x^{n},p_{n}\), \(n\in\mathbb{Z}\) as
\[J =\sum_{n=-\infty}^{+\infty}nx^{n}p_{n}\] \[=\cdots-2x^{-2}p_{-2}\boxed{-x^{-1}p_{-1}+0\cdot x^{0}p_{0}+x^{1} p_{1}}+2x^{2}p_{2}+\cdots, \tag{13}\] \[K_{+} =\sum_{n=-\infty}^{+\infty}(1-n)x^{n}p_{n+1}\] \[=\cdots+3x^{-2}p_{-1}\boxed{+2x^{-1}p_{0}+x^{0}p_{1}}+0\cdot x^{ 1}p_{2}-x^{2}p_{3}+\cdots,\] (14) \[K_{-} =\sum_{n=-\infty}^{+\infty}(1+n)x^{n}p_{n-1}\] \[=\cdots-x^{-2}p_{-3}+0\cdot x^{-1}p_{-2}\boxed{+x^{0}p_{-1}+2x^{1} p_{0}}+3x^{2}p_{1}+\cdots, \tag{15}\]
Notice that, under complex conjugation, \((K_{+})^{*}=K_{-}\), and that \(J^{*}=-J\), as it must be according to their definition from the real generators \(K_{0}\), \(K_{1}\) and \(K_{2}\).
This set of generators, together with the supertranslation generators
\[\mathcal{P}_{n}=p_{n}, \tag{16}\]
provides a realization of BMS (Lorentz + supertranslations) in phase space, with \(\{x^{n},p_{m}\}=\delta_{m}^{n}\). Indeed, one has
\[\{K_{+},K_{-}\}=2J,\quad\{J,K_{+}\}=K_{+},\quad\{J,K_{-}\}=-K_{-}, \tag{17}\] \[\{K_{+},\mathcal{P}_{n}\}=(1-n)\mathcal{P}_{n+1},\quad\{K_{-}, \mathcal{P}_{n}\}=(1+n)\mathcal{P}_{n-1},\] (18) \[\{J,\mathcal{P}_{n}\}=n\mathcal{P}_{n},\quad\{\mathcal{P}_{n}, \mathcal{P}_{m}\}=0, \tag{19}\]
with \(n,m\in\mathbb{Z}\). The connection with the abstract algebra (2-2) is made by means of the identifications \(K_{+}\mapsto-iL_{1}\), \(K_{-}\mapsto iL_{-1}\), \(J\mapsto iL_{0}\), \(\mathcal{P}_{n}\mapsto P_{n}\).
The fact that the extended generators satisfy the correct algebra is not enough to state that we have a realization of BMS symmetry. Indeed, one must prove that the generators \(K_{+}\), \(K_{-}\), \(J\), and \(\mathcal{P}_{n}\) are conserved charges of the system. In this case, one must prove that the generators have weakly zero Poisson brackets with all the first-class constraints \(\phi_{0}\), \(\phi_{n}\), \(\bar{\phi}_{n}\), \(n=2,3,\ldots\), appearing in the reduced Hamiltonian (3.2). Here, weakly zero means zero up to the constraints, and we will denote this by \(\simeq 0\).
Since the constraints do not depend on the coordinates, this condition is trivially satisfied by the generators of supertranslations \(\mathcal{P}_{n}\). For \(J\) one has, using that \(\{p_{n},J\}=-np_{n}\), \(\{p_{-n},J\}=np_{-n}\), \(\{p_{1}p_{-1},J\}=0\),
\[\{\phi_{n},J\} =\{p_{n}+\mu p_{-1}^{-n}f_{n}^{\pm}(p_{1}p_{-1}),J\}=-np_{n}+\mu f_ {n}^{\pm}(p_{1}p_{-1})\{p_{-1}^{-n},J\}\] \[=-np_{n}-n\mu f_{n}^{\pm}(p_{1}p_{-1})p_{-1}^{-n-1}\{p_{-1},J\}=- np_{n}-n\mu f_{n}^{\pm}(p_{1}p_{-1})p_{-1}^{-n-1}p_{-1}\] \[=-n\phi_{n}\simeq 0,\] \[\{\bar{\phi}_{n},J\} =\{p_{-n}+\mu p_{-1}^{n}g_{n}^{\pm}(p_{1}p_{-1}),J\}=np_{-n}+\mu g _{n}^{\pm}(p_{1}p_{-1})\{p_{-1}^{n},J\}\] \[=np_{-n}+n\mu g_{n}^{\pm}(p_{1}p_{-1})p_{-1}^{n-1}\{p_{-1},J\}=np _{n}+n\mu g_{n}^{\pm}(p_{1}p_{-1})p_{-1}^{n-1}p_{1}\] \[=n\bar{\phi}_{n}\simeq 0.\]
The invariance of the constraints under the generators is slightly less trivial for the boosts. For \(K_{+}\), using that \(\{p_{n},K_{+}\}=-(1-n)p_{n+1}\), \(\{p_{-n},K_{+}\}=-(1+n)p_{-n+1}\), and in particular that \(\{p_{1},K_{+}\}=0\), \(\{p_{-1},K_{+}\}=-2p_{0}\),
\[\{\phi_{n},K_{+}\} =\{p_{n}+\mu p_{-1}^{-n}f_{n}^{\pm}(p_{1}p_{-1}),K_{+}\}\] \[=-(1-n)p_{n+1}+\mu f_{n}^{\pm}(p_{1}p_{-1})\{p_{-1}^{-n},K_{+}\}+ \mu p_{-1}^{-n}\{f_{n}^{\pm}(p_{1}p_{-1}),K_{+}\}\] \[=-(1-n)p_{n+1}+\mu f_{n}^{\pm}(p_{1}p_{-1})(-np_{-1}^{-n-1}(-2p_{ 0}))+\mu p_{-1}^{-n}(f_{n}^{\pm})^{\prime}(p_{1}p_{-1})p_{1}(-2p_{0})\] \[\stackrel{{(\ref{eq:2.2})}}{{=}}-(1-n)p_{n+1}+2\mu np _{0}p_{-1}^{-n-1}f_{n}^{\pm}(p_{1}p_{-1})\pm\mu(n+1)p_{0}p_{1}p_{-1}^{-n}\frac {f_{n-1}^{\pm}(p_{1}p_{-1})}{\sqrt{\mu^{2}+p_{1}p_{-1}}}\] \[\stackrel{{\phi_{0}}}{{\simeq}}-(1-n)p_{n+1}\mp 2\mu n \sqrt{\mu^{2}+p_{1}p_{-1}}p_{-1}^{-n-1}f_{n}^{\pm}(p_{1}p_{-1})-\mu(n+1)p_{1} p_{-1}^{-n}f_{n-1}^{\pm}(p_{1}p_{-1})\] \[\stackrel{{(\ref{eq:2.2})}}{{\simeq}}0,\] \[\{\bar{\phi}_{n},K_{+}\} =\{p_{-n}+\mu p_{-1}^{n}g_{n}^{\pm}(p_{1}p_{-1}),K_{+}\}\] \[=-(1+n)p_{-n+1}+\mu g_{n}^{\pm}(p_{1}p_{-1})\{p_{-1}^{n-1},K_{+} \}+\mu p_{-1}^{n}\{g_{n}^{\pm}(p_{1}p_{-1}),K_{+}\}\] \[=-(1+n)p_{-n+1}+n\mu g_{n}^{\pm}(p_{1}p_{-1})p_{-1}^{n-1}(-2p_{0} )+\mu p_{-1}^{n}(g_{n}^{\pm})^{\prime}(p_{1}p_{-1})p_{1}(-2p_{0})\] \[\stackrel{{(\ref{eq:2.2})},{\phi_{0}}}{{\simeq}}-(1+n )p_{-n+1}\mp 2\mu n\sqrt{\mu^{2}+p_{1}p_{-1}}p_{-1}^{n-1}g_{n}^{\pm}(p_{1}p_{-1}) +\mu(n-1)p_{1}p_{-1}^{n}g_{n+1}^{\pm}(p_{1}p_{-1})\] \[\stackrel{{\phi_{n-1}}}{{\simeq}}(1+n)\mu p_{-1}^{n- 1}g_{n-1}(p_{1}p_{-1})\] \[\qquad\mp 2\mu n\sqrt{\mu^{2}+p_{1}p_{-1}}p_{-1}^{n-1}g_{n}^{\pm}(p_{ 1}p_{-1})+\mu(n-1)p_{1}p_{-1}^{n}g_{n+1}^{\pm}(p_{1}p_{-1})\] \[=\mu p_{-1}^{n-1}\left((n+1)g_{n-1}(x)\pm 2n\sqrt{\mu^{2}+x}g_{n}^{\pm}( x)+(n-1)xg_{n+1}^{\pm}(x)\right)_{x=p_{1}p_{-1}}\] \[\stackrel{{(\ref{eq:2.2})}}{{=}}0.\]
Similarly, one can show that \(\{\phi_{n},K_{-}\}\simeq 0\), \(\{\bar{\phi}_{n},K_{-}\}\simeq 0\). This can be done by direct
calculation as above or using that the Poisson bracket structure is real and then
\[\{\phi_{n},K_{-}\}^{*}=\{\bar{\phi}_{n},K_{+}\}\simeq 0, \tag{4.11}\] \[\{\bar{\phi}_{n},K_{-}\}^{*}=\{\phi_{n},K_{+}\}\simeq 0. \tag{4.12}\]
One concludes then that the extended Poincare generators are conserved charges for our system. A discussion of the Casimirs of the Lorentz and Poincare groups in BMS space is presented in Appendix D, where, in particular, it is shown that the only quadratic Casimir of the BMS group is the standard one Poincare Casimir, \(C_{2}=p_{0}^{2}-p_{1}p_{-1}\), see appendix D.
One might be tempted to generalize the Lorentz generators to include superrotations (or rather superboosts) by replacing the "1" which appear in (4.5) and (4.6) with arbitrary positive integers \(m\),
\[K_{+}^{m}=\sum_{n=-\infty}^{+\infty}(m-n)x^{n}p_{n+m}, \tag{4.13}\] \[K_{-}^{m}=\sum_{n=-\infty}^{+\infty}(m+n)x^{n}p_{n-m}, \tag{4.14}\]
for \(m=1,2,\ldots\) By appropriate identifications, these generators, together with \(J\) and the \(\mathcal{P}_{n}\), provide a representation of the extended BMS algebra in terms of Poisson brackets. However, the extended generators obtained for \(m=2,3,\ldots\) do not commute with all the first class constraints of our system, and hence are not conserved quantities. To be more precise, the constraints \(\phi_{n}\), \(n=2,3,\ldots\), are weakly invariant only under \(K_{+}^{m}\), while the \(\bar{\phi}_{n}\) are invariant only under \(K_{-}^{m}\). We will see in Section 5 that the massless limit of our theory is fully invariant under these generalized transformations.
## 5 Massless limit
Since the Lagrangian (2.11) is proportional to \(\mu\) one cannot take the massless limit directly in configuration space. This is not a problem in phase space, since the system is in this case entirely defined by the set of constraints, which have a non-trivial limit when \(\mu\to 0\). Indeed, performing this limit in (3.27) and its complex conjugate (3.28) one gets the constraints
\[\varphi_{n}=p_{n}\pm(\mp 1)^{n}p_{-1}^{-n}(\sqrt{p_{1}p_{-1}})^{n +1},\quad n=0,1,2,\ldots, \tag{5.1}\] \[\bar{\varphi}_{n}=p_{-n}\pm(\mp 1)^{n}p_{-1}^{n}(\sqrt{p_{1}p_{- 1}})^{-n+1},\quad n=1,2,\ldots,. \tag{5.2}\]
Notice that \(\varphi_{0}=p_{0}\pm\sqrt{p_{1}p_{-1}}\), and that \(\varphi_{1}\) and \(\bar{\varphi}_{1}\) are trivial, as in the massive case.
As shown in Appendix C, one has that
\[\{\varphi_{n},K_{+}^{m}\}\simeq 0,\ \{\varphi_{n},K_{-}^{m}\}\simeq 0,\ \{\bar{\varphi}_{n},K_{+}^{m}\}\simeq 0,\ \{\bar{\varphi}_{n},K_{-}^{m}\}\simeq 0, \tag{5.3}\]
where the superrotation generators \(K_{\pm}^{m}\), \(m=0,1,2,\ldots\) are defined as in (4.13) and (4.14), and where the weak equality is now over the manifold defined by \(\varphi_{n}=0\), \(\bar{\varphi}_{n}=0\). Thus, \(K_{\pm}^{m}\) are conserved quantities in the massless limit theory.
The superrotation operators obey the algebra
\[\{K_{+}^{m},K_{+}^{n}\} =(m-n)K_{+}^{m+n}, \tag{100}\] \[\{K_{+}^{m},K_{-}^{n}\} =\begin{cases}2mJ&\text{if $m=n$},\\ -(m+n)K_{+}^{m-n}&\text{if $m>n$},\\ (m+n)K_{-}^{n-m}&\text{if $m<n$},\end{cases}\] (101) \[\{K_{-}^{m},K_{-}^{n}\} =(m-n)K_{-}^{m+n}, \tag{102}\]
Furthermore, they act on the supertranslation generators as
\[\{K_{\pm}^{m},p_{n}\}=(m\mp n)p_{n\pm m},\quad m=1,2,\dots,\quad n\in\mathbb{Z}. \tag{103}\]
If we now define
\[L_{m}=\begin{cases}-J&\text{if $m=0$},\\ K_{+}^{m}&\text{if $m>0$},\\ -K_{-}^{-m}&\text{if $m<0$},\end{cases} \tag{104}\]
it turns out that \(\{L_{m},L_{n}\}=(m-n)L_{m+n}\) and the extended BMS algebra (1) is obtained in terms of Poisson brackets of the massless limit BMS particle.
## 6 Gauge fixing
In Section 3 it has been shown that, after eliminating the degrees of freedom \(u,v\) and its corresponding canonical momenta by means of the primary second-class constraints \(\pi_{u}=0\), \(\pi_{v}=0\), \(\phi_{1}=0\) and \(\bar{\phi}_{1}=0\), the theory still contains an infinite number of primary first-class constraints which generate gauge transformations and indicate the presence of gauge degrees of freedom.
These gauge degrees can be eliminated by converting the first-class constraints to second class, by introducing appropriate gauge fixing conditions. Since all the constraints \(\phi_{n}\) (resp. \(\bar{\phi}_{n}\)) depend linearly on \(p_{n}\) (resp. \(p_{-n}\)), a sensible choice is to introduce the constraints
\[\psi_{n}=x^{n},\quad n=\pm 2,\pm 3,\dots, \tag{105}\]
so that
\[\{\psi_{n},\phi_{m}\}=\delta_{n,m},\quad n,m=\pm 2,\pm 3,\dots, \tag{106}\]
and define a gauge fixing as
\[GF=\{\psi_{m}=0\}_{|m|\geq 2}, \tag{107}\]
which allows the consistent elimination of all the extra BMS degrees of freedom in phase space,
\[p_{\pm n}=-\frac{\mu}{p_{\mp 1}^{n}}f_{n}^{\pm}(p_{1}p_{-1}),\quad x^{n}=0, \quad x^{-n}=0,\quad n=2,3,\dots, \tag{108}\]
with only the gauge symmetry associated with \(\phi_{0}\) remaining. In this way, the physical degrees of freedom of the theory in phase space are reduced to \(x^{0},p_{0},x^{\pm 1},p_{\pm 1}\), and it can
be seen that the Dirac brackets between these remaining variables are the standard Poisson brackets.
Notice that the gauge condition is not invariant under supertranslations and hence one must introduce a compensating gauge transformation so that the total variation of the gauge condition, computed on the gauge condition, is zero. If we consider \(|m|\geq 2\) and denote by \(\delta^{n}_{ST}x^{m}\) the supertranslation of \(x^{m}\) generated by \(p_{n}\), and by \(\epsilon^{m}(\tau)\) the gauge transformation on \(x^{m}\), generated by \(\phi_{m}\), one has
\[0=\left.(\epsilon^{m}(\tau)+\delta^{n}_{ST}x^{m})\right|_{GF}, \tag{100}\]
and, since \(\delta^{n}_{ST}x^{m}=\epsilon^{n}\delta^{m}_{n}=\epsilon^{m}\), the compensating gauge transformation associated to the supertranslation along the \(m\) coordinate, \(|m|\geq 2\), is just
\[\epsilon^{m}(\tau)=-\epsilon^{m}. \tag{101}\]
Since the generators of the gauge transformations \(\phi_{m}\), \(|m|\geq 2\), contain the momenta \(p_{1},p_{-1}\), it turns out that these compensating gauge transformations induce a residual transformation on the remaining variables \(x^{\pm 1}\), given by
\[\delta^{m}_{\rm res}x^{\pm 1}=\{x^{\pm 1},-\epsilon^{m}\phi_{m}\},\quad|m|\geq 2. \tag{102}\]
Using \(\{x^{1},\phi_{n}\}=-(n+1)P_{n-1}/(2P_{0})\), \(\{x^{-1},\phi_{n}\}=(n-1)P_{n+1}/(2P_{0})\), \(n=\pm 2,\pm 3,\ldots\), one gets
\[\delta^{m}_{\rm res}x^{1} =\epsilon^{m}(m+1)\frac{P_{m-1}}{2P_{0}}, \tag{103}\] \[\delta^{m}_{\rm res}x^{-1} =-\epsilon^{m}(m-1)\frac{P_{m+1}}{2P_{0}}, \tag{104}\]
where it should be reminded that the several \(P_{n}\) appearing on the right-hand sides are functions of \(p_{1}\), \(p_{-1}\). That these transformations are a symmetry of the theory is proved at the end of Appendix B. Notice that for \(m=1\) and \(m=-1\), although no compensating gauge transformation is needed, one formally obtains the standard translations in \(x^{1}\) and \(x^{-1}\), respectively.
These residual transformations on the physical variables \(x^{\pm 1}\) provide, together with the Lorentz transformations, a realization of BMS in the physical reduced space, up to reparametrizations. The need for a reparametrization follows from the fact that \(x^{0}\) does not transform under \(\delta^{m}_{\rm res}\) but, under a boost, transforms into \(x^{1}\) or \(x^{-1}\). For instance, for \(K_{+}\) one has \(\delta_{+}x^{0}=\{x^{0},K_{+}\}=2x^{-1}\) and then (we drop the constant parameters \(\epsilon^{m}\))
\[[\delta_{+},\delta^{m}_{\rm res}]x^{0}=(m-1)\frac{P_{m+1}}{P_{0}}, \tag{105}\]
while \(\delta^{m+1}_{\rm res}x^{0}\), which should appear on the right-hand side in order to have the BMS algebra, is zero. However, since \(\{x^{0},\phi_{0}\}=1\), the right-hand side can be interpreted as a reparametrization with parameter
\[\epsilon^{m}_{+}=(m-1)\frac{P_{m+1}}{P_{0}}, \tag{106}\]
so that the commutator is indeed a vanishing BMS supertranslation on \(x^{0}\) plus a reparametrization,
\[[\delta_{+},\delta^{m}_{\rm res}]x^{0}=(m-1)\cdot 0+\delta_{0}^{\xi^{m}_{+}}x^{0}. \tag{108}\]
For this to be consistent, the same reparametrization should appear when one considers the action of the transformations on \(x^{1}\) and \(x^{-1}\),
\[[\delta_{+},\delta^{m}_{\rm res}]x^{1} =\frac{m+1}{2}\left(\frac{p_{1}P_{m-1}}{P_{0}^{2}}+m\frac{P_{m-2} }{P_{0}}\right), \tag{109}\] \[[\delta_{+},\delta^{m}_{\rm res}]x^{-1} =-\frac{m-1}{2}\left(\frac{p_{1}P_{m+1}}{P_{0}^{2}}+m\frac{P_{m+2 }}{P_{0}}\right). \tag{110}\]
Using that
\[\delta^{m+1}_{\rm res}x^{1}=(m+2)\frac{P_{m}}{2P_{0}},\ \ \delta^{m+1}_{\rm res}x^{-1}=-m\frac{P_{m+2}}{2P_{0}}, \tag{111}\]
Notice this residual transformation is no longer a point transformation. Under reparametrizations with parameter \(\epsilon^{m}_{+}\) (107),
\[\delta_{0}^{\xi^{m}_{+}}x^{1} =\epsilon^{m}_{+}\{x^{1},\phi_{0}\}=-\frac{m-1}{2}\frac{p_{-1}P_{ m+1}}{P_{0}^{2}}, \tag{112}\] \[\delta_{0}^{\xi^{m}_{+}}x^{-1} =\epsilon^{m}_{+}\{x^{-1},\phi_{0}\}=-\frac{m-1}{2}\frac{p_{1}P_{ m+1}}{P_{0}^{2}}, \tag{113}\]
one can check that (109), (110) can be rewritten as
\[[\delta_{+},\delta^{m}_{\rm res}]x^{1} =(m-1)\delta^{m+1}_{\rm res}x^{1}+\delta_{0}^{\xi^{m}_{+}}x^{1}, \tag{114}\] \[[\delta_{+},\delta^{m}_{\rm res}]x^{-1} =(m-1)\delta^{m+1}_{\rm res}x^{-1}+\delta_{0}^{\xi^{m}_{+}}x^{-1}, \tag{115}\]
which, together with (108) and up to the reparametrization, yield the correct term for the BMS algebra. Similarly, for \(K_{-}\), the reparametrization parameter is
\[\epsilon^{m}_{-}=-(m+1)\frac{P_{m-1}}{P_{0}}, \tag{116}\]
while no reparametrization is necessary to close the commutators of (112,131) with the transformation given by the rotation generator \(J\).
This is an example of a fact previously reported in the literature (see [24], eq. (3.11)), _i.e._ the closure of the algebra of rigid symmetries with the help of gauge transformations. In our case, the rigid transformations correspond to Lorentz and supertranslations in physical space, and the gauge transformation is given by the reparametrization invariance associated with the first-class constraint \(\phi_{0}\), which has not been fixed. That the reparametrizations might be needed to close the algebra can be also be inferred from the fact that the constraints \(\phi_{n}\), \(|n|\geq 2\), are weakly Lorentz invariant on the manifold defined by \(\phi_{0}\) (see the proof in Section 4) and that those \(\phi_{n}\) are the generators of the residual gauge transformations that give rise to the BMS symmetry in physical space.
Under a Lorentz transformation,
\[\delta_{J}x^{n}=\{x^{n},J\}=nx^{n}, \tag{6.21}\] \[\delta_{+}x^{n}=\{x^{n},K_{+}\}=(2-n)x^{n-1},\] (6.22) \[\delta_{-}x^{n}=\{x^{n},K_{-}\}=(2+n)x^{n+1}, \tag{6.23}\]
and one has
\[\delta_{J}x^{n}|_{GF}=0,\quad\delta_{+}x^{n}|_{GF}=0,\quad\delta_{-}x^{n}|_{GF }=0, \tag{6.24}\]
so that the gauge condition is preserved, without the need for a compensating gauge transformation. Notice that the factors \((2-n)\) and \((2+n)\) play a fundamental role for \(n=2\) and \(n=-2\), respectively.
In the massless case, where the Lorentz group can be extended so as to include superrotations, one has, for \(m=2,3,\ldots\),
\[\delta_{+}^{m}x^{n}=\{x^{n},K_{+}^{m}\}=(2m-n)x^{n-m}, \tag{6.25}\] \[\delta_{-}^{m}x^{n}=\{x^{n},K_{-}^{m}\}=(2m+n)x^{n+m}, \tag{6.26}\]
and a compensating gauge transformation must be introduced for \(|n|\geq 2\) for the values of \(m\) such that the right-hand side of (6.25) or (6.26) are not zero when evaluated on the gauge fixing condition (6.3). As in the case of supertranslations, this will generate a residual gauge transformation for \(x^{\pm 1}\), which should provide a realization of superrotations.
In any case, after eliminating the gauge degrees of freedom, the remaining variables are just \(x^{0}\), \(x^{\pm 1}\) and their canonical momenta, together with the first class constraint \(\phi_{0}\). This describes a Poincare particle in \(2+1\), with the corresponding reparametrization invariance, with no extra degrees of freedom, and with a realization of the supertranslations, plus superrotations in the massless case, provided by the residual gauge transformations. Summing up, the physical degrees of freedom of the BMS particle do not contain the BMS coordinates for \(|n|>1\) and hence no soft BMS modes are present.
## 7 Conclusions and outlook
We have constructed a non-linear realization of a massive particle Lagrangian for the BMS symmetry algebra in \(2+1\) space-time. This Lagrangian depends on an infinite set of BMS coordinates, which include the standard \(2+1\) Poincare ones, together with the Goldstone boost variables.
The canonical analysis of this Lagrangian reveals the existence of a finite set of second-class constraints, which can be eliminated using the standard Dirac bracket construction, together with an infinite set of first-class constraints, which generate a corresponding infinite set of gauge transformations.
The standard Lorentz generators in \(2+1\) are extended so that they act on the full set of BMS variables, and the theory is shown to be invariant under them. These extended Lorentz generators can be further generalized to an infinite set, the so-called superrotations,
that obey the extended BMS\({}_{3}\) algebra, but it is only in the massless limit of the theory that they are conserved quantities and thus represent a symmetry of the system.
Upon fixing all the gauge degrees of freedom of the theory, except for the standard reparametrization, one obtains a theory whose physical content is that of an ordinary relativistic Poincare particle, with the standard reparametrization invariance provided by the remaining first-class constraint. However, this gauge fixing procedure results in residual gauge transformations acting on the standard space coordinates \(x^{0},x^{1}\), \(x^{-1}\) which, modulo reparametrizations, realize the BMS symmetry. Since the remaining first-class constraint is the standard one, the field theory associated with this particle model is that of a free Klein-Gordon field.
The interpretation of these transformations for \(x^{1}\), \(x^{-1}\), which depend on the associated canonical momenta \(p_{1}\), \(p_{-1}\), is a subject for further study. The appearance of the momenta in a non-polynomial form seems to indicate that, in the field theory associated with this model, the field should transform non-locally. In [14; 25] a realization of BMS in terms of a free scalar field was constructed, with transformations that were non-local in space. Further investigations are needed to clarify whether the two approaches can be related.
In the approach taken in this paper, the massless limit has been obtained in the Hamiltonian formalism as the theory defined by the massless limit of the Hamiltonian constraints. An alternative approach, which we will examine in the future, is to obtain the model of a massless particle in the nonlinear realization approach.
The construction of a particle model exhibiting BMS symmetry presented in this paper could be, in principle, repeated for BMS\({}_{4}\), using, for instance, the stereographic parametrization of BMS\({}_{4}\)[11; 12]. BMS structure constants for BMS\({}_{4}\), \(BMS_{5}\) and \(BMS_{6}\) using generalized spherical harmonics parametrizations can also be found in [16], although they are much more involved.
We acknowledge interesting discussions with Luca Ciambelli, Miguel Campliglia, Jaume Gomis, Marc Henneaux, Axel Kleinschmidt and Sabrina Pasterski. JG acknowledges the hospitality of the Max Planck Albert Einstein Institute in Golm and the Perimeter Institute in Waterloo where this work has been completed. The work of CB is partially supported by Project MASHED (TED2021-129927B-I00), funded by MCIN/AEI/10.13039/501100011033 and by the European Union Next Generation EU/PRTR. JG has been supported in part by PID2019-105614GB-C21 and PID2019- 105614GB-C21 and from the State Agency for Research of the Spanish Ministry of Science and Innovation through the Unit of Excellence Maria de Maeztu 2020-2023 award to the Institute of Cosmos Sciences (CEX2019-000918-M).
Computation of the term of the Maurer-Cartan form proportional to the \(P_{0}\) generator
Using
\[e^{X}Ye^{-X}=e^{\mathrm{ad}_{X}}Y=Y+[X,Y]+\frac{1}{2!}[X,[X,Y]]+\frac{1}{3!}[X,[X,[X,Y]]]+\cdots \tag{116}\]
one has
\[\left[Y,e^{-X}\right] = e^{-X}\left([X,Y]+\frac{1}{2!}[X,[X,Y]]+\frac{1}{3!}[X,[X,[X,Y]] ]+\cdots\right) \tag{117}\] \[\equiv e^{-X}K(X,Y),\]
where
\[K(X,Y)=[X,Y]+\frac{1}{2!}[X,[X,Y]]+\frac{1}{3!}[X,[X,[X,Y]]]+\cdots, \tag{118}\]
which is linear in its second argument. We will call \(K(X,Y)\) the \(K\)-action of \(X\) on \(Y\).
By repeated use of (117) one arrives at
\[U^{-1}P_{n}U=P_{n}+K(-iL_{1}u,P_{n})+K(-iL_{-1}v,P_{n})+K(-iL_{1}u,K(-iL_{-1}v,P_{n})). \tag{119}\]
The second and third terms in (119) are
\[K(-iL_{1}u,P_{n}) = \sum_{l=1}^{\infty}\frac{1}{l!}(-iu)^{l}[L_{1},[L_{1},.^{l})..,[ L_{1},P_{n}]\ldots]] \tag{120}\] \[= \sum_{l=1}^{\infty}\frac{1}{l!}(-iu)^{l}(-i)^{l}(n-1)n\ldots(n+ l-2)P_{n+l}\] \[= \sum_{l=1}^{\infty}\frac{1}{l!}(-1)^{l}u^{l}(n-1)n\ldots(n+l-2)P _{n+l}.\]
\[K(-iL_{-1}v,P_{n}) = \sum_{l=1}^{\infty}\frac{1}{l!}(-iv)^{l}[L_{-1},[L_{-1},.^{l}..,[L_{-1},P_{n}]\ldots]] \tag{121}\] \[= \sum_{l=1}^{\infty}\frac{1}{l!}(-iv)^{l}(-i)^{l}(n+1)n\ldots(n- l+2)P_{n-l}\] \[= \sum_{l=1}^{\infty}\frac{1}{l!}(-1)^{l}v^{l}(n+1)n\ldots(n-l+2)P _{n-l}.\]
The fourth term in (119) is
\[K(-iL_{1}u,K(-iL_{-1}v,P_{n})) = K(-iL_{1}u,\sum_{l=1}^{\infty}\frac{1}{l!}(-1)^{l}v^{l}(n+1)n \ldots(n-l+2)P_{n-l})\] \[= \sum_{l=1}^{\infty}\frac{1}{l!}(-1)^{l}v^{l}(n+1)n\ldots(n-l+2) K(-iL_{1}u,P_{n-l}),\]
where we have used the linearity of \(K\) in its second argument. Finally
\[K(-iL_{1}u,K(-iL_{-1}v,P_{n}))=\sum_{l=1}^{\infty}\frac{1}{l!}(-1)^{l }v^{l}(n+1)n\ldots(n-l+2)\] \[\sum_{k=1}^{\infty}\frac{1}{k!}(-1)^{k}u^{k}(n-l-1)(n-l)\ldots(n-l+ k-2)P_{n-l+k}. \tag{100}\]
As mentioned before, we are interested only in the \(P_{0}\) terms in (101):
* \(P_{n}\). Contributes only for \(n=0\), with \(P_{0}\).
* \(K(-iL_{1}u,P_{n})\). Only the terms with \(l=-n\) yield a \(P_{0}\). Since \(l\geq 1\), this means that there is no contribution for \(n\geq 0\), while for \(n=-m<0\) one picks the term \[\frac{1}{m!}(-1)^{m}u^{m}(-m-1)(-m)\ldots(-2)P_{0}=(m+1)u^{m}P_{0}.\]
* \(K(-iL_{-1}v,P_{n})\). The \(P_{0}\) contribution is obtained now for \(l=n\) and since \(l\geq 1\), there is only contribution if \(n>0\), which is \[\frac{1}{n!}(-1)^{n}v^{n}(n+1)n\ldots 2\,P_{0}=(-1)^{n}(n+1)v^{n}P_{0}.\]
* \(K(-iL_{1}u,K(-iL_{-1}v,P_{n}))\). This term has multiple \(P_{0}\) contributions, given by \(l-k=n\), subjected to \(l\geq 1\), \(k\geq 1\). For given \(l\) one picks the \(k=l-n\) term in the \(k\) series, but \(k\geq 1\) implies that \(l\) must satisfy, besides \(l\geq 1\), the constraint \(l\geq 1+n\). If \(n\leq 0\) this just means \(l\geq 1\), but, for \(n>0\), \(l\) is restricted by \(l\geq 1+n\). Selecting \(k=l-n\) in (100) and restricting the series over \(l\) according to the above discussion one has, after re-arranging terms and cancelling some signs, \[\sum_{l=n+1}^{\infty}\frac{l-n+1}{l!}(-1)^{l}(n+1)n\ldots(n-l+2)v^{l}u^{l-n}P_ {0}\] (101) for \(n>0\), and \[\sum_{l=1}^{\infty}\frac{m+l+1}{l!}(m-1)m\ldots(m+l-2)v^{l}u^{l+m}P_{0}\] (102) for \(n=-m\leq 0\).
Putting everything together, the coefficient of \(P_{0}\) in (8) can be computed as follows, collecting the contributions proportional to the different \(\mathrm{d}x^{n}\).
For \(n=0\), there is only contribution from the first and fourth terms in (101),
\[\mathrm{d}x^{0}(1+\sum_{l=1}^{\infty}\frac{l+1}{l!}(0-1)(0)\ldots(0+l-2)v^{l} u^{l}\]
Notice, however, that the above series finishes in fact after \(l=1\), so one gets
\[\mathrm{d}x^{0}(1-2uv). \tag{104}\]
For \(n>0\), only the third and fourth terms have a non-vanishing contribution, given by
\[\mathrm{d}x^{n}\left((-1)^{n}(n+1)v^{n}+\sum_{l=n+1}^{\infty}\frac{l-n+1}{l!}(-1 )^{l}(n+1)n\ldots(n-l+2)v^{l}u^{l-n}\right). \tag{105}\]
Actually, the product in the coefficients of the series always contains a zero except if \(l=n+1\), and hence the above expression collapses to
\[\mathrm{d}x^{n}\left((-1)^{n}(n+1)v^{n}+2(-1)^{n+1}uv^{n+1}\right)=\mathrm{d}x ^{n}(-1)^{n}v^{n}(n+1-2uv). \tag{106}\]
Finally, for \(n=-m<0\), the contributions come from the second and fourth terms and are given by
\[\mathrm{d}x^{-m}\left((m+1)u^{m}+\sum_{l=1}^{\infty}\frac{m+l+1}{l!}(m-1)m \ldots(m+l-2)v^{l}u^{l+m}\right). \tag{107}\]
The series is identically zero for \(m=1\), while for \(m\geq 2\) it can be rewritten as
\[\mathrm{d}x^{-m}\left((m+1)u^{m}+\sum_{l=1}^{\infty}\frac{m+l+1}{l!}\frac{(l+ m-2)!}{(m-2)!}v^{l}u^{l+m}\right). \tag{108}\]
This series can be summed (provided that \(|uv|<1\)) and, after adding the \((m+1)u^{m}\) term, one gets
\[\mathrm{d}x^{-m}u^{m}\frac{m+1-2uv}{(1-uv)^{m}}. \tag{109}\]
Adding all the terms, the coefficient of \(P_{0}\) in the Maurer-Cartan form is
\[\Omega_{P_{0}}=\mathrm{d}x^{0}(1-2uv)+\sum_{n=1}^{\infty}\mathrm{d}x^{n}(-1)^ {n}v^{n}(n+1-2uv)+\mathrm{d}x^{-1}2u+\sum_{n=2}^{\infty}\mathrm{d}x^{-n}u^{n} \frac{n+1-2uv}{(1-uv)^{n}}. \tag{110}\]
The fact that the \(\dot{x}^{-n}\) contribution is much more complex than that of \(\dot{x}^{n}\), actually involving the series that has been mentioned, is due to the form of the last term in (101), which in turn is a consequence of the ordering that we have selected for the two exponentials in \(U\). For \(n\geq 2\), the \(K\)-action of \(-ivL_{-1}\) on \(P_{n}\) can only descend to \(P_{-1}\) (since the Poincare part is BMS invariant), and then there is only one term in the \(K\)-action of \(-iL_{1}u\) that returns to \(P_{0}\). Instead, for \(n\geq 2\), \(K(-ivL_{-1},P_{-n})\) produces terms \(P_{k}\) for any \(k=-3,-4,\ldots\), and then, for each of them, there is a way to return to \(P_{0}\) by the \(K\)-action of \(-iL_{1}u\).
Quasi-invariance of the Lagrangian under gauge transformations
We consider first the full Lagrangian (11) and its variation under the full set of gauge transformations given by the first class constraints \(\phi_{m}\) and \(\bar{\phi}_{m}\), \(m\geq 2\). In order to compute the transformation of the phase-space variables induced by \(\phi_{m}\), \(\bar{\phi}_{m}\) one needs
\[\{x^{1},\phi_{m}\} =\{x^{1},p_{m}+\mu p_{-1}^{-m}f_{m}^{\pm}(p_{1}p_{-1})\}\] \[=\mu p_{-1}^{-m}(f_{m}^{\pm})^{\prime}(p_{1}p_{-1})p_{-1}=\mp\mu p_ {-1}^{-m+1}\frac{m+1}{2\sqrt{\mu^{2}+p_{1}p_{-1}}}f_{m-1}^{\pm}(p_{1}p_{-1})\] \[=\mu p_{-1}^{-m+1}\frac{m+1}{2P_{0}}f_{m-1}^{\pm}(p_{1}p_{-1})=-(m +1)\frac{P_{m-1}}{2P_{0}}, \tag{124}\] \[\{x^{1},\bar{\phi}_{m}\} =\{x^{1},\bar{p}_{m}+\mu p_{-1}^{m}g_{m}^{\pm}(p_{1}p_{-1})\}\] \[=\mu p_{-1}^{m}(g_{m}^{\pm})^{\prime}(p_{1}p_{-1})p_{-1}=\pm\mu p _{-1}^{m+1}\frac{m-1}{2\sqrt{\mu^{2}+p_{1}p_{-1}}}g_{m+1}^{\pm}(p_{1}p_{-1})\] \[=-\mu p_{-1}^{m+1}\frac{m-1}{2P_{0}}g_{m+1}^{\pm}(p_{1}p_{-1})=(m -1)\frac{P_{-m-1}}{2P_{0}}. \tag{125}\]
From these two it also follows that
\[\{x^{-1},\phi_{m}\} =\{x^{1},\bar{\phi}_{m}\}^{*}=(m-1)\frac{P_{m+1}}{2P_{0}}, \tag{126}\] \[\{x^{-1},\bar{\phi}_{m}\} =\{x^{1},\phi_{m}\}^{*}=-(m+1)\frac{P_{-m+1}}{2P_{0}}, \tag{127}\]
which, together with \(\{x^{n},\phi_{m}\}=\delta_{m}^{n}\), \(\{x^{-n},\bar{\phi}_{m}\}=\delta_{m}^{n}\), \(\{P_{n},\phi_{m}\}=\{P_{-n},\phi_{m}\}=\{P_{n},\bar{\phi}_{m}\}=\{P_{-n},\bar{ \phi}_{m}\}=0\), allow to compute the transformations of all the terms in the Lagrangian. For instance, if \(G=\alpha_{m}\phi_{m}=\epsilon_{m}2P_{0}\phi_{m}\), one has
\[\delta\mathcal{L} =\frac{\mathrm{d}}{\mathrm{d}\tau}(2P_{0}\epsilon_{m})P_{m}+\frac {\mathrm{d}}{\mathrm{d}\tau}(-(m+1)\epsilon_{m}P_{m-1})p_{1}+\frac{\mathrm{d}}{ \mathrm{d}\tau}((m-1)\epsilon_{m}P_{m+1})p_{-1}\] \[=\epsilon_{m}(2\dot{P}_{0}p_{m}-(m+1)\dot{P}_{m-1}p_{1}+(m-1)\dot{ P}_{m+1}p_{-1})\] \[\quad+\dot{\epsilon}_{m}(2P_{0}P_{m}-(m+1)p_{1}P_{m-1}+(m-1)P_{m+1 }p_{-1}). \tag{128}\]
This can be written as a total derivative, \(\delta\mathcal{L}=\frac{\mathrm{d}}{\mathrm{d}\tau}F\), provided that
\[2P_{0}\dot{P}_{m}-(m+1)P_{m-1}\dot{p}_{1}+(m-1)P_{m+1}\dot{p}_{-1}=0. \tag{129}\]
Using3
Footnote 3: We do not display the dependence of \(f_{m}^{\pm}\) on \(p_{1}p_{-1}\).
\[\dot{P}_{m} =\frac{\mathrm{d}}{\mathrm{d}\tau}\left(-\frac{\mu}{p_{-1}^{m}}f_{ m}^{\pm}\right)=m\frac{\mu}{p_{-1}^{m+1}}\dot{p}_{-1}f_{m}^{\pm}-\frac{\mu}{p_{-1}^{m }}(f_{m}^{\pm})^{\prime}\cdot(\dot{p}_{1}p_{-1}+p_{1}\dot{p}_{-1}) \tag{130}\] \[=m\frac{\mu}{p_{-1}^{m+1}}\dot{p}_{-1}f_{m}^{\pm}-\frac{\mu}{p_{- 1}^{m}}\left(\mp\frac{m+1}{2\sqrt{\mu^{2}+p_{1}p_{-1}}}f_{m-1}^{\pm}\right)( \dot{p}_{1}p_{-1}+p_{1}\dot{p}_{-1}), \tag{131}\]
the left-hand side of (B.6) is
\[\text{LHS(B.6)} =\mp 2\sqrt{\mu^{2}+p_{1}p_{-1}}m\frac{\mu}{p_{-1}^{m+1}}\dot{p}_{-1} f_{m}^{\pm}-(m+1)\frac{\mu}{p_{-1}^{m}}f_{m-1}^{\pm}\cdot(\dot{p}_{1}p_{-1}+p_{1} \dot{p}_{-1})\] \[\quad-(m+1)P_{m-1}\dot{p}_{1}+(m-1)P_{m+1}\dot{p}_{-1}\] \[=\mp 2\sqrt{\mu^{2}+p_{1}p_{-1}}m\frac{\mu}{p_{-1}^{m+1}}\dot{p}_{- 1}f_{m}^{\pm}-(m+1)\frac{\mu}{p_{-1}^{m}}f_{m-1}^{\pm}\cdot(\dot{p}_{1}p_{-1}+p _{1}\dot{p}_{-1})\] \[\quad+(m+1)\frac{\mu}{p_{-1}^{m-1}}f_{m-1}^{\pm}\dot{p}_{1}-(m-1) \frac{\mu}{p_{-1}^{m+1}}f_{m+1}^{\pm}\dot{p}_{-1}\] (B.9)
The two terms containing \(\dot{p}_{1}\) cancel each other, while the terms proportional to \(\dot{p}_{-1}\) are
\[-\dot{p}_{-1}\frac{\mu}{p_{-1}^{m+1}}\left((m-1)f_{m+1}^{\pm}\pm 2m\sqrt{\mu^{2} +p_{1}p_{-1}}f_{m}^{\pm}+(m+1)p_{1}p_{-1}f_{m-1}^{\pm}\right),\] (B.10)
which is zero due to (3.33). This proves (B.6) and thus (3.56). Equation (3.58) is proved in a similar way.
We consider next the partially gauge fixed Lagrangian
\[\mathcal{L}_{0}=\dot{x}^{0}P_{0}+\dot{x}^{1}p_{1}+\dot{x}^{-1}p_{-1},\] (B.11)
obtained from the full Lagrangian by setting \(x^{m}=0\) for \(|m|\geq 2\), and which, as explained in the text, is just the standard Lagrangian for a massive Poincare particle. This Lagrangian has the gauge symmetry transformation induced by the remaining first-class constraint \(\phi_{0}\), associated with reparametrization invariance, and the Poincare invariance generated by \(p_{0}\), \(p_{1}\), \(p_{-1}\), \(J\), \(K_{+}\) and \(K_{-}\), but also the infinite set of symmetries given by the residual gauge transformations (6.8), (6.9), which we write in the condensed notation
\[\delta_{\text{res}}^{m}x^{1} =\epsilon^{m}(m+1)\frac{P_{m-1}}{2P_{0}}=A_{m}(p_{1},p_{-1}),\] (B.12) \[\delta_{\text{res}}^{m}x^{-1} =-\epsilon^{m}(m-1)\frac{P_{m+1}}{2P_{0}}=B_{m}(p_{1},p_{-1}),\] (B.13)
with all the other variables \(x^{0}\), \(p_{0}\), \(p_{1}\) and \(p_{-1}\) invariant. One has
\[\delta_{\text{res}}\mathcal{L}_{0}=\delta_{\text{res}}(\dot{x}^{1}p_{1}+\dot{x} ^{-1}p_{-1})=p_{1}\frac{\text{d}}{\text{d}\tau}A_{m}+p_{-1}\frac{\text{d}}{ \text{d}\tau}B_{m}=C_{m}\dot{p}_{1}+D_{m}\dot{p}_{-1},\] (B.14)
where
\[C_{m} =p_{1}\frac{\partial A_{m}}{\partial p_{1}}+p_{-1}\frac{\partial B _{m}}{\partial p_{1}},\] (B.15) \[D_{m} =p_{1}\frac{\partial A_{m}}{\partial p_{-1}}+p_{-1}\frac{\partial B _{m}}{\partial p_{-1}}.\] (B.16)
The quasi-invariance of the Lagrangian, that is, the existence of a function \(F_{0}\) such that \(\delta_{\text{res}}\mathcal{L}_{0}=\frac{\text{d}}{\text{d}\tau}F_{0}\), is equivalent to
\[\frac{\partial C_{m}}{\partial p_{-1}}=\frac{\partial D_{m}}{\partial p_{1}},\] (B.17)
which boils down to
\[\frac{\partial A_{m}}{\partial p_{-1}}=\frac{\partial B_{m}}{\partial p_{1}},\] (B.18)
which in turn can be proved using the expressions of \(P_{n}\), \(P_{0}\) in terms of \(p_{1}\) and \(p_{-1}\) and the properties of the functions \(f_{n}^{\pm}\).
Invariance of the massless limit constraints under superrotations
Using \(\{p_{n},K_{+}^{m}\}=(n-m)p_{n+m}\) one has that the variation of \(\varphi_{n}\) under a superrotation induced by \(K_{+}^{m}\) is
\[\{\varphi_{n},K_{+}^{m}\} =\{p_{n}\pm(\mp 1)^{n}p_{-1}^{-n}(\sqrt{p_{1}p_{-1}})^{n+1},K_{+}^{m}\}\] \[=(n-m)p_{n+m}\pm(\mp)^{n}(-np_{-1}^{-n-1}(-1-m)p_{-1+m})(\sqrt{p_{ 1}p_{-1}})^{n+1}\] \[\pm(\mp)^{n}p_{-1}^{-n}(n+1)(\sqrt{p_{1}p_{-1}})^{n}\frac{1}{2 \sqrt{p_{1}p_{-1}}}\left(p_{1}(-1-m)p_{-1+m}+p_{-1}(1-m)p_{1+m}\right).\]
Since we only have to deal with the case \(m\geq 2\), one has that \(n+m\), \(-1+m\) and \(1+m\) are all positive. Using \(\varphi_{n+m}\), \(\varphi_{-1+m}\) and \(\varphi_{1+m}\) one can express \(p_{n+m}\), \(p_{-1+m}\) and \(p_{1+m}\) in terms of \(p_{1}\) and \(p_{-1}\), and one obtains, after re-arranging terms and extracting the common dependency of all the terms in \(p_{1}\) and \(p_{-1}\),
\[\{\varphi_{n},K_{+}^{m}\} \simeq(\mp)^{n+m+1}p_{-1}^{-n-m}(\sqrt{p_{1}p_{-1}})^{n+m+1}\] \[\cdot\left(n-m-n(1+m)+(1+m)\frac{n+1}{2}-(1-m)\frac{n+1}{2}\right) =0.\]
Similarly, from \(\{p_{n},K_{-}^{m}\}=-(m+n)p_{n-m}\),
\[\{\varphi_{n},K_{-}^{m}\} =\{p_{n}\pm(\mp 1)^{n}p_{-1}^{-n}(\sqrt{p_{1}p_{-1}})^{n+1},K_{-}^{ m}\}\] \[=-(n+m)p_{n-m}\pm(\mp)^{n}(-np_{-1}^{-n-1}(-(m-1)p_{-1-m}))(\sqrt {p_{1}p_{-1}})^{n+1}\] \[\pm(\mp)^{n}p_{-1}^{-n}(n+1)(\sqrt{p_{1}p_{-1}})^{n}\frac{1}{2 \sqrt{p_{1}p_{-1}}}\left(p_{1}(-(m-1)p_{-1-m})+p_{-1}(-(m+1)p_{1-m})\right).\]
Again, since we must only consider \(m\geq 2\), both \(p_{-1-m}\) and \(p_{1-m}\) are BMS momenta with negative indexes, and can be expressed in terms of \(p_{1}\) and \(p_{-1}\) using \(\bar{\varphi}_{1+m}\) and \(\bar{\varphi}_{m-1}\), respectively. For \(n-m\geq 0\), one can use \(\varphi_{n-m}\) for \(p_{n-m}\), while for \(n-m<0\)\(p_{n-m}\) has negative index and can be expressed in terms of \(p_{1}\) and \(p_{-1}\) using \(\bar{\varphi}_{m-n}\). It turns out that in both cases the term obtained from \(p_{n-m}\) is the same, and one has
\[\{\varphi_{n},K_{-}^{m}\} \simeq(\mp)^{n+m+1}p_{1}^{-m}p_{-1}^{-n}(\sqrt{p_{1}p_{-1}})^{n+m+1}\] \[\cdot\left(-(n+m)-n(m-1)+(m-1)\frac{n+1}{2}+(m+1)\frac{n+1}{2} \right)=0.\]
It should be noticed that it is this case, the variation of a positive index constraint under a negative index superrotation, the one that breaks the invariance of the theory under superrotations in the massive case.
Due to the real character of the Poisson bracket, one will also have
\[\{\bar{\varphi}_{n},K_{-}^{m}\}^{*} =\{\varphi_{n},K_{+}^{m}\}\simeq 0,\] \[\{\bar{\varphi}_{n},K_{+}^{m}\}^{*} =\{\varphi_{n},K_{-}^{m}\}\simeq 0,\]
and thus all the constraints are weakly invariant under all the superrotations.
Casimirs of the Lorentz and Poincare groups in BMS space
The action of the Lorentz generators \(K_{\pm}\), \(J\) on the \(p_{n}\), \(n\in\mathbb{Z}\), provided by Poisson brackets,
\[\delta_{J}p_{n}=\{p_{n},J\}=-np_{n}, \tag{104}\] \[\delta_{+}p_{n}=\{p_{n},K_{+}\}=-(1-n)p_{n+1},\] (105) \[\delta_{-}p_{n}=\{p_{n},K_{-}\}=-(1+n)p_{n-1}, \tag{106}\]
leads to the definition of infinite dimensional matrices acting on vectors \((\ldots,p_{-2},p_{-1},p_{0},p_{1},p_{2},\ldots)\) which implement this action, given by
\[J_{nm} =-n\delta_{nm}, \tag{107}\] \[(K_{+})_{nm} =-(1-m)\delta_{n,m+1},\] (108) \[(K_{-})_{nm} =-(1+m)\delta_{n,m-1}. \tag{109}\]
Using the structure constants of the \(SO(2,1)\) algebra in the \(J,K_{+},K_{-}\) basis one can construct the Killing form of the Lie algebra, and from that the quadratic Casimir, which is given by
\[C_{2}^{L}=\frac{1}{2}J^{2}+\frac{1}{4}K_{+}K_{-}+\frac{1}{4}K_{-}K_{+}. \tag{110}\]
Using the above matrices one immediately obtains
\[\left(\frac{1}{2}J^{2}+\frac{1}{4}K_{+}K_{-}+\frac{1}{4}K_{-}K_{+}\right)_{nm} =1\cdot\delta_{nm} \tag{111}\]
and hence this corresponds to an adjoint representation on the space of BMS momenta.
Similarly, one can consider the action on the space of BMS coordinates \(x^{n}\), \(n\in\mathbb{Z}\),
\[\delta_{J}x^{n}=\{x^{n},J\}=nx^{n}, \tag{112}\] \[\delta_{+}x^{n}=\{x^{n},K_{+}\}=(2-n)x^{n-1},\] (113) \[\delta_{-}x^{n}=\{x^{n},K_{-}\}=(2+n)x^{n+1}, \tag{114}\]
which leads to the matrices
\[\tilde{J}_{nm} =n\delta_{nm}, \tag{115}\] \[(\tilde{K}_{+})_{nm} =(2-m)\delta_{n,m-1},\] (116) \[(\tilde{K}_{-})_{nm} =(2+m)\delta_{n,m+1}. \tag{117}\]
Again
\[\left(\frac{1}{2}\tilde{J}^{2}+\frac{1}{4}\tilde{K}_{+}\tilde{K}_{-}+\frac{1 }{4}\tilde{K}_{-}\tilde{K}_{+}\right)_{nm}=1\cdot\delta_{nm}, \tag{118}\]
which shows that it also corresponds to an adjoint representation on the space of coordinates.
With respect to the Poincare group, the quadratic Casimir in \(2+1\) in our coordinates is
\[C_{2}^{P}=p_{0}^{2}-p_{1}p_{-1}, \tag{119}\]
which, for our system and taking into account the constraint \(\phi_{0}\), takes value \(-\mu^{2}\). One may wonder if it is possible to obtain a quadratic Casimir involving the higher BMS momenta, of the form
\[C_{2}=A_{mn}p_{m}p_{n},\quad A_{mn}=A_{nm}.\] (D.17)
Imposing the invariance under \(J\) one gets
\[\delta_{J}C_{2}=-A_{mn}p_{m}p_{n}(m+n)=0\] (D.18)
which implies that the only \(A_{mn}\) that can be different from zero are those corresponding to \(m=-n\). Thus
\[A_{mn}=A_{n}\delta_{m,-n},\] (D.19)
with \(A_{n}=A_{-n}\) due to the symmetry of \(A_{mn}\). Computing now the variation under \(K_{+}\) and using this form for \(A_{mn}\) one gets
\[\delta_{+}C_{2}=-p_{m+1}p_{-m}((1-m)A_{m}+(2+m)A_{m+1})=0.\] (D.20)
In order to equal to zero the coefficients of this sum over \(m\) one must notice that the terms corresponding to \(m=n\) and \(m=-1-n\) yield the same product \(p_{m+1}p_{-m}\). Taking this into account and using \(A_{m}=A_{-m}\) one gets the first order recurrence relation
\[(1-m)A_{m}+(2+m)A_{m+1}=0,\quad m=0,1,2,\ldots.\] (D.21)
The invariance under \(K_{-}\) does not add any new condition. For \(m=0\), (D.21) yields
\[A_{0}+2A_{1}=0,\]
from which \(A_{1}=-\frac{1}{2}A_{0}\) and hence also \(A_{-1}=-\frac{1}{2}A_{0}\). For \(m=1\), however, the relation is
\[0\cdot A_{1}+3A_{2}=0,\]
from which \(A_{2}=0\) and thus \(A_{-2}=0\). From this point, using the recurrence for higher values of \(m\) leads to \(A_{m}=A_{-m}=0\) for \(m=2,3,\ldots\). The final result is then that the only quadratic Casimir of the Poincare group in BMS space is, up to a global constant, the standard one, given by (D.16).
|
2304.14563 | Causality and stability in first-order conformal anisotropic
hydrodynamics | We formulate the first-order dissipative anisotropic hydrodynamical theory
for a relativistic conformal uncharged fluid, which generalizes the
Bemfica-Disconzi-Noronha-Kovtun first-order viscous fluid framework. Our
approach maintains causal behavior in the nonlinear regime with or without
general relativity coupling, and we derive and analyze the constraints on
transport coefficients imposed by causality. We demonstrate the causal and
stable behavior of our theory in specific cases, including the discussion of
nonlinear causality as well as stability for linearized perturbations. We apply
our newly developed first-order anisotropic theory to the Bjorken flow and show
how causality and stability impose constraints on the behavior of the
early-time attractor. | Fabio S. Bemfica, Mauricio Martinez, Masoud Shokri | 2023-04-27T23:31:07Z | http://arxiv.org/abs/2304.14563v2 | # Causality and stability in First-order Conformal Anisotropic Hydrodynamics
###### Abstract
We formulate the first-order dissipative anisotropic hydrodynamical theory for a relativistic conformal uncharged fluid, which generalizes the Bemfica-Disconzi-Noronha-Kovtun (BDNK) first-order viscous fluid framework. Our approach maintains causal behavior in the nonlinear regime with or without general relativity coupling, and we derive and analyze the constraints on transport coefficients imposed by causality. We demonstrate the causal and stable behavior of our theory in specific cases, including the discussion of nonlinear causality as well as stability for linearized perturbations. We apply our newly developed first order anisotropic theory to the Bjorken flow and show how causality and stability impose constraints on the behavior of the early-time attractor.
## I Introduction
Causality is one of the most important guiding principles in physics. In the bottom-up approach to constructing modern effective field theories, causality mandates that effective quantum field theories meet certain requirements, including microcausality, causal propagation limited to the speed of light, stability of the vacuum, and possible constraints on commutation relations [1]. Like other effective field theories, hydrodynamics is also subject to the restrictions imposed by causality. In this context, causality mandates that a solution to the relativistic fluid equations at a specific space-time point \(x\) is entirely defined by the past space-time region that is causally connected to \(x\)[2; 3]. While causality is an important physical requirement for relativistic fluid dynamical equations of motion, it is not the only one. Two additional requirements are necessary for the equations of motion: to be locally well-posed and stable. The latter demands that small perturbations around the thermal state decay over time, while the former ensures that the system follows a well-defined space-time evolution for a given set of initial conditions. The original first order dissipative hydrodynamical equations of motion, known as the Navier-Stokes (NS) equations and derived by Landau and Lifschitz [4] as well as Eckart [5] showed to be acausal in the linear and nonlinear regimes [6; 7]. To address the acausality and stability concerns of the Navier-Stokes equations, second-order hydrodynamics theories were introduced by Israel and Stewart (IS)[8; 9]. Since their seminal work, more recent formulations of second-order hydrodynamics have been developed [10; 11; 12]. However, it is unclear if second order theories are causal in the full non-linear regime.
A recent development in relativistic hydrodynamics is the BDNK theory proposed by Bemfica, Disconzi and Noronha [13; 14; 15; 16; 17] alongside with Kovtun [18; 19]. The BDNK theory has offered a practical and straightforward approach to addressing the longstanding issues of causality, stability, and local well-posedness in relativistic fluids. Its development has created new opportunities for advancing our understanding of the fundamental principles underlying relativistic fluid dynamics. Basically BDNK theory is a generalization of the first order NS relativistic theory which is causal in the linear and non-linear regimes 1 as well as locally well-posed in Sobolev spaces in the presence and/or absence of gravity. BDNK theory is also linearly stable around global equilibrium. An interesting feature of BDNK theory is that the definition of the hydrodynamical variables, such as the energy density, receive out-of-equilibrium corrections. In this sense BDNK first order theory differs from the standard approaches based on the Landau and Eckart frames.
Footnote 1: The rigorous mathematical demonstrations of these statements are found in Refs. [15; 16; 17].
On the other hand, anisotropic hydrodynamics has emerged as a very successful phenomenological model for describing non-equilibrium fluids with large spatial anisotropies in high-energy nuclear collisions [20; 21; 22; 23; 24]. It has been used to accurately model the space-time evolution of the fireball in these collisions and has shown good agreement with experimental results (see Sect. 10 of Ref.[25] and references therein). Additionally, anisotropic hydrodynamics has passed rigorous numerical tests against exact kinetic theory models based on the Boltzmann equation (See Sect. 7-8 of Ref.[25] and references therein), providing confidence in its efficacy for studying fluids in far-from-equilibrium situations. Despite these successes, the foundational aspects of anisotropic hydrodynamics remain incompletely understood, particularly with respect to the role of causality and its constraints. Motivated by this gap in understanding, we introduce a new approach in this work to investigate causality in anisotropic hydrodynamics. We develop a novel first order theory for a conformal, uncharged fluid near thermal equilibrium that exhibits a spatial anisotropy characterized by a space-like vector \(l^{\mu}\).
The paper is organized as follows: in section II we discuss the most general form of a first order anisotropic conformal fluid theory. We show in Sec. III that physical requirements of the second law of thermodynamics up to the order of validity of the theory leads unambiguously to the anisotropic BDNK theory of a conformal fluid. Section IV examines the conditions that causality imposes on the transport coefficients, both with and without gravity. Linear stability of our novel first order anisotropic theory is analyzed in Sec. V. The application of our novel anisotropic first order theory for a fluid undergoing Bjorken flow is presented in Sect. VI. Our conclusions are summarized in Sec. VII. Technical details are found in the appendices.
_Notations and conventions_ We use the natural unites in which \(\hbar=c=k_{B}=1\), and adopt the Lorentzian metric \(g_{\mu\nu}\) with signature \(-+++\). The standard covariant derivative is denoted by \(\nabla\), and the conformal covariant derivative by \(\mathcal{D}\). We use the standard symmetrization and antisymmetrization notations, such that, for example, for a rank-2 tensor we obtain that \(A_{\mu\nu}\) is \(A_{(\mu\nu)}=\frac{1}{2}\left(A_{\mu\nu}+A_{\nu\mu}\right)\) and \(A_{[\mu\nu]}=\frac{1}{2}\left(A_{\mu\nu}-A_{\nu\mu}\right)\), respectively. The Riemann tensor is defined as \(R^{\sigma}_{\rho\mu\nu}=2\left(\partial_{[\mu}\Gamma^{\sigma}_{\nu]\rho}+ \Gamma^{\sigma}_{[\mu\beta}\Gamma^{\beta}_{\nu]\rho}\right)\), and the Ricci tensor as \(R_{\mu\nu}=R_{\rho\mu\sigma\nu}g^{\rho\sigma}\).
## II First order anisotropic conformal BDNK theory
A common approach in hydrodynamics is to expand physical observables in terms of gradients and truncate the series expansion at a given order in the derivatives. For example, if one considers the energy-momentum tensor \(T^{\mu\nu}\), the gradient expansion takes the form \(T^{\mu\nu}=\mathcal{O}(1)+\mathcal{O}(\partial)+\cdots\), where \(\mathcal{O}(1)\) corresponds to the ideal fluid contributions while \(\mathcal{O}(\partial^{n})\) are the viscous corrections of order \(n\) in derivatives (time and space derivatives written in covariant form) of the dynamical variables such as energy, density, etc. Therefore, first-order theories contain only the first-order derivative corrections \(\mathcal{O}(\partial)\)2. Since quantities such as temperature and number density are only unambiguously defined in equilibrium, their out-of-equilibrium corrections may differ due to different choices of \(\mathcal{O}(\partial)\). In fact, different first-order theories are connected by transformations in these out-of-equilibrium variables 3. The connection between the Landau-Lifshitz and Eckart frames, for instance, can be understood through such transformations [8]. Recent works have provided a comprehensive discussion on this subtle yet crucial aspect [17; 18; 28].
Footnote 2: The gradient expansion is equivalent to the Knudsen expansion in kinetic theory [26; 27].
Footnote 3: For example, consider a fluid’s theory defined by a set of \(N\) out-of-equilibrium thermodynamic variables \(\psi_{a}\) with \(a=1,...,N\), and suppose we perform a transformation \(\psi_{a}\to\psi_{a}+\delta\psi_{a}\), where \(\delta\psi_{a}\) is first order in the derivative of the thermodynamic variables \(\psi_{a}\). This leads to the existence of two different first-order theories that are, in fact, equivalent up to the first-order corrections.
We shall develop the first order theory of a conformal uncharged fluid which has a spatial anisotropy along an arbitrary direction determined by a space-like vector \(l^{\mu}\). We start by introducing the most general form of the energy-momentum tensor that describes this particular fluid.
### Anisotropic energy momentum tensor
Here, we turn to the problem of constructing a nonlinearly causal anisotropic extension of the conformal BDNK hydrodynamics for an uncharged fluid [13]. The most general form for the energy-momentum tensor \(T^{\mu\nu}\), decomposed in the directions parallel and orthogonal to the time-like velocity flow \(u^{\mu}\) (\(u^{\mu}u_{\mu}=-1\)) and the anisotropic space-like vector \(l^{\mu}\) (\(l^{\mu}l_{\mu}=1\)) which is invariant under the little group \(SO(2)\), reads as [23; 24]
\[T^{\mu\nu} = \mathcal{E}\,u^{\mu}u^{\nu}+\mathcal{P}_{l}\,l^{\mu}l^{\nu}+ \mathcal{P}_{\perp}\,\Xi^{\mu\nu}+2\,M\,u^{(\mu}\,l^{\nu)}+2\,W^{(\mu}_{\perp u}u^{\nu)}\] (1) \[+2\,W^{(\mu}_{\perp l}l^{\nu)}+\pi^{\mu\nu}_{\perp}\] \[= \mathcal{E}\,u^{\mu}u^{\nu}+(\mathcal{P}_{l}-\mathcal{P}_{\perp} )\,l^{\mu}l^{\nu}+\mathcal{P}_{\perp}\,\Delta^{\mu\nu}+2\,M\,u^{(\mu}\,l^{\nu) }+2\,W^{(\mu}_{\perp u}u^{\nu)}\] \[+2\,W^{(\mu}_{\perp l}l^{\nu)}+\pi^{\mu\nu}_{\perp\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \
with \(V=\{u,l\}\). Note that the above expressions leads to \(\Xi_{\mu\nu}\pi_{\perp}^{\mu\nu}=0\). In the absence of other conserved currents, the fluid's evolution is governed by the energy-momentum conservation \(\nabla_{\nu}T^{\mu\nu}=0\), which will be referred to as the equations of motion (EOM) and due to the conformal invariance can be equivalently expressed as \(\mathcal{D}_{\nu}T^{\mu\nu}=0\), where \(\mathcal{D}_{\mu}\) is the conformal covariant derivative compatible with \(u^{\mu}\)[29]. In particular
\[\mathcal{D}_{\mu}u_{\nu}=\Delta_{\mu}^{\alpha}\nabla_{\nu}u_{\nu}-\Delta_{\mu \nu}\nabla_{\alpha}u^{\alpha}/3\,,\]
and
\[\mathcal{D}_{\mu}\varepsilon=\nabla_{\mu}\varepsilon+4\varepsilon(u^{\alpha} \nabla_{\alpha}u_{\mu}-u_{\mu}\nabla_{\alpha}u^{\alpha}/3)\;.\]
The EOM may be decomposed into the directions parallel to \(u\) and \(l\) and the directions perpendicular to both as
\[u^{\alpha}\mathcal{D}_{\alpha}\mathcal{E} =-(\mathcal{P}_{\mathrm{I}}-\mathcal{P}_{\perp})l^{\mu}l^{\nu} \sigma_{\mu\nu}-M\mathcal{D}_{\alpha}l^{\alpha}-l^{\nu}\mathcal{D}_{\nu}M- \mathcal{D}_{\nu}W^{\nu}_{\perp u}-2W^{\nu}_{\perp l}l^{\alpha}\sigma_{\mu\nu}\] \[-\pi_{\perp}^{\mu\nu}\sigma_{\perp\mu\nu}, \tag{2a}\] \[l^{\alpha}\mathcal{D}_{\alpha}\mathcal{P}_{\mathrm{I}} =-(\mathcal{P}_{\mathrm{I}}-\mathcal{P}_{\perp})\mathcal{D}_{ \alpha}l^{\alpha}-M\,l^{\mu}l^{\nu}\sigma_{\mu\nu}-u^{\alpha}\mathcal{D}_{ \alpha}M-l_{\nu}W^{\alpha}_{\perp u}\mathcal{D}_{\alpha}u^{\nu}-l_{\nu}u^{ \alpha}\mathcal{D}_{\alpha}W^{\nu}_{\perp u}\] \[-l_{\mu}l^{\nu}\mathcal{D}_{\nu}W^{\mu}_{\perp l}-\mathcal{D}_{ \alpha}W^{\nu}_{\perp l}+\pi_{\perp}^{\nu\alpha}\mathcal{D}_{\alpha}l_{\nu},\] (2b) \[\Xi^{\mu\alpha}\mathcal{D}_{\alpha}\mathcal{P}_{\perp} =-(\mathcal{P}_{\mathrm{I}}-\mathcal{P}_{\perp})\,\Xi^{\mu}l^{ \alpha}\mathcal{D}_{\alpha}l^{\nu}-M\,\Xi^{\mu}_{\nu}l^{\alpha}\mathcal{D}_{ \alpha}u^{\nu}-M\,\Xi^{\mu}_{\nu}u^{\alpha}\mathcal{D}_{\alpha}l^{\nu}-\Xi^{ \mu}_{\nu}W^{\alpha}_{\perp u}\mathcal{D}_{\alpha}u^{\nu}\] \[-\Xi^{\mu}_{\nu}u^{\alpha}\mathcal{D}_{\alpha}W^{\nu}_{\perp u}- W^{\mu}_{\perp l}\mathcal{D}_{\alpha}l^{\alpha}-\Xi^{\mu}_{\nu}W^{\alpha}_{ \perp}\mathcal{D}_{\alpha}l^{\nu}-\Xi^{\mu}_{\nu}l^{\alpha}\mathcal{D}_{ \alpha}W^{\nu}_{\perp l}-\Xi^{\mu}_{\nu}\mathcal{D}_{\alpha}\pi_{\perp}^{\nu \alpha}, \tag{2c}\]
where \(\sigma_{\mu\nu}=\mathcal{D}_{(\mu}u_{\nu)}=\Delta_{\mu\nu}^{\alpha\beta}\nabla _{\mu}u_{\nu}\) is the shear tensor, with \(\Delta_{\mu\nu}^{\alpha\beta}=[\Delta_{\mu}^{\alpha}\Delta_{\nu}^{\beta}+ \Delta_{\mu}^{\beta}\Delta_{\nu}^{\alpha}-(2/3)\Delta^{\alpha\beta}\Delta_{ \mu\nu}]/2\). We have also introduced a transverse shear tensor as
\[\sigma_{\perp\mu\nu}=\Xi_{\mu\nu}^{\alpha\beta}\mathcal{D}_{\alpha}u_{\beta} =\Xi_{\mu\nu}^{\alpha\beta}\nabla_{\alpha}u_{\beta}\;,\quad\text{with}\quad \Xi_{\mu\nu}^{\alpha\beta}=(\Xi_{\mu}^{\alpha}\Xi_{\nu}^{\beta}+\Xi_{\mu}^{ \beta}\Xi_{\nu}^{\alpha}-\Xi^{\alpha\beta}\Xi_{\mu\nu})/2\;.\]
It is important to keep in mind that an expression that only includes derivatives up to a certain order may be approximated at that order when the equations of motion are considered. As an example, one can rewrite (2a) by separating the leading and first order contributions from \(\mathcal{E}\) and \(\mathcal{P}\) to obtain
\[\begin{split} u^{\alpha}\mathcal{D}_{\alpha}\varepsilon+(P_{ \mathrm{I}}-P_{\perp})l^{\mu}l^{\nu}\sigma_{\mu\nu}&=-u^{\alpha} \mathcal{D}_{\alpha}\mathcal{E}^{(1)}-(\mathcal{P}_{\mathrm{I}}^{(1)}-\mathcal{ P}_{\perp}^{(1)})l^{\mu}l^{\nu}\sigma_{\mu\nu}-M\mathcal{D}_{\alpha}l^{\alpha}-l^{ \alpha}\mathcal{D}_{\alpha}M-\mathcal{D}_{\nu}W^{\nu}_{\perp u}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad-2W^{\alpha}_{\perp \perp}l^{\nu}\sigma_{\mu\nu}-\pi_{\perp}^{\mu\nu}\sigma_{\perp\mu\nu}.\end{split} \tag{3}\]
Upon comparing the left and right hand sides of the equation above, it becomes evident that the terms \(u^{\alpha}\mathcal{D}_{\alpha}\varepsilon\) and \(l^{\mu}l^{\nu}\sigma_{\mu\nu}\) only contain first-order derivatives of the zeroth-order quantities. However, the combination \(u^{\alpha}\mathcal{D}_{\alpha}\varepsilon+(P_{\mathrm{I}}-P_{\perp})l^{\mu}l^ {\nu}\sigma_{\mu\nu}\) is second-order _on-shell_, meaning that it becomes second order when the equations of motion given by (3) are taken into account. It can be said that the same combination is of the first-order _off-shell_. Similar arguments may be applied in the remaining equations of motion in (2) above. In the next section, we make use of generic arguments based on the second law of thermodynamics together with the above _on-shell_ analysis in order to determine the remaining first order terms \(M\), \(E^{(1)}\), etc, by assuming the physical applicability of a first-order theory.
## III Entropy and entropy production
The second law of thermodynamics mandates that entropy should be at a maximum in a state of equilibrium, and the entropy production should not have a negative value. However, for a first-order theory one must only consider the entropy up to the first-order on-shell derivative, and hence the entropy production up to the second-order on-shell [18]. This ensures that the theory is being applied outside of its intended physical applicability.
The entropy current of an anisotropic conformal uncharged fluid reads as (see Appendix A)
\[S^{\mu}=\frac{P_{\perp}u^{\mu}-u_{\nu}T^{\mu\nu}}{T}\;, \tag{4}\]
where \(P_{\perp}\) is the leading order of \(\mathcal{P}_{\perp}\), i.e., \(\mathcal{P}_{\perp}=P_{\perp}+\mathcal{P}_{\perp}^{(1)}\). In this case, the entropy density reads
\[-u_{\mu}S^{\mu}=s+\frac{\mathcal{E}^{(1)}}{T}\,, \tag{5}\]
where the leading order contribution to the entropy density is \(s=(\varepsilon+P_{\perp})/T\). From Eq. (4) we calculate the entropy production
\[\nabla_{\mu}S^{\mu} =-\frac{\pi_{\perp}^{\mu\nu}\sigma_{\perp\mu\nu}}{T}-\frac{2W^{\mu }_{\perp l}l^{\nu}\sigma_{\mu\nu}}{T}-\frac{\varepsilon-3P_{\perp}}{4T}\frac{u^{ \alpha}\mathcal{D}_{\alpha}\varepsilon}{\varepsilon}-\frac{(\mathcal{P}_{ \mathrm{I}}-\mathcal{P}_{\perp})l^{\mu}l^{\nu}\sigma_{\mu\nu}}{T}\] \[\qquad-\frac{\mathcal{E}^{(1)}u^{\nu}+W^{\nu}_{\perp u}+M\,l^{ \nu}}{4T}\frac{\mathcal{D}_{\nu}\varepsilon}{\varepsilon}\;. \tag{6}\]
The choices \(\pi_{\perp\,\mu\nu}\propto-\sigma_{\perp\,\mu\nu}\) and \(W^{\mu}_{\perp\,l}\propto-\Xi^{\mu\nu}l^{\alpha}\sigma_{\alpha\nu}\) give positive contributions to the first two terms in (6). On the other hand, the third term
\[-\frac{\varepsilon-3P_{\perp}}{4T}\frac{u^{\alpha}{\cal D}_{\alpha} \varepsilon}{\varepsilon} \tag{7}\]
cannot be positive definite since \({\cal D}_{\alpha}\varepsilon\) has no definite sign. Furthermore, the order of derivatives of this term, as given by Eq. (3), depends on the leading order of pressures and could be either first or second order. Thus, positive entropy production in Eq. (6) demands the elimination of the term in Eq. (7) by means of the choice
\[P_{\perp}=P_{l}=P=\frac{\varepsilon}{3}\;. \tag{8}\]
It is worth mentioning that this choice for the pressures applied to Eqs. (2) makes \(u^{\alpha}{\cal D}_{\alpha}\varepsilon\), \(l^{\alpha}{\cal D}_{\alpha}\varepsilon\), and \(\Xi^{\alpha\beta}{\cal D}_{\beta}\varepsilon\) of order \(\partial^{3}\) on-shell. This means that \({\cal D}_{\alpha}\varepsilon={\cal O}(\partial^{3})\) when Eqs. (2) apply. After imposing (8), the term
\[\frac{({\cal P}_{l}-{\cal P}_{\perp})l^{\mu}l^{\nu}\sigma_{\mu\nu}}{T}=\frac{( {\cal P}_{l}^{(1)}-{\cal P}_{\perp}^{(1)})l^{\mu}l^{\nu}\sigma_{\mu\nu}}{T} \tag{9}\]
becomes second order off-shell since \(\pi_{\mu\nu}\) is of first order on- and off-shell. Hence, the first order viscous corrections \({\cal P}_{l}^{(1)}\) and \({\cal P}_{\perp}^{(1)}\) must be chosen in such way that their difference is positive up to second order when Eqs. (2) apply. Subsequently, and by following the prescription outlined in the previous section, the simplest anisotropic extension of the conformal BDNK hydrodynamics of an uncharged fluid is formulated by the energy-momentum tensor
\[T^{\mu\nu} = (\varepsilon+{\cal E}^{(1)})\,u^{\mu}u^{\nu}+({\cal P}_{l}^{(1)} -{\cal P}_{\perp}^{(1)})\,l^{\mu}l^{\nu}+(P+{\cal P}_{\perp}^{(1)})\,\Delta^{ \mu\nu}+2\,M\,u^{(\mu}l^{\,\nu)}+2\,W^{(\mu}_{\perp u}u^{\,\nu)} \tag{10}\] \[+2\,W^{(\mu}_{\perp\,l}l^{\,\nu)}+\pi^{\mu\nu}_{\perp}\;,\]
where its components are written as
\[{\cal E}^{(1)} = \frac{3\chi}{4}\,\frac{u^{\mu}{\cal D}_{\mu}\varepsilon}{ \varepsilon}\;, \tag{11a}\] \[{\cal P}_{l}^{(1)} = \frac{\chi_{l}}{4}\,\frac{u^{\mu}{\cal D}_{\mu}\varepsilon}{ \varepsilon}-2\eta_{ll}l^{\alpha}l^{\beta}\sigma_{\alpha\beta}\;,\] (11b) \[{\cal P}_{\perp}^{(1)} = \frac{\chi_{\perp}}{4}\,\frac{u^{\mu}{\cal D}_{\mu}\varepsilon}{ \varepsilon}+\eta_{ll}l^{\alpha}l^{\beta}\sigma_{\alpha\beta},\] (11c) \[\pi^{\mu\nu}_{\perp} = -2\eta_{\perp}\sigma^{\mu\nu}\;,\] (11d) \[W^{\mu}_{\perp l} = -2\eta_{\perp}\Xi^{\mu}_{l^{\alpha}}\sigma^{\lambda\nu}\;,\] (11e) \[W^{\mu}_{\perp u} = \frac{\lambda_{\perp}}{4}\Xi^{\mu\nu}\frac{{\cal D}_{\nu} \varepsilon}{\varepsilon}\;,\] (11f) \[M = \frac{\lambda_{l}}{4}\,\frac{l^{\nu}{\cal D}_{\nu}\varepsilon}{ \varepsilon}\;. \tag{11g}\]
In the previous expression we have 8 transport coefficients \(\{\chi,\chi_{l},\chi_{\perp},\eta_{l},\eta_{l},\eta_{\perp},\lambda_{l},\lambda _{\perp}\}\) which are proportional to \(\varepsilon^{3/4}\), due to the conformal invariance. The conformal invariance also requires \(\chi_{l}+2\chi_{\perp}=3\chi\) to ensure \(T^{\mu}_{\mu}=0\).
Plugging Eqs. (11) into the EOMs (2) and assuming condition (8), the entropy density is given by \(-u_{\mu}S^{\mu}=(\varepsilon+P)/T+{\cal O}(\partial^{2})\) and the on-shell entropy production is simply
\[\nabla_{\mu}S^{\mu}=\frac{\pi^{\mu\nu}_{\perp}\pi_{\perp\mu\nu}}{2\eta_{\perp} T}+\frac{W^{\mu}_{\perp\,l}W_{\perp\,l\,\mu}}{\eta_{l}T}+\frac{\left({\cal P}_{ \perp}^{(1)}-{\cal P}_{l}^{(1)}\right)^{2}}{3\eta_{ll}T}+{\cal O}(\partial^{3} )\;. \tag{12}\]
We conclude that the on-shell entropy production given by the previous expression is positive up to second order in derivatives if
\[\eta_{\perp}\geq 0\;,\qquad\eta_{ll}\geq 0\;,\qquad\eta_{l}\geq 0\;. \tag{13}\]
### The isotropic conformal limit of BDNK
Once the space-like anisotropic vector \(l\) is chosen, the isotropic limit for the conformal uncharged fluid is reproduced by setting
\[\eta_{\perp}=\eta_{l}=\eta_{ll}=\eta\;,\qquad\chi_{l}=\chi_{\perp}=\chi\;, \qquad\lambda_{\perp}=\lambda_{l}=\lambda\;. \tag{14}\]
If we plug the above condition into (10), the energy-momentum tensor reduces to the one derived in BDNK theory [13], i.e.,
\[T^{\mu\nu}=\left[\varepsilon+\frac{3\chi}{4}\frac{u^{\mu}\mathcal{D}_{\mu} \varepsilon}{\varepsilon}\right]u^{\mu}u^{\nu}+\left[P+\frac{\chi}{4}\frac{u^ {\mu}\mathcal{D}_{\mu}\varepsilon}{\varepsilon}\right]\Delta^{\mu\nu}-2\eta \sigma^{\mu\nu}+\frac{\lambda u^{(\mu}\Delta^{\nu)\alpha}\,\mathcal{D}_{ \alpha}\varepsilon}{2}\;, \tag{15}\]
where \(\chi,\lambda,\eta\propto\varepsilon^{3/4}\). The equivalence between the expressions derived above and the corresponding one in BDNK theory [13] becomes clearer when considering the limit established by Eqs.(14) while writing \(\Delta^{\mu\nu}=\Xi^{\mu\nu}+l^{\mu}l^{\nu}\) in Eq.(15) together with the following identity [23; 24]
\[\sigma_{\mu\nu}=\sigma_{\perp\mu\nu}-\frac{1}{2}\Xi_{\mu\nu}l^{\alpha}l^{ \beta}\sigma_{\alpha\beta}+2\Xi_{(\mu}^{\alpha}l_{\nu)}l^{\beta}\sigma_{ \alpha\beta}+l_{\mu}l^{\alpha}l^{\beta}\sigma_{\alpha\beta}. \tag{16}\]
Note that the traceless components of the energy momentum tensor (15) are \(\sigma_{\perp\mu\nu}\), \(\Xi_{(\mu}^{\alpha}l_{\nu)}l^{\beta}\sigma_{\alpha\beta}\), and the combination \(\Xi_{\mu\nu}l^{\alpha}l^{\beta}\sigma_{\alpha\beta}-2l_{\mu}l^{\alpha}l^{ \beta}\sigma_{\alpha\beta}\). These terms are needed and explain why these multiply the same coefficient \(\eta_{ll}\) in Eqs. (11b) and (11c).
## IV Causality
Now, we shall show that there exist conditions for the transport parameters defined in Eqs. (11) such that the resulting hydrodynamic theory exhibits nonlinear causality. It is worth noting that the EOM \(\nabla_{\nu}T^{\mu\nu}=0\) give rise to a system of quasi-linear partial differential equations (PDE), i.e., PDEs that do not contain products of their highest-order derivative terms. The independent variables in the energy-momentum tensor include the energy density \(\varepsilon\) and the components of the fluid's velocity \(u^{\mu}\), with the anisotropic vector \(l^{\mu}\)4 being chosen accordingly. To investigate causality in quasi-linear systems, we analyze the principal part of the system of equations. This part contains only the terms of the highest order in each variable and determines the order of the partial differential equations in each variable. As an example, a quasi-linear system that is of order one in \(\varepsilon\) and two in \(u^{\mu}\) does not contain terms like \((\partial\varepsilon)^{2}\), \((\partial^{2}u)^{2}\), or \(\partial\varepsilon\,\partial^{2}u\), and its principal part can contain only terms of form \(\partial\varepsilon\) or \(\partial^{2}u\). As explained in [13], we assume \(u^{\mu}\) to have four independent components, with the constraint \(u^{\mu}u_{\mu}=-1\) being imposed at an initial time and being preserved through the fluid's evolution. The EOM is then decomposed in directions parallel and perpendicular to \(u^{\mu}\). This gives rise to 5 equations of the 5 independent variables
Footnote 4: The components of the vector \(l^{\mu}\) are constrained by being a unitary space-like vector \(l^{\mu}l_{\mu}=1\) and being orthogonal to \(u^{\mu}\). For instance, at the earliest stages of a heavy-ion collision, it is common to choose \(u^{\mu}=\gamma(v_{z})(1,0,0,v_{z})\) with \(l^{\mu}=\gamma(v_{z})(v_{z},0,0,1)\)[24].
\[-u_{\mu}\nabla_{\nu}T^{\mu\nu}=0,\quad\Delta_{\alpha}^{\mu}\nabla_{\nu}T^{ \alpha\nu}=0\;, \tag{17}\]
with the constraint \(u^{\mu}u_{\mu}=-1\) being used when required 5. Let us now consider Eqs. (17) together with the constitutive relation (10) for the energy-momentum tensor, and the dissipative fluxes given in (11), coupled with gravity through Einstein's equation
Footnote 5: This is equivalent of considering \(\nabla_{\nu}T^{\mu\nu}=0\) together with the evolution equation for the constraint \(u^{\beta}\nabla_{\beta}[u^{\alpha}\nabla_{\alpha}(u^{\mu}u_{\mu})]=0\)
\[R^{\mu\nu}-\frac{1}{2}g^{\mu\nu}R=8\pi GT^{\mu\nu}\;, \tag{18}\]
and the gauge freedom fixed by assuming the harmonic gauge \(g^{\mu\nu}\Gamma^{\alpha}_{\mu\nu}=0\). As explained in Appendix B, the causality of the system is determined by the vectors \(\xi_{\mu}=\nabla_{\mu}\Phi\) normal to the characteristic hypersurface \(\Phi(x)=0\), which are the roots of the characteristic equation (60). With the roots of the characteristic equation obtained as \(\xi_{0}=\xi_{0}(\xi_{i})\), the system is causal if (60) they are real,
\[\xi_{0}\in\mathbb{R}\;, \tag{19}\]
and (61) \(\xi_{\alpha}=(\xi_{0},\xi_{i})\) is not timelike, i.e.,
\[\xi_{\alpha}\xi^{\alpha}\geq 0\;. \tag{20}\]
In our case, the characteristic equation has 30 roots, of which 20 are light-like and thus causal, arising from pure gravity. The remaining pieces of the characteristic equation are spacelike roots, which we refer to as the matter sector, can be further decomposed into two parts. The first part, which contains four roots, is
\[A^{2}=\left(\lambda_{\perp}a^{2}-\eta_{\perp}v^{2}+\delta\eta_{\perp\downarrow}b ^{2}\right)^{2}=0\;, \tag{21}\]
where
\[\delta\eta_{\perp l}=\eta_{\perp}-\eta_{l}\,\qquad a=u^{\mu}\xi_{\mu}\;,\qquad b= l^{\mu}\xi_{\mu}\;,\qquad v^{\mu}=\Delta^{\mu\nu}\xi_{\mu}\;. \tag{22}\]
The second part which contains the six remaining roots reads as
\[H_{\parallel}^{\parallel}(V,\xi)\left[A^{2}+A\left(U_{1}^{\mu}l_{ \mu}+U_{2}^{\mu}\xi_{\mu}\right)+U_{1}^{\mu}l_{\mu}U_{2}^{\nu}\xi_{\nu}-U_{1}^ {\mu}\xi_{\mu}U_{2}^{\nu}l_{\nu}\right]=0\;. \tag{23}\]
The explicit forms of the terms \(H_{\parallel}^{\parallel}(V,\xi)\), \(U_{1}^{\mu}\), and \(U_{2}^{\mu}\) are calculated explicitly in Appendix B, which for convenience we listed below
\[U_{1}^{\mu}l_{\mu} =b^{2}\delta\eta_{lll\perp}+a^{2}\delta\lambda+\delta\eta_{\perp l }v^{2}+b^{2}\delta\eta_{lll}-\frac{a^{2}b^{2}\delta\lambda(\chi_{\perp}+\lambda _{\perp}+\delta\lambda+\delta\chi)}{4\varepsilon H_{\parallel}^{\parallel}(V, \xi)}\;, \tag{24a}\] \[U_{1}^{\mu}\xi_{\mu} =b^{3}\delta\eta_{ll\perp}+a^{2}b\delta\lambda+(\delta\eta_{ \perp l}+\delta\eta_{ll})bv^{2}-\frac{a^{2}b\delta\lambda[(\chi_{\perp}+ \lambda_{\perp})v^{2}+b^{2}(\delta\lambda+\delta\chi)]}{4\varepsilon H_{ \parallel}^{\parallel}(V,\xi)}\;,\] (24b) \[U_{2}^{\mu}l_{\mu} =b\left(\frac{\delta\chi}{3}+\delta\eta_{lll}\right)+\frac{(\chi_ {\perp}-\eta_{ll})b}{3}-\frac{a^{2}b(\chi+\lambda_{\perp})(\chi_{\perp}+ \lambda_{\perp}+\delta\lambda+\delta\chi)}{4\varepsilon H_{\parallel}^{ \parallel}(V,\xi)}\;,\] (24c) \[U_{2}^{\mu}\xi_{\mu} =b^{2}\left(\frac{\delta\chi}{3}+\delta\eta_{lll}\right)+\frac{( \chi_{\perp}-\eta_{ll})v^{2}}{3}-\frac{a^{2}(\chi+\lambda_{\perp})[(\chi_{ \perp}+\lambda_{\perp})v^{2}+b^{2}(\delta\lambda+\delta\chi)]}{4\varepsilon H _{\parallel}^{\parallel}(V,\xi)}\;,\] (24d) \[H_{\parallel}^{\parallel}(V,\xi) =\frac{3\chi a^{2}+\lambda_{\perp}v^{2}+\delta\lambda\,b^{2}}{4 \varepsilon}\;. \tag{24e}\]
Here, \(\delta\eta_{lll\perp}=4\eta_{l}-3\eta_{ll}-\eta_{\perp}\), \(\delta\eta_{ll}=\eta_{ll}-\eta_{l}\), \(\delta\chi=\chi_{l}-\chi_{\perp}\), and \(\delta\lambda=\lambda_{l}-\lambda_{\perp}\). Because \(\xi_{\mu}\) needs to be spacelike according with Eq. (20), no root with \(v=0\) is allowed. Note that \(v=0\) yields to the following result
\[\xi_{\mu}\xi^{\mu}=(-u^{\mu}u^{\nu}+\Delta^{\mu\nu})\xi_{\mu}\xi_{\nu}=-a^{2}+v ^{2}=-a^{2}<0\;,\]
which clearly violates causality according with Eq. (20). Upon inspection of Eqs. (21) and (23), it is apparent that if either \(\lambda_{\perp}\) or \(\chi\), which give rise to the leading order power in \(a\), are zero, causality violating roots with \(b=v=0\) arise. Building upon this observation and the known isotropic conformal case [13], we can justify the following choice to ensure causality, i.e.
\[\lambda_{\perp}>0\;,\qquad\chi>0\;,\qquad\varepsilon>0\;. \tag{25}\]
We can rewrite \(A\) by writing \(b=l_{\mu}v^{\mu}=v\cos\theta\) (\(0\leq\theta\leq\pi\)) using the Cauchy-Schwarz inequality, as both vectors are orthogonal to \(u^{\mu}\). In this case, \(\Delta^{\mu\nu}\) defines a real inner product among these vectors. This results in the rewriting of \(A\), Eq. (21), as follows
\[A=\lambda_{\perp}(a^{2}-\tau v^{2})\;, \tag{26}\]
where
\[\tau=\frac{\eta_{\perp}-\delta\eta_{\perp l}\cos^{2}\theta}{\lambda_{\perp}}\;. \tag{27}\]
When considering the constraints (25), the roots of \(A=0\) obey (19) and (20) if, and only if 6, \(0\leq\tau<1\)7. Thus, if the conditions (25) hold, the roots of \(A=0\) are causal if, and only if,
Footnote 6: See for instance Ref. [17].
Footnote 7: The equality \(\tau=1\) may be used if the particles are massless.
\[\lambda_{\perp}>\max(\eta_{l},\,\eta_{\perp})\geq 0\;. \tag{28}\]
However, the previous inequality does not lead to a sufficient condition for \(\lambda_{\perp}\) when comparing the isotropic limit of this constraint, i.e., \(\lambda>\eta\), and comparing with the corresponding one derived in the conformal isotropic case [13].
Therefore, it is needed to analyze the roots of (23) as well. In general this analysis becomes cumbersome, and as is shown in Appendix C, it requires to finding the roots of the following polynomial in \(\varrho\equiv a^{2}/v^{2}\)
\[p(\varrho)=\sum_{i=0}^{3}\alpha_{i}\varrho^{i}\;. \tag{29}\]
This polynomial is found by writing Eq. (23) in the generic form given by Eq. (101), with the coefficients \(\alpha_{i}\) to be determined from the aforementioned equality. The causality condition is then stated in the statement 101: Assuming \(\alpha_{3}\) to be positive, causality then requires \(p(\varrho)\) to be positive for \(\varrho\geq 1\) and negative for \(\varrho<0\), while satisfying the following inequality
\[18\alpha_{0}\alpha_{1}\alpha_{2}\alpha_{3}-4\alpha_{2}^{3}\alpha_{0}+\alpha_{ 2}^{2}\alpha_{1}^{2}-4\alpha_{3}\alpha_{1}^{3}-27\alpha_{3}^{2}\alpha_{0}^{2} \geq 0\;. \tag{30}\]
We have explained the application of the generic conditions of statement 101. To simplify the analysis of the causality conditions, it is more instructive to consider specific cases that provide clearer results. From the constitutive relations (11), we realize that the spatial anisotropy appears explicitly in different dissipative fluxes of the energy-momentum tensor: in the scalar sector with \(\delta\chi\neq 0\), in the vector sector with \(\delta\lambda\neq 0\), or in the tensor sector through the differences between 3 different transport parameters, i.e, \(\eta_{\perp}\), \(\eta_{l}\), and \(\eta_{ll}\). If the only source of anisotropy is the scalar sector, then, as stated in statement 101, causality requires the 5 inequalities given by Eqs. (13), together with Eqs. (13) and (28), to be satisfied. The aforementioned conditions (101) are, for example, satisfied by the following choices
\[\lambda=\chi=10\eta\;,\qquad\chi_{\perp}=\delta\chi=15\eta/2\;, \tag{31}\]
with \(\eta_{\perp}=\eta_{\!I}=\eta_{\!I}\), and \(\delta\lambda=0\). Taking the isotropic limit (14), the conditions (101) reduce to the ones of the conformal isotropic BDNK [13], i.e.,
\[\chi>4\eta\;,\qquad\lambda>\frac{3\chi\eta}{\chi-\eta}\;.\] (32a) On the other hand, if the anisotropy is only in the vector sector, and the transport parameters of the tensor sector vanish, i.e., the system is shearless, as stated in statement 103, the condition ( 25 ) is sufficient for causality, without any further constraint on \[\lambda_{l}\].
To check for causality in more general cases, one can use statement 101. For instance, in the example worked out entirely in Appendix D one finds the following conditions that respect causality
\[\eta_{\perp}=\eta\,,\quad\eta_{l}=\frac{2\eta}{3}\,,\quad\eta_{ll}=\frac{5 \eta}{6}\,,\quad\lambda_{\perp}=\frac{13\eta}{2}\,,\quad\lambda_{l}=6\eta\,, \quad\chi=5\eta\,,\quad\chi_{\perp}=\frac{11\eta}{2}\,,\quad\chi_{l}=\frac{1 6\eta}{3}\,,\quad\text{with}\quad\eta>0\;. \tag{33}\]
The proof of linear stability for this choice of parameters may be found in Appendix E.
## V Linear stability
In this section, we study the stability of the linearized EOM for the hydrodynamic theory developed in Secs. II and III. We adopt standard methods [14; 30] and consider small perturbations of the hydrodynamic fields \(\varepsilon\) and \(u^{\mu}\) around a homogeneous background, which corresponds to a global equilibrium with constant hydrodynamic fields in flat space-time. In particular, we assume the energy density to be \(\varepsilon_{0}+\delta\varepsilon(t,x^{i})\), with \(\varepsilon_{0}\) being constant, and the fluid's velocity to be \(u_{0}^{\mu}+\delta u^{\mu}(t,x^{i})\), with \(u_{0}^{\mu}\delta u_{\mu}=\mathcal{O}\big{(}\delta^{2}\big{)}\). The fluid velocity in equilibrium is \(u_{0}^{\mu}=\gamma\left(1,v^{i}\right)\), with \(v^{i}\) being the components of the 3-velocity \(\mathbf{v}\), and \(\gamma=1/\sqrt{1-\mathbf{v}^{2}}\) the Lorentz factor. We then expand the equations of motion up to the first order in perturbations, which are assumed to be in the form of plane waves, i.e., \(\delta\varepsilon(t,x^{i})\to e^{-iT_{0}x^{\mu}k_{\mu}}\delta\varepsilon(k^{ \mu})\) and \(\delta u(t,x^{i})\to e^{-iT_{0}x^{\mu}k_{\mu}}\delta u(k^{\mu})\), where \(k^{\mu}=(i\Gamma,k^{i})\) and the presence of the equilibrium temperature \(T_{0}\) in the exponent makes the modes \(k^{\mu}\) dimensionless. Non-trivial solutions to the EOM may lead to imaginary solutions \(\Gamma=\Gamma(k^{i})\), where linear stability is verified if, and only if, \(\text{Re}(\Gamma)\leq 0\).
The roots \(\Gamma=\Gamma(k^{i})\) that come from the EOM are usually cumbersome, so the analysis becomes very difficult. However, new results show that causality + linear stability in the local rest frame (LRF) leads to linear stability in any boosted frame. This relation has been shown to be true for strongly hyperbolic systems of equations, as demonstrated in Ref.[17]. A recent study by Gavassino [31] demonstrated the validity of this relation for the general case. The author showed that if a mode grows for an observer A, it can be thought of as a parametrization of the time coordinate in that frame. If the theory is causal, this growth should be preserved between frames, otherwise there would be an inversion in the time direction. Accordingly, and since we already have the conditions for causality,
we shall study linear stability in the LRF and, in some specific cases, linear stability in a homogeneous boosted frame (where \(k^{i}=0\) but \(v^{i}\neq 0\)).
We begin by studying linear stability in the LRF, i.e., where \(u_{0}^{\mu}=(1,0,0,0)\) and, as a consequence, \(\delta u^{0}=0\). Furthermore, since \(l\) is orthogonal to \(u\), it can only have spatial components in the LRF, i.e., \(l_{0}^{\mu}=\left(0,l^{i}\right)\), with keeping \(l^{\mu}l_{\mu}=1\) in mind. We note that although \(l^{\mu}\) must also be perturbed in order to preserve the aforementioned orthogonality, its perturbation does not contribute to the dissipative fluxes of (11) up to the first order in perturbations.
Let us define \(\delta\bar{\varepsilon}=\delta\varepsilon/\big{(}\varepsilon+P\big{)}\) and \(\delta\tilde{T}^{\mu\nu}=\delta T^{\mu\nu}/(\varepsilon+P)\), together with the dimensionless quantities \(\bar{\eta}_{l}=\eta_{l}/s\), \(\bar{\eta}_{ll}=\eta_{l}/s\), \(\bar{\eta}_{\perp}=\eta_{\perp}/s\), \(\lambda_{l}=\lambda_{l}/s\), \(\bar{\lambda}_{\perp}=\lambda_{\perp}/s\), \(\bar{\chi}_{\perp}=\chi/s\), \(\bar{\chi}_{l}=\chi_{\perp}/s\), and \(\bar{\chi}_{\perp}=\chi_{\perp}/s\), where \(s=(\varepsilon+P)/T=4\varepsilon/3T\) is the equilibrium entropy density. We may also define \(\bar{k}=l_{i}\bar{k}^{i}\), \(\kappa^{i}=\Xi_{j}^{i}\bar{k}^{j}=\bar{k}^{i}-l^{i}\bar{k}\), and \(\kappa=\sqrt{\kappa_{i}\kappa^{i}}\) in order to perform the decomposition
\[\delta u^{i}=l^{i}\delta u_{L}+\frac{\kappa^{i}}{\kappa}\delta u_{\parallel}+ \delta u_{\perp}^{i},\]
where \(\delta u_{L}=l_{i}\delta u^{i}\), \(\delta u_{\parallel}=\kappa_{i}\delta u^{i}/\kappa\), and \(\delta u_{\perp}^{i}=\Xi_{j}^{i}\delta u^{j}-\kappa^{i}\delta u_{\parallel}\). With that in mind, we obtain the following equations for the modes:
\[\partial_{\nu}\delta\tilde{T}^{0\nu}=M_{11}\delta\bar{\varepsilon }+M_{12}\delta u_{L}+M_{13}\delta u_{\parallel}=0\;, \tag{34a}\] \[l_{i}\partial_{\nu}\delta\tilde{T}^{\mu\nu}=M_{21}\delta\bar{ \varepsilon}+M_{22}\delta u_{L}+M_{23}\delta u_{\parallel}=0\;,\] (34b) \[\frac{\kappa_{i}}{\kappa}\partial_{\nu}\delta\tilde{T}^{i\nu}=M_ {31}\delta\bar{\varepsilon}+M_{32}\delta u_{L}+M_{33}\delta u_{\parallel}=0\;,\] (34c) \[\omega_{j}^{i}\partial_{\nu}\delta\tilde{T}^{j\nu}=\left[\bar{ \lambda}_{\perp}\Gamma^{2}+\Gamma+\bar{\eta}_{\perp}k^{2}+(\bar{\eta}_{l}-\bar {\eta}_{\perp})\bar{k}^{2}\right]\delta u_{\perp}^{i}=0\;, \tag{34d}\]
where \(\omega^{ij}=\Xi^{ij}-\kappa^{i}\kappa^{j}/\kappa^{2}\) is the projector orthogonal to \(\kappa^{i}\) and \(l^{i}\), \(k^{2}=k_{i}k^{i}\), and
\[M_{11} =\bar{\chi}\Gamma^{2}+\Gamma-\frac{k^{2}\bar{\lambda}_{\perp}}{3} +\frac{\bar{k}^{2}}{3}(\bar{\lambda}_{\perp}-\bar{\lambda}_{l})\;, \tag{35a}\] \[M_{12} =i\bar{k}\left[\left(\bar{\lambda}_{l}+\bar{\chi}\right)\Gamma+1 \right]\;,\] (35b) \[M_{13} =i\kappa\left[\left(\bar{\lambda}_{\perp}+\bar{\chi}\right)\Gamma +1\right]\;,\] (35c) \[M_{21} =\frac{i}{3}\bar{k}\left[\left(\bar{\lambda}_{l}+\bar{\chi}_{l} \right)\Gamma+1\right]\;,\] (35d) \[M_{22} =\bar{\lambda}_{l}\Gamma^{2}+\Gamma+\kappa^{2}\bar{\eta}_{l}+ \frac{(4\bar{\eta}_{ll}-\bar{\chi}_{l})\bar{k}^{2}}{3}\;,\] (35e) \[M_{23} =\frac{\bar{k}\kappa}{3}\left(3\bar{\eta}_{l}-\bar{\chi}_{l}-2 \bar{\eta}_{ll}\right)\;,\] (35f) \[M_{31} =\frac{i\kappa}{3}\left[\left(\bar{\chi}_{l}+\bar{\lambda}_{\perp }\right)\Gamma+1\right]\;,\] (35g) \[M_{32} =\frac{\bar{k}\kappa}{3}\left(3\bar{\eta}_{l}-\bar{\chi}_{l}-2 \bar{\eta}_{ll}\right)\;,\] (35h) \[M_{33} =\Gamma^{2}\bar{\lambda}_{\perp}+\Gamma+\frac{k^{2}}{3}\left(- \bar{\chi}_{l}+\bar{\eta}_{ll}+3\bar{\eta}_{\perp}\right)+\frac{\bar{k}}{3} \left(3\bar{\eta}_{l}+\bar{\chi}_{l}-\bar{\eta}_{ll}-3\bar{\eta}_{\perp} \right)\;. \tag{35i}\]
Note that the transverse modes \(\delta u_{\perp}^{i}\) decouple from the rest and give the shear channel polynomial equation
\[\bar{\lambda}_{\perp}\Gamma^{2}+\Gamma+\bar{\eta}_{\perp}k^{2}+(\bar{\eta}_{l}- \bar{\eta}_{\perp})\bar{k}^{2}=0\;. \tag{36}\]
The equations for the longitudinal modes (34a)-(34c) are nontrivial, i.e., give nonzero solutions for the perturbations, when
\[\det(M)=0\;, \tag{37}\]
where \(M=[M_{ij}]_{3\times 3}\). The roots of Eq. (37) give the wave modes of the sound channel. It is worth mentioning that the mode equations in a Lorentz boosted frame are obtained by performing a boost which yields to the following change
\[\bar{k}\to l_{\mu}k^{\mu}\;,\quad k^{2}\to-\gamma^{2}(\Gamma+ik_{i}v^{i})^{2}+ \Gamma^{2}+k^{i}k_{i}\,,\quad\kappa^{2}\to k^{2}-(l_{\mu}k^{\mu})^{2}\;,\quad \Gamma\to\gamma(\Gamma+ik_{i}v^{i})\;. \tag{38}\]
In the boosted frame, \(u^{\mu}=\gamma(1,v^{i})\) and \(l^{\mu}=(\bar{l}^{k}v_{k},\bar{l}^{i})/\sqrt{1-v^{l}\bar{l}_{l}}\), with \(\bar{l}^{i}\) being a unitary 3-vector that coincides with the anisotropic unitary 3-vector \(l^{i}\) in the LRF (\(v=0\)).
In the LRF case, by means of the Cauchy-Schwarz inequality, one may set \(l_{i}k^{i}=kx\), where \(-1\leq x\leq 1\) covers all possible directions of \(k^{i}\). Then, Eq. (36) can be written as
\[\bar{\lambda}_{\perp}\Gamma^{2}+\Gamma+\left[\bar{\eta}_{\perp}\left(1-x^{2} \right)+\bar{\eta}_{l}\right]k^{2}=0. \tag{39}\]
The roots of the above equation have zero or negative real parts if, and only if, \(\bar{\lambda}>0\) and \(\bar{\eta}_{\perp}+x^{2}(\bar{\eta}_{l}-\bar{\eta}_{\perp})\geq 0\). Both conditions are guaranteed by the constraints (25) and (28) because \(1-x^{2}\geq 0\), and we have that either \(\bar{\eta}_{\perp}>\bar{\eta}\geq 0\) or \(\bar{\eta}>\bar{\eta}_{\perp}\geq 0\). Hence, Eqs. (25) and (28) lead to the linear stability of the shear channel in the LRF.
Now, let us consider (39) in the boosted homogeneous case, i.e., with vanishing \(k^{i}\). By employing again the Cauchy-Schwarz inequality, we can write \((l_{\mu}k^{\mu})^{2}=(\Delta^{\mu\nu}k_{\mu}k_{\nu})y^{2}\), with \(-1\leq y\leq 1\), and then apply (38) to obtain
\[\Gamma\left[\left(\bar{\lambda}_{\perp}-(1-y^{2})v^{2}\bar{\eta}_{\perp}-y^{2 }v^{2}\bar{\eta}_{l}\right)\gamma\Gamma+1\right]=0\;, \tag{40}\]
which has roots for all \(v^{2}\in[0,1)\) with nonpositive real parts if \(\bar{\lambda}_{\perp}\geq\bar{\eta}_{\perp}\left(1-y^{2}\right)+\bar{\eta}_{l }y^{2}\) and \(\bar{\lambda}_{\perp}>0\). Both inequalities are guaranteed by the causality condition (28) and Eq. (25), according to which \(\bar{\eta}_{\perp}\left(1-y^{2}\right)+\bar{\eta}y^{2}\leq\max(\bar{\eta}_{l},\bar{\eta}_{\perp})<\bar{\lambda}_{\perp}\). In this condition we clearly see the connection between stability in any frame and causality+stability in the LRF [31].
The polynomial for the sound channel in (37) is of power six, with complicated coefficients that depend on all transport parameters as well as \(\bar{k}\) and \(k^{2}\). The analysis of this type of polynomials is extremely complex, so it is more convenient and better to examine each set of parameters separately. As an example, the stability of the causal set of parameters (33) is demonstrated in Appendix E.
## VI Bjorken flow
In this section, we apply the formalism developed in this work to the case of Bjorken flow, i.e., when the fluid's velocity is \(u^{\mu}=(1,0,0,0,)\) in the so-called Milne coordinates \((\tau,x,y,\xi)\), which are related to the usual Minkowski coordinates via \(\tau=\sqrt{t^{2}-z^{2}}\) and \(\xi=\frac{1}{2}\,\log[(t+z)/(t-z)]\). From a mathematical perspective, one can choose the spacelike anisotropic vector to be in any of the \(x\), \(y\), and \(\eta\) directions. However, the physical picture of the heavy ion collisions suggests
\[l^{\mu}=\frac{1}{\tau}\,\left(0,0,0,1\right)\,, \tag{41}\]
to be the right choice. Alternatively, other choices may be possible, but one should be cautious of potential non-physical issues when making such a choice, as illustrated in Appendix F. By equating \(u\) and \(l\) into Eqs. (11) and using the Bjorken symmetries, the dissipative fluxes reduce to
\[\mathcal{E}^{(1)} =\tilde{\chi}^{3}T^{3}\left(\frac{1}{\tau}\,+\frac{3\dot{T}}{T} \right)\,, \tag{42a}\] \[\mathcal{P}_{l}^{(1)} =T^{3}\left(\frac{\tilde{\chi}_{l}-4\tilde{\eta}_{ll}}{3\tau}+ \frac{3\tilde{\chi}_{l}\dot{T}}{T}\right)\,,\] (42b) \[\mathcal{P}_{\perp}^{(1)} =T^{3}\left(\frac{\tilde{\chi}_{\perp}+2\tilde{\eta}_{ll}}{3\tau }+\frac{3\tilde{\chi}_{\perp}\dot{T}}{T}\right)\,, \tag{42c}\]
and
\[\pi_{\perp}^{\mu\nu}=0\,,\qquad W_{\perp l}^{\mu}=0\,,\qquad W_{\perp u}^{\mu }=0\,,\qquad M=0\,. \tag{43}\]
Namely, anisotropy only appears in the scalar sector with three independent relevant transport parameters, \(\chi=\tilde{\chi}T^{3}\), \(\chi_{\perp}=\tilde{\chi}_{\perp}T^{3}\), and \(\eta_{ll}=\tilde{\eta}_{ll}T^{3}\). Recall that in the isotropic case, the relevant transport parameters for the Bjorken flow are shear viscosity \(\eta\), and \(\chi\)[32; 13]. Similar to the cases of IS [33] and isotropic conformal BDNK, the fluid's evolution is governed by only one equation,
\[9\tilde{\chi}\frac{\tau^{2}\tilde{T}}{T}+18\tilde{\chi}\frac{\tau^{2}\dot{T}^ {2}}{T^{2}}+\left(\frac{3\tau(9\tilde{\chi}-\tilde{\chi}_{\perp})}{T}+12\tau^ {2}\right)\dot{T}+4\tau T+3\tilde{\chi}-2\tilde{\chi}_{\perp}-4\tilde{\eta}_{ ll}=0\,. \tag{44}\]
At late times, the solution to the previous equation can be written as a power series,
\[T=\frac{\Lambda}{(\Lambda\tau)^{1/3}}\left(1-\frac{\tilde{\eta}_{ll}}{2( \Lambda\tau)^{2/3}}-\frac{\tilde{\eta}_{ll}(\tilde{\chi}_{l}+5\tilde{\chi}_{ \perp})}{24(\Lambda\tau)^{2/3}}+\cdots\right)\,, \tag{45}\]
where \(\Lambda\) is a constant with energy dimensions. We clarify that when we refer to late times, we mean \(\Lambda\tau\gg 1\). One may assume that up to the first-order, the power series solution is equal to the one of the isotropic conformal case, to reduce the number of free parameters. Such an assumption gives rise to \(\eta_{ll}=\eta\).
To gain more insight into the physical implications of (44), especially in the far from equilibrium regime, we assume the dimensionless parameters \(w=T\tau\)8 and \(f(w)=\frac{\pi}{w}\frac{dw}{d\tau}\)[33]. The EOM (44) reduces to a first-order nonlinear differential equation,
Footnote 8: The variable \(w\) is proportional to the inverse Knudsen number for the conformal case.
\[\frac{9\tilde{\chi}}{4}f(w)^{2}+wf(w)\left(1+\frac{3}{4}\tilde{\chi}f^{\prime} (w)\right)-\frac{6\tilde{\chi}+\tilde{\chi}_{\perp}}{2}f(w)+\frac{3\tilde{ \chi}+\tilde{\chi}_{\perp}-\tilde{\eta}_{ll}}{3}-\frac{2w}{3}=0\,. \tag{46}\]
The late-time expansion (45) is written in terms of the variable \(w\) as
\[f(w)=\frac{2}{3}+\frac{\tilde{\eta}_{ll}}{3w}+\frac{\tilde{\eta}_{ll}(\tilde {\chi}_{\perp}+\tilde{\chi}_{\perp})}{18w^{2}}+\mathcal{O}\!\left(\frac{1}{w^ {3}}\right), \tag{47}\]
which is valid when \(w\gg 1\). The pressure anisotropy, \(\mathcal{A}=\left(\mathcal{P}_{\perp}-\mathcal{P}_{l}\right)/P\), in the isotropic conformal BDNK, and in contrast to IS theory, is purely determined by shear viscosity and does not depend on \(f\). In the anisotropic case, the situation is different because \(\mathcal{A}\) receives contribution from \(f(w)\),
\[\mathcal{A}=\frac{2}{3}\frac{\tilde{\chi}_{\perp}-\tilde{\chi}_{l}}{w}\left(f -\frac{2}{3}\right)+\frac{6\tilde{\eta}_{ll}}{w}\,. \tag{48}\]
Note that at late times \(f(w)\sim 2/3+\mathcal{O}\!\left(w^{-1}\right)\), so both the isotropic and anisotropic first-order BDNK theories share the same forward attractor. This is expected, since asymptotically the system relaxes towards thermal equilibrium. Following Ref. [33], we assume a correction to the first-order on-shell terms in (47),
\[f(w)=\frac{2}{3}+\frac{\tilde{\eta}_{ll}}{3w}+\delta f(w)\,. \tag{49}\]
Substituting the previous expression into Eq. (46), assuming \(\delta f\ll f\), and expanding in \(1/w\) around \(w\to\infty\), we obtain
\[\delta f(w)\sim\exp\!\left(-\frac{2w}{\tilde{\chi}}\right)w^{\frac{\tilde{ \eta}_{ll}+\tilde{\chi}_{\perp}}{\tilde{\chi}}}\,. \tag{50}\]
Since \(\chi>0\) due to causality, at late times the perturbation (50) decays faster than the perturbative terms of the late-time expansion. The relation between the coefficients in the above form and the analytical structure of the Borel resummation might be of interest, but it will not be discussed here.
Equation (46) can also be studied for the existence of pullback attractors. Following the transasymptotic and dynamical systems methods outlined in Refs. [34; 35; 36; 37], one can show that the initial value which gives rise to the attractor solution is found by expanding (46) around \(w=0\),
\[f(w\ll 1)=\frac{7}{9}-\frac{\tilde{\chi}-\tilde{\chi}_{\perp}}{9\tilde{\chi}}+ \frac{\sqrt{12\tilde{\eta}_{ll}\tilde{\chi}+\tilde{\chi}_{\perp}^{2}}}{9\tilde {\chi}}\,. \tag{51}\]
In the following we use the slow-roll approximation [33]. Namely, we assume \(|f^{\prime}|\) to be much smaller than \(|f|\) and expand (46) in terms of \(f^{\prime}/f\) to obtain an algebraic equation. This leads to two solutions for \(f(w)\); we expand them around \(w\to\infty\) and compare with (47) to identify the stable one,
\[f(w)_{\text{slowroll}}=\frac{7}{9}-\frac{\tilde{\chi}-\tilde{\chi}_{\perp}}{9 \tilde{\chi}}-\frac{2w}{9\tilde{\chi}}+\frac{\sqrt{\left(2w-\tilde{\chi}_{ \perp}\right)^{2}+12\tilde{\eta}_{ll}\tilde{\chi}}}{9\tilde{\chi}}\,. \tag{52}\]
If \(f\) exceeds \(1\) at early times, the fluid experiences reheating. To prevent reheating we must have
\[\chi_{l}>4\eta_{ll}>0\,, \tag{53}\]
which can be shown to be equivalent to the condition for stability and causality, and reduces to \(\chi>4\eta\) in the isotropic limit. Therefore, causality forbids reheating in the Bjorken model discussed in this section.
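This can be checked numerically: evaluating (51), which coincides with the slow-roll curve (52) at \(w=0\), for the causal set (33) with \(\eta=1\) gives \(f(0)\approx 0.99<1\). A sketch assuming numpy:

```python
# A sketch (numpy assumed) evaluating the early-time value (51) and the
# slow-roll attractor (52) for the causal set (33) with eta = 1.
import numpy as np

eta = 1.0
eta_ll, chi, chi_perp = 5*eta/6, 5*eta, 11*eta/2

def f_slowroll(w):
    return (7/9 - (chi - chi_perp)/(9*chi) - 2*w/(9*chi)
            + np.sqrt((2*w - chi_perp)**2 + 12*eta_ll*chi)/(9*chi))

f0 = (7/9 - (chi - chi_perp)/(9*chi)
      + np.sqrt(12*eta_ll*chi + chi_perp**2)/(9*chi))
print(np.isclose(f0, f_slowroll(0.0)))   # (51) equals (52) at w = 0
print(f0 < 1.0)                          # f(0) < 1: no early-time reheating
```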
Finally, we might consider an off-shell, or nonphysical, entropy current \(S_{\text{off}}^{\mu}\), that is, a current determined from (4) without imposing the power counting discussed in Sec. III. If causal and stable parameters are chosen for the attractor solution (52), \(\nabla\cdot S_{\text{off}}\) is negative at early times: it approaches zero rapidly and then changes sign. However, if causality and stability are violated, the divergence is always positive.
## VII Conclusions
In this work, we explore the constraints imposed by causality on anisotropic hydrodynamics. Our approach involves deriving the simplest and most general form of the first-order anisotropic energy-momentum tensor, based on invariance under the little group \(SO(2)\) and the second law of thermodynamics. We demonstrate that our theory is nonlinearly causal, whether or not it is coupled to gravity, and that it is stable in the linearized regime. It is worth mentioning that causality is not only at the heart of any relativistic theory but is also essential for the well-posedness of a system of relativistic covariant equations, since it must hold in any Lorentz frame. Furthermore, we show that the standard isotropic BDNK theory can be recovered as a limit of our novel approach.
We verify linear stability in the local rest frame (LRF) and use causality to apply the recent result obtained by Gavassino [31], which guarantees that linear stability in the LRF, together with causality, implies linear stability in any boosted frame. We illustrate this stability-causality connection by verifying these results in the homogeneous boosted frame. For the specific case of Bjorken flow, we investigate the causality conditions necessary to ensure the existence of forward and pullback attractors. Our findings reveal that the behavior of these attractors at early and late times is constrained by the causality conditions. Violation of these conditions results in a reheating effect, i.e., an increase of the temperature above its initial value at very early times.
There are several potential avenues for future research that can build on our work. Firstly, it would be valuable to investigate how our novel anisotropic theory can be derived from a coarse-graining approach, as is typically done in relativistic kinetic theory. Additionally, extending our analysis to the most general non-conformal case for both charged and uncharged fluids, with and without gravity, could have significant implications in other fields such as cosmology [38]. Our work may also impact the development of a BDNK-type theory for resistive dissipative magnetohydrodynamics [39; 40], where the magnetic field serves a similar purpose to the anisotropy vector \(l^{\mu}\).
Finally, while our approach has been successful in analyzing the near-equilibrium regime, it would be valuable to extend our techniques to extreme far-from-equilibrium dynamics. This could provide valuable insights into how causality affects the space-time evolution of such systems. These fascinating research problems require further investigation in the future.
###### Acknowledgements.
M. S. was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the Collaborative Research Center CRC-TR 211 "Strong-interaction matter under extreme conditions" - project number 315477589 - TRR 211 and by the State of Hesse within the Research Cluster ELEMENTS (Project ID 500/10.006). M. M. was supported in part by the US Department of Energy Grant No. DE-FG02-03ER41260 and BEST (Beam Energy Scan Theory) DOE Topical Collaboration.
## Appendix A Anisotropic free energy density
In this appendix, we present the thermodynamic relations of an anisotropic fluid. To this end, let us assume the leading-order generating functional of the fluid to be [41; 42]
\[W_{0}=\int\mathrm{d}^{4}x\,\sqrt{-g}\mathcal{F}(T,\ell)\,. \tag{10}\]
Here, \(\mathcal{F}\) is the free energy that is to be determined and \(\ell=\sqrt{g_{\mu\nu}l^{\mu}l^{\nu}}\), which is set to unity at the end 9. The energy-momentum tensor can then be derived from
Footnote 9: \(\ell\) is similar to the parameter \(\xi\) for the superfluid case in [41]. Note that, \(\ell\) varies only because of the metric variation.
\[T^{\mu\nu}=\frac{2}{\sqrt{-g}}\frac{\delta W_{0}}{\delta g_{\mu\nu}}\,,\qquad\delta W_{0}=\int\mathrm{d}^{4}x\left(\mathcal{F}\,\delta\sqrt{-g}+\sqrt{-g}\left(\frac{\partial\mathcal{F}}{\partial T}\right)_{\ell}\delta T+\sqrt{-g}\left(\frac{\partial\mathcal{F}}{\partial\ell}\right)_{T}\delta\ell\right). \tag{11}\]
The variations of the temperature and of \(\ell\) are
\[\delta T=\frac{1}{2}\,Tu^{\mu}u^{\nu}\delta g_{\mu\nu}\,,\qquad\delta\ell= \frac{1}{2}\,l^{\mu}l^{\nu}\delta g_{\mu\nu}\,.\]
Plugging the above relations into (101), we obtain
\[T_{0}^{\mu\nu}=\left(-\mathcal{F}+T\left(\frac{\partial\mathcal{F}}{\partial T} \right)\Big{|}_{\ell}\right)u^{\mu}u^{\nu}+\left(\frac{\partial\mathcal{F}}{ \partial\ell}\right)\Big{|}_{T}l^{\mu}l^{\nu}+\mathcal{F}\Delta^{\mu\nu}\,. \tag{102}\]
The pressure and longitudinal pressure are defined as
\[P\equiv\frac{1}{3}\,\Delta_{\mu\nu}T^{\mu\nu}=\mathcal{F}+\frac{1}{3}\,\left( \frac{\partial\mathcal{F}}{\partial\ell}\right)_{T}\,,\qquad P_{l}\equiv l_{ \mu}l_{\nu}T^{\mu\nu}=\mathcal{F}+\left(\frac{\partial\mathcal{F}}{\partial \ell}\right)_{T}\,.\]
Plugging (102) into the above, we recognize the free energy,
\[\mathcal{F}=\frac{3P-P_{l}}{2}\equiv P_{\perp}\,. \tag{103}\]
After identifying the free energy with the transverse pressure, one can observe the relations between thermodynamic quantities,
\[\epsilon\equiv u_{\mu}u_{\nu}T^{\mu\nu}=-P_{\perp}+T\left(\frac{\partial P_{ \perp}}{\partial T}\right)_{\ell}\,,\qquad\left(\frac{\partial P_{\perp}}{ \partial\ell}\right)_{T}=P_{l}-P_{\perp}\,. \tag{104}\]
The first equation above is Euler's equation for the anisotropic fluid that determines its entropy density,
\[s=\frac{\epsilon+\mathcal{F}}{T}=\left(\frac{\partial P_{\perp}}{\partial T }\right)_{\ell}\,. \tag{105}\]
From the above definition of the entropy density and (104), we find
\[\mathrm{d}\epsilon=-\left(P_{l}-P_{\perp}\right)\mathrm{d}\ell+T\,\mathrm{d}s\,,\qquad\mathrm{d}P_{\perp}=\left(P_{l}-P_{\perp}\right)\mathrm{d}\ell+s\,\mathrm{d}T\,.\]
At this point, we set \(\ell=1\) to find the Gibbs-Duhem relation and the first law of thermodynamics for the anisotropic fluid
\[\mathrm{d}\epsilon=T\,\mathrm{d}s\qquad\mathrm{d}P_{\perp}=s\,\mathrm{d}T. \tag{106}\]
Finally, the entropy current is found from the covariant form of (105)
\[Ts_{\mu}=P_{\perp}u_{\mu}-u^{\nu}T_{\mu\nu}\,. \tag{107}\]
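The thermodynamic relations of this appendix hold for an arbitrary anisotropic free energy; a short symbolic check, assuming sympy and treating \(\mathcal{F}(T,\ell)\) as an unspecified function:

```python
# A sketch (sympy assumed) verifying Euler's relation and the differential
# identities of this appendix for an arbitrary free energy F(T, ell) = P_perp.
import sympy as sp

T, ell = sp.symbols('T ell', positive=True)
F = sp.Function('F')(T, ell)          # free energy, identified with P_perp

s = sp.diff(F, T)                     # entropy density, Eq. (105)
eps = -F + T*sp.diff(F, T)            # Euler's relation, Eq. (104)
Pl_minus_Pperp = sp.diff(F, ell)      # (dP_perp/d ell)_T = P_l - P_perp

# d eps = T ds - (P_l - P_perp) d ell, checked coefficient by coefficient
print(sp.simplify(sp.diff(eps, T) - T*sp.diff(s, T)))                      # 0
print(sp.simplify(sp.diff(eps, ell) - T*sp.diff(s, ell) + Pl_minus_Pperp)) # 0
```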
## Appendix B Details of causality analysis
In this appendix we give the mathematical details of causality analysis that are not presented in Sec. IV. The system of equations arising from (17) and (18) can be written as 10
Footnote 10: One may notice that causality is blind to the leading order anisotropy of pressure since it does not appear in the principal part of the system of PDEs.
\[\frac{3\chi u^{\alpha}u^{\beta}+\lambda_{\perp}\Delta^{\alpha \beta}+\delta\lambda\,l^{\alpha}l^{\beta}}{4\varepsilon}\partial_{\alpha} \partial_{\beta}\varepsilon+\left[(\chi+\lambda_{\perp})u^{(\alpha}\delta_{ \nu}^{\beta)}+\delta\lambda\,l^{(\alpha}u^{\beta)}l_{\nu}\right]\partial_{ \alpha}\partial_{\beta}u^{\nu}\] \[+h_{a}^{\parallel,\,\alpha\beta}(\varepsilon,u,g)\partial_{\alpha }\partial_{\beta}g_{a}=b^{\parallel}(\partial\varepsilon,\partial u,\partial g )\;, \tag{108a}\] \[\frac{(\chi_{\perp}+\lambda_{\perp})\Delta^{\mu(\alpha}u^{\beta)} +(\delta\lambda+\delta\chi)\,l^{\mu}l^{(\alpha}u^{\beta)}}{4\varepsilon} \partial_{\alpha}\partial_{\beta}\varepsilon+C_{\nu}^{\mu\alpha\beta}\partial_ {\alpha}\partial_{\beta}u^{\nu}+h_{a}^{\mu,\,\alpha\beta}(\varepsilon,u,g) \partial_{\alpha}\partial_{\beta}g_{a}\] \[=b^{\mu}(\partial\varepsilon,\partial u,\partial g)\;,\] (108b) \[g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}g_{a}=b_{a}( \partial\varepsilon,\partial u,\partial g)\;, \tag{108c}\]
where there is an implicit sum over repeated \(a,b=\mu\nu\), with \(\mu\leq\nu\), over the values 00, 01, 02, 03, 11, 12, 13, 22, 23, and 33. Furthermore, \(\delta\lambda=\lambda_{l}-\lambda_{\perp}\), \(\delta\chi=\chi_{l}-\chi_{\perp}\), and we have defined
\[C_{\nu}^{\mu\alpha\beta}\equiv\delta\eta_{ll\perp}\,l^{\mu}l^{ \beta}l^{\alpha}l_{\nu}+\left(\frac{\delta\chi}{3}+\delta\eta_{ll\perp}\right) \,l^{\mu}l^{(\beta}\delta_{\nu}^{\alpha)}+\delta\eta_{ll\perp}\,\Delta^{\mu( \beta}l^{\alpha)}l_{\nu}+\frac{(\chi_{\perp}-\eta_{ll})\Delta^{\mu(\beta} \delta_{\nu}^{\alpha)}}{3}\] \[+\delta\lambda\,l^{\mu}l_{\nu}u^{\alpha}u^{\beta}+\delta\eta_{ \perp l}\Delta^{\alpha\beta}l^{\mu}l_{\nu}+\left[\lambda_{\perp}u^{\alpha}u^{ \beta}-\eta_{\perp}\Delta^{\alpha\beta}+\delta\eta_{\perp l}l^{\alpha}l^{ \beta}\right]\delta_{\nu}^{\mu}\;, \tag{109}\]
wherein \(\delta\eta_{ll\perp}=4\eta_{l}-3\eta_{ll}-\eta_{\perp}\), \(\delta\eta_{ll}=\eta_{ll}-\eta_{l}\), and \(\delta\eta_{\perp l}=\eta_{\perp}-\eta_{l}\). We may rewrite (14) in the matrix form
\[H_{J}^{I}(V,\partial)V^{J}+b^{I}=0, \tag{15}\]
where \(I,J\) take the values \(\parallel\), \(\mu=0,1,2,3\), and \(a=00,01,\cdots,33\), \(V^{I}=(\varepsilon,u^{\nu},g_{b})\),
\[H(V,\partial)_{J}^{I}=h(V)_{J}^{I,\,\alpha\beta}\partial_{\alpha}\partial_{\beta}\]
is a \(15\times 15\) matrix linear operator, and
\[h_{\parallel}^{\parallel,\,\alpha\beta} =\frac{3\chi u^{\alpha}u^{\beta}+\lambda_{\perp}\Delta^{\alpha\beta}+\delta\lambda\,l^{\alpha}l^{\beta}}{4\varepsilon}, \tag{16a}\] \[h_{\nu}^{\parallel,\,\alpha\beta} =(\chi+\lambda_{\perp})u^{(\alpha}\delta_{\nu}^{\beta)}+\delta\lambda\,l^{(\alpha}u^{\beta)}l_{\nu},\] (16b) \[h_{\parallel}^{\mu,\,\alpha\beta} =\frac{(\chi_{\perp}+\lambda_{\perp})\Delta^{\mu(\alpha}u^{\beta)}+(\delta\lambda+\delta\chi)\,l^{\mu}l^{(\alpha}u^{\beta)}}{4\varepsilon},\] (16c) \[h_{\nu}^{\mu,\,\alpha\beta} =C_{\nu}^{\mu\alpha\beta},\] (16d) \[h_{b}^{a,\,\alpha\beta} =\delta_{b}^{a}g^{\alpha\beta},\text{ where }\delta_{b}^{a}=\delta_{\lambda}^{\mu}\delta_{\sigma}^{\nu}\text{ when }a=\mu\nu\text{ and }b=\lambda\sigma,\] (16e) \[h_{\parallel}^{a,\,\alpha\beta} =h_{\nu}^{a,\,\alpha\beta}=0. \tag{16f}\]
The remaining expressions \(h_{b}^{\mu,\,\alpha\beta}(V)\) and \(b^{I}(\partial V)\) are irrelevant to what follows. The principal part of each equation \(I\) is contained in \(H(V,\partial)_{J}^{I}V^{J}\). It is worth mentioning that all terms \(b^{I}(\partial V)\) are functions of at most the first-order derivatives of the variables \(V^{I}=(\varepsilon,u^{\nu},g_{a})\), with products containing first-order derivatives such as \((\partial V^{K})^{n}\), \((\partial V^{K})^{n}(\partial V^{L})^{m}\), among others, being allowed. On the other hand, \(h_{J}^{I,\,\alpha\beta}(V)\) are functions of the variables \(V^{I}\) only and not of their derivatives. Therefore, this is a quasi-linear PDE system and the usual tools to compute causality apply. The characteristic surfaces \(\{\Phi(x)=0\}\) are determined by the principal part of the equations by solving the characteristic equation \(\det\bigl{[}H(V,\xi)\bigr{]}=0\), with \(\xi_{\mu}=\nabla_{\mu}\Phi\)[43; 44]. Note that the components of the matrix \(H(V,\xi)\) are \(H_{J}^{I}(V,\xi)=h(V)_{J}^{I,\,\alpha\beta}\xi_{\alpha}\xi_{\beta}\). The system is causal if, for any given real \(\xi_{i}\), (i) the roots \(\xi_{0}=\xi_{0}(\xi_{i})\) of the characteristic equation are real and (ii) \(\xi_{\alpha}=(\xi_{0}(\xi_{i}),\xi_{i})\) is spacelike or lightlike, i.e., \(\xi_{\mu}\xi^{\mu}\geq 0\). Condition (ii) guarantees that the hypersurfaces \(\{\Phi(x)=0\}\) are timelike or lightlike, ensuring that there is no superluminal propagation of information. 11
Footnote 11: Note that the matrix \(H(V^{k},\xi)\) is invertible only when \(\xi\) is timelike, i.e., solutions are only possible over spacelike or lightlike hypersurfaces \(\Phi\).
We may now compute the characteristic equation, for which we must compute the determinant of the matrix \(H(V,\xi)\) that reads
\[\det[H(V,\xi)] =\det\begin{bmatrix}H_{\parallel}^{\parallel}(V,\xi)&H_{\nu}^{ \parallel}(V,\xi)&H_{b}^{\parallel}(V,\xi)\\ H_{\parallel}^{\mu}(V,\xi)&H_{\nu}^{\mu}(V,\xi)&H_{b}^{\mu}(V,\xi)\\ 0_{10\times 1}&0_{10\times 4}&\xi_{\mu}\xi^{\mu}I_{10}\end{bmatrix}\] \[=(\xi_{\mu}\xi^{\mu})^{10}M\;, \tag{17}\]
where
\[M=\det\begin{bmatrix}H_{\parallel}^{\parallel}(V,\xi)&H_{\nu}^{ \parallel}(V,\xi)\\ H_{\parallel}^{\mu}(V,\xi)&H_{\nu}^{\mu}(V,\xi)\end{bmatrix}, \tag{18}\]
wherein \(I_{10}\) is the \(10\times 10\) identity matrix and \(0_{m\times n}\) is the \(m\times n\) null matrix. Out of the 30 roots of the characteristic equation, 20 are the real roots \(\xi_{\mu}\xi^{\mu}=0\) coming from the pure gravity sector and are thus causal. These are expected to be lightlike roots, since pure gravity fields are massless. If matter, rather than pure radiation (which is permitted for a conformal fluid), were treated, the remaining roots would be expected to be spacelike. As for the matter sector, let us define \(a=u^{\mu}\xi_{\mu}\), \(b=l^{\mu}\xi_{\mu}\), \(v^{\mu}=\Delta^{\mu\nu}\xi_{\nu}\), and \(v=\sqrt{\xi_{\mu}\xi_{\nu}\Delta^{\mu\nu}}\). Then, from (18) one obtains that
\[M=H_{\parallel}^{\parallel}(V,\xi)\det\left[H_{\nu}^{\mu}(V,\xi)-\frac{H_{\parallel}^{\mu}(V,\xi)H_{\nu}^{\parallel}(V,\xi)}{H_{\parallel}^{\parallel}(V,\xi)}\right]\] \[=H_{\parallel}^{\parallel}(V,\xi)\det\left[A\delta_{\nu}^{\mu}+U_{1}^{\mu}l_{\nu}+U_{2}^{\mu}\xi_{\nu}\right]\] \[=A^{2}H_{\parallel}^{\parallel}(V,\xi)\Bigl{[}A^{2}+A\left(U_{1}^{\mu}l_{\mu}+U_{2}^{\mu}\xi_{\mu}\right)+U_{1}^{\mu}l_{\mu}U_{2}^{\nu}\xi_{\nu}-U_{1}^{\mu}\xi_{\mu}U_{2}^{\nu}l_{\nu}\Bigr{]}\;, \tag{19}\]
where
\[A =\lambda_{\perp}a^{2}-\eta_{\perp}v^{2}+\delta\eta_{\perp l}b^{2}\;, \tag{11a}\] \[U_{1}^{\mu} =b^{2}\delta\eta_{ll\perp}\,l^{\mu}+a^{2}\delta\lambda l^{\mu}+\delta\eta_{\perp l}v^{2}l^{\mu}+b\delta\eta_{ll}\,v^{\mu}-\frac{ab\delta\lambda\,H_{\parallel}^{\mu}(V,\xi)}{H_{\parallel}^{\parallel}(V,\xi)}\;,\] (11b) \[U_{2}^{\mu} =b\left(\frac{\delta\chi}{3}+\delta\eta_{ll}\right)\,l^{\mu}+\frac{(\chi_{\perp}-\eta_{ll})v^{\mu}}{3}-\frac{a(\chi+\lambda_{\perp})H_{\parallel}^{\mu}(V,\xi)}{H_{\parallel}^{\parallel}(V,\xi)}\;, \tag{11c}\]
In particular, let us write explicitly
\[U_{1}^{\mu}l_{\mu} =b^{2}\delta\eta_{ll\perp}+a^{2}\delta\lambda+\delta\eta_{\perp l}v^{2}+b^{2}\delta\eta_{ll}-\frac{a^{2}b^{2}\delta\lambda(\chi_{\perp}+\lambda_{\perp}+\delta\lambda+\delta\chi)}{4\varepsilon H_{\parallel}^{\parallel}(V,\xi)}\;, \tag{12a}\] \[U_{1}^{\mu}\xi_{\mu} =b^{3}\delta\eta_{ll\perp}+a^{2}b\delta\lambda+(\delta\eta_{\perp l}+\delta\eta_{ll})bv^{2}-\frac{a^{2}b\delta\lambda[(\chi_{\perp}+\lambda_{\perp})v^{2}+b^{2}(\delta\lambda+\delta\chi)]}{4\varepsilon H_{\parallel}^{\parallel}(V,\xi)}\;,\] (12b) \[U_{2}^{\mu}l_{\mu} =b\left(\frac{\delta\chi}{3}+\delta\eta_{ll}\right)+\frac{(\chi_{\perp}-\eta_{ll})b}{3}-\frac{a^{2}b(\chi+\lambda_{\perp})(\chi_{\perp}+\lambda_{\perp}+\delta\lambda+\delta\chi)}{4\varepsilon H_{\parallel}^{\parallel}(V,\xi)}\;,\] (12c) \[U_{2}^{\mu}\xi_{\mu} =b^{2}\left(\frac{\delta\chi}{3}+\delta\eta_{ll}\right)+\frac{(\chi_{\perp}-\eta_{ll})v^{2}}{3}-\frac{a^{2}(\chi+\lambda_{\perp})[(\chi_{\perp}+\lambda_{\perp})v^{2}+b^{2}(\delta\lambda+\delta\chi)]}{4\varepsilon H_{\parallel}^{\parallel}(V,\xi)}\;,\] (12d) \[H_{\parallel}^{\parallel}(V,\xi) =\frac{3\chi a^{2}+\lambda_{\perp}v^{2}+\delta\lambda\,b^{2}}{4\varepsilon}\;. \tag{12e}\]
Thus, the matter sector contains 10 overall roots that must obey conditions (i) and (ii), with 4 coming from
\[A^{2}=\left(\lambda_{\perp}a^{2}-\eta_{\perp}v^{2}+\delta\eta_{\perp l}b^{2}\right)^{2}=0\;, \tag{13}\]
and the remaining 6 from 12
Footnote 12: Note that in the product \(H_{\parallel}^{\parallel}(V,\xi)\left(U_{1}^{\mu}l_{\mu}U_{2}^{\nu}\xi_{\nu}-U_ {1}^{\mu}\xi_{\mu}U_{2}^{\nu}l_{\nu}\right)\), the term with denominator \(H_{\parallel}^{\parallel}(V,\xi)\) cancels as expected.
\[H_{\parallel}^{\parallel}(V,\xi)\left[A^{2}+A\left(U_{1}^{\mu}l_{\mu}+U_{2}^{ \mu}\xi_{\mu}\right)+U_{1}^{\mu}l_{\mu}U_{2}^{\nu}\xi_{\nu}-U_{1}^{\mu}\xi_{ \mu}U_{2}^{\nu}l_{\nu}\right]=0\;. \tag{14}\]
## Appendix C Causality conditions in the general and specific cases
The polynomial in (14) is of degree 3 in \(a^{2}\). Since for causality we must constrain the roots in the form \(\varrho=\frac{a^{2}}{v^{2}}\), we may rewrite (14) as
\[4\varepsilon H_{\parallel}^{\parallel}(V,\xi)\left[A^{2}+A\left(U_{1}^{\mu}l_{\mu}+U_{2}^{\mu}\xi_{\mu}\right)+U_{1}^{\mu}l_{\mu}U_{2}^{\nu}\xi_{\nu}-U_{1}^{\mu}\xi_{\mu}U_{2}^{\nu}l_{\nu}\right]=p(\varrho)v^{6}\,. \tag{15}\]
We have multiplied (14) by \(4\varepsilon\) to eliminate it from the denominator in \(H_{\parallel}^{\parallel}(V,\xi)\). Note that \(p(\varrho)\) is a cubic polynomial in \(\varrho\). Since \(l^{\mu}=\Delta^{\mu\nu}l_{\nu}\) and \(v^{\mu}=\Delta^{\mu\nu}\xi_{\nu}\) are vectors orthogonal to \(u^{\mu}\), \(\Delta^{\mu\nu}\) define an inner product between them and, thus, we can apply the Cauchy-Schwarz inequality to write \(b=l_{\mu}v^{\mu}=\kappa\,v\), where \(\kappa\in[-1,1]\) depending on the root \(\xi\) and the vector \(l\). Causality of the 6 roots of (10) follows from the statement:
**Statement C.1** (The general case).: _Let \(p(\varrho)\) be defined by means of Eq. (10) and let us write it as_
\[p(\varrho)=\alpha_{3}\varrho^{3}+\alpha_{2}\varrho^{2}+\alpha_{1}\varrho+\alpha _{0}. \tag{16}\]
_Assume that_
\[\alpha_{3}>0\;. \tag{17}\]
_Then, causality requires that_
\[p(\varrho)>0,\quad\forall\varrho\geq 1 \tag{101a}\] \[p(\varrho)<0,\quad\forall\varrho<0,\] (101b) \[18\alpha_{0}\alpha_{1}\alpha_{2}\alpha_{3}-4\alpha_{2}^{3}\alpha_ {0}+\alpha_{2}^{2}\alpha_{1}^{2}-4\alpha_{3}\alpha_{1}^{3}-27\alpha_{3}^{2} \alpha_{0}^{2}\geq 0, \tag{101c}\]
_for all \(\kappa\in[-1,1]\)._
Proof.: First, we must ensure that all real roots of \(p(\varrho)\) lie in the range \([0,1)\), as demanded by condition (ii). From (100), we must ensure that \(p(\varrho)\) is positive for all \(\varrho\geq 1\) [condition (101a)] and negative for all \(\varrho<0\) [condition (101b)]. This guarantees that the real roots can only occur in the desired range, and thus (ii) is satisfied. As for (i), the roots are real if the discriminant of the cubic polynomial (100) is greater than or equal to zero, which leads to condition (101c).
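Statement C.1 lends itself to a direct numerical test. A sketch assuming numpy follows; the sampling window and grid density are arbitrary illustrative choices, and since \(\alpha_{3}>0\) controls the large-\(\varrho\) behavior, a finite window suffices in practice:

```python
# A sketch (numpy assumed) checking the conditions of Statement C.1 for a
# given set of cubic coefficients; grid and window are illustrative choices.
import numpy as np

def satisfies_statement_C1(a0, a1, a2, a3, rho_max=50.0, n=20001):
    if not a3 > 0:                       # the standing assumption alpha_3 > 0
        return False
    p = lambda r: ((a3*r + a2)*r + a1)*r + a0
    rho = np.linspace(0.0, rho_max, n)
    cond_a = np.all(p(1.0 + rho) > 0)    # (101a): p > 0 for rho >= 1
    cond_b = np.all(p(-rho[1:]) < 0)     # (101b): p < 0 for rho < 0
    disc = (18*a0*a1*a2*a3 - 4*a2**3*a0 + a2**2*a1**2
            - 4*a3*a1**3 - 27*a3**2*a0**2)
    return bool(cond_a and cond_b and disc >= 0)   # (101c): all roots real
```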
In what follows, we present causality conditions for two more specific cases. The first case is when the anisotropy appears only in \(\mathcal{E}^{(1)}\), \(\mathcal{P}^{(1)}_{l}\), and \(\mathcal{P}^{(1)}_{\perp}\):
**Statement C.2** (Anisotropy in the \(\chi\)'s).: _Consider the anisotropic conformal fluid theory defined by the energy-momentum tensor in (1) and supplemented with Eqs. (11) with the choices \(\lambda_{\perp}=\lambda_{l}=\lambda\), and \(\eta_{\perp}=\eta_{l}=\eta_{ll}=\eta\). Then, the corresponding EOM are causal under assumption (25) if, and only if, condition (28) applies together with_
\[\kappa^{2}\delta\chi\lambda+\chi(4\eta+\lambda-\chi)+(\lambda+ \chi)\chi_{\perp}\geq 0 \tag{102a}\] \[\kappa^{4}\delta\chi^{2}\lambda^{2}-2\kappa^{2}\lambda\chi\delta \chi(-4\eta+\lambda+\chi)+2(\lambda+\chi)\chi_{\perp}\left[\kappa^{2}\delta \chi\lambda+\chi(4\eta+\lambda-\chi)\right]\] \[+\chi\left[16\eta^{2}\chi+8\eta\left(2\lambda^{2}+\lambda\chi- \chi^{2}\right)+\chi\left(-3\lambda^{2}-2\lambda\chi+\chi^{2}\right)\right]+( \lambda+\chi)^{2}\chi_{\perp}^{2}\geq 0,\] (102b) \[\chi\geq 4\eta-\kappa^{2}\delta\chi,\] (102c) \[-\kappa^{2}\lambda\delta\chi+\chi(-4\eta+5\lambda+\chi)-(\lambda +\chi)\chi_{\perp}>0,\] (102d) \[-2\kappa^{2}\delta\chi\lambda-4\eta\lambda-12\eta\chi+7\lambda \chi-3(\lambda+\chi)\chi_{\perp}+3\chi^{2}>0, \tag{102e}\]
_for all \(\kappa^{2}\in[0,1]\)._
Proof.: The determinant \(M\) in (100) can be rewritten as
\[M=\frac{3\chi\lambda^{4}}{4\varepsilon}\prod_{i=1,\pm}(a^{2}-\tau_{i}v^{2})^{n_{i}}, \tag{103}\]
where \(n_{1}=3\), \(n_{\pm}=1\), and
\[\tau_{1} =\frac{\eta}{\lambda}, \tag{104a}\] \[\tau_{\pm} =\frac{\alpha\pm\sqrt{\beta}}{6\lambda\chi},\] (104b) \[\alpha =\kappa^{2}\delta\chi\lambda+\chi(4\eta+\lambda-\chi)+(\lambda+\chi)\chi_{\perp},\] (104c) \[\beta =\kappa^{4}\delta\chi^{2}\lambda^{2}-2\kappa^{2}\lambda\chi\delta\chi(-4\eta+\lambda+\chi)+2(\lambda+\chi)\chi_{\perp}\left[\kappa^{2}\delta\chi\lambda+\chi(4\eta+\lambda-\chi)\right]\] \[+\chi\left[16\eta^{2}\chi+8\eta\left(2\lambda^{2}+\lambda\chi-\chi^{2}\right)+\chi\left(-3\lambda^{2}-2\lambda\chi+\chi^{2}\right)\right]+(\lambda+\chi)^{2}\chi_{\perp}^{2}. \tag{104d}\]
The matter sector has two roots for (104a) with multiplicity \(3\) each and four roots for (104b) (two for each \(\tau_{\pm}\)), a total of \(10\) roots. Now, conditions (i) and (ii) are satisfied if, and only if, \(0\leq\tau_{i}<1\). For \(\tau_{1}\) this is guaranteed by (25) together with (28). As for \(\tau_{\pm}\), it needs to be real, i.e., \(\beta\geq 0\), which corresponds to (102b); moreover, we need \(\tau_{-}\geq 0\) and \(\tau_{+}<1\). For \(\tau_{-}\geq 0\), we need \(\alpha\geq 0\) [condition (102a)] together with \(\alpha^{2}-\beta\geq 0\) [condition (102c)]. As for \(\tau_{+}<1\), it corresponds to \(6\lambda\chi-\alpha>0\) [condition (102d)] together with \((6\lambda\chi-\alpha)^{2}-\beta>0\), i.e., condition (102e). All the above conditions must hold for all values of \(\kappa\in[-1,1]\), i.e., for all possible values of the product \(b=l_{\mu}v^{\mu}=\kappa v\).
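The identification of \(\alpha^{2}-\beta\geq 0\) with condition (102c) rests on the factorization \(\alpha^{2}-\beta=4\lambda^{2}\chi\left(\chi-4\eta+\kappa^{2}\delta\chi\right)\), which can be cross-checked symbolically; a sketch assuming sympy:

```python
# A sketch (sympy assumed) verifying that alpha^2 - beta factors as
# 4*lambda^2*chi*(chi - 4*eta + kappa^2*delta_chi), i.e., condition (102c).
import sympy as sp

eta, lam, chi, chip, dchi, k2 = sp.symbols(
    'eta lambda chi chi_perp delta_chi kappa2', positive=True)

alpha = k2*dchi*lam + chi*(4*eta + lam - chi) + (lam + chi)*chip
beta = (k2**2*dchi**2*lam**2
        - 2*k2*lam*chi*dchi*(-4*eta + lam + chi)
        + 2*(lam + chi)*chip*(k2*dchi*lam + chi*(4*eta + lam - chi))
        + chi*(16*eta**2*chi + 8*eta*(2*lam**2 + lam*chi - chi**2)
               + chi*(-3*lam**2 - 2*lam*chi + chi**2))
        + (lam + chi)**2*chip**2)

target = 4*lam**2*chi*(chi - 4*eta + k2*dchi)
print(sp.expand(alpha**2 - beta - target))   # expected output: 0
```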
Finally, a much simpler case is given below:
**Statement C.3** (The shearless case).: _Consider the shearless anisotropic fluid (\(\eta_{\perp}=\eta_{l}=\eta_{ll}=0\)) with \(\delta\chi=0\) described by the energy-momentum tensor (10) and supplemented by (11). The theory is causal if (25) is satisfied._
Proof.: In this particular case, the determinant in (107) becomes
\[M=\frac{3\chi\lambda_{\perp}^{4}a^{6}}{4\varepsilon}\left(a^{2}-\frac{1}{3}v^{2} \right)^{2}=0. \tag{108}\]
Note that there is one causal root \(a=\xi_{\mu}u^{\mu}=0\), with multiplicity 6, because in this case \(\xi_{\mu}\xi^{\mu}=-a^{2}+v^{2}=v^{2}>0\), and two roots \(a^{2}=v^{2}/3\), also causal since \(0<\tau=1/3<1\), with multiplicity 2 each, completing the total of 10 roots from the matter sector. Assumption (25) guarantees that the determinant \(M\) is not trivially zero, which would admit any \(\xi_{\mu}\) as a possible solution.
## Appendix D A causal example
In this appendix, we use Statement C.1 to show that the set of parameters (33), reproduced below for convenience, is causal
\[\eta_{\perp}=\eta\,,\quad\eta_{l}=\frac{2\eta}{3}\,,\quad\eta_{ll}=\frac{5\eta }{6}\,,\quad\lambda_{\perp}=\frac{13\eta}{2}\,,\quad\lambda_{l}=6\eta\,,\quad \chi=5\eta\,,\quad\chi_{\perp}=\frac{11\eta}{2}\,,\quad\chi_{l}=\frac{16\eta}{ 3}\,.\]
The above parameters satisfy (25) and (28) by construction, and we show that they satisfy the conditions of Statement C.1 as well. For simplicity, we take the overall factor \(\eta^{3}/216\) out of the definition of \(p(\varrho)\) in (101) to obtain
\[4\varepsilon H_{\parallel}^{\parallel}(V,\xi)\left[A^{2}+A\left(U_{1}^{\mu}l_ {\mu}+U_{2}^{\mu}\xi_{\mu}\right)+U_{1}^{\mu}l_{\mu}U_{2}^{\nu}\xi_{\nu}-U_{1} ^{\mu}\xi_{\mu}U_{2}^{\nu}l_{\nu}\right]=\frac{\eta^{3}}{216}p(\varrho)v^{6}\;. \tag{109}\]
Then, the coefficients in \(p(\varrho)\) are given by
\[\alpha_{0} =-364-1623\kappa^{2}+1674\kappa^{4}-119\kappa^{6}<0,\quad\forall\;\kappa^{2}\in[0,1] \tag{110a}\] \[\alpha_{1} =16224+19601\kappa^{2}-19925\kappa^{4}>0,\quad\forall\;\kappa^{2}\in[0,1]\] (110b) \[\alpha_{2} =-18(7254-203\kappa^{2})<0,\quad\forall\;\kappa^{2}\in[0,1]\] (110c) \[\alpha_{3} =126360\;, \tag{110d}\]
where \(\kappa\) is defined through \(l_{\mu}v^{\mu}=\kappa v\) (see Appendix C). For \(\varrho<0\), we have
\[p(\varrho)=-\left(|\alpha_{0}|+\alpha_{1}|\varrho|+|\alpha_{2}|\varrho^{2}+ \alpha_{3}|\varrho|^{3}\right)<0\;, \tag{111}\]
and, thus, (10b) is verified. On the other hand, for \(\varrho\geq 1\)
\[p(\varrho)=\alpha_{0}+\alpha_{1}\varrho+(\alpha_{2}+\alpha_{3}\varrho^{2}) \varrho\geq\alpha_{0}+\alpha_{1}+\alpha_{2}+\alpha_{3}\;, \tag{112}\]
since \(\alpha_{1},\alpha_{3}>0\). However,
\[\alpha_{0}+\alpha_{1}+\alpha_{2}+\alpha_{3}=11648+21632\kappa^{2}-18251\kappa ^{4}-119\kappa^{6}>0,\quad\forall\,\kappa^{2}\in[0,1], \tag{113}\]
and, therefore, (10a) is also verified. Finally,
\[18\alpha_{0}\alpha_{1}\alpha_{2}\alpha_{3}-4\alpha_{2}^{3}\alpha _{0}+\alpha_{2}^{2}\alpha_{1}^{2}-4\alpha_{3}\alpha_{1}^{3}-27\alpha_{3}^{2} \alpha_{0}^{2}=324\left(2421746461615104-6266597114164608\kappa^{2}\right.\] \[\left.+24310721163158820\kappa^{4}-50747526702105948\kappa^{6}+59 336411451755437\kappa^{8}-40187158939087070\kappa^{10}\right.\] \[\left.+12398536143066361\kappa^{12}\right)>0\;,\quad\forall\, \kappa^{2}\in[0,1]\;. \tag{114}\]
Hence, (10c) is also verified and the set of transport parameters (33) is causal.
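The positivity claims above can also be confirmed by a brute-force numerical sweep over \(\kappa^{2}\in[0,1]\); a sketch assuming numpy:

```python
# A sketch (numpy assumed) sweeping kappa^2 over [0, 1] and confirming the
# sign pattern of the coefficients (110) and the positivity of both
# alpha_0 + alpha_1 + alpha_2 + alpha_3 and the cubic discriminant.
import numpy as np

k2 = np.linspace(0.0, 1.0, 10001)
a0 = -364 - 1623*k2 + 1674*k2**2 - 119*k2**3
a1 = 16224 + 19601*k2 - 19925*k2**2
a2 = -18*(7254 - 203*k2)
a3 = 126360.0

disc = (18*a0*a1*a2*a3 - 4*a2**3*a0 + a2**2*a1**2
        - 4*a3*a1**3 - 27*a3**2*a0**2)
print(np.all(a0 < 0), np.all(a1 > 0), np.all(a2 < 0))    # True True True
print(np.all(a0 + a1 + a2 + a3 > 0), np.all(disc > 0))   # True True
```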
## Appendix E Stability of the causal example (33)
In this appendix, we examine the stability of the sound channel (37) for the causal set of parameters (33). In this case, up to an overall constant, and because \(\bar{\eta}>0\), performing the rescalings \(\Gamma\to\Gamma/\bar{\eta}\) and \(k^{i}\to k^{i}/\bar{\eta}\) turns Eq. (37) into
\[a_{0}\Gamma^{6}+a_{1}\Gamma^{5}+a_{2}\Gamma^{4}+a_{3}\Gamma^{3}+a_{4}\Gamma^{2} +a_{5}\Gamma+a_{6}=0\;, \tag{125}\]
where
\[a_{0} =21060\;, \tag{10a}\] \[a_{1} =10962\;,\] (10b) \[a_{2} =6\left[315+f_{1}(x^{2})k^{2}\right]\;,\] (10c) \[a_{3} =108+f_{2}(x^{2})k^{2}\;,\] (10d) \[a_{4} =\frac{k^{2}}{2}\left[f_{3}(x^{2})+f_{4}(x^{2})k^{2}\right],\] (10e) \[a_{5} =36k^{2}+f_{5}(x^{2})k^{4}\;,\] (10f) \[a_{6} =k^{4}\left[f_{6}(x^{2})+f_{7}(x^{2})k^{2}\right]\;. \tag{10g}\]
We have defined the functions
\[f_{1}(x^{2}) =3498-70x^{2}\;, \tag{10a}\] \[f_{2}(x^{2}) =7248-270x^{2}\;,\] (10b) \[f_{3}(x^{2}) =1680-36x^{2}\;,\] (10c) \[f_{4}(x^{2}) =5548+6564x^{2}-6464x^{4}\;,\] (10d) \[f_{5}(x^{2}) =485+556x^{2}-553x^{4}\;,\] (10e) \[f_{6}(x^{2}) =24+30x^{2}-30x^{4}\;,\] (10f) \[f_{7}(x^{2}) =78+306x^{2}-310x^{4}+22x^{6}\;. \tag{10g}\]
One may verify that all functions in (10) are positive for all \(x^{2}\in[0,1]\), which makes all the \(a_{I}\)'s in (10) (\(I=0,\cdots,6\)) positive as well. Thus, any pure real root is in fact negative. As for complex roots, we apply the Routh-Hurwitz criterion (RHC) [45], which in this case requires us to compute the following table
\[\begin{array}{|c|c|c|c|}\hline a_{0}&a_{2}&a_{4}&a_{6}&0\\ \hline a_{1}&a_{3}&a_{5}&0&0\\ \hline b_{1}&b_{2}&b_{3}&0&0\\ \hline c_{1}&c_{2}&0&0&0\\ \hline d_{1}&d_{2}&0&0&0\\ \hline e_{1}&0&0&0&0\\ \hline\end{array} \tag{10}\]
where \(b_{i}=(a_{1}a_{2i}-a_{0}a_{2i+1})/a_{1}\), \(c_{i}=(b_{1}a_{2i+1}-a_{1}b_{i+1})/b_{1}\), \(d_{i}=(c_{1}b_{i+1}-b_{1}c_{i+1})/c_{1}\), and \(e_{1}=(d_{1}c_{2}-c_{1}d_{2})/d_{1}\). Since \(a_{I}>0\) for \(I=0,\cdots,6\), then \(\text{Re}(\Gamma)<0\) if, and only if, \(b_{1}>0\), \(c_{1}>0\), \(d_{1}>0\), and \(e_{1}>0\). Since \(a_{1}>0\), it is enough to show that
\[a_{1}b_{1} =324\left[56925+(238974+3340x^{2})k^{2}\right]>0\;, \tag{11a}\] \[b_{1}c_{1} =\frac{36}{203}\left[1024650+18g_{1}(x^{2})k^{2}+g_{2}(x^{2})k^{ 4}\right]\;,\] (11b) \[b_{1}c_{1}d_{1} =\frac{36k^{2}}{203}\left[6147900g_{3}(x^{2})+27g_{4}(x^{2})k^{ 2}+6g_{5}(x^{2})k^{4}+g_{6}(x^{2})k^{6}\right]\;,\] (11c) \[b_{1}c_{1}d_{1}e_{1} =\frac{36k^{4}}{203}\left[221324400g_{7}(x^{2})+972g_{8}(x^{2})k^{ 2}+27g_{9}(x^{2})k^{4}+6g_{10}(x^{2})k^{6}+g_{11}(x^{2})k^{8}\right], \tag{11d}\]
where
\[g_{1}(x^{2}) = 1412154-77159x^{2}\;, \tag{100a}\] \[g_{2}(x^{2}) = 174806118-143563237x^{2}+133959417x^{4}\;,\] (100b) \[g_{3}(x^{2}) = 35-3x^{2}\;,\] (100c) \[g_{4}(x^{2}) = 422612235-188238658x^{2}+129974883x^{4}\;,\] (100d) \[g_{5}(x^{2}) = 20839149333-15122827103x^{2}+12589851858x^{4}+623807058x^{6}\;,\] (100e) \[g_{6}(x^{2}) = 219646045656+96361454524x^{2}-432604607986x^{4}+620066970164x^{6}\] (100f) \[-290471866054x^{8}\;,\] \[g_{7}(x^{2}) = 23-18x^{2}+15x^{4}\;,\] (100g) \[g_{8}(x^{2}) = 362387218-297049991x^{2}+243855688x^{4}-5179815x^{6}\] (100h) \[g_{9}(x^{2}) = 216917135055-156807618144x^{2}+105968706122x^{4}+35462079400x^{6}\] (100i) \[-13719895953x^{8}\;,\] \[g_{10}(x^{2}) = 5000654914056-3108012289024x^{2}+1415965400581x^{4}+1798665419999x ^{6}\] (100j) \[-134955905301x^{8}-625170319671x^{10}\;,\] \[g_{11}(x^{2}) = 20097012270600-82381572326488x^{2}+329458308450878x^{4}-682695134 979400x^{6}\] (100k) \[+795041858933812x^{8}-532773739193376x^{10}+161024463542854x^{12}\;.\]
Since all 11 functions \(g\) are greater than zero for all \(x^{2}\in[0,1]\), all the expressions in (100) are positive, the RHC is verified, and the system is stable in the LRF. Since we proved in Appendix D that the system with these parameters is causal, the result in [31] ensures linear stability in any boosted frame.
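The positivity of \(b_{1}\), \(c_{1}\), \(d_{1}\), and \(e_{1}\) can also be spot-checked numerically by assembling the coefficients from the \(f_{i}\) functions above on a grid of \((x^{2},k)\); a sketch assuming numpy, with grid ranges chosen arbitrarily:

```python
# A sketch (numpy assumed) of the Routh-Hurwitz check for the sound-channel
# polynomial: sample x^2 in [0,1] and k > 0, build a_0..a_6 from the f_i
# functions above, and confirm b1, c1, d1, e1 > 0 everywhere on the grid.
import numpy as np

def rh_first_column_positive(a0, a1, a2, a3, a4, a5, a6):
    b1 = (a1*a2 - a0*a3)/a1; b2 = (a1*a4 - a0*a5)/a1; b3 = a6
    c1 = (b1*a3 - a1*b2)/b1; c2 = (b1*a5 - a1*b3)/b1
    d1 = (c1*b2 - b1*c2)/c1; d2 = b3
    e1 = (d1*c2 - c1*d2)/d1
    return (b1 > 0) & (c1 > 0) & (d1 > 0) & (e1 > 0)

x2 = np.linspace(0.0, 1.0, 201)[:, None]
k2 = np.linspace(1e-3, 50.0, 200)[None, :]**2
k4 = k2**2
a0, a1 = 21060.0, 10962.0
a2 = 6*(315 + (3498 - 70*x2)*k2)
a3 = 108 + (7248 - 270*x2)*k2
a4 = k2/2*((1680 - 36*x2) + (5548 + 6564*x2 - 6464*x2**2)*k2)
a5 = 36*k2 + (485 + 556*x2 - 553*x2**2)*k4
a6 = k4*((24 + 30*x2 - 30*x2**2) + (78 + 306*x2 - 310*x2**2 + 22*x2**3)*k2)
print(np.all(rh_first_column_positive(a0, a1, a2, a3, a4, a5, a6)))  # True
```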
Just for the sake of illustration, let us verify stability in the homogeneous boosted frame by applying the changes (38) to (10) and then setting \(k^{i}=0\). This leads to the root \(\Gamma=0\) with multiplicity 3 and the roots of
\[\beta_{0}(\gamma\Gamma)^{3}+\beta_{1}(\gamma\Gamma)^{2}+\beta_{2}\gamma\Gamma +\beta_{3}=0\;, \tag{101}\]
where
\[\beta_{0} = 2(1384+1698x^{2}-1461x^{4}-11x^{6})>0,\quad\forall\;x^{2}\in[0,1 ]\;, \tag{102a}\] \[\beta_{1} = 4199+826x^{2}-553x^{4}>0,\quad\forall\;x^{2}\in[0,1]\;,\] (102b) \[\beta_{2} = 6(179+8x^{2}-5x^{4})>0,\quad\forall\;x^{2}\in[0,1]\;,\] (102c) \[\beta_{3} = 72\;. \tag{102d}\]
Since all coefficients \(\beta_{0,1,2,3}\) are positive, any pure real root of the polynomial is negative, as desired. As for the complex roots, the remaining RHC to be computed is
\[\beta_{1}\beta_{2}-\beta_{0}\beta_{3}=6(718405+140694x^{2}-78310x^{4}-8290x^{6 }+2765x^{8})>0\;, \tag{103}\]
which is greater than zero for all \(x^{2}\in[0,1]\). Thus, linear stability is also verified in the homogeneous boosted frame.
## Appendix F The unphysical choice of \(l\) in the Bjorken flow
Here, we repeat the study of Bjorken flow in Sec. VI, with an alternative choice of spacelike anisotropy vector, \(l=\frac{\partial}{\partial x}\). The fluxes of (11) in this case are
\[\mathcal{E}^{(1)} =\tilde{\chi}T^{3}\left(\frac{1}{\tau}\,+\frac{3\dot{T}}{T}\right) \tag{11a}\] \[\mathcal{P}_{l}^{(1)} =T^{3}\left(\frac{2\tilde{\eta}_{ll}+\tilde{\chi}_{l}}{3\tau}+\frac{3\tilde{\chi}_{l}\dot{T}}{T}\right)\] (11b) \[\mathcal{P}_{\perp}^{(1)} =T^{3}\left(\frac{\tilde{\chi}_{\perp}-\tilde{\eta}_{ll}}{3\tau}+\frac{3\tilde{\chi}_{\perp}\dot{T}}{T}\right)\] (11c) \[\pi_{\perp}^{\mu\nu} =-2\eta_{\perp}T^{3}\text{diag}\left(0,0,1/\tau,-1/\tau^{3}\right),\] (11d) \[W_{\perp l}^{\mu} =0,\] (11e) \[W_{\perp u}^{\mu} =0,\] (11f) \[M =0, \tag{11g}\]
which gives rise to the following EOM
\[9\tilde{\chi}\frac{\tau^{2}\ddot{T}}{T}+18\tilde{\chi}\frac{\tau^{2}\dot{T}^{2}}{T^{2}}+\left(\frac{3\tau(6\tilde{\chi}+\tilde{\chi}_{\perp})}{T}+12\tau^{2}\right)\dot{T}+4\tau T+\tilde{\chi}_{\perp}-3\tilde{\eta}_{\perp}-\tilde{\eta}_{ll}=0\,. \tag{12}\]
The above can be expressed in terms of \(w=T\tau\) and \(f(w)=\frac{\tau}{w}\frac{\text{d}w}{\text{d}\tau}\) as
\[\frac{9\tilde{\chi}}{4}f(w)^{2}+wf(w)\left(1+\frac{3}{4}\tilde{\chi}f^{\prime}(w)\right)-\frac{\tilde{\chi}-15\tilde{\chi}_{\perp}}{4}f(w)+\frac{18\tilde{\chi}-2\tilde{\chi}_{\perp}-3\tilde{\eta}_{\perp}-\tilde{\eta}_{ll}}{12}-\frac{2w}{3}=0\,. \tag{13}\]
The pressure anisotropy reads
\[\mathcal{A}=\frac{2}{3}\frac{\tilde{\chi}_{\perp}-\tilde{\chi}_{l}}{w}\left(f -\frac{2}{3}\right)-\frac{3\tilde{\eta}_{ll}}{w}\,. \tag{14}\]
As in Sec. VI, \(\mathcal{A}\) has an "off-shell" contribution, which is absent in the isotropic conformal BDNK theory. However, the first-order term is negative, in contrast to the isotropic case. The late-time expansion for \(T\) is
\[T=\frac{\Lambda}{(\Lambda\tau)^{1/3}}\left(1-\frac{\tilde{\eta}_{ll}+3\tilde{\eta}_{\perp}}{8(\Lambda\tau)^{2/3}}-\frac{(\tilde{\eta}_{ll}+3\tilde{\eta}_{\perp})(5\tilde{\chi}-\tilde{\chi}_{\perp})}{64(\Lambda\tau)^{4/3}}+\cdots\right)\,, \tag{15}\]
and for \(f\) is
\[f(w)=\frac{2}{3}+\frac{\tilde{\eta}_{ll}+3\tilde{\eta}_{\perp}}{12w}+\frac{( \tilde{\eta}_{ll}+3\tilde{\eta}_{\perp})(5\tilde{\chi}-\tilde{\chi}_{\perp})} {48w^{2}}+\mathcal{O}\!\left(\frac{1}{w^{3}}\right). \tag{16}\]
We may consider the following linear perturbation at late times
\[f(w)=\frac{2}{3}+\frac{\tilde{\eta}_{ll}+3\tilde{\eta}_{\perp}}{12w}+\delta f (w)\,, \tag{17}\]
which, to first order in the perturbation at late times, behaves as
\[\delta f(w)\sim\exp\!\left(-\frac{2w}{\tilde{\chi}}\right)w^{\frac{\tilde{\eta}_{ll}+3\tilde{\eta}_{\perp}+2(\tilde{\chi}_{l}+\tilde{\chi}_{\perp})}{4\tilde{\chi}}}\,. \tag{18}\]
The numerical attractor can be found from the following initial condition
\[f(w\ll 1)=\frac{7}{9}+\frac{\tilde{\chi}_{l}-\tilde{\chi}_{\perp}}{36\tilde{ \chi}}+\frac{\sqrt{12\left(\tilde{\eta}_{ll}+3\tilde{\eta}_{\perp}\right) \tilde{\chi}+\left(\tilde{\chi}_{\perp}+\tilde{\chi}_{l}\right)^{2}}}{18\tilde {\chi}}\,, \tag{19}\]
and the slow-roll attractor is
\[f(w)_{\rm slowroll}=\frac{7}{9}+\frac{\tilde{\chi}_{l}-\tilde{\chi}_{\perp}}{36 \tilde{\chi}}-\frac{2w}{9\tilde{\chi}}+\frac{\sqrt{\left(4w-\left(\tilde{\chi}_ {l}+\tilde{\chi}_{\perp}\right)\right)^{2}+12\left(3\tilde{\eta}_{\perp}+\tilde{ \eta}_{ll}\right)\tilde{\chi}}}{18\tilde{\chi}} \tag{110}\]
Neglecting the power counting argument, we find \(\nabla\cdot S_{\rm off}\) is initially negative if
\[\chi_{\perp}>\eta_{ll}+3\eta_{\perp}>0\,. \tag{111}\]
In the isotropic limit, the above reproduces the stability condition
\[\chi>4\eta\,.\]
The condition \(\nabla\cdot S<0\) at \(w=0\) is equivalent to
\[\frac{2}{3}<f(0)<1\,, \tag{112}\]
which prevents early reheating for the attractor solution.
|
2305.03336 | QCRI at SemEval-2023 Task 3: News Genre, Framing and Persuasion
Techniques Detection using Multilingual Models | Misinformation spreading in mainstream and social media has been misleading
users in different ways. Manual detection and verification efforts by
journalists and fact-checkers can no longer cope with the great scale and quick
spread of misleading information. This motivated research and industry efforts
to develop systems for analyzing and verifying news spreading online. The
SemEval-2023 Task 3 is an attempt to address several subtasks under this
overarching problem, targeting writing techniques used in news articles to
affect readers' opinions. The task addressed three subtasks with six languages,
in addition to three ``surprise'' test languages, resulting in 27 different
test setups. This paper describes our participating system to this task. Our
team is one of the 6 teams that successfully submitted runs for all setups. The
official results show that our system is ranked among the top 3 systems for 10
out of the 27 setups. | Maram Hasanain, Ahmed Oumar El-Shangiti, Rabindra Nath Nandi, Preslav Nakov, Firoj Alam | 2023-05-05T07:40:41Z | http://arxiv.org/abs/2305.03336v1 | QCRI at SemEval-2023 Task 3: News Genre, Framing and Persuasion Techniques Detection using Multilingual Models
###### Abstract
Misinformation spreading in mainstream and social media has been misleading users in different ways. Manual detection and verification efforts by journalists and fact-checkers can no longer cope with the great scale and quick spread of misleading information. This motivated research and industry efforts to develop systems for analyzing and verifying news spreading online. The SemEval-2023 Task 3 is an attempt to address several subtasks under this overarching problem, targeting writing techniques used in news articles to affect readers' opinions. The task addressed three subtasks with six languages, in addition to three "surprise" test languages, resulting in 27 different test setups. This paper describes our participating system to this task. Our team is one of the 6 teams that successfully submitted runs for all setups. The official results show that our system is ranked among the top 3 systems for 10 out of the 27 setups.
## 1 Introduction
Monitoring and analyzing the news has become an important process for understanding how different topics (e.g., political ones) are reported by different news media, within and across countries. This has many important applications, since the tone, framing, and factuality of news reporting can significantly affect public reactions toward social or political agendas. A news piece can be manipulated along multiple dimensions to sway readers' perceptions and actions. Going beyond information factuality, other aspects include objectivity/genre, framing dimensions inserted to steer the focus of the audience Card et al. (2015), and propaganda techniques used to persuade readers towards a certain agenda Barron-Cedeno et al. (2019); Da San Martino et al. (2019).
News categorization is a well-studied problem in the natural language processing field. Recently, research attention has focused on classifying news by factuality Zhou and Zafarani (2020); Nakov et al. (2021), or on other related categorizations, such as fake vs. satire news Low et al. (2022); Golbeck et al. (2018). However, there have also been efforts towards other classification dimensions. Card et al. (2015) developed a corpus of news articles annotated with 15 framing dimensions, such as economy, capacity and resources, and fairness and equality, to support the development of systems for news framing classification. Moreover, identifying propagandistic content has gained a lot of attention across several domains, including news Barron-Cedeno et al. (2019); Da San Martino et al. (2019), social media Alam et al. (2022), and multimodal content Dimitrov et al. (2021).
The SemEval-2023 Task 3 shared task aims at motivating research in the aforementioned categorization tasks, namely: detection and classification of the _genre_, _framing_, and the _persuasion techniques_ in news articles Piskorski et al. (2023). It targets multiple languages, including English, French, German, Italian, Polish, and Russian, to push the research on multilingual systems. Moreover, to promote the development of language-agnostic models, the task organizers released test subsets for three surprise languages (Georgian, Greek, and Spanish).
Our proposed system is based on fine-tuning transformer-based models Vaswani et al. (2017) in multiclass and multi-label classification settings for the different tasks and languages. We participated in all three subtasks, submitting runs for all nine languages, which resulted in 27 testing setups. We experimented with different mono- and multilingual transformer models, such as BERT Devlin et al. (2019) and XLM-RoBERTa Conneau et al. (2020); Chi et al. (2022), among others. In addition, we also experimented with data augmentation.
The rest of the paper is organized as follows. Section 2 gives an overview of related work. In section 3, we present the proposed system. In section 4, we provide the details of our experiments.
Section 5 presents the results for our official runs, and finally, we conclude our paper in section 6.
## 2 Related Work
### News Genre Categorization
Prior works on automated news categorization have focused on various aspects such as topic, style, how news is presented or structured, and intended audience (Einea et al., 2019; Chen and Choi, 2008; Yoshioka et al., 2001; Stamatatos et al., 2000). News articles have also been categorized based on their factuality and deceptive intentions (Golbeck et al., 2018). For example, fake news is false and the intention is to deceive, whereas satire news is also false but the intent is not to deceive, but rather to call out, ridicule, or expose behavior that is shameful, corrupt, or otherwise "bad".
### Propaganda Detection
Propaganda is defined as the use of automatic approaches to intentionally disseminate misleading information over social media platforms (Woolley and Howard, 2018). Recent work on propaganda detection has focused on news articles (Barron-Cedeno et al., 2019; Rashkin et al., 2017; Da San Martino et al., 2019, 2020), multi-modal content such as memes (Dimitrov et al., 2021, 2021) and tweets (Vijayaraghavan and Vosoughi, 2022; Alam et al., 2022). Several annotated datasets have been developed for the task such as TSHP-17 (Rashkin et al., 2017), and QProp (Barron-Cedeno et al., 2019). Habernal et al. (2017, 2018) developed a corpus with 1.3k arguments annotated with five fallacies (e.g., red herring fallacy), which directly relate to propaganda techniques. Da San Martino et al. (2019) developed a more fine-grained taxonomy consisting of 18 propaganda techniques with annotation of news articles. Moreover, the authors proposed a multigranular deep neural network that captures signals from the sentence-level task and helps to improve the fragment-level classifier. An extended version of the annotation scheme was proposed to capture information in multimodal content (Dimitrov et al., 2021). Datasets in languages other than English have been proposed. For example, using the same annotation scheme from (Dimitrov et al., 2021), Alam et al. (2022) developed a dataset of Arabic tweets and organized a shared task on Arabic propaganda technique detection. Vijayaraghavan and Vosoughi (2022) developed a dataset of tweets, which are weakly labeled with different fine-grained propaganda techniques. They also proposed a neural approach for classification.
### Framing
Framing refers to representing different salient aspects and perspectives for the purpose of conveying the latent meaning about an issue (Entman, 1993). Recent work on automatically identifying media frames includes developing coding schemes and semi-automated methods (Boydstun et al., 2013), datasets such as the Media Frames Corpus (Card et al., 2015), systems to automatically detect media frames (Liu et al., 2019; Zhang et al., 2019), large-scale automatic analysis of news articles (Kwak et al., 2020), and semi-supervised approaches (Cheeks et al., 2020).
Given the multilingual nature of the datasets released with the task at hand, our work is focused on designing a multilingual approach for news classification for the three subtasks of interest.
## 3 System Overview
Our system consists of preprocessing followed by fine-tuning pre-trained transformer models. The preprocessing part includes standard model-specific tokenization. Our experimental setup consists of (i) monolingual (\(*_{\text{mono}}\)): training and evaluating a monolingual transformer model for each language and subtask; (ii) multilingual (\(*_{\text{multi}}\)): combining subtask-specific data from all languages for training, and evaluating the model on task- and language-specific data; (iii) data augmentation (\(*_{\text{aug}}\)): applying data augmentation to the language-specific training set, training a monolingual model on the augmented dataset, and evaluating it on the test set. This has been applied for each subtask.
### Data Augmentation
Data augmentation is an effective way to deal with class imbalance, to increase the size of the training dataset, or to increase within-class variation. Typically, textual data augmentation has been done with upsampling techniques such as SMOTE (Chawla et al., 2002); however, that approach is applied to the vector representation rather than to the text itself. Very recently, some useful strategies have been introduced for textual data augmentation (Feng et al., 2021), ranging from rule-based approaches to model-based techniques. Wei and Zou (2019) proposed a set of token-level random perturbation operations
including random insertion, deletion, and swap, which have been employed in several studies [14, 15].
We used such approaches with contextual representation from transformer models in this study. These include (i) synonym augmentation using WordNet, (ii) word insertion and substitution using BERT [13], RoBERTa [12] and DistilBERT [11]. More details on the implementation of these approaches can be found in the following data augmentation package.1
Footnote 1: [https://github.com/makcedward/nlpaug](https://github.com/makcedward/nlpaug)
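A minimal sketch of the two augmentation flavors, assuming the nlpaug package referenced in the footnote; the example sentence and model names are illustrative only:

```python
# A sketch (nlpaug assumed) of (i) WordNet synonym augmentation and
# (ii) contextual word insertion/substitution with transformer models.
import nlpaug.augmenter.word as naw

text = "The committee approved the new budget despite strong opposition."

syn_aug = naw.SynonymAug(aug_src='wordnet')            # (i) synonyms
ins_aug = naw.ContextualWordEmbsAug(
    model_path='bert-base-uncased', action='insert')   # (ii) insertion
sub_aug = naw.ContextualWordEmbsAug(
    model_path='roberta-base', action='substitute')    # (ii) substitution

for aug in (syn_aug, ins_aug, sub_aug):
    print(aug.augment(text))
```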
## 4 Experiments
In this section, we describe the tasks and datasets used during experiments and provide implementation details for our models.
### Task and Dataset
The SemEval-2023 Task 3 is composed of 3 subtasks for each language:
1. **News Genre Categorization (_subtask1_)**: Given a news article in a particular language, classify it to an _opinion_, _news reporting_, or a _satire_ piece. This is a multiclass classification task at the article level.
2. **Framing Detection (_subtask2_)**: Given a news article, identify the frames used in the article. This is a multi-label classification task at the article level. This task includes 14 frames/labels such as _economic_, _capacity and resources_, _morality_, and _fairness and equality_.
3. **Persuasion Techniques Detection (_subtask3_)**: Given an article, identify the persuasion technique(s) present in each paragraph. This is a multi-label classification task at the paragraph level. This task includes 23 techniques/labels such as _loaded language_, _appeal to authority_, _appeal to popularity_, and _appeal to values_.
The task organizers released three data subsets (train, development, and test) for each of the six main languages and for each subtask. Further details and statistics can be found in [13]. Starting with the six _train_ subsets, we apply three methods to acquire new versions of these train subsets:
1. Train subset splitting: we randomly split each of the train subsets into 80-20 splits to acquire training and validation subsets for each subtask and each language (see the sketch after this list). As will be shown in the following subsection, our models were re-trained using different random seeds. The validation set is used to select the random seed leading to the best model.
2. Multilingual dataset construction: to support our multilingual training setup, we combine the training subsets resulting from the previous step for all languages to create a multilingual training subset. We apply the same approach to the validation subsets.
3. Data augmentation: for each of our generated training splits, we apply data augmentation to it and use the resulting datasets to train a monolingual model for each subtask and each language.
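A minimal sketch combining the three steps above, assuming pandas and scikit-learn; the file layout and column names are hypothetical:

```python
# A sketch (pandas/scikit-learn assumed) of steps 1-2; paths are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split

langs = ["en", "fr", "ge", "it", "po", "ru"]
train_splits, val_splits = {}, {}

for lang in langs:
    df = pd.read_csv(f"data/subtask1/{lang}/train.tsv", sep="\t")
    tr, va = train_test_split(df, test_size=0.2, random_state=42)  # 80-20
    train_splits[lang], val_splits[lang] = tr, va

# multilingual subsets: concatenate the per-language splits
multi_train = pd.concat(train_splits.values(), ignore_index=True)
multi_val = pd.concat(val_splits.values(), ignore_index=True)
# step 3 (augmentation) would then be applied to each train_splits[lang]
```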
### Implementation Details
We use the HuggingFace (HF) library [16] on top of the PyTorch framework [14] as our base and as the source of all the pre-trained language models. Since different random initializations can considerably affect model performance, we train the model for each language with \(k\) different random seeds.
For all experiments, we use Adam optimizer [13] with the learning rate of 2x10\({}^{-5}\). In setting other parameters of the models, we distinguish between _subtask1_ and _subtask2_ that operate on the document level, and _subtask3_\({}_{\text{multi/aug}}\) that works at the paragraph level
\begin{table}
\begin{tabular}{l|c} \hline
**HF Model Name** & **Language** \\ \hline xlm-roberta-large & Multilingual \\ bert-large-cased & English \\
**roberta-large** & English \\ dbmdz/bert-base-french-europeana-cased & French \\ dbmdz/bert-base-german-uncased & German \\
**uklrf/bert-base** & German \\ dbmdz/bert-base-italian-uncased & Italian \\ sdadas/polish-roberta-large-v2 & Polish \\
**allegro/herbert-large-cased** & Polish \\ DeepPavlov/rubert-base-cased & Russian \\ \hline \end{tabular}
\end{table}
Table 1: Pre-trained models used in experiments. For languages with multiple models, the best ones are shown in bold, which are also comparable in the monolingual training setup on the dev subset across all three subtasks.
and has a much larger training subset. Only for _subtask3_multi/aug, the number of epochs=5, \(k\)=5, maximum sequence length=256, and batch size=8. For all remaining training setups and subtasks, the number of epochs=10, \(k\)=10, maximum sequence length=512, and batch size=4.
For each of the three training setups described in section 3, the models trained using \(k\) seeds for a language are evaluated over our validation subset using the official evaluation measure for the corresponding subtask. The model with the best performance is then applied to the development set. Eventually, the training setup that has the best performance on the development subset will be used to generate the official run for the corresponding subtask and test language. As for the "surprise" test languages, we use the model trained on the multilingual training subset with the best performance on the multilingual validation subset.
For our multilingual training setup, we opt to use XLM-RoBERTa (Conneau et al., 2020). As for all other setups, we used per-language monolingual pre-trained models listed in Table 1.
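A condensed sketch of one run with the \(k\)-seed selection, assuming the HF transformers Trainer API; `train_ds`, `val_ds`, and the metric function are placeholders rather than part of our released code:

```python
# A sketch (HF transformers assumed) of k-seed fine-tuning for subtask 1;
# dataset objects and the metric callback are placeholders.
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments, set_seed)

model_name, num_labels, k = "xlm-roberta-large", 3, 10
best_f1, best_model = -1.0, None

for seed in range(k):
    set_seed(seed)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=num_labels)
    args = TrainingArguments(output_dir=f"runs/seed{seed}",
                             learning_rate=2e-5, num_train_epochs=10,
                             per_device_train_batch_size=4, seed=seed)
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_ds,      # placeholder dataset
                      eval_dataset=val_ds,         # placeholder dataset
                      compute_metrics=macro_f1)    # placeholder metric fn
    trainer.train()
    score = trainer.evaluate()["eval_f1_macro"]    # assumes this metric key
    if score > best_f1:
        best_f1, best_model = score, model
```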
## 5 Results
The results for our official runs per subtask are shown in Tables 2, 3 and 4. For each subtask, we compare our official runs to two baselines: the top run in each test language, and the baseline as reported by the task organizers.
We observe that the multilingual models are generally the best performing models across all tasks. On average, the performance of the system was best for _subtask3_ with a slight average ranking difference compared to _subtask2_. Another interesting observation is that although _subtask3_ has much larger train subsets, since it operates on the paragraph level, this did not improve the average system ranking across languages when compared to _subtask2_. The results also clearly show the robustness
\begin{table}
\begin{tabular}{c|c|l|l l} \hline
**Lang** & **Rank** & **Run** & \(\mathbf{F1_{macro}}\) & \(\mathbf{F1_{micro}}\) \\ \hline \multirow{3}{*}{EN} & 1 & MELODI & 0.784 & 0.815 \\ & 16 & Baseline & 0.288 & 0.611 \\ & **17** & QCRI\({}_{\text{multi}}\) & 0.281 & 0.593 \\ \hline \multirow{3}{*}{FR} & 1 & UMUTeam & 0.835 & 0.880 \\ & **2** & QCRI\({}_{\text{aug}}\) & 0.767 & 0.800 \\ & 10 & Baseline & 0.568 & 0.740 \\ \hline \multirow{3}{*}{GE} & 1 & UMUTeam & 0.820 & 0.820 \\ & 1 & SheffieldVeraAI & 0.820 & 0.820 \\ & **7** & QCRI\({}_{\text{mono}}\) & 0.667 & 0.660 \\ & 9 & Baseline & 0.630 & 0.760 \\ \hline \multirow{3}{*}{IT} & 1 & Hitachi & 0.768 & 0.852 \\ & **7** & QCRI\({}_{\text{mono}}\) & 0.541 & 0.787 \\ & 12 & Baseline & 0.389 & 0.672 \\ \hline \multirow{3}{*}{PO} & 1 & FTD & 0.786 & 0.936 \\ & **10** & QCRI\({}_{\text{mono}}\) & 0.571 & 0.830 \\ & 13 & Baseline & 0.490 & 0.830 \\ \hline \multirow{3}{*}{RU} & 1 & Hitachi & 0.755 & 0.750 \\ & **6** & QCRI\({}_{\text{multi}}\) & 0.567 & 0.653 \\ & 12 & Baseline & 0.398 & 0.653 \\ \hline \multirow{3}{*}{KA} & 1 & Riga & 1.000 & 1.000 \\ & **4** & QCRI\({}_{\text{multi}}\) & 0.622 & 0.897 \\ & 13 & Baseline & 0.256 & 0.345 \\ \hline \multirow{3}{*}{GR} & 1 & SinaaAI & 0.806 & 0.813 \\ & **4** & QCRI\({}_{\text{multi}}\) & 0.708 & 0.813 \\ & 15 & Baseline & 0.171 & 0.344 \\ \hline \multirow{3}{*}{ES} & 1 & DSHacker & 0.563 & 0.567 \\ & **3** & QCRI\({}_{\text{multi}}\) & 0.489 & 0.567 \\ \cline{1-1} & 16 & Baseline & 0.154 & 0.300 \\ \hline \end{tabular}
\end{table}
Table 2: Official results for all nine test languages in _subtask1_. \(\mathbf{F1_{macro}}\) is the official evaluation measure for this subtask. Subscripts for our team runs indicate the training setup used.
\begin{table}
\begin{tabular}{c|c|c c} \hline
**Lang** & **Rank** & **Run** & \(\mathbf{F1_{micro}}\) & \(\mathbf{F1_{macro}}\) \\ \hline \multirow{3}{*}{EN} & 1 & SheffieldVeraAI & 0.784 & 0.815 \\ & **7** & QCRI\({}_{\text{multi}}\) & 0.288 & 0.611 \\ & 18 & Baseline & 0.281 & 0.593 \\ \hline \multirow{3}{*}{FR} & 1 & UMUTeam & 0.835 & 0.880 \\ & **2** & QCRI\({}_{\text{aug}}\) & 0.767 & 0.800 \\ & 10 & Baseline & 0.568 & 0.740 \\ \hline \multirow{3}{*}{GE} & 1 & UMUTeam & 0.820 & 0.820 \\ & 1 & SheffieldVeraAI & 0.820 & 0.820 \\ & **7** & QCRI\({}_{\text{mono}}\) & 0.667 & 0.660 \\ & 9 & Baseline & 0.630 & 0.760 \\ \hline \multirow{3}{*}{IT} & 1 & Hitachi & 0.768 & 0.852 \\ & **7** & QCRI\({}_{\text{mono}}\) & 0.541 & 0.787 \\ & 12 & Baseline & 0.389 & 0.672 \\ \hline \multirow{3}{*}{PO} & 1 & FTD & 0.786 & 0.936 \\ & **10** & QCRI\({}_{\text{mono}}\) & 0.571 & 0.830 \\ & 13 & Baseline & 0.490 & 0.830 \\ \hline \multirow{3}{*}{RU} & 1 & Hitachi & 0.755 & 0.750 \\ & **6** & QCRI\({}_{\text{multi}}\) & 0.567 & 0.653 \\ & 12 & Baseline & 0.398 & 0.653 \\ \hline \multirow{3}{*}{KA} & 1 & Riga & 1.000 & 1.000 \\ & **4** & QCRI\({}_{\text{multi}}\) & 0.622 & 0.897 \\ & 13 & Baseline & 0.256 & 0.345 \\ \hline \multirow{3}{*}{GR} & 1 & SinaaAI & 0.806 & 0.813 \\ & **4** & QCRI\({}_{\text{multi}}\) & 0.708 & 0.813 \\ & 15 & Baseline & 0.171 & 0.344 \\ \hline \multirow{3}{*}{ES} & 1 & DSHacker & 0.563 & 0.567 \\ & **3** & QCRI\({}_{\text{multi}}\) & 0.489 & 0.567 \\ \cline{1-1} & 16 & Baseline & 0.154 & 0.300 \\ \hline \end{tabular}
\end{table}
Table 3: Official results for all nine test languages in _subtask2_. \(\mathbf{F1_{micro}}\) is the official evaluation measure for this subtask. Subscripts for our team runs indicate the training setup used.
of our model across languages and subtasks, as it managed to be among the best 3 runs for 10 out of the 27 test subsets, and it was among the top 5 runs for 15 of them.
Results over _subtask1_ and _subtask3_ showed that our proposed system had a strong cross-lingual transfer ability when training the model on multilingual data and testing it on unseen languages (Georgian, Greek and Spanish).
## 6 Conclusion
In this paper, we presented our experiments and findings on news genre categorization, framing detection, and persuasion techniques detection in multiple languages, as part of the SemEval-2023 Task 3 shared task. The task includes 27 test setups spanning three subtasks and nine test languages. Our team successfully submitted runs for all setups. We proposed a system based on fine-tuning transformer models in multiclass and multi-label classification settings. We experimented with different mono- and multilingual pre-trained models, in addition to data augmentation. From the experimental results, we observed that our multilingual model based on XLM-RoBERTa performs better across all tasks, even on unseen languages.
Our future work includes domain adaptation and further exploration of data augmentation techniques.
## Ethics and Broader Impact
**Biases** We note that there might be some biases in the data we use; however, we used the data that the organizers made available. These biases, in turn, are likely to be exacerbated by the unsupervised models trained on them. This is beyond our control, as are the potential biases in the pre-trained large-scale transformer models that we use in our experiments.
## Acknowledgments
This publication was made possible by NPRP grant 14C-0916-210015 _The Future of Digital Citizenship in Qatar: a Socio-Technical Approach_ from the Qatar National Research Fund.
Part of this work was also funded by Qatar Foundation's IDKT Fund TDF 03-1209-210013: _Tanbih: Get to Know What You Are Reading_.
The views, opinions, and findings presented in this paper are those of the authors alone and do not necessarily reflect the views, policies, or positions of the QNRF or any other affiliated organizations.
|
2306.05217 | Synchronizing Chaos using Reservoir Computing | We attempt to achieve isochronal synchronization between a drive system
unidirectionally coupled to a response system, under the assumption that
limited knowledge of the states of the drive is available at the response.
Machine learning techniques have been previously implemented to estimate the
states of a dynamical system from limited measurements. We consider situations
in which knowledge of the non-measurable states of the drive system is needed
in order for the response system to synchronize with the drive. We use a
reservoir computer to estimate the non-measurable states of the drive system
from its measured states and then employ these estimated states to synchronize
the response system with the drive. | Amirhossein Nazerian, Chad Nathe, Joseph D. Hart, Francesco Sorrentino | 2023-06-08T14:15:50Z | http://arxiv.org/abs/2306.05217v1 | # Synchronizing Chaos using Reservoir Computing
###### Abstract
We attempt to achieve isochronal synchronization between a drive system unidirectionally coupled to a response system, under the assumption that limited knowledge of the states of the drive is available at the response. Machine learning techniques have been previously implemented to estimate the states of a dynamical system from limited measurements. We consider situations in which knowledge of the non-measurable states of the drive system is needed in order for the response system to synchronize with the drive. We use a reservoir computer to estimate the non-measurable states of the drive system from its measured states and then employ these estimated states to synchronize the response system with the drive.
A large literature has investigated synchronization of chaos, see e.g., [1; 2; 3; 4; 5; 6; 7] and the review paper [8]. However, little attention has been devoted to the usage of machine learning techniques to enable synchronization. A typical problem is that of synchronizing two identical chaotic systems, a drive system that is unidirectionally coupled to a response system. This is typically achieved by having the drive communicate part of its state to the response. Here we use a reservoir observer within a control loop to reconstruct other states of the drive at the response system and use these reconstructed states to aid synchronization. We also show how this proposed scheme can be used to control the time evolution of the response system on an unstable periodic orbit embedded in the chaotic attractor. The robustness of our proposed approach is studied against measurement noise affecting the information transmitted from the drive to the response system.
## I Introduction
The need of regulating the behavior of nonlinear systems is a common requirement in many physical, social, and biological applications [9; 10; 11; 12]. Observers are classically used in controls applications to estimate states of a dynamical system that cannot be directly measured; feedback control can then be performed using the states that are directly available and those that are reconstructed using the observer. An important problem in the control of nonlinear systems, then, is the construction of a sufficiently accurate observer.
In the absence of a physical model of the system to be observed, one must turn to a data-driven approach. One such approach that is particularly effective when limited data is available is that of reservoir computing. A reservoir computer is a type of recurrent neural network that is designed to be easy to train [13; 14]. While reservoir computers are most often used in an autonomous mode [15], they have also been found to be an efficient and effective observer of dynamical systems such as Lorenz oscillators [16; 17; 18], semiconductor lasers [19], ecological models [20], and spatiotemporal systems [21; 16].
There is ample observation of synchronization in the natural world [22], yet it is unclear whether this observed synchronization may be the result of an underlying learning process. Autonomous reservoir computing models have been found to be capable of synchronizing with the nonlinear systems upon which they are trained [23; 24] as well as with other identical reservoir computing models [25; 26; 27]. A modification of reservoir computing termed deep reservoir computing has been used for direct learning of a control rule [28]. Additionally, autonomous reservoir computing has been used for the estimation of unstable periodic orbits of an unknown dynamical system and for the pinning control of a network on to the estimated unstable periodic orbit [29].
Despite these successes, and even though the combination of an observer and a controller is a well established paradigm in control theory, little work has been devoted to study how reservoir computing can be used as an observer for control applications. In this work, we train a reservoir computer as an observer, which is used to mediate the pinning control of a response system by a drive system. The reservoir observer, driven by the measured variable of the drive system, is used to estimate unmeasured variables of the drive system. The measured variable
and the estimated variables are then used to drive the response system to a controlled trajectory. We find that the introduction of the reservoir computer can lead to a dramatic reduction in the coupling strength required.
More formally, we consider a problem in which a drive system is unidirectionally coupled to a response system. The drive system produces a reference trajectory for the response system with the goal of controlling or synchronizing the time evolution of the response system on that of the drive system. Another reason why this particular setting may be of interest is that by choosing the initial condition of the drive system to lie on an unstable periodic orbit (UPO) it may be possible to control the time evolution of the response to converge on that particular UPO.
We take the response and the drive to be described by the same equations and we assume that when these systems are uncoupled, their dynamics evolve on the same chaotic attractor. We consider the following set of generic dynamical equations that describe the time evolution of the drive, \(\mathbf{x}_{D}(t)\), and the response, \(\mathbf{x}_{R}(t)\),
\[\begin{split}\dot{\mathbf{x}}_{D}(t)&=\mathbf{F}(\mathbf{x}_{ D}(t))\\ \dot{\mathbf{x}}_{R}(t)&=\mathbf{F}(\mathbf{x}_{R}(t))+\kappa H (\mathbf{x}_{D}(t)-\mathbf{x}_{R}(t))\end{split} \tag{1}\]
where \(\mathbf{F}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}\) and \(m\) is the number of states of the dynamical system. The matrix \(H\) has size \(m\times m\) and describes the coupling scheme. The scalar \(\kappa>0\) is the coupling strength.
In the rest of this paper, without loss of generality we take \(m=3\) and write \(\mathbf{x}_{D}(t)=[x_{D}(t),y_{D}(t),z_{D}(t)]\), \(\mathbf{x}_{R}(t)=[x_{R}(t),y_{R}(t),z_{R}(t)]\). We assume the entries of the matrix \(H=\{H_{ij}\}\) to be either zeros or ones, and consider different coupling schemes resulting from different choices of the matrix \(H\). For example, by setting all the entries of the matrix \(H\) equal to zero except for entry \(H_{12}=1\), we indicate that the state \(y_{D}(t)\) from the drive system is available to the response system and that \(y_{D}(t)\) appears in the equation of the first state of the response system. We will also refer to this coupling scheme as \(y_{D}(t)\to x_{R}(t)\). We will instead refer to a coupling scheme as \(\hat{y}_{D}(t)\to x_{R}(t)\) when the measured variable \(y_{D}(t)\) is not available at the response system and its estimate \(\hat{y}_{D}(t)\) is used instead.
The rest of this paper is organized as follows. In Sec. II, reservoir computing and the general procedure of the drive-response system are discussed. Examples of the Chen and Rossler systems are also provided, and the effect of measurement noise on the performance of the reservoir computer is discussed. Conclusions are provided in Sec. III. In Appendix A, we discuss how the integration of the continuous time system and the evolution of the discrete time RC are performed.
## II Reservoir Computing
We begin by introducing the reservoir equation,
\[\mathbf{r}(t+\Delta t)=(1-\alpha)\mathbf{r}(t)+\alpha\tanh\left(A\mathbf{r}(t)+s(t)\mathbf{w} _{\text{in}}\right) \tag{2}\]
where \(A\) is the coupling matrix, \(s(t)\) is the drive signal, \(\Delta t\) is the discrete time step, \(\alpha\in(0,1]\) is the leakage parameter, and \(\mathbf{w}_{\text{in}}\) is a vector of random elements drawn from a standard Gaussian distribution, i.e., \(\mathcal{N}(0,1)\). We set \(\Delta t=0.001\) throughout this paper unless stated otherwise. The matrix \(A\) is the adjacency matrix of a directed Erdos Renyi network with \(N\) nodes and connectivity probability \(p=0.1\). We rescale \(A\) to have a spectral radius \(\rho\) via the operation \(A\gets A\rho/\lambda\), where \(\lambda\) is the largest real eigenvalue and \(0<\rho\leq 1\).
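For concreteness, a minimal Python sketch of this construction and of the update rule Eq. (2) is given below; the function names and the simple Erdős–Rényi sampling are our own illustrative choices, not taken from any particular library.

```python
import numpy as np

def make_reservoir(N=500, p=0.1, rho=0.9, seed=0):
    """Directed Erdos-Renyi adjacency matrix rescaled to spectral radius rho
    via A <- A*rho/lambda (for a nonnegative A the largest real eigenvalue
    coincides with the spectral radius)."""
    rng = np.random.default_rng(seed)
    A = (rng.random((N, N)) < p).astype(float)
    lam = np.max(np.abs(np.linalg.eigvals(A)))
    return A * rho / lam

def reservoir_step(r, s, A, w_in, alpha):
    """Leaky update of Eq. (2): r <- (1-alpha)*r + alpha*tanh(A r + s*w_in)."""
    return (1.0 - alpha) * r + alpha * np.tanh(A @ r + s * w_in)

# Gaussian input weights as in the text: w_in = np.random.default_rng().standard_normal(N)
```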
From the time evolution of the reservoir equation, Eq. (2), one can construct the readout matrix,
\[\Omega=\begin{bmatrix}r_{1}(1)&r_{2}(1)&\cdots&r_{N}(1)&1\\ r_{1}(2)&r_{2}(2)&\cdots&r_{N}(2)&1\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ r_{1}(T_{1})&r_{2}(T_{1})&\cdots&r_{N}(T_{1})&1\end{bmatrix} \tag{3}\]
where \(r_{i}(t)\) is the readout of node \(i\) at time \(t\) and \(t=T_{1}\) indicates the end of the training phase. The last column of the matrix \(\Omega\) is set to \(1\) to account for any constant offset in the fit. We then relate the readouts to the training signal \(\mathbf{g}\), possibly with additive noise, via the unknown coefficients contained in the vector \(\mathbf{w}_{\text{out}}\),
\[\Omega\mathbf{w}_{\text{out}}=\mathbf{g} \tag{4}\]
where \(\mathbf{g}(t)\) is the training signal. We then compute the unknown coefficients vector \(\mathbf{w}_{\text{out}}\) via the equation,
\[\mathbf{w}_{\text{out}}=\mathbf{\Omega}^{\dagger}\mathbf{g}. \tag{5}\]
Here, \(\mathbf{\Omega}^{\dagger}\) is given as,
\[\mathbf{\Omega}^{\dagger}=\left(\mathbf{\Omega}^{\text{T}}\mathbf{\Omega}+\beta\mathbf{I} \right)^{-1}\mathbf{\Omega}^{\text{T}}. \tag{6}\]
In the above equation, \(\beta\) is the ridge-regression parameter used to avoid overfitting [16] and \(\mathbf{I}\) is the identity matrix. For all the simulations in this work, \(\beta=10^{-9}\) is used. For \(\beta=0\), \(\mathbf{\Omega}^{\dagger}\) is the pseudo
inverse matrix of \(\mathbf{\Omega}\). Next, we define the training fit signal as,
\[\mathbf{h}=\Omega\mathbf{w}_{\text{out}}. \tag{7}\]
Lastly, the training error is computed as,
\[\Delta_{\text{tr}}=\frac{\text{std}(\mathbf{h}-\mathbf{g})}{\text{std}(\mathbf{g })} \tag{8}\]
where the notation \(\text{std}(\cdot)\) denotes the standard deviation.
The testing phase is carried out in the same manner, except using the previously computed coefficients contained in \(\mathbf{w}_{\text{out}}\) to compute the fit signal \(\mathbf{h}_{\text{ts}}\) according to the \(\Omega_{\text{ts}}\) matrix generated by the new drive signal, i.e., the testing signal \(\mathbf{g}_{\text{ts}}(t)\). We call the length of the testing phase \(T_{s}\). The testing error is computed as
\[\Delta_{\text{ts}}=\frac{\text{std}(\mathbf{h}_{\text{ts}}-\mathbf{g}_{\text{ ts}})}{\text{std}(\mathbf{g}_{\text{ts}})}. \tag{9}\]
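Taken together, Eqs. (3)-(9) amount to driving the reservoir, collecting its states, and performing a linear ridge regression. A minimal Python sketch under these assumptions (function names are ours; the drive and target signals are taken to be 1-D arrays):

```python
import numpy as np

def drive_reservoir(signal, A, w_in, alpha):
    """Run Eq. (2) driven by `signal` and stack the rows of Eq. (3);
    the trailing 1 in each row absorbs any constant offset."""
    r = np.zeros(A.shape[0])
    rows = []
    for s in signal:
        r = (1.0 - alpha) * r + alpha * np.tanh(A @ r + s * w_in)
        rows.append(np.append(r, 1.0))
    return np.array(rows)

def train_readout(Omega, g, beta=1e-9):
    """Ridge regression, Eqs. (5)-(6): w_out = (Omega^T Omega + beta I)^-1 Omega^T g."""
    M = Omega.T @ Omega + beta * np.eye(Omega.shape[1])
    return np.linalg.solve(M, Omega.T @ g)

def nrmse(h, g):
    """Normalized errors of Eqs. (8)-(9): std(h - g) / std(g)."""
    return np.std(h - g) / np.std(g)

# training: Omega = drive_reservoir(x_train, A, w_in, alpha)
#           w_out = train_readout(Omega, y_train)
#           delta_tr = nrmse(Omega @ w_out, y_train)
# testing:  delta_ts = nrmse(drive_reservoir(x_test, A, w_in, alpha) @ w_out, y_test)
```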
In Fig. 1, we show the process of training the reservoir computer in (a) and then using it for control in (b). I/R is the input-to-reservoir function and is represented by the term \(s(t)\mathbf{w}_{\text{in}}\) in Eq. (2). R/O is the reservoir-to-output function represented by Eq. (7).
### Chen system
We take both the drive and the response systems to be described by the Chen equation \(\dot{\mathbf{x}}(t)=\mathbf{F}(\mathbf{x}(t))\) where \(\mathbf{x}=[x,\ y,\ z]^{\top}\) and
\[\mathbf{F}(\mathbf{x}(t))=\begin{bmatrix}a(y(t)-x(t))\\ (c-a-z(t))x(t)+cy(t)\\ x(t)y(t)-\beta z(t)\end{bmatrix}, \tag{10}\]
where here we have set \(a=35,c=28\) and \(\beta=8/3\). The goal is to synchronize the trajectories of the response system and the drive system, using the scheme shown in Fig. 1. It is assumed that the only component of the drive system accessible to the response system is the time evolution of the state \(x_{D}(t)\).
The master stability function [30] predicts that by coupling \(x_{D}\to y_{R}\), the two systems synchronize when the coupling strength \(\kappa>10.62\)[31]. However, if the information about \(y_{D}\) is available, the two systems synchronize by the coupling \(y_{D}\to y_{R}\) when \(\kappa>3.54\)[31]. Hence, we would like to estimate \(\hat{y}_{D}\) from \(x_{D}\) in order to synchronize the two systems with a lower coupling strength \(3.54<\kappa<10.62\) through coupling \(\hat{y}_{D}\to y_{R}\). Note that coupling \(x_{D}\to x_{R}\) does not result in synchronization for any value of \(\kappa\), as shown in [31].
We introduce the instantaneous and average synchronization errors,
\[\begin{split} E(t):=&\left[\left(\frac{x_{D}(t)-x_{R} (t)}{\text{std}(x_{D}(t))_{t}}\right)^{2}+\left(\frac{y_{D}(t)-y_{R}(t)}{\text {std}(y_{D}(t))_{t}}\right)^{2}\right.\\ &+\left.\left(\frac{z_{D}(t)-z_{R}(t)}{\text{std}(z_{D}(t))_{t}} \right)^{2}\right]^{\frac{1}{2}}\\ \bar{E}=&<E(t)>_{t},\end{split} \tag{11}\]
respectively, where \(<\cdot>_{t}\) indicates an average over the time interval \(t=[500,1000]\) seconds, and \(\text{std}(\cdot)_{t}\) returns the standard deviation (a positive scalar value) of the time-series in the argument over the time interval \(t\).
Figure 2 shows the synchronization error \(\bar{E}\) for different coupling schemes and different coupling strengths, when the drive and the response system are initialized from randomly chosen initial conditions from the Chen system attractor. The results we obtain are in agreement with the predictions of the master stability function. From each plot in Fig. 2 we see that there is a critical coupling strength \(\kappa_{\text{min}}\) above which a transition from asynchrony to synchrony is observed. Fig. 2 (A) corresponds to coupling \(x_{D}\to y_{R}\), for which \(\kappa_{\text{min}}\simeq 10.78\); Fig. 2 (B) corresponds to coupling \(y_{D}\to y_{R}\), for which \(\kappa_{\text{min}}\simeq 3.96\). Finally, Fig. 2 (C) corresponds to coupling in both \(x_{D}\to y_{R}\) and \(y_{D}\to y_{R}\), for which \(\kappa_{\text{min}}\simeq 3.00\).
By comparing Figs. 2 (B) and 2 (C) with Fig. 2 (A) we see that a much lower coupling strength is needed for synchronization when coupling involves the state \(y_{D}(t)\), hence in what follows we attempt to reconstruct \(\hat{y}_{D}(t)\approx y_{D}(t)\) from knowledge of \(x_{D}(t)\). Our approach illustrated in Fig.1 is to indirectly couple the response system to the drive system through the estimate \(\hat{y}_{D}(t)\) produced by the reservoir observer. Hence, the coupled system equations are,
\[\begin{split}\dot{\mathbf{x}}_{D}(t)&=\mathbf{F}(\mathbf{x}_{D} (t))\\ \dot{\mathbf{x}}_{R}(t)&=\mathbf{F}(\mathbf{x}_{R}(t))+\kappa \begin{bmatrix}0\\ \hat{y}_{D}-y_{R}+x_{D}-x_{R}\\ 0\end{bmatrix}\end{split} \tag{12}\]
If the two systems are coupled 'ideally', then \(\hat{y}_{D}\) is replaced by the true value \(y_{D}\).
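As an illustration of how the two schemes enter a simulation, here is a minimal Python sketch of the Chen vector field Eq. (10) and of the coupled dynamics Eq. (12); the function names are ours, `yD_hat` is the RC estimate, and passing `yD_hat = xD[1]` recovers the ideal coupling.

```python
import numpy as np

a, c, beta = 35.0, 28.0, 8.0 / 3.0   # Chen parameters used in the text

def chen(x):
    """Chen vector field, Eq. (10)."""
    return np.array([a * (x[1] - x[0]),
                     (c - a - x[2]) * x[0] + c * x[1],
                     x[0] * x[1] - beta * x[2]])

def coupled_rhs(xD, xR, yD_hat, kappa):
    """Drive-response dynamics of Eq. (12): the response receives
    kappa*(yD_hat - yR + xD - xR) in its second component only."""
    dxD = chen(xD)
    dxR = chen(xR)
    dxR[1] += kappa * (yD_hat - xR[1] + xD[0] - xR[0])
    return dxD, dxR
```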
As an example, Fig. 3 shows the time evolutions of the \(x\) components of the drive and response systems as they are coupled through the RC and ideally. We tentatively set the RC parameters to \(\alpha=0.61\), \(\rho=0.9\), and \(500\) nodes, with the input weights drawn randomly from a uniform distribution between \(-0.5\) and \(0.5\). The two systems are initialized from different initial conditions on the chaotic attractor of the Chen system. We see that the two coupling schemes produce comparable levels of synchronization between the drive and response systems, which appear qualitatively similar to the human eye. However, a calculation shows that the two synchronization errors differ by several orders of magnitude. This can be seen in Figure 4, which compares the synchronization errors \(E_{\rm RC}(t)\) and \(E_{\rm Ideal}(t)\) over a \(1000\,s\) simulation. Here, the instantaneous synchronization errors are \(E_{\rm RC}(t)\) when coupled through the RC, and \(E_{\rm Ideal}(t)\) when coupled ideally. We see that while the synchronization error attains a small value throughout the simulation when coupling uses the estimated state, the error attains a much smaller value (equal to the numerical precision of the computer) when coupling uses the true state. This is due to the fact that the estimate produced by the RC observer is not exactly the same as the true signal, but just an approximation. It is also important to point out that the ideal case shown in Fig. 4 does not consider the presence of noise in the signal transmitted from the drive system to the response system. This unrealistic assumption is removed in Sec. II.4.
The results in Figs. 2 and 3 show the feasibility of our approach. An important step is to properly choose the hyperparameters of the RC observer. Figure 5 (A) shows the training, testing, and synchronization
Figure 1: (a) Training phase of reservoir. The reservoir input signal is \(x_{D}(t)\) and it is trained on \(y_{D}(t)\). The error, \(E(t)\), is calculated with the fit signal, \(\hat{y}(t)\), and the training signal. (b) Control configuration. The response system takes as an input, \(E(t)\), where \(E(t)\) is \(\boldsymbol{x}_{R}(t)-\boldsymbol{x}_{D}(t)\). The matrix, \(H\), describes what state variables are used in coupling.
errors, \(\Delta_{\rm tr}\), \(\Delta_{\rm ts}\), and \(\bar{E}\), respectively, as the parameter \(\alpha\) in (2) is varied. We see that as \(\alpha\) grows to \(0.4\), the training and synchronization errors are substantially reduced. However, for \(0.4\leq\alpha\leq 1\), the testing errors attain higher standard deviations, especially for larger values of \(\alpha\). Figure 5 (B) shows the effect of varying the spectral radius \(\rho\) over the training, testing, and synchronization errors, for \(\alpha=0.5\). We see that the performance we achieve is quite robust to variations in \(\rho\), as long as \(\rho\geq 0.1\). We set \(\rho=0.9\) in our simulations.
### Rossler
The dynamical equation of a Rossler system is \(\dot{\mathbf{x}}(t)=\mathbf{F}(\mathbf{x}(t))\) where \(\mathbf{x}=[x,\ y,\ z]^{\top}\) and
\[\mathbf{F}(\mathbf{x}(t))=\begin{bmatrix}-y(t)-z(t)\\ x(t)+ay(t)\\ b+(x(t)-c)z(t)\end{bmatrix} \tag{13}\]
where we set \(a=b=0.2\) and \(c=9\). In what follows, we set the initial conditions of the drive and response system to be randomly chosen points on the chaotic attractor. We examine the effect of the gain, \(\kappa\), on the synchronization error in the scenarios where we couple \(x_{D}\to x_{R}\), \(y_{D}\to y_{R}\), and both simultaneously. We then move on to show the performance of the RC when \(x_{D}\to x_{R}\) and \(\hat{y}_{D}\to y_{R}\) are the couplings.
In Fig. 6 we perform a similar study as that previously shown in Fig. 5 to select the hyperparameters of the RC for the case of the Rossler system.
Figure 7 shows a clear advantage of incorporating the RC into the control loop. In this figure, we set \(\alpha=0.002\) and \(\rho=0.9\). In plot (A) we show the synchronization error versus the coupling strength
Figure 4: Synchronization error of Chen’s drive and response systems from the time series in Fig. 3. The plot shows \(E_{\rm RC}(t)\), the error when the coupling \(x_{D}\to y_{R}\) and \(\hat{y}_{D}\to y_{R}\) is through estimation by the RC, and \(E_{\rm Ideal}(t)\), the error in the ideal case when all the states of the drive system are known, \(x_{D}\to y_{R}\) and \(y_{D}\to y_{R}\).
Figure 5: Chen system. Coupling is in \(x_{D}\to x_{R}\) and \(\hat{y}_{D}\to y_{R}\) with \(\kappa=10\). (A) We plot the training \(\Delta_{\rm tr}\), testing \(\Delta_{\rm ts}\), and synchronization \(E\) errors as a function of the leakage parameter, \(\alpha\), for a fixed value of the spectral radius \(\rho=0.9\). (B) We plot the training, testing, and synchronization errors as a function of the spectral radius, \(\rho\), of the \(A\) matrix, for a fixed value of the leakage parameter \(\alpha=0.5\).
Figure 3: The plots show the time evolution of the \(x\) component of the drive and the response systems when coupled through the RC in \(x_{D}\to y_{R}\) and \(\hat{y}_{D}\to y_{R}\) (top) and when coupled ideally in \(x_{D}\to y_{R}\) and \(y_{D}\to y_{R}\) (bottom). Here, the coupling strength is \(\kappa=3.1\) in both cases.
for the accessible coupling, \(x_{D}\to x_{R}\). In plot (B) we show that \(\hat{y}_{D}\), generated by the RC, is sufficient to drive the synchronization error close to zero. Plot (C) shows that when both couplings are used, a lower value of the coupling strength is needed to achieve synchronization.
### Unstable periodic orbits
Reservoir computing has been previously used to detect unstable periodic orbits (UPOs) of chaotic systems [29]. In this section, we aim to synchronize a response system on the trajectory of a drive system, evolving on a UPO. The time-evolutions of the drive and the response systems are still described by Eq. (1), but we consider the special case that the drive system is on a UPO embedded in the chaotic attractor.
Without loss of generality, we focus on the case of the Rossler system, obeying Eq. (13), with the same parameters used before, \(a=b=0.2\) and \(c=9\). First, we find the trajectory of an unstable periodic orbit of the Rossler system by using the MatCont toolbox [32] for MATLAB. We used the period-1 trajectory as the drive signal, as this is expected to be the most difficult to synchronize [33]. The reservoir is trained on the UPO signal using 100 nodes, \(\alpha=9.7\times 10^{-4}\), \(\rho=0.9\), and input weights \(\mathbf{w}_{\text{in}}\) randomly drawn from a uniform distribution between \(-0.5\) and \(0.5\); the reservoir then produces an estimate of the unavailable \(y\) state from knowledge of the available \(x\) state of the UPO signal.
Figure 8 shows that by using the RC we are successful in achieving synchronization on the period-1 UPO with a coupling strength (\(\kappa=0.3\)) for which the drive-response system could not synchronize through coupling in the \(x\) component alone.
### Noise
In order to compare the results of the RC fairly with the case of ideal knowledge, we consider the situation in which measurement noise may be present in the data acquisition. We proceed under the assumption that all the information obtained from the drive is noise corrupted. This holds true both during the training of the RC and when implementing it in the control configuration. Following Ref. [34], we set the magnitude of noise present in the training and control configurations to be equal. Our noise-corrupted signals are then,
\[\tilde{\mathbf{x}}_{D}(t)=\mathbf{x}_{D}(t)+\epsilon\sqrt{\frac{\Delta t}{T}}\mathbf{ \zeta}(t), \tag{14}\]
\(\tilde{\mathbf{x}}_{D}(t)=[\tilde{x}_{D}(t),\tilde{y}_{D}(t),\tilde{z}_{D}(t)]\), where \(\epsilon\) is the magnitude of noise, \(T\) is the approximate period of oscillation, and \(\mathbf{\zeta}(t)\) is a vector with the same
Figure 6: Rössler system. Coupling is in \(x_{D}\to x_{R}\) and \(\hat{y}_{D}\to y_{R}\) with \(\kappa=0.15\). (A) We plot the training, testing, and synchronization errors as a function of the leakage parameter, \(\alpha\). We set \(\rho=0.6\). (B) We plot the training, testing, and synchronization errors as a function of the spectral radius, \(\rho\), of the \(A\) matrix. We set \(\alpha=0.05\).
dimension of \(\mathbf{x}_{D}(t)\) composed of elements randomly drawn from a standard normal distribution.
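A minimal sketch of the corruption rule Eq. (14) (our function name; the optional generator argument is an assumption of convenience):

```python
import numpy as np

def add_measurement_noise(x_D, eps, dt, T_period, rng=None):
    """Noise-corrupt the drive states following Eq. (14):
    x~_D = x_D + eps*sqrt(dt/T)*zeta, with zeta drawn from N(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    return x_D + eps * np.sqrt(dt / T_period) * rng.standard_normal(np.shape(x_D))
```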
We consider the same case of the Rossler system described in Sec. II.2 and adopt the same trained RC used in Fig. 7 with coupling strength \(\kappa=0.2\). In Fig. 9 we plot \(\tilde{E}\) as we vary the magnitude of the noise \(\epsilon\). In (A) we compare the case of coupling \(\tilde{y}_{D}\to y_{R}\) (noise corrupted) with the case \(\hat{y}_{D}\to y_{R}\) (noise corrupted+RC). In (B) we compare the case of coupling \(\tilde{y}_{D}\to y_{R}\) and \(\tilde{x}_{D}\to x_{R}\) (noise corrupted) with the case \(\hat{y}_{D}\to y_{R}\) and \(\tilde{x}_{D}\to x_{R}\) (noise corrupted+RC).
We see that for all values of \(\epsilon\) in the plotted range, the error is higher in the case in which \(\hat{y}_{D}\) is reconstructed using an RC observer than in the case in which the signal is actually available at the receiver but corrupted with noise. The presence of noise results in a deterioration of the RC performance but overall our proposed strategy based on RC observation appears to be quite robust to the presence of noise, even for large values of \(\epsilon\). This is true for both the coupling schemes shown in Fig. 9(A) and (B).
## III Conclusions
In this paper, we used reservoir computing as an observer within a control loop to estimate the unmeasurable states of a system from its measurable states. We consider two identical chaotic systems: a drive system unidirectionally coupled to a response system. We have successfully applied this approach to reconstruct unavailable states of the drive system at the response system and to isochronally synchronize the response to the drive.
In both the cases of the Chen system and of the Rossler system, we have shown that usage of the reservoir observer allows us to achieve synchronization for lower values of the coupling strength than would be possible otherwise. This is
Figure 8: Time trajectories of the \(x\) components of the drive and the response systems when the drive Rössler system evolves on an unstable periodic orbit. In (A), the two systems are coupled with \(x_{D}\to x_{R}\) with the coupling strength \(\kappa=0.3\). In (B), the two systems are coupled with \(x_{D}\to x_{R}\) and \(\hat{y}_{D}\to y_{R}\) with the same \(\kappa=0.3\). The estimation \(\hat{y}_{D}\) is done through a reservoir computer, as explained in the text.
Figure 9: Rössler system. We plot the synchronization error as a function of the magnitude of noise, \(\epsilon\), as seen in Eq. (14). (A) compares the case of coupling \(\tilde{y}_{D}\to y_{R}\) (noise corrupted) with the case \(\hat{y}_{D}\to y_{R}\) (noise corrupted+RC). (B) compares the case of coupling \(\tilde{y}_{D}\to y_{R}\) and \(\tilde{x}_{D}\to x_{R}\) (noise corrupted) with the case \(\hat{y}_{D}\to y_{R}\) and \(\tilde{x}_{D}\to x_{R}\) (noise corrupted+RC).
particularly relevant to practical situations in which physical constraints may affect our ability to freely choose the strength of the coupling. We showed successful implementation of our proposed scheme to control the state of the response system on an unstable periodic orbit embedded within the chaotic attractor. The simulations also demonstrated that the reservoir computer can estimate the state trajectory well even in the presence of measurement noise.
## Appendix A Numerical integration
Here, we discuss the numerical method used to integrate the continuous-time dynamics of the drive-response system when they are coupled through the discrete-time reservoir computer.
After the training and testing phases are done, a random point from the testing trajectory is selected to be set as the initial condition for the drive system. We denote this by \(\mathbf{x}_{0}=\mathbf{x}(t_{s})\) where \(t_{s}\) is the randomly chosen time-point from the testing signal. The corresponding \(\mathbf{r}(t_{s})\) from the testing phase is set as the initial condition for Eq. (2). A different random point on the training signal is chosen as the initial condition for the response system.
```
1:\(\tilde{\mathbf{F}}(t,\mathbf{y},\hat{y}_{D}),\mathbf{y}(t_{0}),\mathbf{r}(t_{0}),t,h,K,\alpha,A, \mathbf{w}_{\text{in}},\mathbf{w}_{\text{out}}\)
2:\(\mathbf{y},\mathbf{r}\)
3:\(k\gets K\)
4:for\(j=0,\dots,n-1\)do
5:\(\mathbf{y}_{j}\leftarrow\mathbf{y}(t_{j})\)
6:\(\hat{y}_{D}\leftarrow\mathbf{r}(t_{j})^{\top}\mathbf{w}_{\text{out}}\)
7:\(k_{1}\leftarrow\tilde{\mathbf{F}}(t_{j},\mathbf{y}_{j},\hat{y}_{D})\)
8:\(k_{2}\leftarrow\tilde{\mathbf{F}}(t_{j}+\frac{h}{2},\mathbf{y}_{j}+\frac{h}{2}k_{1},\hat {y}_{D})\)
9:\(k_{3}\leftarrow\tilde{\mathbf{F}}(t_{j}+\frac{h}{2},\mathbf{y}_{j}+\frac{h}{2}k_{2}, \hat{y}_{D})\)
10:\(k_{4}\leftarrow\tilde{\mathbf{F}}(t_{j}+h,\mathbf{y}_{j}+hk_{3},\hat{y}_{D})\)
11:\(\mathbf{y}(t_{j+1})\leftarrow\mathbf{y}_{j}+\frac{h}{6}(k_{1}+2k_{2}+2k_{3}+k_{4})\)
12:if\(k=K\)then
13:\(\mathbf{r}(t_{j+1})\leftarrow(1-\alpha)\mathbf{r}(t_{j})+\alpha\tanh(A\mathbf{r}(t_{j})+x_{ D}(t_{j})\mathbf{w}_{\text{in}})\)
14:\(k\gets 1\)
15:else
16:\(\mathbf{r}(t_{j+1})\leftarrow\mathbf{r}(t_{j})\)
17:\(k\gets k+1\)
18:endif
19:endfor
20:return\(\mathbf{y},\mathbf{r}\)
```
**Algorithm 1** The modified 4th order Runge-Kutta
The integration of Eq. (1) is done using the 4th-order fixed-step Runge-Kutta method. We assume the \(x_{D}(t)\) component of the drive signal is known, and we try to estimate \(\hat{y}_{D}(t)\). We use the compact notation for Eq. (1) as \(\dot{\mathbf{y}}(t)=\tilde{\mathbf{F}}(t,\mathbf{y}(t),\hat{y}_{D}(t))\). The integration interval is \(0\leq t\leq T_{f}\) with time points \(t_{j}\), \(j=0,\dots,n\) sampled at a fixed sampling time \(h\). For simplicity, we assume \(\Delta t\), the time step of the RC in Eq. (2), to be an integer multiple of \(h\), i.e., \(K=\Delta t/h\) where \(K\geq 1\) is an integer. For large enough \(K\), the integration procedure using the Runge-Kutta method has an acceptable numerical error. For our simulations in this paper, we set \(K=1\), since \(h=\Delta t=0.001\) proved to provide good accuracy for the integration process. See Algorithm 1 for the pseudo-code of the modified 4th-order Runge-Kutta for the case in which \(x_{D}(t)\) of the drive system is known and the goal is to estimate \(\hat{y}_{D}(t)\) of the drive system.
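A Python transcription of Algorithm 1 might look as follows; the signatures are ours, `x_D` is assumed to be pre-sampled at the RK time points, and the readout assumes `w_out` carries the trailing bias entry of Eq. (3).

```python
import numpy as np

def modified_rk4(F, y0, r0, h, n, K, alpha, A, w_in, w_out, x_D):
    """RK4 integration of the response while the discrete-time reservoir,
    driven by x_D, supplies yD_hat; the reservoir is updated every K steps.
    F(t, y, yD_hat) is the coupled vector field of Eq. (1)."""
    y, r, k = np.array(y0, float), np.array(r0, float), K
    traj = [y.copy()]
    for j in range(n):
        t = j * h
        yD_hat = r @ w_out[:-1] + w_out[-1]      # readout with bias term
        k1 = F(t, y, yD_hat)
        k2 = F(t + h / 2, y + h / 2 * k1, yD_hat)
        k3 = F(t + h / 2, y + h / 2 * k2, yD_hat)
        k4 = F(t + h, y + h * k3, yD_hat)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        if k == K:                               # reservoir step, Eq. (2)
            r = (1 - alpha) * r + alpha * np.tanh(A @ r + x_D[j] * w_in)
            k = 1
        else:
            k += 1
        traj.append(y.copy())
    return np.array(traj), r
```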
## Acknowledgement
This work was partly funded by NIH Grant No. 1R21EB028489-01A1 and by the Naval Research Lab's Basic Research Program.
## Data availability
The data that support the findings of this study are available within the article.
|
2302.00698 | Temperature gradient and asymmetric steady state correlations in
dissipatively coupled cascaded optomechanical systems | The interaction between a light mode and a mechanical oscillator via
radiation pressure in optomechanical systems is an excellent platform for a
multitude of applications in quantum technologies. In this work we study the
dynamics of a pair of optomechanical systems interacting dissipatively with a
wave guide in a unidirectional way. Focusing on the regime where the cavity
modes can be adiabatically eliminated we derive an effective coupling between
the two mechanical modes and we explore both classical and quantum correlations
established between the modes both in the transient and in the stationary
regime, highlighting their asymmetrical nature due to the unidirectional
coupling, and we find that a constant amount of steady correlations can exist
at long times. Furthermore we show that this unidirectional coupling
establishes a temperature gradient between the mirrors, depending on the
frequencies' detuning. We additionally analyze the power spectrum of the output
guide field and we show how, thanks to the chiral coupling, from such spectrum
it is possible to reconstruct the spectra of each single mirror. | Claudio Pellitteri, G. Massimo Palma, Salvatore Lorenzo | 2023-02-01T19:00:26Z | http://arxiv.org/abs/2302.00698v2 | # Cascaded Optomechanical systems
###### Abstract
We study the dynamics of a pair of optomechanical systems interacting dissipatively with a wave guide in a unidirectional way. We investigate the behaviour of both classical and quantum correlations established between the two mechanical modes, both in the transient and in the stationary regime. We find that a constant amount of steady correlations can exist at long times. We furthermore analyze the power spectrum of the output guide field and we show how from such spectrum it is possible to reconstruct the spectra of each single mirror. Finally we show that, thanks to the unidirectional coupling, a temperature gradient between the mirrors, depending on the frequency detuning, is established.
## I Introduction
Optomechanical systems, with light modes interacting with massive mechanical oscillators, have attracted considerable interest for their possible applications in quantum technologies [1; 2]. Depending on the configuration of the system, the optomechanical interaction can be used to cool the mechanical mode near to its ground state [3; 4; 5; 6; 7; 8] (a technique applied also to levitating nanospheres [9]), to generate squeezing [10; 11; 12] or to create entanglement between optical and mechanical modes [13; 14; 15]. These configurations can be combined in an appropriate way in order to generate and measure purely quantum states in the mechanical oscillators (e.g. generation of single-phonon states [16]).
A natural extension of the simple single mode - single mirror coupled oscillators consists of several coupled modes. We can distinguish two major and distinct setups. The first one is called a _Multimode optomechanical system_ [17; 18; 19; 20; 21], and consists of several mechanical oscillators interacting with the same cavity. In an _Optomechanical array_, instead, each mechanical oscillator interacts locally with its own cavity mode, but an effective coupling between neighbours is implemented by photon and/or phonon tunneling [22; 23; 24].
In our work we consider a slightly different scheme in which the cavity modes are coupled to a unidirectional waveguide [25], leading to a cascaded configuration [26; 27; 28; 29; 30]. This induces a non-reciprocal interaction, first between the cavities and indirectly between the mechanical oscillators [31]. This kind of configuration is similar to the one studied in [32], with the difference that no purely unidirectional coupling is treated there, and in [33], where the author studied the synchronization between the subsystems driven by a blue-detuned laser, which leads to a self-sustained oscillatory dynamics.
This work is organized as follows: In section II we present our model, we introduce its Hamiltonian and by means of Langevin equations we characterise the evolution of the system in terms of mean values and fluctuations, where the latter are analysed in terms of a Lyapunov equation for the covariance matrix. In section III, we derive an equation of motion for the effective dynamics of the two mechanical oscillators by performing an adiabatic elimination of the cavity modes. In section IV we study the stability regions of the parameter space, exploring when the system can exhibit multistability. In section V the correlations between the two mechanical modes are investigated in terms of mutual information and quantum discord, showing the possibility to establish stationary correlations. In section VI we analyze the power spectra of the two mirrors and of the output field mode and show how the two are related. In section VII it is shown how, in the cooling regime, the two mechanical modes thermalize at different temperatures depending on the mirrors' frequency mismatch. Finally in section VIII we draw our conclusions.
## II The model
Our system consists of two optomechanical mirrors indirectly coupled through a unidirectional waveguide. Such mediated interaction (see fig. 1) leads to a cascaded scenario in which the first system can drive the following one without back action. Each subsystem consists of a mechanical harmonic oscillator with mass \(m\) and frequency \(\Omega\) coupled to a cavity field by means of its radiation pressure. If \(\Omega\) is much smaller than \(c/2L\) (\(L\) stands for the cavity length) we can consider only one cavity mode [34; 35] and write the following Hamiltonian for both subsystems:
\[\hat{H}_{S} =\sum_{j=1}^{2}\hat{H}_{j} =\sum_{j=1}^{2}\omega_{c}\hat{a}_{j}^{\dagger}\hat{a}_{j}+\frac {\Omega_{j}}{2}(\hat{q}_{j}^{2}+\hat{p}_{j}^{2})-g_{j}\hat{a}_{j}^{\dagger} \hat{a}_{j}\hat{q}_{j}\] \[\qquad\qquad+iE_{j}(\hat{a}_{j}^{\dagger}e^{-i\omega_{L}t}-\hat{ a}_{j}e^{i\omega_{L}t}) \tag{1}\]
where \(\hat{a}_{j}\) is the cavity mode annihilation operator with optical frequency \(\omega_{c}\) of the \(j\)-th subsystem and \(\hat{q}_{j}\) (\(\hat{p}_{j}\)) stands for dimensionless position (momentum) operators of mechanical mode. The term proportional to
\(g_{j}\)=\(\omega_{c}/L\sqrt{\hbar/(m\Omega_{j})}\) describes the optomechanical coupling, while the last term is the coherent input field with frequency \(\omega_{L}\). The quantities \(E_{j}\) are related to the input powers \(P_{j}\) by \(E_{j}=\sqrt{2\kappa P_{j}/(\hbar\omega_{L})}\). In a rotating frame at the laser frequency \(\omega_{L}\), we define \(\Delta=\omega_{c}-\omega_{L}\) and eq.1 becomes
\[H_{S}=\sum_{j}\left(\Delta-g_{j}\hat{q}_{j}\right)\hat{a}_{j}^{\dagger}\hat{a}_{j}+\frac{\Omega_{j}}{2}(\hat{q}_{j}^{2}+\hat{p}_{j}^{2})+iE_{j}(\hat{a}_{j}^{\dagger}-\hat{a}_{j}) \tag{2}\]
As we are interested in the scenario in which we drive the dynamics of the second optomechanical system by the output field of the first cavity, in the following we always assume that the external laser pumps only the first system (i.e. \(E_{2}=0\)). Our system is intrinsically open; therefore we assume each mechanical mode to be coupled to its own environment at finite temperature [36] and that the cavities undergo photon leakage. In particular, we assume such optical dissipation to take place via a unidirectional wave guide; in this way, the two optical modes are coupled together in a cascaded fashion by a guide-mediated interaction [31].
Following the input-output prescription [37], we introduce the radiation vacuum input noise operator \(\hat{a}^{\text{in}}\)[38; 39] and the Brownian noise operator \(\hat{\xi}_{j}\)[36] (see section VIII for details), with autocorrelation functions:
\[\langle\hat{a}^{\text{in}}(t)\hat{a}^{\text{in}\dagger}(t^{\prime })\rangle =\delta(t-t^{\prime})\] \[\langle\{\hat{\xi}_{j}(t),\xi_{j}(t^{\prime})\}\rangle =2\gamma_{j}\coth\left(\frac{\hbar\Omega_{j}}{2k_{B}T}\right) \delta(t-t^{\prime}) \tag{3}\]
Although the cavity and the resonator are at the same temperature, the cavity frequency is typically orders of magnitude larger than the mechanical frequency; therefore the average number of photons in the optical environment is negligible. Henceforth, in all the results shown we consider the following set of parameters: \(m\)=150\(ng\), \(\Omega_{1}/(2\pi)\)=1 MHz, \(\gamma/(2\pi)\)=1 Hz, \(T\)=300 K, \(L\)=25mm, \(\kappa\)=1.34 MHz, \(\lambda\)=1064 nm, and \(P_{1}\)=2 mW. These values are consistent with state-of-the-art experiments, and time is expressed in units of \(\tau=2\pi/\Omega_{1}\). In view of the above one can derive the following quantum Langevin equations for the field operators \(\hat{a}_{j}\)
\[\frac{d\hat{a}_{1}}{dt} =-i[\hat{a}_{1},H_{S}]-\frac{\kappa}{2}\hat{a}_{1}-\sqrt{\kappa}\hat{a}^{\text{in}}\] \[\frac{d\hat{a}_{2}}{dt} =-i[\hat{a}_{2},H_{S}]-\frac{\kappa}{2}\hat{a}_{2}-\kappa\hat{a}_{1}-\sqrt{\kappa}\hat{a}^{\text{in}} \tag{4}\]
and for the mirror operators \(\hat{q}_{j},\hat{p}_{j}\)
\[\frac{d\hat{q}_{j}}{dt} =-i[\hat{q}_{j},H_{S}]\] \[\frac{d\hat{p}_{j}}{dt} =-i[\hat{p}_{j},H_{S}]-\gamma_{j}\hat{p}_{j}-\sqrt{\gamma_{j}}\, \hat{\xi}_{j} \tag{5}\]
_Mean field equations and fluctuations dynamics -_ The joint field-mirror dynamics ensuing from eq.4 and eq.5 is nonlinear. A standard approach in the study of the quantum features of optomechanical systems is to first look for the mean-field solution of the field and mechanical operators and then address the linearized dynamics of the quantum fluctuations around the average values. Following such approach we write the operators as the sum of their average value (a \(c\)-number) and a - small - quantum fluctuation:
\[\hat{o}=\langle\hat{o}\rangle+[\hat{o}-\langle\hat{o}\rangle]=O+\delta\hat{o} \tag{6}\]
This leads to the following set of nonlinear differential equations for the mean values
\[\frac{dQ_{j}(t)}{dt} =\Omega_{j}P_{j}(t) \tag{7}\] \[\frac{dP_{j}(t)}{dt} =-\Omega_{j}Q_{j}(t)-\gamma_{j}P_{j}(t)+|G_{j}(t)|^{2}/g_{j}\] \[\frac{dA_{1}(t)}{dt} =-\frac{\kappa}{2}A_{1}(t)-i\Delta_{1}(t)A_{1}(t)+E_{1}\] \[\frac{dA_{2}(t)}{dt} =-\frac{\kappa}{2}A_{2}(t)-i\Delta_{2}(t)A_{2}(t)-\kappa A_{1}(t)\]
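As an illustration, Eq. (7) can be integrated with any standard ODE solver once the complex cavity amplitudes are split into real and imaginary parts; a minimal sketch with our own state ordering and function name (using \(|G_{j}|^{2}/g_{j}=g_{j}|A_{j}|^{2}\)):

```python
import numpy as np

def mean_field(t, u, Omega, g, gamma, kappa, Delta, E1):
    """Mean-field equations, Eq. (7).
    State ordering (our choice): u = [Q1, P1, Q2, P2, ReA1, ImA1, ReA2, ImA2];
    Omega, g, gamma are length-2 arrays."""
    Q, P = u[0:4:2], u[1:4:2]
    A = u[4:8:2] + 1j * u[5:8:2]
    Dj = Delta - g * Q                                 # Delta_j(t) = Delta - g_j Q_j(t)
    dQ = Omega * P
    dP = -Omega * Q - gamma * P + g * np.abs(A) ** 2   # |G_j|^2 / g_j = g_j |A_j|^2
    dA1 = -(kappa / 2 + 1j * Dj[0]) * A[0] + E1
    dA2 = -(kappa / 2 + 1j * Dj[1]) * A[1] - kappa * A[0]
    out = np.empty(8)
    out[0:4:2], out[1:4:2] = dQ, dP
    out[4:8:2] = [dA1.real, dA2.real]
    out[5:8:2] = [dA1.imag, dA2.imag]
    return out

# e.g.: from scipy.integrate import solve_ivp
# sol = solve_ivp(mean_field, (0, t_final), u0,
#                 args=(Omega, g, gamma, kappa, Delta, E1))
```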
where we defined \(\Delta_{j}(t)\)=\(\Delta\)\(-g_{j}Q_{j}(t)\) and \(G_{j}(t)\)=\(g_{j}A_{j}(t)\). From eq.4, eq.5, eq.6 and eq.7, by keeping only terms \(\mathcal{O}(\delta\hat{o})\) one obtains the following linearized set of equations: with
\[\frac{d\delta\hat{q}_{j}}{dt} =\Omega_{j}\delta\hat{p}_{j} \tag{8}\] \[\frac{d\delta\hat{p}_{j}}{dt} =-\Omega_{j}\delta\hat{q}_{j}+(G_{j}^{*}\delta\hat{a}_{j}+G_{j}\delta\hat{a}_{j}^{\dagger})-\gamma\delta\hat{p}_{j}-\xi_{j}\] \[\frac{d\delta\hat{a}_{1}}{dt} =iG_{1}\delta\hat{q}_{1}-i\Delta_{1}\delta\hat{a}_{1}-\frac{\kappa}{2}\delta\hat{a}_{1}-\sqrt{\kappa}\hat{a}^{\text{in}}\] \[\frac{d\delta\hat{a}_{2}}{dt} =iG_{2}\delta\hat{q}_{2}-i\Delta_{2}\delta\hat{a}_{2}-\frac{\kappa}{2}\delta\hat{a}_{2}-\kappa\delta\hat{a}_{1}-\sqrt{\kappa}\hat{a}^{\text{in}}\]
_Covariance matrix and Lyapunov equation -_ Note that the solutions of eq.7 appear as coefficients in eq.8. As the set of quantum Langevin equations eq.8 is linear, the quantum noise is Gaussian; therefore it is convenient to characterize the quantum fluctuations in terms of the
Figure 1: Sketch of the model: two optomechanical systems, each composed of one mechanical mode and one optical mode interacting through the radiation-pressure force caused by the external laser power, are coupled to a unidirectional wave guide
covariance matrix \(\mathbf{C}\) whose elements are defined by \(C_{ij}=1/2\langle\hat{u}_{i}\hat{u}_{j}+\hat{u}_{j}\hat{u}_{i}\rangle\) with \(\vec{u}=\otimes_{j=1}^{2}\{\hat{q}_{j},\hat{p}_{j},\hat{x}_{j},\hat{y}_{j}\}\). Here we have defined the cavity field quadratures \(\hat{x}=1/\sqrt{2}(\hat{a}+\hat{a}^{\dagger})\) and \(\hat{y}=-i/\sqrt{2}(\hat{a}-\hat{a}^{\dagger})\). From eq. (8) it follows that the \(\mathbf{C}\) matrix obeys the following Lyapunov equation [5; 40],
\[\frac{d\mathbf{C}(t)}{dt}=\mathbf{S}(t)\mathbf{C}(t)+\mathbf{C}(t)\mathbf{S}(t)^{\top}+\mathbf{N} \tag{9}\]
in which the drift (\(\mathbf{S}\)) and diffusion (\(\mathbf{N}\)) matrices reflect the unidirectionality of the system
\[\mathbf{S}=\begin{pmatrix}\mathbf{S}_{1}&\mathbf{0}\\ \mathbf{S}_{R}&\mathbf{S}_{2}\end{pmatrix}\qquad\text{and}\qquad\mathbf{N}= \begin{pmatrix}\mathbf{N}_{1}&\mathbf{N}_{12}\\ \mathbf{N}_{12}&\mathbf{N}_{2}\end{pmatrix} \tag{10}\]
with
\[\mathbf{S}_{j}{=}\begin{pmatrix}0&\omega_{j}&0&0\\ -\omega_{j}&-\gamma&\Re G_{j}&\Im G_{j}\\ -\Im G_{j}&0&-\kappa/2&\Delta_{j}\\ \Re G_{j}&0&-\Delta_{j}&-\kappa/2\end{pmatrix}\ \mathbf{S}_{R}{=}\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&-\kappa&0\\ 0&0&0&-\kappa\end{pmatrix}\]
and
\[\mathbf{N}_{j}{=}\begin{pmatrix}0&0&0&0\\ 0&\gamma(2\bar{n}_{j}+1)&0&0\\ 0&0&\kappa&0\\ 0&0&0&\kappa\end{pmatrix}\ \mathbf{N}_{12}{=}\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&\kappa&0\\ 0&0&0&\kappa\end{pmatrix}\]
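In the stationary regime Eq. (9) reduces to the algebraic Lyapunov equation \(\mathbf{S}\mathbf{C}+\mathbf{C}\mathbf{S}^{\top}+\mathbf{N}=0\), which standard linear-algebra routines solve directly; a minimal Python sketch, assuming a time-independent drift matrix:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def steady_covariance(S, N):
    """Stationary solution of Eq. (9): S C + C S^T + N = 0.
    solve_continuous_lyapunov(a, q) solves a X + X a^H = q, so we pass
    q = -N (S is real here, hence a^H = S^T)."""
    return solve_continuous_lyapunov(S, -N)
```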
## III Effective mirrors dynamics
In the weak coupling regime \(G_{j}\lesssim\kappa\) we can focus on the evolution of the slowly varying mechanical operators \(\bar{b}_{j}\) and \(\bar{b}_{j}^{\dagger}\), defined as \(\delta\hat{q}_{j}=(\bar{b}_{j}e^{-i\Omega_{j}t}+\bar{b}_{j}^{\dagger}e^{i\Omega_{j}t})/\sqrt{2}\) and \(\delta\hat{p}_{j}=i(\bar{b}_{j}^{\dagger}e^{i\Omega_{j}t}-\bar{b}_{j}e^{-i\Omega_{j}t})/\sqrt{2}\), with respect to which the cavity field evolves on a much shorter timescale [41]. From eq. (8), discarding counter-rotating terms, one obtains
\[\frac{d\bar{b}_{j}}{dt}=\frac{ie^{i\Omega_{j}t}}{\sqrt{2}}(G_{j}^{*}\delta\hat{a}_{j}+G_{j}\delta\hat{a}_{j}^{\dagger})-\frac{\gamma}{2}\bar{b}_{j}-\frac{ie^{i\Omega_{j}t}}{\sqrt{2}}\xi_{j} \tag{11}\]
The expressions for the cavity field fluctuations can be found by solving the respective equations in the frequency domain, using \(\hat{O}(t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}d\omega\,\hat{O}(\omega)e^{-i\omega t}\). Therefore we rewrite the last two of eq. (8) as
\[\delta\hat{a}_{1}(\omega)=\chi_{a_{1}}(\omega)\left(\hat{a}^{\text {in}}(\omega)\sqrt{\kappa}+iG_{1}\delta\hat{q}_{1}(\omega)\right) \tag{12}\] \[\delta\hat{a}_{2}(\omega)=\chi_{a_{2}}(\omega)\left(\hat{a}^{\text {in}}(\omega)\sqrt{\kappa}+iG_{2}\delta\hat{q}_{2}(\omega)-\kappa\delta\hat{ a}_{1}(\omega)\right)\]
where we introduced the natural susceptibility of the optical modes \(\chi_{a_{j}}\)
\[\chi_{a_{j}}(\omega)=\frac{1}{\kappa/2-i\left(\omega-\Delta_{j}\right)}. \tag{13}\]
As we assume the system to evolve at room temperature, we neglect the optical input noise, which is small compared to the mechanical thermal noise; transforming back to the time domain we obtain
\[\delta\hat{a}_{1}(t)=\frac{iG_{1}}{\sqrt{2\pi}}\int d\omega\chi_{a _{1}}(\omega)\delta\hat{q}_{1}(\omega)e^{-i\omega t} \tag{14}\] \[\delta\hat{a}_{2}(t)=\frac{iG_{2}}{\sqrt{2\pi}}\int d\omega\chi_{a _{2}}(\omega)\left(\delta\hat{q}_{2}(\omega)-\kappa\delta\hat{a}_{1}(\omega) \right)e^{-i\omega t}\]
Thanks to the properties of convolutions for Fourier transforms, assuming that \(\bar{b}_{j}\) and \(\bar{b}_{j}^{\dagger}\) vary slowly in time, eq. (14) become
\[\delta\hat{a}_{1}(t)=\frac{iG_{1}}{\sqrt{2}}\left(\bar{b}_{1}\chi_{a_{1}}(\Omega_{1})e^{-i\Omega_{1}t}+\bar{b}_{1}^{\dagger}\chi_{a_{1}}^{*}(-\Omega_{1})e^{i\Omega_{1}t}\right)\] \[\delta\hat{a}_{2}(t)=\frac{iG_{2}}{\sqrt{2}}\left(\bar{b}_{2}\chi_{a_{2}}(\Omega_{2})e^{-i\Omega_{2}t}+\bar{b}_{2}^{\dagger}\chi_{a_{2}}^{*}(-\Omega_{2})e^{i\Omega_{2}t}\right)-\] \[\frac{i\kappa G_{1}}{\sqrt{2}}\left(\bar{b}_{1}\chi_{a_{1}}(\Omega_{1})\chi_{a_{2}}(\Omega_{1})e^{-i\Omega_{1}t}+\bar{b}_{1}^{\dagger}\chi_{a_{1}}^{*}(-\Omega_{1})\chi_{a_{2}}^{*}(-\Omega_{1})e^{i\Omega_{1}t}\right)\]
Substituting these results in eq. (11) and discarding rapidly rotating terms, we finally obtain the coupled equations of motion for the mirror operators:
\[\frac{d\bar{b}_{1}}{dt}=-i\Delta_{1}^{\text{eff}}\bar{b}_{1}-(\Gamma_{1}^{\text{eff}}+\frac{\gamma}{2})\bar{b}_{1}-\frac{ie^{i\Omega_{1}t}}{\sqrt{2}}\xi_{1}\] \[\frac{d\bar{b}_{2}}{dt}=-i\Delta_{2}^{\text{eff}}\bar{b}_{2}-(\Gamma_{2}^{\text{eff}}+\frac{\gamma}{2})\bar{b}_{2}-\Lambda\bar{b}_{1}-\frac{ie^{i\Omega_{2}t}}{\sqrt{2}}\xi_{2}\]
in which \(\Gamma_{j}^{\text{eff}}\) and \(\Delta_{j}^{\text{eff}}\) are respectively the real and imaginary parts of \(|G_{j}|^{2}\left(\chi_{a_{j}}(\Omega_{j})-\chi_{a_{j}}^{*}(-\Omega_{j})\right)/2\), and
\[\Lambda{=}\frac{\kappa}{2}\left(G_{2}G_{1}^{*}\chi_{a_{1}}^{*}(-\Omega_{1})\chi_{a _{2}}^{*}(-\Omega_{1})-G_{2}^{*}G_{1}\chi_{a_{1}}(\Omega_{1})\chi_{a_{2}}(\Omega_{1 })\right).\]
These equations can be recast in a covariance matrix equation form analogous to eq. (9) with \(\vec{u}=\otimes_{j=1}^{2}\{\bar{b}_{j},\bar{b}_{j}^{\dagger}\}\). In this case, the matrices \(\mathbf{S}\) and \(\mathbf{N}\) (cfr.eq. (10)) turn out to be
\[\mathbf{S}_{j}{=}\begin{pmatrix}-i\Delta_{j}^{\text{eff}}-(\Gamma_{j}^{ \text{eff}}+\frac{\gamma}{2})&0\\ 0&i\Delta_{j}^{\text{eff}}-(\Gamma_{j}^{\text{eff}}+\frac{\gamma}{2})\end{pmatrix}; \tag{15}\] \[\mathbf{S}_{R}{=}\begin{pmatrix}-\Lambda&0\\ 0&-\Lambda^{*}\end{pmatrix}\]
and
\[\mathbf{N}_{j}{=}\begin{pmatrix}\gamma(2\bar{n}_{j}+1)&0\\ 0&\gamma(2\bar{n}_{j}+1)\end{pmatrix}\ \mathbf{N}_{12}{=}\begin{pmatrix}0&0\\ 0&0\end{pmatrix}\]
Notice that the noise matrix \(\mathbf{N}\) is diagonal due to the fact that we are discarding rapidly rotating terms.
## IV Self-induced oscillations and multistability
As the optomechanical coupling gets stronger and the damping gets weaker, nonlinearities cannot be discarded anymore. The system shows instabilities and the mirror starts to oscillate in a regime of so-called self-sustained oscillations [42; 43]. In this regime the mean position of the mirrors can be written as \(Q_{j}(t)=\bar{Q}_{j}+\alpha_{j}\cos(\Omega_{j}t)\). Putting this into eq. (7), the exact solutions for the cavity mode amplitudes \(A_{j}\), in the long time limit, can be written as
\[A_{1}(t)= \exp\left[ig_{1}\alpha_{1}\frac{\sin(\Omega_{1}t)}{\Omega_{1}} \right]\sum_{n}A_{1}^{n}e^{i\Omega_{1}nt} \tag{16}\] \[A_{2}(t)= \exp\left[ig_{2}\alpha_{2}\frac{\sin(\Omega_{2}t)}{\Omega_{2}} \right]\sum_{n,m,l}A_{2}^{nml}e^{i(\Omega_{1}(n+l)+\Omega_{2}m)t}\]
with
\[A_{1}^{n}=J_{n}\left(\frac{-g_{1}\alpha_{1}}{\Omega_{1}}\right) \chi_{\alpha_{1}}(-\Omega_{1}n)E_{1} \tag{17}\] \[A_{2}^{nml}=-\kappa\;J_{n}\left(-\frac{g_{1}\alpha_{1}}{\Omega_ {1}}\right)\;J_{m}\left(-\frac{g_{2}\alpha_{2}}{\Omega_{2}}\right)\times\] (18) \[\times\chi_{\alpha_{2}}(-\Omega_{1}(n+l)-\Omega_{2}m)\;J_{l} \left(\frac{g_{1}\alpha_{1}}{\Omega_{1}}\right)E_{1}\]
where \(J_{n}(x)\) is the Bessel function of the first kind and \(\chi_{a_{j}}\) are the susceptibilities defined in eq. (13). The stable states of the system are those for which the total time-averaged force vanishes and the power fed in by the radiation pressure \(P_{\text{rad},j}=g_{j}\langle|A_{j}|^{2}\dot{Q}_{j}\rangle\) equals the power dissipated \(P_{\text{fric},j}=\gamma\langle\dot{Q}_{j}^{2}\rangle\). Plotting the ratio \(P_{\text{rad}}/P_{\text{fric}}\) for the two subsystems as a function of the oscillation amplitude \(\alpha_{j}\) and detuning \(\Delta\), we find the diagrams that show the values of \(\alpha_{j}\) and \(\Delta\) corresponding to a stable state.
_Multistability_ - A characteristic feature of optomechanical systems is that, in the regime in which \(\alpha_{j}=0\), they exhibit multistability. A given intensity of the light pumped in the cavity can lead to different steady states of both the cavity photon number and the mechanical position [44; 2]. From eq. (7), taking the stationary limit, we can find the equations for the average number of photons in the two cavities \(N_{j}\), i.e.
\[\frac{g_{1}^{4}}{\Omega_{1}^{2}}N_{1}^{3}-\frac{2\Delta g_{1}^{2}}{\Omega_{1}}N_{1}^{2}+\left(\Delta^{2}+\frac{\kappa^{2}}{2}\right)N_{1}-E_{1}^{2}=0\] \[\frac{g_{2}^{4}}{\Omega_{2}^{2}}N_{2}^{3}-\frac{2\Delta g_{2}^{2}}{\Omega_{2}}N_{2}^{2}+\left(\Delta^{2}+\frac{\kappa^{2}}{2}\right)N_{2}-\kappa^{2}N_{1}=0 \tag{19}\]
and once these are found, we can obtain the average cantilever positions as
\[Q_{j}=\frac{g_{j}}{\Omega_{j}}N_{j} \tag{20}\]
We note that the first of eq. (19) has three roots, but, as shown also in [44], only two of these solutions are stable solutions, specifically the lower and the higher ones while the middle one is unstable and can't be observed experimentally. Regarding the equation for the second cavity a richer behaviour is obtained, as shown in fig. 3.
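Numerically, the multistable branches follow from the roots of the two cubics in Eq. (19); a minimal Python sketch (our function name, with `g` and `Omega` as length-2 sequences):

```python
import numpy as np

def photon_numbers(g, Omega, Delta, kappa, E1):
    """Real, non-negative roots of the cubics in Eq. (19) for N1, then N2;
    the positions follow from Eq. (20) as Q_j = (g_j/Omega_j) N_j."""
    c1 = [g[0]**4 / Omega[0]**2, -2 * Delta * g[0]**2 / Omega[0],
          Delta**2 + kappa**2 / 2, -E1**2]
    N1 = [r.real for r in np.roots(c1) if abs(r.imag) < 1e-9 and r.real >= 0]
    sols = []
    for n1 in N1:
        c2 = [g[1]**4 / Omega[1]**2, -2 * Delta * g[1]**2 / Omega[1],
              Delta**2 + kappa**2 / 2, -kappa**2 * n1]
        N2 = [r.real for r in np.roots(c2) if abs(r.imag) < 1e-9 and r.real >= 0]
        sols.append((n1, N2))
    return sols
```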
## V Mutual information and quantum discord
Once eq. (9) is solved, we can analyse and conveniently characterise the mirrors' correlations - both in the transient and in the stationary regime - by means of the mutual information, which can be evaluated from the covariance matrix, as shown in [45], in terms of its symplectic invariants and symplectic eigenvalues (see Appendix A).
As can be seen in fig. 4, the time dependence of the mutual information shows that the two mirrors are initially uncorrelated, then they are correlated for a short time and finally, after being uncorrelated again, reach steady state correlations.
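A minimal sketch of how the mutual information can be computed from the reduced \(4\times 4\) covariance matrix of the two mechanical modes (rows/columns \((q_{1},p_{1},q_{2},p_{2})\) of the full \(8\times 8\) matrix), in the convention used here where the vacuum covariance matrix is \(\tfrac{1}{2}\mathbb{1}\); the function names are ours and the result is in nats:

```python
import numpy as np

def _f(nu):
    """Entropy of one symplectic eigenvalue (vacuum variance 1/2 convention)."""
    if nu <= 0.5:
        return 0.0
    return (nu + 0.5) * np.log(nu + 0.5) - (nu - 0.5) * np.log(nu - 0.5)

def mutual_information(C):
    """Gaussian mutual information of a two-mode covariance matrix C:
    I = f(sqrt(det A)) + f(sqrt(det B)) - f(nu_minus) - f(nu_plus),
    with A, B the local blocks and nu_pm the symplectic eigenvalues."""
    A, B, Cc = C[:2, :2], C[2:, 2:], C[:2, 2:]
    Delta = np.linalg.det(A) + np.linalg.det(B) + 2 * np.linalg.det(Cc)
    detC = np.linalg.det(C)
    rad = np.sqrt(max(Delta**2 - 4 * detC, 0.0))
    nu_p = np.sqrt((Delta + rad) / 2)
    nu_m = np.sqrt(max((Delta - rad) / 2, 0.0))
    return (_f(np.sqrt(np.linalg.det(A))) + _f(np.sqrt(np.linalg.det(B)))
            - _f(nu_m) - _f(nu_p))
```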
The amount of quantumness of such correlations can be characterized in terms of quantum Discord [46].
Figure 2: (Left) Stability Graph for the first mechanical oscillator. It shows the ratio between the power due to radiation pressure and the power dissipated as a function of the amplitude of oscillation and the detuning between the pump and the cavity. (Right) Stability Graph for the second mechanical oscillator, it shows the ratio between the power due to radiation pressure and the power dissipated as a function of the amplitude of oscillation and the detuning between the pump and the cavity
This different type of quantum correlations can be nonzero even in the case of separable states which implies that some bipartite quantum states can show correlations that are incompatible with classical physics. For our system we can adopt the Gaussian quantum Discord [47; 48]. Quantum Gaussian Discord is defined as the difference between mutual information and classical correlations. Classical correlations are defined as the maximum amount of information that one can gain on one subsystem by locally measuring the other subsystem [45] and so, by this definition, quantum Discord is not symmetric with respect to the interchange of the two subsystems.
As shown in fig. 4, performing a measurement on the second mirror one can recover some information on the first one, but the converse is not true. That is expected due to the unidirectionality of the coupling.
## VI The steady state
As shown in the previous sections, the linearized equations for the fluctuations eq. (8) can be solved in the frequency domain. The correlation functions eq. (3) become
\[\langle\hat{a}^{\rm in}(\omega)\hat{a}^{\rm in\dagger}(\Omega)\rangle=\delta(\omega+\Omega)\] \[\langle\{\hat{\xi}_{j}(\omega),\xi_{j}(\Omega)\}\rangle=2\gamma_{j}\coth\left(\frac{\hbar\Omega_{j}}{2k_{B}T}\right)\delta(\omega+\Omega), \tag{21}\]
while the equations for the fluctuations of cavity field modes are eq. (12) and the equations for the positions of the mirrors become
\[\delta\hat{q}_{j}(\omega)=\chi_{j}(\omega)\left(G_{j}\delta\hat{a}_{j}^{\dagger}(\omega)+G_{j}^{*}\delta\hat{a}_{j}(\omega)+\xi_{j}(\omega)\right) \tag{22}\]
where we have introduced the natural susceptibilities of the mechanical modes
\[\chi_{j}(\omega)=\frac{\Omega_{j}}{\Omega_{j}^{2}-\omega^{2}-i \omega\gamma} \tag{23}\]
The mirror's position fluctuations can be expressed in terms of effective susceptibilities and noise operators:
\[\delta\hat{q}_{1}(\omega)= \chi_{1}^{\rm eff}(\omega)\ \ \xi_{1}(\omega)\] \[\delta\hat{q}_{2}(\omega)= \chi_{2}^{\rm eff}(\omega)\xi_{2}^{\rm eff}(\omega) \tag{24}\]
with
\[\chi_{j}^{\rm eff}(\omega)=\frac{\chi_{j}(\omega)}{1-|G_{j}|^{2}\chi_{j}(\omega)(\chi_{a_{j}}(\omega)-\chi_{a_{j}}^{*}(-\omega))} \tag{25}\] \[\xi_{2}^{\rm eff}(\omega)=\xi_{2}(\omega)-i\kappa\chi_{1}^{\rm eff}(\omega)\xi_{1}(\omega)\eta(\omega)\] \[\eta(\omega)=G_{1}^{*}G_{2}\chi_{a_{1}}^{*}(-\omega)\chi_{a_{2}}^{*}(-\omega)-G_{1}G_{2}^{*}\chi_{a_{1}}(\omega)\chi_{a_{2}}(\omega)\]
Note that the effective susceptibilities of the mechanical oscillators \(\chi_{j}^{\rm eff}(\omega)\) are modified by the radiation pressure [3]; furthermore, in the second of eq. (25), the effective noise seen by the second mirror, modified by the presence of the first, is made explicit. It is now clear how the position fluctuations \(\delta\hat{q}_{1}\) of the first of the two mechanical modes depend only on its local thermal bath, while those of the second, \(\delta\hat{q}_{2}\), depend also on the thermal bath of the first via the optical field.
In the same way, for the cavity field fluctuation we have
\[\delta\hat{a}_{1}(\omega)=iG_{1}\chi_{a_{1}}(\omega)\chi_{1}^{ \rm eff}(\omega)\xi_{1}(\omega) \tag{26}\] \[\delta\hat{a}_{2}(\omega)=iG_{2}\chi_{a_{2}}(\omega)\chi_{2}^{ \rm eff}(\omega)\xi_{2}^{\rm eff}(\omega)\] \[\phantom{\delta\hat{a}_{2}(\omega)=}{}-iG_{1}\kappa\chi_{a_{1}}( \omega)\chi_{a_{2}}(\omega)\chi_{1}^{\rm eff}(\omega)\xi_{1}(\omega)\]
### Power Spectra
From eqs. (24) and (26), thanks to eq. (21), it is possible to evaluate the position spectrum of the two mirrors defined by
\[S_{j}^{q}(\omega)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}d\Omega\,e^{-i(\omega+\Omega)t}\langle\delta\hat{q}_{j}(\omega)\delta\hat{q}_{j}(\Omega)\rangle \tag{27}\]
Figure 4: The first plot shows the mutual information between the two mirrors as a function of time in the cooling regime (\(\Delta=\Omega_{1}\)) and in the case \(\Omega_{2}=\Omega_{1}\). The second plot shows the quantum discord (the solid line refers to \(D_{12}\) and the dashed line refers to \(D_{21}\)) for the same values of \(\Delta\) and \(\Omega_{2}\) used in the upper plot.
obtaining
\[S_{1}^{q}(\omega)=\gamma(2\bar{n}_{1}+1)|\chi_{1}^{\text{eff}}(\omega)|^{2} \tag{28}\] \[S_{2}^{q}(\omega)=\gamma(2\bar{n}_{2}+1)|\chi_{2}^{\text{eff}}(\omega)|^{2}+\kappa^{2}S_{1}^{q}(\omega)|\chi_{2}^{\text{eff}}(\omega)|^{2}|\eta(\omega)|^{2}\]
from which one can obtain the variances \(\langle\delta\hat{q}_{j}\rangle\) trough
\[\langle\delta\hat{q}_{j}\rangle=\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}S _{j}^{q}(\omega) \tag{29}\]
We are also interested to the output power spectral density that would be detected in an homodyne detection of the output fluctuations \(\delta x^{\text{out}}{=}1/\sqrt{2}(\delta\hat{a}^{\text{out}}+\delta\hat{a}^{ \text{out}})\) where
\[\delta\hat{a}^{\text{out}}(\omega)=\hat{a}^{\text{in}}(\omega)-\sqrt{\kappa} \delta\hat{a}_{1}(\omega)-\sqrt{\kappa}\delta\hat{a}_{2}(\omega) \tag{30}\]
The spectrum of such fluctuations can be obtained as
\[P^{\text{out}}(\omega)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}e^{-i( \omega+\Omega)t}x^{\text{out}}(\omega)x^{\text{out}}(\Omega) \tag{31}\]
Using eq. (21) in the frequency domain one finds
\[P^{\text{out}}(\omega)\sim\sum_{j=1}^{2}\frac{\kappa}{2}|G_{j}\chi_{a_{j}}( \omega){-}G_{j}^{*}\chi_{a_{j}}^{*}(\omega)|^{2}S_{j}^{q}(\omega) \tag{32}\]
From eq. (32), it follows, as shown in fig. 9, that the output field from the second cavity contains information on the power spectra of the two mechanical modes as it simply proportional to the sum of the two mechanical power spectra. A similar result was obtained, for a single optomechanical system, in [49]
The presence of two peaks in the power spectra of the first mirror depends on the chosen value for the first cavity's pump power. In fact, as shown in fig. 6 these two peaks appear at a certain value of the power pump and they move away from each other as the power value increases
## VII Cooling
In the cooling regime, i.e. \(\Delta{=}\Omega_{1}\), the optical field generates extra damping on the mechanical mode. Such optical damping, caused by radiation pressure, depends on both the position \(Q\) and the speed with which the mirror changes its position. At \(t=0\) the phonons associated to the mechanical oscillator motion are in a thermal equilibrium state. Then, the interaction between the photons and the phonons, as described by the last term in eq. (2), leads to a change of the phonon number which fluctuates because the coupling to its environment, consisting of a hot phonon bath at temperature \(T\). The goal of optomechanical (sideband) cooling is to reduce the amount of such fluctuations thereby cooling it down.
The mean energy of the mirrors is evaluated
\[U_{j}(t)=\frac{\hbar\Omega_{j}}{2}\left(\langle\delta q_{j}^{2}(t)\rangle+ \langle\delta p_{j}(t)^{2}\rangle\right)=\hbar\omega_{M}(n_{j}^{\text{eff}}(t )-\frac{1}{2}) \tag{33}\]
with \(n_{j}^{\text{eff}}(t)\) obtained by the solution eq. (9) of covariance matrix as \(1/2(C_{11}+C_{22}-1)\) for the first mechanical mode and \(1/2(C_{55}+C_{66}-1)\) for the second one.
The effective temperature of the movable mirrors are then given by
\[T_{j}^{\text{eff}}(t)=\frac{\hbar\Omega_{j}}{k_{B}\ln\left(1+1/n_{j}^{\text{ eff}}(t)\right)} \tag{34}\]
When the two mirrors have the same frequency, i.e. \(\Omega_{2}=\Omega_{1}\) (fig. 7), the steady state is characterised by a higher temperature of the second mirror with respect to the first one. This is a consequence of the unidirectionality of the coupling. To further investigate the properties of this temperature gradient, the temperatures of the second mirror were evaluated varying its frequency. It must be noted that, as shown in fig. 8, moving away from the resonance condition the time at which the second mirror reaches the stationary state increases. This
Figure 5: (Left) _Mirror’s Spectra_ - In this figure is reported the power spectrum eq. (28) of the first mechanical mode and of the second one for three different frequencies, from top to bottom, \(\Omega_{2}=\Omega_{1}/2\), \(\Omega_{2}=\Omega_{1}\) and \(\Omega_{2}=3\Omega_{1}/2\). (Right) _Output Spectra_ - Spectrum of the output field from the second cavity. In this case \(P=10^{-2}mW\)
Figure 6: Difference between the frequencies (expressed in units of \(\Omega_{1}\)) which correspond to the two peaks in the spectrum of the first mirror as a function of pump’s power in the regime \(\Delta=\Omega_{1}\)
is due to the mismatch between the optical detuning and the frequency of the mechanical mode in the second optomechanical system which corresponds to a variation in the cooling efficiency.
It can be seen in fig. 9 that for different values of \(\Omega_{2}\), varying the detuning between the pump and the first cavity, one can always tune it in such a way that it creates a temperature gradient between the mirrors. The stationary correlations between the two mirrors, evaluated as the mutual information in the stationary regime, shows a peak in correspondence to the minima of the second mirror temperatures.
## VIII Conclusion
In conclusion, we have characterised the dynamics of two optomechanical systems coupled indirectly in a cascaded way by mean of a chiral waveguide. In the weak coupling regime we have performed an adiabatic elimination the optical modes to find the equations describing an effective mirror dynamics. We have first identified the stability regions for the mirror dynamics as a function of the first cavity pump. Furthermore we have investigated the possibility for the two mirrors to show multiple steady states of both cavity photon number and mechanical position for a given intensity of the light pumped in the first cavity. We have studied the evolution of the correlations between the two mechanical mode by evaluating their mutual information and the corresponding quantum discord a function of time. We have evaluated also the power spectra of the two mechanical modes and the output spectrum of the second cavity. We have shown that measuring the latter it it's possible to reconstruct the mirror's spectra. Finally we have evaluated the steady state temperature of the two mirrors for different values of \(\Omega_{2}\) and varying \(\Delta\) and we have shown that there is a finite steady state temperature difference between the two. This shows the possibility, using an indirect effective coupling, to engineer a finite gradient of temperature between two mechanical modes
###### Acknowledgements.
SL and GMP acknowledge support by MUR under PRIN Project No. 2017 SRN-BRK QUSHIP.
Figure 8: (Left) Time at which the second mechanical mode reaches the stationary state as a function of its frequency (Right) Temperature of the second mechanical mode in the stationary regime as a function of its frequency
Figure 7: Temperatures of mechanical modes eq. (34) represented in respect with time. The solid line represents the temperature of the first mirror, the dashed one represents the temperature of the second mirror in the case \(\Omega_{2}=\Omega_{1}\)
Figure 9: (Left) Temperatures of mechanical modes eq. (34) in the stationary limit. In particular, the blue curve refers to the first mirror, the others refer to the second one considering respectively \(\Omega_{2}=\Omega_{1}/2\), \(\Omega_{2}=\Omega_{1}\) and \(\Omega_{2}=3\Omega_{1}/2\). (Right) Mutual information between the two mechanical modes in the stationary limit |
2302.03391 | Sparse and geometry-aware generalisation of the mutual information for
joint discriminative clustering and feature selection | Feature selection in clustering is a hard task which involves simultaneously
the discovery of relevant clusters as well as relevant variables with respect
to these clusters. While feature selection algorithms are often model-based
through optimised model selection or strong assumptions on the data
distribution, we introduce a discriminative clustering model trying to maximise
a geometry-aware generalisation of the mutual information called GEMINI with a
simple l1 penalty: the Sparse GEMINI. This algorithm avoids the burden of
combinatorial feature subset exploration and is easily scalable to
high-dimensional data and large amounts of samples while only designing a
discriminative clustering model. We demonstrate the performances of Sparse
GEMINI on synthetic datasets and large-scale datasets. Our results show that
Sparse GEMINI is a competitive algorithm and has the ability to select relevant
subsets of variables with respect to the clustering without using relevance
criteria or prior hypotheses. | Louis Ohl, Pierre-Alexandre Mattei, Charles Bouveyron, Mickaël Leclercq, Arnaud Droit, Frédéric Precioso | 2023-02-07T10:52:04Z | http://arxiv.org/abs/2302.03391v2 | # Sparse GEMINI
###### Abstract
Feature selection in clustering is a hard task which involves simultaneously the discovery of relevant clusters as well as relevant variables with respect to these clusters. While feature selection algorithms are often model-based through optimised model selection or strong assumptions on \(p(\mathbf{x})\), we introduce a discriminative clustering model trying to maximise a geometry-aware generalisation of the mutual information called GEMINI with a simple \(\ell_{1}\) penalty: the Sparse GEMINI. This algorithm avoids the burden of combinatorial feature subset exploration and is easily scalable to high-dimensional data and large amounts of samples while only designing a clustering model \(p_{\theta}(y|\mathbf{x})\). We demonstrate the performances of Sparse GEMINI on synthetic datasets as well as large-scale datasets. Our results show that Sparse GEMINI is a competitive algorithm and has the ability to select relevant subsets of variables with respect to the clustering without using relevance criteria or prior hypotheses.
## 1 Introduction
It is common that clustering algorithms as well as supervised models rely on all available features for the best performance. Yet, as datasets become high-dimensional, clustering algorithms tend to break under the curse of dimensionality (Bouveyron and Brunet-Saumard, 2014). To alleviate this burden, feature selection is a method of choice. Indeed, all features may not always be of interest. Some variables can be perceived as relevant or not with respect to the clustering objective. Relevant variables bring information that is useful for the clustering operation whereas irrelevant variables do not bring any new knowledge regarding the cluster distribution (Tadesse et al., 2005) and redundant variables look relevant yet do not bring beneficial knowledge (Maugis et al., 2009). The challenge of selecting the relevant variables often comes with the burden of combinatorial search in the variable space. Solutions may thus be hardly scalable to high-dimensional data (Raftery and Dean, 2006) or to the number of samples (Witten and Tibshirani, 2010) when the selection process is part of the model.
Therefore reducing the number of variables on which to learn to a relevant few is of interest, notably in terms of interpretation (Fop and Murphy, 2018). The necessity of variable selection notably met successful applications in genomics (Marbac et al., 2020), multi-omics (Meng et al., 2016; Ramazzotti et al., 2018; Shen et al., 2012).
Often, integrating the selection process as part of the model will lead to either not scaling well (Solorio-Fernandez et al., 2020) in terms of number of features (Raftery and Dean, 2006) or number of samples (Witten and Tibshirani, 2010) or imposing too constrained decision boundaries due to the nature of strong parametric assumptions. To alleviate both problems, we present the Sparse GEMINI: a model that combines the LassoNet architecture (Lemhadri et al., 2021) and the discriminative clustering objective GEMINI (Ohl et al., 2022) for a scalable discriminative clustering with penalised feature selection. The contributions of Sparse GEMINI are:
* A simple novel algorithm efficiently combining feature selection and discriminative clustering.
* A scalable feature selection and clustering model compatible with deep learning architectures.
* Demonstrations of performances on multiple synthetic and real datasets as well as a large-scale transcriptomics dataset.
## 2 Related works
Feature selection algorithms can be divided into 2 distinct categories (John et al., 1994; Dy, 2007): filter methods and wrapper methods. Filter methods apply in an independent step feature selection using a relevance criterion to eliminate irrelevant features before performing clustering. This can be done for example using information theory (Cover, 1999) with the SVD-Entropy (Varshavsky et al., 2006) or spectral analysis (von Luxburg, 2007; He et al., 2005; Zhao and Liu, 2007). Those methods are thus easily scalable and quick despite bearing the challenge of defining unsupervised feature interestingness (Dy, 2007). Wrapper methods encompass the selection process within the model and exploit their clustering results to guide the feature selection (Solorio-Fernandez et al., 2020). Other related works sometimes refer to a third category named hybrid model (Alelyani et al., 2018) or embedded models (Blum and Langley, 1997) as compromises between the two first categories.
While the definition of the relevance of a variable is more straightforward for supervised learning, its definition in unsupervised learning clearly impacts the choice of selection criterion for filter methods or distribution design in model-based methods (Fop and Murphy, 2018). Often, the terms relevant variables, irrelevant variables (Tadesse et al., 2005) for the notion of conveying information are used. Others may consider as well redundant variables as those that bring already available information (Maugis et al., 2009). A key difference in models would then be to consider whether the informative variables are independent given the cluster assignment (local independence) or dependent (global independence from the uninformative variables), yet the latter hardly accounts for redundant variables (Fop and Murphy, 2018).
Feature selection is to be not mistaken with dimensionality reduction, sometimes called feature reduction, which is the process of finding a latent space of lower dimension leveraging good manifolds for clustering, f.e. using matrix factorisation (Shen et al., 2012). Moreover, by enforcing the projection matrix to be sparse, feature selection can be recovered in the original space (Bouveyron and Brunet-Saumard, 2014). Similarly, subspace clustering seeks to find clusters in different subspaces of the data. (Zografos et al., 2013; Chen et al., 2018) and is thus an extension of feature selection (Parsons et al., 2004), notably with the motivation that several latent variables could explain the heterogeneity of the data (Vandewalle, 2020). However, such problems usually incorporate a mechanism to merge clusterings which is challenging as well while we are interested in a method that selects features while producing a single clustering output.
Finally, models for clustering in feature selection are often model-based (Scrucca and Raftery, 2018; Raftery and Dean, 2006; Maugis et al., 2009), implying that they assume a parametric mixture model that can either explain the distribution of the data, as well as the distribution of the irrelevant variables. To perform well, these methods need a good selection criterion to compare models with one another (Raftery and Dean, 2006; Marbac et al., 2020; Maugis et al., 2009). To the best of our knowledge, there does not exist models for joint feature selection and clustering in the discriminative sense of Minka (2005) and Krause et al. (2010), i.e. models that only design \(p_{\theta}(y|\mathbf{x})\). Finally, most of these generative wrapper methods hardly scale both in sample quantity and/or variable quantity.
## 3 The Sparse GEMINI
Sparse GEMINI is a combination of the generalised mutual information objective for discriminative clustering (Ohl et al., 2022) with the LassoNet framework for feature selection (Lembadri et al., 2021) in neural networks. The model
Figure 1: Description of the complete Sparse GEMINI model. Through a proximal gradient, clusters learned by GEMINI drop irrelevant features both in a skip connection and an MLP.
is summarised in Figure 1.
### The GEMINI objective
Let \(\mathcal{D}=\{\mathbf{x}_{i=1}^{N}\}_{i=1}^{N}\subset\mathcal{X}\) a dataset of \(N\) observations, each of dimension \(d\). We note each feature \(\mathbf{x}^{j}\in\mathcal{X}_{j}\), thus: \(\mathcal{X}=\prod_{j=1}^{d}\mathcal{X}_{j}\). We seek to cluster this dataset by learning a distribution \(p_{\theta}(y|\mathbf{x})\) where \(y\) is a discrete variable taking \(K\) values. This distribution is defined by a softmax-ended function:
\[y|\mathbf{x}\sim\text{Categorical}(\text{SoftMax}\circ f_{\theta}(\mathbf{x})), \tag{1}\]
where \(f_{\theta}:\mathcal{X}\mapsto\mathbb{R}^{K}\) has parameters \(\theta\). In order to perform clustering with \(f\) as a discriminative distribution, we train the parameters \(\theta\) using a generalised mutual information (GEMINI) (Ohl et al., 2022). This objective was introduced to circumvent the need for parametric assumptions regarding \(p(x)\) in clustering and thus leads to designing only a discriminative clustering model \(p_{\theta}(y|\mathbf{x})\). With the help of Bayes theorem, this objective can be estimated without knowledge of the data distribution \(p(\mathbf{x})\) using only the output of the clustering distribution \(p_{\theta}(y|\mathbf{x})\). Overall, the GEMINI aims at separating according to a distance \(D\) the cluster distributions from either the data distribution (one-vs-all):
\[\mathcal{I}_{D}^{\text{ova}}(\theta)=\mathbb{E}_{y\sim p_{\theta}(y)}\left[D( p_{\theta}(\mathbf{x}|y)\|p(\mathbf{x}))\right], \tag{2}\]
or other cluster distributions (one-vs-one):
\[\mathcal{I}_{D}^{\text{ova}}(\theta)=\mathbb{E}_{y_{1},y_{2}\sim p_{\theta}(y )}\left[D(p_{\theta}(\mathbf{x}|y_{1})\|p(\mathbf{x}|y_{2}))\right]. \tag{3}\]
The novelty of GEMINI is to consider different types of distances \(D\) between distributions with a special focus on the maximum mean discrepancy (MMD) (Gretton et al., 2012) or the Wasserstein distance (Peyre and Cuturi, 2019). The former corresponds to the distance between the expectations of the respective distributions projected into a Hilbert space and the second is an optimal transport distance describing the minimum of energy necessary to reshape one distribution as the other. Both of them incorporate geometrical information on the data respectively through a kernel \(\kappa\) or a distance \(\delta\) in the data space. Thus, any neural network that is trainable through cross-entropy loss can be switched to unsupervised learning at the cost of choosing a metric or kernel in the data space.
### The LassoNet architecture
To perform variable selection inside the neural network, we chose to adapt the LassoNet (Lembadri et al., 2021) framework with GEMINIs. The neural network \(f_{\theta}:\mathcal{X}\mapsto\mathbb{R}^{K}\) is taken from a family of architectures \(\mathcal{F}\) consisting of one multi-layered perceptron (MLP) and a linear skip connection:
\[\mathcal{F}=\{f_{\theta}:\mathbf{x}\mapsto g_{\omega}(\mathbf{x})+\mathbf{W}^{\top}\mathbf{x}\}, \tag{4}\]
with \(\theta=\{\mathbf{\omega},\mathbf{W}\}\) including \(\mathbf{\omega}\) the parameters of the MLP and \(\mathbf{W}\in\mathbb{R}^{K\times d}\) the weights of a linear skip connection penalised by \(\ell_{1}\), similarly to the Lasso (Tibshirani, 1996). However, to properly ensure that an entire vector weights is eliminated at once, a group-lasso penalty is preferred (Hastie et al., 2015, Section 3.3.3) also known as \(\ell_{1}/\ell_{2}\) penalty (Bach et al., 2012). Thus, the optimal parameters should satisfy:
\[\hat{\theta}=\text{argmax}_{\theta}\mathcal{I}_{D}(\theta)-\lambda\sum_{j=1}^{ d}\|\mathbf{W}_{j}\|_{2}, \tag{5}\]
with \(\mathbf{W}_{j}\in\mathbb{R}^{K}\), the \(j\)-th column of \(\mathbf{W}\). Notice that \(\lambda\) is positive because we seek to simultaneously maximise the GEMINI and minimise the \(\ell_{1}/\ell_{2}\) penalty. During training, the sparsity-induced linear parameter \(\mathbf{W}\) will remove some feature subset \(I\). In order to force the MLP to drop this same subset of features as well, the weights of the first layer \(\mathbf{\omega}^{(1)}\) are constrained such that:
\[\|\mathbf{\omega}^{(1)}_{j}\|_{\infty}\leq M\|\mathbf{W}_{j}\|_{2},\forall j\leq d. \tag{6}\]
where \(M\) is called the hierarchy coefficient. When \(M=0\), the method is equivalent to a penalised logistic regression. Thus, when a feature \(j\) is eliminated, all weights starting from this feature in the MLP will be equal to 0 as well. Lemhadri et al. (2021) gracefully provide a proximal gradient operation to satisfy this constraint during training time which guarantees true zeros in the first MLP layer and the skip connection.
Interestingly, while the constraints are designed to specifically select features in the dataset, dimension reduction can be performed as well by extracting representations from lower-dimension layers in the network \(g_{\mathbf{\omega}}\). However, this intermediate representation would not be complete as it misses the information from the skip connection.
### Training and model selection
We follow Lemhadri et al. (2021) in proposing a _dense-to-sparse_ training strategy for the penalty coefficient. Training is carried along a path where the \(\ell_{1}\) penalty parameter \(\lambda\) is geometrically increased: \(\lambda=\lambda_{0}\rho^{t}\) (\(\rho>1\)) at time step \(t\) after an initial step without \(\ell_{1}\) penalty. We stop when the number of remaining features used by the model is below an arbitrary threshold \(0<F_{\text{thres}}<d\) which can be thought as the minimum number of useful variables required. Each time the number of features decrease during training, we save its associate intermediate model
Once the training is finished, we look again at all GEMINI scores during the feature decrease and select the model with the minimum of features that managed to remain in the range of 90% of the best GEMINI value. This best value is most of the time the loss evaluated with the model exploiting
all features. We propose as well a less grounded yet efficient training mode in appendix A.
## 4 Experiments
A brief summary of the datasets used in these experiments can be found in table 1.
### Metrics
Depending on the experiments for comparison purposes, we report 3 different metrics. The adjusted rand index (ARI, Hubert and Arabie, 1985) describes how close the clustering is to the classes, with a correction to random guesses. The variable selection error rate (VSER), for instance used by Celeux et al. (2014), describes the percentage of variables that the model erroneously omitted or accepted, thus the lower the better. We finally report the correct variable rate (CVR) which describes how many of the expected variables were selected: higher is better. For example, a model selecting all variables of a dataset with \(d\) variables and \(d^{\prime}\) good variables will get a CVR of 100% and a VSER of \(1-\frac{d^{\prime}}{d}\). All metrics are written in percentage form.
### Default hyperparameters
We set the hierarchy coefficient to \(M=10\), as Lemhadri et al. (2021) report that this value seems to "work well for a variety of datasets". The optimiser for the initial training step with \(\lambda=0\) is Adam (Kingma and Ba, 2014) with a learning rate of \(10^{-3}\) while other steps are done with SGD with momentum 0.9 and the same learning rate like Lemhadri et al. (2021). Most of our experiments are done with 100 epochs per step with early stopping as soon as the global objective does not improve by 1% for 10 consecutive epochs. The early stopping criterion is evaluated on the same training set since we do not seek to separate the dataset in train and validation sets in clustering. All activation functions are ReLUs. The default starting penalty is \(\lambda_{0}=1\) with a 5% increase per step. We keep the linear kernel and the Euclidean distance respectively in conjunction with the MMD and Wasserstein distances when evaluating the GEMINI. Finally, we evaluate in most experiments the method with the exact same number of clusters as the number of known (supervised) labels.
### Numerical experiments
We experimented Sparse GEMINI on two synthetic datasets proposed by Celeux et al. (2014) and also used by (Bouveyron and Brunet-Saumard, 2014) to first highlight some properties of the algorithm and compare it with competitors.
The first synthetic dataset consists of a few informative variables amidst noisy independent variables. The first 5 variables are informative and drawn from an equiprobable multivariate Gaussian mixture distribution of 3 components. All covariances are set to the identity matrix. The means are \(\boldsymbol{\mu}_{1}=-\boldsymbol{\mu}_{2}=\alpha\mathbf{1}\) and \(\boldsymbol{\mu}_{3}=\mathbf{0}\). All remaining \(p\) variables follow independent noisy centred Gaussian distributions. The number of samples \(N\), the mean proximity \(\alpha\) and the number of non-informative variables \(p\) vary over 5 scenarios described along results in Table 2.
The second dataset consists of \(n=2000\) samples of 14 variables, 2 of them being informative and most others being linearly dependent on the former. The Gaussian mixture is equiprobable with 4 Gaussian distributions of means \([0,0]\), \([4,0]\), \([0,2]\) and \([4,2]\) with identity covariances. The 9 following variables are sampled as follows:
\[\boldsymbol{x}^{3-11}=[0,0,0.4,0.8,1.2,1.6,2.0,2.4,2.8]^{\top}+\\ \boldsymbol{x}^{1-2^{\top}}\left[\begin{array}{ccccccccc}0.5&2&0 &-1&2&0.5&4&3&2\\ 1&0&3&2&-4&0&0.5&0&1\end{array}\right]\\ +\boldsymbol{\epsilon}, \tag{7}\]
where \(\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\boldsymbol{\Omega})\) with the covariance:
\begin{table}
\begin{tabular}{c c c c} \hline \hline Name & Samples & Features & \#Classes \\ \hline US-Congress & 435 & 16 & 2 \\ Heart-statlog & 270 & 13 & 2 \\ MNIST & 12000 & 784 & 10 \\ MNIST-BR & 12000 & 784 & 10 \\ Prostate-BCR & 171 & 24508 & 2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Brief description of datasets involved in experiments
Figure 2: Example of convergence of the norm of the weights of the skip connection for every feature during training for the Wasserstein OvA objective. Green lines are the informative variables, black lines are the noise and red are the correlated variables. In the case of noisy variables, Sparse GEMINI can recover the informative variables. In the presence of redundant variables, Sparse GEMINI eliminates informative variables to keep the redundant ones.
\[\mathbf{\Omega}=\text{diag}\left(\mathbf{I}_{3},0.5\mathbf{I}_{2},\text{diag}([1,3])\text{Rot}(\frac{\pi}{3}),\right.\\ \text{diag}[2,6]\text{Rot}(\frac{\pi}{6})\Big{)}\,. \tag{8}\]
Finally, the last 3 variables are independently sampled from \(\mathcal{N}([3.2,3.6,4],\mathbf{I}_{3})\).
For all synthetic datasets, we asked training to stop with \(F_{\text{thres}}\) set to the expected quantity of variables. We report the results of Sparse GEMINI in Table 2 after 20 runs. We compare our results against our own runs of other methods using their R package: SparseKMeans (Witten et al., 2013), ClustVarSel (Scrucca and Raftery, 2018), vscc (Andrews and McNicholas, 2013, 2014) and SparseFisherEM (Bouveyron and Brunet, 2012).
It appears that the Sparse GEMINI is efficient in selecting the relevant variables when several others are noisy, especially with the MMD-OvO objective. Moreover, while we do not systematically get the best ARI, our performances never fall far behind the most competitive method. We can observe as well that the MMD objective learns well despite the presence of few samples in scenarios 2 and 3. Additionally, the selection strategy often leads to selecting the correct number of variables for the MMD, except in scenarios 1 and 3 where the Gaussian distributions are close to each other. It also appears that we performed poorly at selecting the correct variables in presence of redundancy in the second dataset. However, since all variables except 3 are correlated to the informative variables, we still managed to get a correct ARI on the dataset while using other variables. On average, the top-selected variables by our models were the 6th and the 8th variables. We focus on this difference
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{
\begin{tabular}{c} Sparse \\ KMeans \\ \end{tabular} } & \multirow{2}{*}{Clustvarsel} & \multirow{2}{*}{vscc} & \multirow{2}{*}{SFEM} & \multicolumn{2}{c}{MMD} & \multicolumn{2}{c}{Wasserstein} \\ \cline{5-8} & & & & & OvA & OvO & OvA & OvO \\ \hline S1 & ARI & **25 (5.9)** & 9.4 (0) & -0.8 (0) & 17 (1.5) & 23 (5.9) & 22 (6.8) & 8.8 (7.1) & 8.8 (8.4) \\ \hline \(N=30\) & VSER & 30 (20) & 28 (0) & 80 (0) & **27 (3.6)** & 29 (11) & 32 (11) & 54 (9.8) & 47 (8.4) \\ \(\alpha=0.6\) & CVR & 59 (28) & 0 (0) & **100 (0)** & 40 (0) & 63 (15) & 68 (15) & 79 (22) & 76 (20) \\ \(p=20\) & \# Var & 8.4 (7.9) & 2.0 (0) & 25 (0) & 5.8 (0.9) & 8.5 (3.1) & 9.8 (3.0) & 16.4 (2.0) & 14.2 (3.2) \\ \hline S2 & ARI & 82 (0) & 9.4 (0) & -0.8 (0) & **90 (0)** & 49 (4.1) & 89 (8.7) & 54 (18) & 56 (14) \\ \hline \(N=30\) & VSER & 80 (0) & 28 (0) & 80 (0) & 40 (0) & 8.2 (6.0) & **0 (0)** & 18 (12) & 14 (9.9) \\ \(\alpha=1.7\) & CVR & **100 (0)** & 0(0) & **100 (0)** & 20 (0) & 78 (16) & **100 (0)** & 83 (18) & 81 (18) \\ \(p=20\) & \# Var & 25 (0) & 2 (0) & 25 (0) & 7 (0) & 4.8 (0.37) & 5 (0) & 7.8 (2.9) & 6.5 (2.0) \\ \hline S3 & ARI & 9.1 (0.1) & 0.5 (0) & **24 (0)** & 19 (0) & 21 (2.0) & 20 (2.4) & 9.1 (5.2) & 11 (4.7) \\ \hline \(N=300\) & VSER & 80 (0) & 24 (0) & 80 (0) & **18 (0)** & 21 (7.9) & 20 (8.8) & 67 (9.5) & 49 (18) \\ \(\alpha=0.6\) & CVR & **100 (0)** & 20 (0) & **100 (0)** & 33 (9.8) & 99 (4.5) & **100 (0)** & 96 (11) & 85 (19) \\ \(p=20\) & \# Var & 25 (0) & 3.0 (0) & 25 (0) & 2.8 (0.64) & 10.2 (2.2) & 9.9 (2.2) & 21.4 (2.8) & 15.7 (5.3) \\ \hline S4 & ARI & 86 (0) & **87 (0)** & 50 (0) & 86 (0) & 50 (5.77) & 86 (0.6) & 80 (12) & 81 (12) \\ \hline \(N=300\) & VSER & 80 (0) & 4 (0) & 80 (0) & 24 (0) & **0 (0)** & **0 (0)** & 0.8 (2.5) & **0.4 (1.2)** \\ \(\alpha=1.7\) & CVR & **100 (0)** & **100 (0)** & **100 (0)** & 60 (0) & **100 (0)** & **100 (0)** & **100 (0)** & **100 (0)** & **100 (0)** \\ \(p=20\) & \# Var & 25 (0) & 6 (0) & 25 (0) & 7 (0) & 5 (0) & 5 (0) & 5.2 (0.6) & 5.1 (0.3) \\ \hline S5 & ARI & 86 (0) & **87 (0)** & 0 (0) & 86 (0) & 77 (7.63) & 86 (0.5) & 58 (19) & 74 (16) \\ \hline \(N=300\) & VSER & 95 (0) & 1 (0) & 95 (0) & 12 (0) & **0 (0)** & **0 (0)** & 5.6 (6.4) & 0.8 (2.0) \\ \(\alpha=1.7\) & CVR & **100 (0)** & **100 (0)** & **100 (0)** & 60 (0) & **1 (0)** & **100 (0)** & 95 (11) & 97 (7.3) \\ \(p=95\) & \# Var & 100 (0) & 6 (0) & 100 (0) & 13 (0) & 5 (0) & 5 (0) & 10.1 (5.9) & 5.5 (1.4) \\ \hline \multirow{4}{*}{D2} & ARI & 30 (0) & **60 (0)** & 54 (0) & 57 (0) & 56 (4.2) & 55 (2.8) & 56 (2.2) & 55 (3.0) \\ & VSER & 86 (0) & **0 (0)** & 71 (0) & 50 (0) & 29 (0) & 29 (0) & 30 (3.7) & 29 (0) \\ \cline{1-1} & CVR & **100 (0)** & **100 (0)** & **100 (0)** & **100 (0)** & 0 (0) & 0 (0) & 0 (0) & 0 (0) \\ \cline{1-1} & \# Var & 14 (0) & 2 (0) & 12 (0) & 9 (0) & 2.1 (0.3) & 2.1 (0.3) & 2.2 (0.5) & 2 (0) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performances of Sparse GEMINI using on synthetic datasets after 20 runs. We compare our performances against other methods. S stands for a scenario of the first synthetic dataset and D2 stands for the second synthetic dataset. Standard deviation is reported in parentheses
of convergence in Figure 2 where we plot the norm of the skip connection per feature \(\mathbf{W}_{j}\). In the case of noisy variables, we are able to recover them as the number of selected features decreases whereas we eliminated the informative variable of the second dataset during the first steps. Overall, Clustvarsel (Scrucca and Raftery, 2018) performed better on this type of synthetic dataset in terms of variable selections because it explicitly assumes linear dependency between relevant variables and others.
### Examples on MNIST and variations
We demonstrate as well performances of the Sparse GEMINI algorithm by running it on the MNIST dataset. The initial \(\lambda_{0}\) was set to 40. Following Lemhadri et al. (2021), we chose to stop training after finding 50 features. We use as well 5% of dropout inside an MLP with 2 hidden layers of 1200 dimensions each (Hinton et al., 2012). We report in Figure 3 the selected features by the clustering algorithms and the evolution of the ARI. We extended this experiment as well to the variations of MNIST (Larochelle et al., 2007) by showing the performances on the MNIST-BR dataset1. The former consists in samples of MNIST with the black background being replaced by uniform noise hence displaying conditional noise on the data whereas the latter replaces that background by real images. To be fair, we reduced MNIST to the first 12,000 samples of the training set in order to match the number of samples in MNIST-BR.
Footnote 1: Datasets were available at [https://web.archive.org/web/20180519112150/http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/MNistVariations](https://web.archive.org/web/20180519112150/http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/MNistVariations)
We observed that for both the default MNIST dataset and the MNIST-BR dataset despite the presence of noise, the feature map concentrates precisely on the good location of the digits in the picture. Following the GEMINI curves in figures 3(b) and 3(d), the respective optimal numbers of features were 122 for MNIST and 243 for MNIST-BR. These chosen models also have a respective ARI of 0.34 for 7 clusters and 0.28 for 8 clusters. The presence of empty clusters is a possible outcome with GEMINI (Ohl et al., 2022) which contributed here to lowering the ARI when evaluating with the true digits targets.
### Real datasets
#### 4.5.1 OpenML datasets
We ran Sparse GEMINI on two OpenML datasets that are often shown in related works: the US Congress dataset (Almanac, 1984) and the Heart-statlog dataset (Brown, 2004). The US congress dataset describes the choice of the 435 representatives on 16 key votes in 1984. The labels used for evaluation are the political affiliations: 164 Republican against 267 Democrats. We replaced the missing values with 0 and converted the yes/no answers to 1, -1. Thus, an unknown label is equidistant from both answers. The Heart-statlog dataset describes 13 clinical and heart-related features with labels describing the presence or absence of cardiac disease among patients. We preprocessed it with standard scaling. For the US Congress dataset, we used one hidden layer of 20 nodes and a batch size of 87 samples. For the Heart-statlog dataset, we used 10 nodes and 90 samples. As we seek only two clusters, we only ran the one-vs-all versions of the GEMINI because it is strictly equal to the one-vs-one in binary clustering. Both datasets had a penalty increase of \(\rho=10\%\). We first show the number of selected features evolving with \(\lambda\) as well as the evolution of the GEMINI score as the number of features decreases respectively in Figure 5 for the US Congress dataset and in Figure 4 for Heart-statlog. Table 4 contains the performances for the US Congress dataset and Table 3 those of the Heart-statlog dataset. Both reports the average number of selected variables over 20 runs according to our postprocessing selection criterion. We added as well the performances of competitors from the previous section. However, we only managed to run Sparse Fisher EM on the Heart-statlog dataset, hence its presence only in Table 3. For comparison purposes, the best unsupervised accuracy reported on the Heart-statlog dataset in Solorio-Fernandez et al. (2020) is 75.3% while Sparse GEMINI achieves 79% with the MMD. The best score for all methods in the review is 79.6%, but
Figure 3: Relative importance of MNIST features after dynamic training of Sparse GEMINI with a log-scale color map. Blue features were eliminated at the first steps of \(\lambda\) and red features were eliminated last. On the right: evolution of the GEMINI depending on \(\lambda\). \(F\) stands for the number of selected features.
this encompasses filter methods which Sparse GEMINI is not. We also get similar results to the best performances of Marbac et al. (2020) who report 33% of ARI. Since most competitors retained all variables in the dataset, we chose to show as well the clustering performances without selection and hence with the greatest GEMINI score as well.
We averaged the number of times each feature was selected according to the model over the 20 runs and sorted them decreasingly. This post-process revealed that the Wasserstein objective consistently selected the El Salvador Aid and the Aid to Nicaraguan contras votes as sufficient to perform clustering. Indeed, these two votes are among the most discriminating features between Republicans and Democrats and were often chosen by other model-based methods (Fop and Murphy, 2018). The MMD objective only added the Physician fee freeze vote to this subset. Regarding the heart dataset, the MMD consistently picked a subset of 8 features out of 13, including for example age or chest pain type as relevant variables. Contrarily, the Wasserstein objective did not consistently choose the same subset of variables, yet its top variables that were selected more than 80% of the runs agree with the MMD selection as well.
#### 4.5.2 Prostate-BCR dataset
To show the scalability of Sparse GEMINI, we demonstrate its performance as well on the Prostate-BCR dataset, taken from (Vittrant et al., 2020) and publicly available at [https://github.com/ArnaudDroitLab/prostate_BCR_prediction](https://github.com/ArnaudDroitLab/prostate_BCR_prediction). This dataset is a combination of transcriptomics data from 3 different sources. Those are the Cancer Genom atlas (Abeshouse et al., 2015), the GSE54460 dataset from the NCBI website, and the PRJEB6530 project of the European Nucleotide Archive. The combined dataset contains 25,904 transcripts over 171 filtered patients with long-term follow-up, counting 52, 96 and 23 patients from the respective sources. The objective is to find biochemical recurrences (BCR) of prostate cancer through transcriptomic signature, hence binary targets.
To carefully eliminate the variables, we increase \(\lambda\) gradually by 2%. We took a simple MLP with only one hidden layer of 100 neurons. We chose to run until converging to 400 features or less, following (Vittrant et al., 2020). We trained Sparse GEMINI 5 times to find either 2 clusters or 3 clusters in order to break down possible substructures among the supervised targets. For the evaluation of the 3 clusters case, we binarised the results by mapping each cluster to the class in which it had the most samples
\begin{table}
\begin{tabular}{c c c} \hline \hline & ARI & \# Variables \\ \hline SparseKMeans & 54 (0) & 16 (0) \\ Clustvarsel & 0.4 (0) & 2 (0) \\ vscc & 40 (0) & 11 (0) \\ \cline{2-3} MMD & 48 (0.2) & 3.1 (0.08) \\ Wasserstein & 47 (0) & 2.0 (0) \\ \cline{2-3} MMD\({}^{*}\) & **55** (0.7) & 16 (-) \\ Wasserstein\({}^{*}\) & **55** (1.7) & 16 (-) \\ \hline \hline \end{tabular}
\end{table}
Table 4: ARI of Sparse GEMINI on the US Congress dataset with the average number of selected features. Standard deviation in parentheses. Scores with an asterisk are the initial performances when using all features.
Figure 4: Average training curves of Sparse GEMINI on the Heart Statlog dataset over 20 runs. Blue lines are Wasserstein, red lines are MMD.
\begin{table}
\begin{tabular}{c c c} \hline \hline & ARI & \# Variables \\ \hline SparseKMeans & 18.1 (0) & 13 (0) \\ Clustvarsel & 2.8 (0) & 13 (0) \\ vscc & 27 (0) & 1 (0) \\ Sparse Fisher EM & 19 (0) & 1 (0) \\ \cline{2-3} MMD & 32 (1.4) & 8 (0) \\ Wasserstein & 32 (8.8) & 8.4 (2.7) \\ \cline{2-3} MMD\({}^{*}\) & **37** (2.0) & 13 (-) \\ Wasserstein\({}^{*}\) & **33** (9.1) & 13 (-) \\ \hline \hline \end{tabular}
\end{table}
Table 3: ARI of Sparse GEMINI on the Heart-statlog dataset with the average number of selected features. Standard deviation in parentheses. Scores with an asterisk are the initial performances when using all features.
Figure 5: Average training curves of Sparse GEMINI on the US Congress dataset over 50 runs. Blue lines are Wasserstein, red lines are MMD.
Interestingly, the clustering results did not catch up with the actual BCR targets, with an ARI close to 0% most of the time. However, upon evaluation of the clusters with respect to the original source of each sample, we found scores close to 100% ARI in the case of the MMD GEMINI. Thus, the unsupervised algorithm was able to find sufficient differences in distribution between each source of data to discriminate them. We report these scores in Figure 5. Additionally, consistent subsets of features were always selected as the final subset on all 5 runs depending on the GEMINI. This implies that even without the best GEMINI within a range for feature selection, several runs can lead to identifying subsets of relevant data as well.
These results can be viewed as discovering batch effect in the data. Batch effect, also known as batch variation, is a phenomenon that occurs in biological experiments where the results are affected by factors unrelated to the experimental variables being studied. These factors can include variations in sample processing, measurement conditions, people manipulating the samples, or equipment used. One common example of a batch effect is observed in microarray or RNA sequencing experiments, where the samples are processed in different batches and the results are affected by variations in the reagents or protocols used. It has been demonstrated that batch effects in microarray experiments originated from multiple causes, including variations in the labelling and hybridization protocols used, which led to differences in the intensity of gene expression signals (Luo et al., 2010).
To minimise batch effects, it is important to control for variables such as reagents, protocols, and equipment used, and to use appropriate normalisation and data analysis methods to account for these variations. There are several approaches that can be used to detect batch effects in RNA-seq experiments, including PCA (Reese et al., 2013) and clustering. For this latter, Hierarchical clustering is often used as a method that groups samples based on their similarity in gene expression patterns, and batch effects can be identified based on dendrogram analysis (Leek et al., 2010).
## 5 Discussion
Our first observation from Table 2 is that the Sparse GEMINI algorithm can reach performances close to some competitors in terms of ARI while performing better in variable selection, especially for the one-vs-one MMD. The MMD is a distance computed between expectations making it thus insensible to small variations of the kernel, typically when noisy variables are introduced contrary to the Wasserstein distance which takes a global point of view on the distribution. Specifically, the algorithm is good at discarding noisy variables, but less competitive regarding redundant variables as illustrated with the second synthetic dataset. Nonetheless, the ARI remains competitive even though the model failed to give the correct ground for the clustering.
Additionally, the training path produces critical values of \(\lambda\) at which features disappear. Thus, the algorithm produces an explicit unsupervised metric of the relevance of each feature according to the clustering. Typically, plateaus of the number of used variables like in figures 5(b) and 4(b) for the MMD shed light on different discriminating subsets. We also find that the empirical threshold of 90% of the maximal GEMINI to select fewer variables is an efficient criterion. In case of a too sudden collapse of variables, we encourage training over again models on iteratively selected subsets of features. Indeed, as \(\lambda\) increases during training, the collapse of the number of selected variables will often happen when the geometric increase is too strong which might lead to unstable selections.
## 6 Conclusion
We presented a novel algorithm named Sparse GEMINI that jointly performs clustering and feature selection by combining GEMINI for objective and an \(\ell_{1}\) penalised skip connection. The algorithm shows good performances in eliminating noisy irrelevant variables while maintaining relevant clustering. Owing to the nature of multi-layered perceptrons, Sparse GEMINI is easily scalable to high-dimensional data and provides thus an unsupervised technique to get a projection of the data. However, the limits of the scalability are the number of clusters and samples per batch due to the complex nature of GEMINI. Thus, we believe that Sparse
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multicolumn{2}{c}{Objective} & \#Var & BCR targets ARI & Data source targets ARI \\ \cline{3-5} \multirow{3}{*}{MMD} & 2 clusters & **385 (11)** & -0.5 (0) & 79 (0) \\ & 3 clusters & 8293 (11308) & **8.2 (0.6)** & **98 (2)** \\ & 2 clusters & **381 (16)** & -0.3 (0.2) & 70 (6) \\ & 3 clusters & 10598 (13971) & 5.3 (4.9) & 84 (12) \\ \hline \hline \end{tabular}
\end{table}
Table 5: ARI scores of the Prostate BCR dataset for various numbers of clusters depending on the chosen type of targets. We either use the expected targets (BCR) regarding cancer prediction, or data source targets that identify the data origin of each sample.
GEMINI is a relevant algorithm for multi-omics data where the number of samples is often little and the number of features large, especially when it is hard to design a good generative model for such data. As a concluding remark, we want to draw again the attention to the discriminative nature of the algorithm: Sparse GEMINI focuses on the design of a decision boundary instead of parametric assumptions.
## Acknowledgements
This work has been supported by the French government, through the 3IA Cote d'Azur, Investment in the Future, project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002. We would also like to thank the France Canada Research Fund (FFCR) for their contribution to the project. This work was partly supported by EU Horizon 2020 project AI4Media, under contract no. 951911.
|
2302.05938 | Mean Field Optimization Problem Regularized by Fisher Information | Recently there is a rising interest in the research of mean field
optimization, in particular because of its role in analyzing the training of
neural networks. In this paper by adding the Fisher Information as the
regularizer, we relate the regularized mean field optimization problem to a
so-called mean field Schrodinger dynamics. We develop an energy-dissipation
method to show that the marginal distributions of the mean field Schrodinger
dynamics converge exponentially quickly towards the unique minimizer of the
regularized optimization problem. Remarkably, the mean field Schrodinger
dynamics is proved to be a gradient flow on the probability measure space with
respect to the relative entropy. Finally we propose a Monte Carlo method to
sample the marginal distributions of the mean field Schrodinger dynamics. | Julien Claisse, Giovanni Conforti, Zhenjie Ren, Songbo Wang | 2023-02-12T15:26:12Z | http://arxiv.org/abs/2302.05938v2 | # Mean Field Optimization Problem Regularized by Fisher Information
###### Abstract
Recently there is a rising interest in the research of mean field optimization, in particular because of its role in analyzing the training of neural networks. In this paper by adding the Fisher Information as the regularizer, we relate the regularized mean field optimization problem to a so-called mean field Schrodinger dynamics. We develop an energy-dissipation method to show that the marginal distributions of the mean field Schrodinger dynamics converge exponentially quickly towards the unique minimizer of the regularized optimization problem. Remarkably, the mean field Schrodinger dynamics is proved to be a gradient flow on the probability measure space with respect to the relative entropy. Finally we propose a Monte Carlo method to sample the marginal distributions of the mean field Schrodinger dynamics.
## 1 Introduction
Recently the mean field optimization problem, namely
\[\inf_{p\in\mathcal{P}}\mathfrak{F}(p),\quad\text{for a function }\mathfrak{F}: \mathcal{P}\to\mathbb{R},\text{ where }\mathcal{P}\text{ is a set of probability measures},\]
attracts increasing attention, in particular because of its role in analysing the training of artificial neural networks. The Universal Representation Theorem (see e.g. [11]) ensures that a given function \(f:\mathbb{R}^{d}\to\mathbb{R}\) can be approximated by the parametric form:
\[f(x)\approx\sum_{i=1}^{N}c_{i}\varphi(a_{i}\cdot x+b_{i}),\quad\text{with }c_{i}\in\mathbb{R},\ a_{i}\in\mathbb{R}^{d},\ b_{i}\in\mathbb{R}\text{ for }1\leq i\leq N,\]
where \(\varphi\) is a fixed non-constant, bounded, continuous activation function. This particular parametrization is called a two-layer neural network (with one hidden layer). In order to train the optimal parameters, one need to solve the optimization problem:
\[\inf_{(c_{i},a_{i},b_{i})_{1\leq i\leq N}}\sum_{j=1}^{M}L\left(f(x_{j}),\sum_ {i=1}^{N}c_{i}\varphi(a_{i}\cdot x_{j}+b_{i})\right),\]
where \(L:(y,z)\mapsto L(y,z)\) is a loss function, typically convex in \(z\). Here we face an over-parametrized, non-convex optimization, and have no theory for an efficient solution. However it is recently observed (see e.g. [6, 12, 14, 18]) that by lifting the optimization problem to the space of probability measures, namely
\[\inf_{p\in\mathcal{P}}\sum_{j=1}^{M}L\Big{(}f(x_{j}),\mathbb{E}^{p}[C\varphi(A \cdot x+B)]\Big{)},\]
with random variables \((C,A,B)\) taking values in \(\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R}\) following the distribution \(p\), one makes the optimization convex (the function \(F:p\mapsto\sum_{j=1}^{M}L\big{(}f(x_{j}),\mathbb{E}^{p}[C\varphi(A\cdot x+B)] \big{)}\) is convex), and has extensive tools to find the minimizers.
Unlike in [6] where the authors address the mean field optimization directly, in [12, 18] the authors add the entropy regularizer \(H(p):=\int p(x)\log p(x)dx\), that is, they aim at solving the regularized optimization problem:
\[\inf_{p\in\mathcal{P}}F(p)+\frac{\sigma^{2}}{2}H(p). \tag{1.1}\]
Recall the definition of the linear derivative \(\frac{\delta F}{\delta p}\) and the intrinsic derivative \(D_{p}F\) (see Remark 2.2 below) in the calculus for the functions on the space of probability measures. In [12] the authors introduce the mean field Langevin (MFL) dynamics:
\[dX_{t}=-D_{p}F(p_{t},X_{t})dt+\sigma dW_{t},\]
where \(p_{t}=\text{Law}(X_{t})\) and \(W\) is a standard Brownian motion, and prove that the marginal laws \((p_{t})_{t\geq 0}\) of the MFL dynamics converge towards the minimizer of the entropic regularization (1.1). In the following works [5, 19] it has been shown that the convergence is exponentially quick.
In this paper we try to look into the mean field optimization problem from another perspective, by adding the Fisher information \(I(p):=\int|\nabla\log p(x)|^{2}p(x)dx\) instead of the entropy as the regularizer, namely solving the regularized optimization
\[\inf_{p\in\mathcal{P}}\mathfrak{F}^{\sigma}(p),\quad\mathfrak{F}^{\sigma}(p) :=F(p)+\frac{\sigma^{2}}{4}I(p).\]
By a calculus of variation (see Proposition 4.3), it is not hard to see that \(p^{*}\in\text{argmin}_{p\in\mathcal{P}}\mathfrak{F}^{\sigma}(p)\) if
\[\frac{\delta\mathfrak{F}^{\sigma}}{\delta p}(p^{*},x):=\frac{\delta F}{\delta p }\left(p^{*},x\right)-\frac{\sigma^{2}}{4}\left(2\Delta\log p^{*}+|\nabla\log p ^{*}|^{2}\right)=\text{constant}. \tag{1.2}\]
We shall introduce the mean field Schrodinger dynamics:
\[\partial_{t}p=-\frac{\delta\mathfrak{F}^{\sigma}}{\delta p}(p_{t},\cdot)p,\]
prove its wellposedness and show that its marginal distributions \((p_{t})_{t\geq 0}\) converge towards the minimizer of the free energy function \(\mathfrak{F}^{\sigma}\). One crucial observation is that the free energy function decays along the mean field Schrodinger dynamics:
\[\frac{d\mathfrak{F}^{\sigma}(p_{t})}{dt}=-\int\left|\frac{\delta\mathfrak{F}^{ \sigma}}{\delta p}(p_{t},x)\right|^{2}p_{t}(dx).\]
In order to prove it rigorously, we develop a probabilistic argument (coupling of diffusions) to estimate \((\nabla\log p_{t},\nabla^{2}\log p_{t})_{t\geq 0}\). Remarkably, the estimate we obtain is uniform in time. Using the energy dissipation we can show that \((p_{t})_{t\geq 0}\) converges exponentially quickly with help of
the convexity of \(F\) and the Poincare inequality. Another main contribution of this paper is to show that the mean field Schrodinger dynamics is a gradient flow of the free energy function \(\mathfrak{F}^{\sigma}\) on the space of probability measures, provided that the 'distance' between the probability measures is measured by relative entropy. Finally it is noteworthy that mean field Schrodinger dynamics is numerically implementable, and we shall briefly propose a Monte Carlo simulation method.
Related works.Assume \(F\) to be linear, i.e. \(F(p):=\int f(x)p(dx)\) with a real potential function \(f\) and denote the wave function by \(\psi:=\sqrt{p}\). Then the function \(\mathfrak{F}^{\sigma}\) reduces to the conventional energy function in quantum mechanics, composed of the potential energy \(\langle\psi,f\psi\rangle\) and the kinetic energy \(\sigma^{2}\langle\nabla\psi,\nabla\psi\rangle\). Meanwhile, the mean field Schrodinger dynamics is reduced to the semi group generated by the Schrodinger operator:
\[\partial_{t}\psi=-\mathcal{H}\psi,\quad\mathcal{H}=-\frac{\sigma^{2}}{2} \Delta+\frac{1}{2}f. \tag{1.3}\]
The properties of the classical Schrodinger operator, including its longtime behavior, have been extensively studied in the literature, see e.g. the monographs [16, 20]. There are also profound studies in cases where \(F\) is nonlinear, notably the density functional theory [9, 10]. However, to our knowledge there is no literature dedicated to the category of convex potential \(F:\mathcal{P}\to\mathbb{R}\), and studying the longtime behavior of such nonlinear Schrodinger operator by exploiting the convexity. In addition, the probabilistic nature of our arguments seem novel.
Using the change of variable: \(u:=-\log p^{*}\), the first order equaiton (1.2) can be rewritten as
\[\frac{\sigma^{2}}{2}\Delta u-\frac{\sigma^{2}}{4}|\nabla u|^{2}+\frac{\delta F }{\delta p}(p^{*},x)=\text{constant}.\]
So the function \(u\) solves the ergodic Hamilton-Jacobi-Bellman equation, and its gradient \(\nabla u\) is the optimal control for the ergodic stochastic control problem:
\[\lim_{T\to\infty}\frac{1}{T}\sup_{\alpha}\mathbb{E}\left[\int_{0 }^{T}\left(\frac{1}{2}|\alpha_{t}|^{2}+\frac{2}{\sigma^{2}}\frac{\delta F}{ \delta p}(p^{*},X_{t}^{\alpha})\right)dt\right],\] \[\text{where}\quad dX_{t}^{\alpha}=\alpha_{t}dt+\sqrt{2}dW_{t}.\]
Further note that the probability \(p^{*}=e^{-u}\) coincides with the invariant measure of the optimal controlled diffusion: \(dX_{t}^{*}=-\nabla u(X_{t}^{*})dt+\sqrt{2}dW_{t}\), so we call \(p^{*}\) the Nash equilibrium of the corresponding ergodic mean field game. For more details on the ergodic mean field game, we refer to the seminal paper [15], and for more general mean field games we refer to the recent monographs [3, 4]. Our convergence result of the mean field Schrodinger dynamics \((p_{t})_{t\geq 0}\) towards \(p^{*}\) offers an approximation to the equilibrium of the ergodic mean field game.
Our result on the gradient flow, as far as we know, is new to the literature. It is well known to the community of computational physics that the normalized solution \((\psi_{t})_{t\geq 0}\) to the imaginary time Schrodinger equation (1.3) is the gradient flow of the free energy \(\mathfrak{F}^{\sigma}\) on the \(L^{2}\)-unit ball. On the other hand, in [21] the authors discuss the (linear) optimization problem without Fisher information regularizer, and formally show that the dynamics, \(\partial_{t}p=-fp\), is the gradient flow of the potential functional \(\int fdp\) on the space of probability measures provided that the distance between the measures are measured by the relative entropy. Inspired by these works, we prove in the current paper that the solution to the variational problem:
\[p_{i+1}^{h}:=\operatorname*{argmin}_{p\in\mathcal{P}}\left\{\mathfrak{F}^{ \sigma}(p)+h^{-1}H(p|p_{i}^{h})\right\},\quad\text{for }h>0,\ i\geq 0,\]
converges to the continuous-time flow of mean-field Schrodinger dynamics as \(h\to 0\). This result can be viewed as a counterpart of seminal paper [13] on the Wasserstein-2 gradient flow.
The rest of the paper is organized as follows. In Section 2 we formulate the problem and state the main results of the paper. The proofs are postponed to the subsequent sections. In Section 3, we show that the mean-field Schrodinger dynamic is well-defined and we gather some important properties for later use. Then we study the long time behavior of this dynamic in Section 4 and we prove that it converges to the unique minimizer of the mean-field optimization problem regularized by Fisher information. Finally we establish in Section 5 that the mean-field Schrodinger dynamic corresponds to the gradient flow with respect to the relative entropy. Some technical results including a refined (reflection) coupling result are also gathered in Appendix.
## 2 Main Results
### Free Energy with Fisher Information
Denote by \(\mathcal{P}_{2}(\mathbb{R}^{d})\) the set of all probability measures on \(\mathbb{R}^{d}\) with finite second moments, endowed with \(\mathcal{W}_{2}\) the Wasserstein distance of order \(2\). In this paper we focus on the probability measures admitting densities, and denote the density of \(p\in\mathcal{P}_{2}(\mathbb{R}^{d})\) still by \(p:\mathbb{R}^{d}\to\mathbb{R}\) if it exists. In particular we are interested in the probability measures with density satisfying:
\[\mathcal{P}_{H}:=\left\{p\in\mathcal{P}_{2}(\mathbb{R}^{d}):\ \sqrt{p}\in H^{1} \right\}.\]
In this paper we study a regularized mean-field optimization problem, namely, given a potential function \(F:\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\) we aim at solving
\[\inf_{p\in\mathcal{P}_{H}}\ \mathfrak{F}^{\sigma}(p),\quad\text{with}\quad \mathfrak{F}^{\sigma}(p):=F(p)+\sigma^{2}I(p), \tag{2.1}\]
where \(\sigma>0\) and \(I\) is the Fisher information defined by
\[I(p):=\int_{\mathbb{R}^{d}}|\nabla\sqrt{p}(x)|^{2}dx. \tag{2.2}\]
In the literature \(\mathfrak{F}^{\sigma}\) is called the Ginzburg-Landau energy function with temperature \(\sigma\). Note that for \(p\in\mathcal{P}_{H}\) and \(p>0\), it holds
\[4\int_{\mathbb{R}^{d}}|\nabla\sqrt{p}(x)|^{2}dx=\int_{\mathbb{R}^{d}}\left| \nabla\log p(x)\right|^{2}p(x)dx.\]
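As a quick sanity check of definition (2.2) (for illustration only, not used in the sequel): for a one-dimensional Gaussian with standard deviation \(s\) one computes \(I(p)=\frac{1}{4s^{2}}\), which is easily confirmed numerically. All choices below (grid, value of \(s\)) are illustrative.

```python
# Numerical check of I(p) = 1/(4 s^2) for a 1d Gaussian density.
import numpy as np

s = 0.7
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
p = np.exp(-x**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

grad_sqrt_p = np.gradient(np.sqrt(p), dx)   # d/dx sqrt(p) by finite differences
I_num = (grad_sqrt_p**2).sum() * dx         # I(p) = int |d/dx sqrt(p)|^2 dx

print(I_num, 1 / (4 * s**2))                # both ~ 0.5102
```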
Throughout the paper, we assume that the potential function \(F\) satisfies the following assumption.
**Definition 2.1**.: _We say that a function \(F:\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\) is \(\mathcal{C}^{1}\) if there exists \(\frac{\delta F}{\delta p}:\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathbb{R}^{d}\to\mathbb{R}\), continuous with quadratic growth in the space variable, such that for all \(p,q\in\mathcal{P}_{2}(\mathbb{R}^{d}),\)_
\[F(q)-F(p)=\int_{0}^{1}\int_{\mathbb{R}^{d}}\frac{\delta F}{\delta p}\big{(} \eta q+(1-\eta)p,x\big{)}(q-p)(dx)d\eta.\]
**Remark 2.2**.: _Note that \(F\in\mathcal{C}^{1}\) is \(\mathcal{W}_{2}\)-continuous. We call \(\frac{\delta F}{\delta p}\) the linear derivative and we may further define the intrinsic derivative \(D_{p}F(p,x):=\nabla\frac{\delta F}{\delta p}(p,x)\)._
**Assumption 2.3**.: _Assume that \(F\) is convex, \(\mathcal{C}^{1}\) and_
\[F(p)\geq\lambda\int_{\mathbb{R}^{d}}\left|x\right|^{2}p(dx)\quad\text{for some }\lambda>0.\]
The following proposition states that the bias caused by the regularizer vanishes as the temperature \(\sigma\to 0\). It ensures that the Fisher information is an efficient regularizer for this mean-field optimization problem.
**Proposition 2.4**.: _We have_
\[\lim_{\sigma\to 0}\left(\inf_{p\in\mathcal{P}_{H}}\;\mathfrak{F}^{\sigma}(p) \right)=\inf_{p\in\mathcal{P}_{2}}\;F(p).\]
Proof.: Let us pick \(p\in\mathcal{P}_{2}\) such that \(F\left(p\right)<\inf_{p\in\mathcal{P}_{2}}F\left(p\right)+\varepsilon\). By truncation and mollification, define \(p_{K,\delta}=p_{K}\star\varphi_{\delta}\) where \(p_{K}:=\frac{p\,\mathbf{1}_{|x|\leq K}}{p\left(|x|\leq K\right)}\) and \(\varphi_{\delta}\left(x\right):=\frac{1}{\left(2\pi\delta\right)^{\frac{d}{2}}}\exp\left(-\frac{|x|^{2}}{2\delta}\right)\). It is clear that \(p_{K,\delta}\) converges to \(p\) in \(\mathcal{W}_{2}\) as \(K\to\infty\) and \(\delta\to 0.\) Additionally, one easily checks by direct computation that \(I(p_{K,\delta})<+\infty.\) By \(\mathcal{W}_{2}\)-continuity of \(F,\) we deduce by choosing \(K\) large and \(\delta\) small enough that
\[\inf_{p\in\mathcal{P}_{H}}\mathfrak{F}^{\sigma}\left(p\right)\leq F\left(p_{K,\delta}\right)+\sigma^{2}I\left(p_{K,\delta}\right)\leq F\left(p\right)+\varepsilon+\sigma^{2}I\left(p_{K,\delta}\right)\leq\inf_{p\in\mathcal{P}_{2}}F\left(p\right)+2\varepsilon+\sigma^{2}I\left(p_{K,\delta}\right).\]
We conclude by taking the limit \(\sigma\to 0.\)
For further analysis we shall introduce the following generalized free energy function: for all \(p\in\mathcal{P}_{H}\),
\[\mathfrak{F}^{\sigma,\gamma}(p):=F(p)+\sigma^{2}I(p)+\gamma H(p), \tag{2.3}\]
where \(\gamma\geq 0\) and \(H\) is the entropy defined as
\[H(p):=\int_{\mathbb{R}^{d}}p(x)\log p(x)dx.\]
By considering the limit of the rate of change \(\frac{\mathfrak{F}^{\sigma,\gamma}(p+t(q-p))-\mathfrak{F}^{\sigma,\gamma}(p)}{t}\) as \(t\to 0\), a formal calculus of variations leads us to define, by abuse of notation,
\[\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p,\cdot)\;:=\;\frac{ \delta F}{\delta p}\left(p,\cdot\right)-\frac{\sigma^{2}}{4}\left(2\nabla \cdot\left(\frac{\nabla p}{p}\right)+\frac{\left|\nabla p\right|^{2}}{p^{2}} \right)+\gamma\log p-\lambda(p), \tag{2.4}\]
where \(\lambda(p)\) is chosen so that
\[\int_{\mathbb{R}^{d}}\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p,x )p(x)dx=0. \tag{2.5}\]
A rigorous justification can be found in Proposition 4.3 below.
### Mean-Field Schrodinger Dynamics
Given the definition in eq. (2.4), we will consider the following generalized mean-field Schrodinger dynamics
\[\frac{dp_{t}}{dt}=-\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}\left(p _{t},\cdot\right)p_{t}.\]
Thanks to the normalization in eq. (2.5), the mass of \(p_{t}\) remains equal to \(1\). Writing the functional derivative explicitly, we have the following dynamics
\[\partial_{t}p_{t}=-\left(\frac{\delta F}{\delta p}\left(p_{t},\cdot\right)- \frac{\sigma^{2}}{4}\left(2\nabla\cdot\left(\frac{\nabla p_{t}}{p_{t}}\right)+ \frac{\left|\nabla p_{t}\right|^{2}}{p_{t}^{2}}\right)+\gamma\log p_{t}- \lambda_{t}\right)p_{t} \tag{2.6}\]
where \(p_{t}=p\left(t,\cdot\right)\), \(\nabla\) is the spatial derivative on \(x\), and \(\lambda_{t}=\lambda(p_{t})\) satisfies
\[\lambda_{t}=\int_{\mathbb{R}^{d}}\left(\frac{\delta F}{\delta p}\left(p_{t},x \right)-\frac{\sigma^{2}}{4}\left(2\nabla\cdot\left(\frac{\nabla p_{t}}{p_{t}} \right)(x)+\frac{\left|\nabla p_{t}\right|^{2}}{p_{t}^{2}}(x)\right)+\gamma \log p_{t}(x)\right)p_{t}(x)dx.\]
In particular we call the dynamics with \(\gamma=0\) the mean-field Schrodinger dynamics, namely,
\[\frac{dp_{t}}{dt}=-\frac{\delta\mathfrak{F}^{\sigma}}{\delta p}\left(p_{t}, \cdot\right)p_{t}. \tag{2.7}\]
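To make the dynamics concrete, the following minimal finite-difference sketch integrates (2.7) in dimension one for the illustrative linear potential \(F(p)=\int V\,dp\) with \(V(x)=x^{2}/2\) (so that \(\frac{\delta F}{\delta p}=V\)); the grid, the time step and the initial condition are arbitrary choices made for the sketch, not prescriptions of the paper.

```python
# Crude explicit scheme for the mean-field Schrodinger dynamics (2.7), d = 1.
import numpy as np

x = np.linspace(-6.0, 6.0, 601)
dx = x[1] - x[0]
V = 0.5 * x**2                          # illustrative linear case dF/dp = V
sigma, dt = 1.0, 1e-4

p = np.exp(-((x - 1.0) ** 2))
p /= p.sum() * dx                       # normalized initial density

def variational_derivative(p):
    # dF/dp - (sigma^2/4)(2 (log p)'' + ((log p)')^2), cf. eq. (2.4) with gamma = 0
    g = np.gradient(np.log(p), dx)      # (log p)' = p'/p
    return V - (sigma**2 / 4) * (2 * np.gradient(g, dx) + g**2)

for _ in range(20000):
    d = variational_derivative(p)
    lam = (d * p).sum() * dx            # lambda_t enforcing eq. (2.5)
    p = p * np.exp(-dt * (d - lam))     # multiplicative Euler step for (2.7)
    p /= p.sum() * dx                   # guard the mass against round-off

# p now approximates the minimizer p^*: the squared ground state of the
# harmonic oscillator -sigma^2 Delta + V, i.e. a Gaussian.
```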
**Assumption 2.5**.: _The linear derivative admits the decomposition \(\frac{\delta F}{\delta p}\left(p,x\right)=g\left(x\right)+G\left(p,x\right)\) where \(g\) and \(G(p,\cdot)\) are \(C^{2}\) such that_
* \(g\) _is_ \(\underline{\kappa}\)_-convex and has bounded Hessian, i.e.,_ \[\underline{\kappa}I_{d}\leq\nabla^{2}g\leq\overline{\kappa}I_{d},\quad\text{ for some }\overline{\kappa}\geq\underline{\kappa}.\]
* \(G\) _is_ \(\mathcal{W}_{1}\)_-continuous in_ \(p\) _and uniformly Lipschitz continuous in_ \(x\)_, i.e.,_ \[|G(p,x)-G(p,x^{\prime})|\leq L_{G}|x-x^{\prime}|,\quad\text{for all}\quad p \in\mathcal{P}_{2}(\mathbb{R}^{d}),\]
* \(\nabla G\) _is Lipschitz continuous, i.e.,_ \[|\nabla G(p,x)-\nabla G(p^{\prime},x^{\prime})|\leq L_{\nabla G}\left(|x-x^{ \prime}|+\mathcal{W}_{1}(p,p^{\prime})\right);\]
**Assumption 2.6**.: _The initial probability distribution admits the decomposition \(p_{0}(x)=e^{-\left(v_{0}(x)+w_{0}(x)\right)}\) where \(v_{0}\) and \(w_{0}\) are \(C^{1}\) such that_
* \(w_{0}\) _is Lipschitz continous;_
* \(\nabla v_{0},\ \nabla w_{0}\) _are both_ \(\overline{\eta}\)_-Lipschitz continuous;_
* \(v_{0}\) _is_ \(\underline{\eta}\)_-convex, i.e.,_ \[\left(\nabla v_{0}(x)-\nabla v_{0}(y)\right)\cdot(x-y)\geq\underline{\eta}|x- y|^{2},\quad\text{for }x,y\in\mathbb{R}^{d}.\]
In the sequel, we assume that Assumptions 2.3, 2.5 and 2.6 hold. First we show that the mean-field Schrodinger dynamic is well-defined. The proof is postponed to Section 3.2. For each \(T>0\), we denote by \(Q_{T}=(0,T]\times\mathbb{R}^{d},\ \bar{Q}_{T}=[0,T]\times\mathbb{R}^{d}\) and by \(C^{n}(Q_{T})\) the set of functions \(f\) such that \(\partial_{t}^{k}\nabla^{m}f\) exists for \(2k+m\leq n\).
**Theorem 2.7**.: _Under the assumptions above, the generalized mean-field Schrodinger dynamics eq. (2.6) admits a unique positive classical solution \(p\in C^{3}(Q_{T})\cap C(\bar{Q}_{T})\) for all \(T>0.\) In addition, it admits the decomposition \(p_{t}=e^{-\left(v_{t}+w_{t}\right)}\) where there exist \(\underline{c},\overline{c},C>0,\) such that_
\[\underline{c}I_{d}\leq\nabla^{2}v_{t}\leq\overline{c}I_{d},\qquad\|\nabla w_{ t}\|_{\infty}\vee\|\nabla^{2}w_{t}\|_{\infty}\leq C,\qquad\forall\,t>0.\]
Then we study the long-time behaviour of the generalized mean-field Schrodinger dynamics and establish convergence toward the unique minimizer of the generalized free-energy function. The proof is postponed to Section 4.3. It essentially relies on the energy dissipation
\[\frac{d}{dt}\mathfrak{F}^{\sigma,\gamma}(p_{t})=-\int_{\mathbb{R}^{d}}\left| \frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t},x)\right|^{2}p_{t}( x)dx,\]
(see Proposition 4.4), so that the generalized free energy decreases monotonically along the generalized mean-field Schrodinger dynamics (2.6). Intuitively, the dissipation of energy only stops at the moment \(\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{\infty},\cdot)=0\). Since \(\mathfrak{F}^{\sigma,\gamma}\) is (strictly) convex, this is a sufficient condition for \(p_{\infty}\) to be the minimizer.
**Theorem 2.8**.: _Under the assumptions above, the solution \((p_{t})_{t\geq 0}\) to eq. (2.6) converges uniformly to \(p^{*}\), the unique minimizer of \(\mathfrak{F}^{\sigma,\gamma}\) in \(\mathcal{P}_{H}\). In addition, the optimizer \(p^{*}\) also satisfies Assumption 2.6 (but with different coefficients \(\underline{\eta},\,\overline{\eta}\)) and it holds_
\[\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p^{*},\cdot)=0.\]
**Remark 2.9**.: _In view of Corollary 4.8 below, the family of distributions \((p_{t})_{t\geq 0}\) admits uniform Gaussian bounds and thus it also converges to \(p^{*}\) for the \(L^{p}\)-norm or the \(\mathcal{W}_{p}\)-distance for any \(p\geq 1\)._
**Remark 2.10**.: _In case that the function \(p\mapsto F(p)\) is linear, i.e., \(F(p)=\int_{\mathbb{R}^{d}}V(x)p(dx)\) with some potential \(V\), the function \(\mathfrak{F}^{\sigma}\) is the classical energy function in quantum mechanics composed of the potential energy \(F\) and the kinetic one \(\int_{\mathbb{R}^{d}}|\nabla\sqrt{p}(x)|^{2}dx\). Let \(p^{*}\) be the minimizer of \(\mathfrak{F}^{\sigma}\), and denote by \(\psi^{\star}:=\sqrt{p^{*}}\) the corresponding wave function. If \(\psi^{\star}\) is twice continuously differentiable, then the first order equation reads_
\[-\sigma^{2}\Delta\psi^{\star}+V\psi^{\star}=c\psi^{\star},\quad\text{with} \quad c=\mathfrak{F}^{\sigma}(p^{*})=\min_{p\in\mathcal{P}_{H}}\mathfrak{F}^ {\sigma}(p).\]
_It is well known that \(c\) is the smallest eigenvalue of the Schrodinger operator \(-\sigma^{2}\Delta+V\) and that \(\psi^{\star}\) is the ground state of the quantum system._
Further we shall prove that the convergence for the mean-field Schrodinger dynamics (with \(\gamma=0\)) is exponentially fast. The proof is postponed to Section 4.4. As a byproduct, we establish a functional inequality which may carry independent interest, see Theorem 4.10.
**Theorem 2.11**.: _There exists a constant \(c>0\) such that_
\[\mathfrak{F}^{\sigma}(p_{t})-\mathfrak{F}^{\sigma}(p^{*})\leq e^{-ct}( \mathfrak{F}^{\sigma}(p_{0})-\mathfrak{F}^{\sigma}(p^{*})). \tag{2.8}\]
_Moreover, if we denote by \(I(\cdot|\cdot)\) the relative Fisher information, we have_
\[\frac{\sigma^{2}}{4}I(p_{t}|p^{*})\leq e^{-ct}(\mathfrak{F}^{\sigma}(p_{0})- \mathfrak{F}^{\sigma}(p^{*})).\]
### Gradient Flow with Relative Entropy
In this paper, we shall further investigate the gradient flow with respect to the relative entropy given the free energy function \(\mathfrak{F}^{\sigma}\). First, given \(h>0\) and \(\tilde{p}\) satisfying Assumption 2.6, consider the variational problem:
\[\inf_{p\in\mathcal{P}_{H}}\left\{\mathfrak{F}^{\sigma}(p)+h^{-1}H(p|\tilde{p} )\right\} \tag{2.9}\]
For \(\tilde{p}\) satisfying Assumption 2.6, we may write \(\tilde{p}=e^{-\tilde{u}}\) with \(\tilde{u}=\tilde{v}+\tilde{w}\) as in the assumption. Denoting by
\[\tilde{F}(p):=F(p)+h^{-1}\int_{\mathbb{R}^{d}}\tilde{u}(x)p(dx)\]
the new potential function, we may rewrite the objective function in the optimization (2.9) in the form of the generalized free energy function (2.3), namely, \(\tilde{\mathfrak{F}}^{\sigma,h^{-1}}\). Moreover, the new potential function \(\tilde{F}\) still satisfies Assumption 2.5 with \(\tilde{g}=g+h^{-1}\tilde{v}\) and \(\tilde{G}=G+h^{-1}\tilde{w}\). Therefore, the following corollary is a direct result of Theorem 2.8.
**Corollary 2.12**.: _Under the assumptions above, the minimization problem (2.9) admits a unique minimizer \(p^{*}\in\mathcal{P}_{H}\) still satisfying Assumption 2.6 (but with different coefficients) and it holds_
\[\frac{\delta\tilde{\mathfrak{F}}^{\sigma,h^{-1}}}{\delta p}(p^{*},\cdot)=0.\]
Now given \(p_{0}^{h}:=p_{0}\) satisfying Assumption 2.6 we may define a sequence of probability measures using the variational problem (2.9):
\[p_{i}^{h}:=\operatorname*{argmin}_{p\in\mathcal{P}_{H}}\left\{\mathfrak{F}^{ \sigma}(p)+h^{-1}H(p|p_{i-1}^{h})\right\},\quad\text{for }i\geq 1. \tag{2.10}\]
It follows from Corollary 2.12 that each \(p_{i}^{h}\) satisfies the first order equation
\[\frac{\delta F}{\delta p}\left(p_{i}^{h},\cdot\right)+h^{-1}(\log p_{i}^{h}-\log p _{i-1}^{h})-\frac{\sigma^{2}}{4}\left(2\nabla\cdot\left(\frac{\nabla p_{i}^{h}}{ p_{i}^{h}}\right)+\frac{\left|\nabla p_{i}^{h}\right|^{2}}{\left|p_{i}^{h} \right|^{2}}\right)=\lambda_{i}^{h}, \tag{2.11}\]
where
\[\lambda_{i}^{h}=\int_{\mathbb{R}^{d}}\left(\frac{\delta F}{\delta p}\left(p_{i }^{h},\cdot\right)+h^{-1}(\log p_{i}^{h}-\log p_{i-1}^{h})-\frac{\sigma^{2}}{4 }\left(2\nabla\cdot\left(\frac{\nabla p_{i}^{h}}{p_{i}^{h}}\right)+\frac{\left| \nabla p_{i}^{h}\right|^{2}}{\left|p_{i}^{h}\right|^{2}}\right)\right)p_{i}^{h}. \tag{2.12}\]
By slightly abusing the notation, define the continuous-time flow of probability measures:
\[p_{t}^{h}:=p_{i}^{h},\quad\text{for }t\in[hi,h(i+1)).\]
**Theorem 2.13**.: _Under the assumptions above, the sequence of functions \((p^{h})_{h>0}\) converges uniformly on \([0,T]\times\mathbb{R}^{d}\) as \(h\to 0\) to \(p\), the solution to the mean-field Schrodinger dynamics (2.7)._
**Remark 2.14**.: _In view of Proposition 5.1 below, the family of distributions \((p_{t}^{h})_{h>0}\) admits uniform Gaussian bounds and thus we also have for any \(p\geq 1,\)_
\[\sup_{t\in[0,T]}\left\|p_{t}^{h}-p_{t}\right\|_{L^{p}}\xrightarrow[h\to 0]{}0, \qquad\sup_{t\in[0,T]}\mathcal{W}_{p}\big{(}p_{t}^{h},p_{t}\big{)} \xrightarrow[h\to 0]{}0.\]
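For illustration, the iteration (2.10) can be mimicked on a grid. In the following sketch the potential is again the toy linear case \(V(x)=x^{2}/2\), the density is parametrized through a softmax to stay positive and normalized, and scipy's BFGS is used as a black-box minimizer; these are implementation choices made here for the sketch, not ingredients of the paper.

```python
# Toy grid version of the minimizing-movement scheme (2.10).
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-6.0, 6.0, 201)
dx = x[1] - x[0]
V = 0.5 * x**2
sigma, h = 1.0, 0.1

def density(theta):
    w = np.exp(theta - theta.max())     # positive, mass one by construction
    return w / (w.sum() * dx)

def objective(theta, p_prev):
    p = density(theta)
    F = (V * p).sum() * dx                                  # potential energy
    I = (np.gradient(np.sqrt(p), dx) ** 2).sum() * dx       # Fisher information (2.2)
    H = (p * np.log(p / p_prev)).sum() * dx                 # relative entropy H(p | p_prev)
    return F + sigma**2 * I + H / h

theta = -((x - 1.0) ** 2)               # log-density of the initial guess p_0^h
p = density(theta)
for i in range(20):                     # iterates p_i^h of (2.10)
    theta = minimize(objective, theta, args=(p,), method="BFGS").x
    p = density(theta)
```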
### Numerical Simulation
In this section we shall briefly report how to sample \(\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{t}^{i}}\) to approximate the probability law \(p_{t}\) in the mean field Schrodinger dynamics (2.7), without pursuing mathematical rigor.
In case that \(x\mapsto p_{t}(x)\) is twice continuously differentiable, the mean field Schrodinger dynamics (2.7) can be rewritten as
\[\partial_{t}p=\frac{\sigma^{2}}{2}\Delta p-\left(\frac{\delta F}{\delta p} \left(p_{t},x\right)+\frac{\sigma^{2}}{4}\frac{\left|\nabla p\right|^{2}}{p^{ 2}}-\lambda_{t}\right)p.\]
This can be viewed as the Fokker-Planck equation describing the marginal distributions of a Brownian motion \((X_{t})_{t\geq 0}\) killed at rate \(\eta(t,x):=\frac{\delta F}{\delta p}\left(p_{t},x\right)+\frac{\sigma^{2}}{4}\frac{\left|\nabla p\right|^{2}}{p^{2}}\) conditioned on not being killed. In other words, the particle \(X\) moves freely in the space \(\mathbb{R}^{d}\) as a Brownian motion \((\sigma W_{t})_{t\geq 0}\) before it gets killed with the conditional probability:
\[\mathbb{P}\left(\text{$X$ gets killed in }[t,t+\Delta t]\text{ }|\text{ }X_{t}\right)\approx\eta(t,X_{t})\Delta t,\quad\text{for small }\Delta t.\]
Meanwhile the killed particle gets reborn instantaneously according to the distribution \(p_{t}\). This interpretation of the mean field Schrodinger dynamics offers insight into how to sample the marginal law \(p_{t}\); however, in order to evaluate the death rate \(\eta(t,X_{t})\) one needs to evaluate \(\frac{\left|\nabla p\right|^{2}}{p^{2}}\), which can be hard if not impossible. This difficulty forces us to find a more sophisticated way to sample \(p_{t}\).
Note that \(\psi_{t}:=\sqrt{p_{t}}\) solves the PDE:
\[\partial_{t}\psi=\frac{\sigma^{2}}{2}\Delta\psi-\frac{1}{2}\left(\frac{\delta F }{\delta p}(p_{t},x)-\lambda_{t}\right)\psi.\]
Now introduce two scalings of \(\psi_{t}\), namely \(\bar{\psi}_{t}:=e^{-\frac{1}{2}\int_{0}^{t}\lambda_{s}\,ds}\psi_{t}\) and \(\widehat{\psi}_{t}:=(\int\psi_{t})^{-1}\psi_{t}\), such that
\[\partial_{t}\bar{\psi}=\frac{\sigma^{2}}{2}\Delta\bar{\psi}-\frac{1}{2}\frac{ \delta F}{\delta p}(p_{t},x)\bar{\psi}\quad\text{and}\quad\partial_{t}\widehat {\psi}=\frac{\sigma^{2}}{2}\Delta\widehat{\psi}-\frac{1}{2}\left(\frac{\delta F }{\delta p}(p_{t},x)-\widehat{\lambda}_{t}\right)\widehat{\psi}.\]
The constant \(\widehat{\lambda}_{t}\) is chosen so that \(\widehat{\psi}_{t}\) is a probability density. Observe that:
* By the Feynman-Kac formula, the function \(\bar{\psi}\) has the probabilistic representation: \[\begin{split}\bar{\psi}(t,x)&=\mathbb{E}\left[\exp\Big{(}-\int_{0}^{t}\frac{1}{2}\frac{\delta F}{\delta p}(p_{t-s},x+\sigma W_{s})ds\Big{)}\psi(0,x+\sigma W_{t})\right]\\ &\approx\frac{1}{M}\sum_{j=1}^{M}\exp\Big{(}-\int_{0}^{t}\frac{1}{2}\frac{\delta F}{\delta p}(p_{t-s},x+\sigma W_{s}^{j})ds\Big{)}\psi(0,x+\sigma W_{t}^{j}),\end{split}\] where the latter approximation stands for the Monte Carlo simulation to approximate the expectation.
* The probability law \(\widehat{\psi}\) is the marginal distribution of a birth-death Brownian motion \((\widehat{X}_{t})_{t\geq 0}\) with death rate equal to \(\eta(t,x):=\frac{1}{2}\frac{\delta F}{\delta p}(p_{t},x)\), which we can evaluate numerically without difficulty.
* Eventually, note that \(p_{t}=\frac{\bar{\psi}_{t}}{\int_{\mathbb{R}^{d}}\bar{\psi}_{t}(x)\widehat{\psi}_{t}(x)dx}\widehat{\psi}_{t}\) and can be approximately sampled as the weighted empirical measure (see the particle sketch after this list) \[p_{t}\approx\frac{1}{N}\sum_{i=1}^{N}\frac{\bar{\psi}(t,\widehat{X}_{t}^{i})}{\frac{1}{N}\sum_{k=1}^{N}\bar{\psi}(t,\widehat{X}_{t}^{k})}\delta_{\widehat{X}_{t}^{i}}.\]
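Here is a minimal particle sketch of the second bullet point, in \(d=1\) and for the linear case \(\frac{\delta F}{\delta p}=V\) with \(V(x)=x^{2}/2\geq 0\), so that the killing rate is nonnegative and requires no mean-field input. The Fleming-Viot-style rebirth from the surviving particles is one standard way to realize the normalization by \(\widehat{\lambda}_{t}\); all numerical parameters are illustrative assumptions.

```python
# Birth-death Brownian particles approximating psi-hat_t.
import numpy as np

rng = np.random.default_rng(0)
V = lambda y: 0.5 * y**2
sigma, dt, N, n_steps = 1.0, 1e-3, 5000, 5000

X = rng.normal(1.0, 1.0, size=N)
for _ in range(n_steps):
    X = X + sigma * np.sqrt(dt) * rng.normal(size=N)     # free Brownian motion
    kill = rng.random(N) < 0.5 * V(X) * dt               # death rate eta = (1/2) dF/dp
    if kill.any():
        X[kill] = rng.choice(X[~kill], size=kill.sum())  # instantaneous rebirth

# To recover p_t itself (third bullet point), each particle would additionally
# carry the Feynman-Kac weight bar-psi(t, X^i) from the first bullet point.
```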
**Remark 2.15**.: _In particular, in view of Remark 2.10, the Monte Carlo method above offers an efficient way to sample the ground state of a high dimensional quantum system. To our knowledge there is little discussion on similar numerical schemes in the literature._
## 3 Mean-Field Schrodinger Dynamics
In order to study the dynamics in eq. (2.6), we introduce a change of variable \(p_{t}=\exp\left(-u_{t}\right)/Z_{t}\) where \(u\) satisfies the following equation:
\[\partial_{t}u=\frac{\sigma^{2}}{2}\Delta u-\frac{\sigma^{2}}{4}\left|\nabla u \right|^{2}+\frac{\delta F}{\delta p}\left(p_{t},\cdot\right)-\gamma u, \tag{3.1}\]
with the normalization constant \(Z_{t}=\int\exp\left(-u_{t}\left(x\right)\right)dx\). Clearly, \(u\) is a classical solution to eq. (3.1) if and only if the probability density \(p\) is a positive classical solution to eq. (2.6). Then we consider the mapping
\[(m_{t})_{t\in[0,T]}\mapsto(u_{t})_{t\in[0,T]}\mapsto(p_{t})_{t\in[0,T]} \tag{3.2}\]
where \(u\) solves the HJB equation
\[\partial_{t}u=\frac{\sigma^{2}}{2}\Delta u-\frac{\sigma^{2}}{4}\left|\nabla u \right|^{2}+\frac{\delta F}{\delta p}\left(m_{t},\cdot\right)-\gamma u, \tag{3.3}\]
and \(p_{t}=\exp\left(-u_{t}\right)/Z_{t}\), and we look for a fixed point to this mapping as it corresponds to a solution to eq. (3.1).
In this section, we first show that the solution to HJB equation (3.3) can be decomposed as the sum of a convex and a Lipschitz function. This allows us to apply a reflection coupling argument to show that the mapping (3.2) is a contraction on a short horizon, and thus to ensure existence and uniqueness of the solution to (3.1) and to prove Theorem 2.7. Finally we gather some properties of the solution to eq. (2.6) for later use.
### Hamilton-Jacobi-Bellman Equation
The aim of this section is to prove that the solution to HJB equation (3.3) is smooth and can be decomposed into the sum of one convex and one Lipschitz function as stated in Proposition 3.2 below.
**Assumption 3.1**.: _Assume that the mapping \(t\mapsto m_{t}\) is \(\mathcal{W}_{1}\)-continuous, i.e.,_
\[\lim_{s\to t}\mathcal{W}_{1}(m_{t},m_{s})=0,\quad\text{for all }t\geq 0.\]
**Proposition 3.2**.: _Under Assumption 3.1, there exists a unique classical solution \(u\) of class \(C^{3}\) to the HJB equation (3.3). In addition, \(u=v+w\) where there exist \(\underline{c},\overline{c},C>0,\) independent of \(m,\) such that_
\[\underline{c}I_{d}\leq\nabla^{2}v_{t}\leq\overline{c}I_{d},\qquad\|\nabla w_{t }\|_{\infty}\vee\|\nabla^{2}w_{t}\|_{\infty}\leq C,\qquad\forall\,t>0.\]
**Corollary 3.3**.: _For any \(T\in[0,+\infty),\) there exists \(C_{T}>0\), independent of \(m\), such that_
\[\sup_{t\leq T}|\nabla u(t,x)|\leq C_{T}(1+|x|),\qquad\sup_{t\leq T}|u(t,x)| \leq C_{T}(1+|x|^{2}).\]
By the Cole-Hopf transformation, we may prove in a rather classical way that there exists a unique smooth solution to eq. (3.3). We refer to Appendix A.1 for a complete proof. Further, given the decomposition \(\frac{\delta F}{\delta p}(p,x)=g(x)+G(p,x)\) in Assumption 2.5, we are tempted to decompose the solution to eq. (3.3) as \(u=v+w,\) where \(v\) solves the HJB corresponding to the strongly convex part \(g\):
\[\partial_{t}v=\frac{\sigma^{2}}{2}\Delta v-\frac{\sigma^{2}}{4}\left|\nabla v \right|^{2}+g-\gamma v \tag{3.4}\]
and \(w\) solves the remaining part:
\[\partial_{t}w=\frac{\sigma^{2}}{2}\Delta w-\frac{\sigma^{2}}{2}\nabla v\cdot \nabla w-\frac{\sigma^{2}}{4}\left|\nabla w\right|^{2}+G\left(m_{t},\cdot \right)-\gamma w. \tag{3.5}\]
Because it is a special case of eq. (3.3), eq. (3.4) also admits a unique classical solution, and therefore so does eq. (3.5). In addition, Proposition A.3 also shows that under the standing assumptions the solutions \(u,v,w\) indeed belong to \(C^{3}(Q),\) and for all \(t\in[0,T],\)
\[\left|\nabla\psi(t,x)\right|\leq C_{T}(1+|x|^{2}),\quad\text{with }\psi(t,x)=e^{-\frac{1}{2}u(t,x)}. \tag{3.6}\]
The proof of Proposition 3.2 is completed through Proposition 3.6 and Proposition 3.8 below.
**Remark 3.4**.: _In case \(G=0\) and \(w_{0}=0,\) we have \(u=v\). Therefore all the properties proved for the function \(u\) are shared by the function \(v\)._
**Lemma 3.5**.: _Let \(u\) be the classical solution to eq. (3.3). There exists a constant \(\delta>0\) only depending on \(\underline{\kappa},\overline{\kappa},\overline{\eta}\) from Assumptions 2.5 and 2.6 such that \(\sup_{T\leq\delta}\|\nabla^{2}u(T,\cdot)\|_{\infty}<\infty.\)_
Proof.: (i). We first show that the SDE (3.9) below admits a unique strong solution. Define \(\psi(t,x):=e^{-\frac{1}{2}u(t,x)}.\) By Lemma A.2 in Appendix, we have
\[\psi(t,x)=\mathbb{E}\left[e^{-\frac{1}{2}\int_{0}^{t}\big{(}\frac{\delta F}{ \delta p}(m_{t-s},x+\sigma W_{s})-\gamma u(t-s,x+\sigma W_{s})\big{)}ds}\psi( 0,x+\sigma W_{t})\right]. \tag{3.7}\]
Now consider the continuous paths space \(C[0,T]\) as the canonical space. Denote by \(\overline{\mathbb{F}}:=(\overline{\mathcal{F}}_{t})_{t\leq T}\) the canonical filtration and \(\overline{X}\) the canonical process. Let \(\mathbb{P}\) be the probability measure such
that \((\overline{X}-x)/\sigma\) is a \(\mathbb{P}\)-Brownian motion starting from the origin. We may define an equivalent probability measure \(\mathbb{Q}\) on the canonical space via
\[\frac{d\mathbb{Q}}{d\mathbb{P}}\Big{|}_{\overline{\mathcal{F}}_{T}}=\Lambda_{T}:=\frac{e^{-\int_{0}^{T}\frac{1}{2}\big{(}\frac{\delta F}{\delta p}(m_{T-s},\overline{X}_{s})-\gamma u(T-s,\overline{X}_{s})\big{)}ds}\psi(0,\overline{X}_{T})}{\psi(T,x)}. \tag{3.8}\]
By Ito's formula we may identify that
\[\mathbb{E}^{\mathbb{P}}\left[\Lambda_{T}\,\middle|\,\overline{\mathcal{F}}_{t}\right] =\frac{e^{-\int_{0}^{t}\frac{1}{2}\big{(}\frac{\delta F}{\delta p}(m_{T-s},\overline{X}_{s})-\gamma u(T-s,\overline{X}_{s})\big{)}ds}\psi(T-t,\overline{X}_{t})}{\psi(T,x)}\] \[=\exp\left(-\int_{0}^{t}\frac{1}{2}\nabla u(T-s,\overline{X}_{s})\cdot d\overline{X}_{s}-\int_{0}^{t}\frac{\sigma^{2}}{8}|\nabla u(T-s,\overline{X}_{s})|^{2}ds\right).\]
Using the Girsanov's theorem, we may conclude that the SDE
\[X_{t}=x-\int_{0}^{t}\frac{\sigma^{2}}{2}\nabla u(T-s,X_{s})ds+\sigma W_{t} \tag{3.9}\]
admits a weak solution. In addition, since \(x\mapsto\nabla u(t,x)\) is locally Lipschitz, the SDE above has the property of pathwise uniqueness. Therefore, by the Yamada-Watanabe theorem we conclude.
(ii). Further we may define \(Y_{t}:=\nabla u(T-t,X_{t}).\) It follows from Ito's formula that \((X,Y)\) solves the forward-backward SDE (FBSDE):
\[\left\{\begin{aligned} dX_{t}&=-\frac{\sigma^{2}}{2} Y_{t}dt+\sigma dW_{t},& X_{0}=x\\ dY_{t}&=\big{(}\gamma Y_{t}-\nabla\frac{\delta F}{ \delta p}(m_{T-t},X_{t})\big{)}dt+Z_{t}dW_{t},& Y_{T}=\nabla u(0,X _{T}),\end{aligned}\right.\]
where \(Z_{t}=\sigma\nabla^{2}u(T-t,X_{t})\). Introduce the norm
\[\|(Y,Z)\|_{\mathcal{D}}:=\sup_{t\leq T}\left\{\mathbb{E}\left[|Y_{t}|^{2}+ \int_{t}^{T}|Z_{s}|^{2}ds\right]\right\}^{\frac{1}{2}}.\]
We are going to show that \(\|(Y,Z)\|_{\mathcal{D}}<\infty\), provided that \(T\) is small enough.
By (3.6), we have
\[|\nabla\psi(t,x)|\leq C_{T}(1+|x|^{2}),\]
as well as \(\psi(t,x)\geq e^{-C_{T}(1+|x|^{2})}\). Therefore,
\[|\nabla u(t,x)|=2\frac{|\nabla\psi|}{\psi}(t,x)\leq C_{T}(1+|x|^{2})e^{C_{T}|x |^{2}}.\]
On the other hand, by the definition of \(\Lambda_{T}\) in (3.8) we have
\[\Lambda_{T}\leq C_{T}e^{C_{T}(|x|^{2}+\sup_{t\leq T}|\overline{X}_{t}|^{2})}.\]
Now we may provide the following estimate
\[\mathbb{E}\left[\sup_{t\leq T}|Y_{t}|^{2}\right] =\mathbb{E}\left[\sup_{t\leq T}|\nabla u(T-t,X_{t})|^{2}\right]= \mathbb{E}^{\mathbb{P}}\left[\Lambda_{T}\sup_{t\leq T}|\nabla u(T-t,\overline {X}_{t})|^{2}\right]\] \[\leq C_{T}e^{C_{T}|x|^{2}}\mathbb{E}^{\mathbb{P}}\left[(1+\sup_{t \leq T}|\overline{X}_{t}|^{2})e^{C_{T}\sup_{t\leq T}|\overline{X}_{t}|^{2}} \right].\]
In particular, if \(T\) is small enough, we have \(\mathbb{E}^{\mathbb{P}}\left[(1+\sup_{t\leq T}|\overline{X}_{t}|^{2})e^{C_{T} \sup_{t\leq T}|\overline{X}_{t}|^{2}}\right]<\infty\).
Moreover, by Ito's formula, we obtain
\[d|Y_{t}|^{2} =\left(2\gamma|Y_{t}|^{2}-2Y_{t}\cdot\nabla\frac{\delta F}{\delta p} (m_{T-t},X_{t})+|Z_{t}|^{2}\right)dt+2Y_{t}\cdot Z_{t}dW_{t}\] \[\geq\left((2\gamma-1)|Y_{t}|^{2}-|\nabla\frac{\delta F}{\delta p} (m_{T-t},X_{t})|^{2}+|Z_{t}|^{2}\right)dt+2Y_{t}\cdot Z_{t}dW_{t}.\]
Define the stopping time \(\tau_{n}:=\inf\{t\geq 0:\ |Z_{t}|\geq n\}\), and note that
\[\mathbb{E}\left[\int_{0}^{T\wedge\tau_{n}}|Z_{t}|^{2}dt\right]\leq\mathbb{E}[|Y_{T\wedge\tau_{n}}|^{2}]-\mathbb{E}[|Y_{0}|^{2}]+\mathbb{E}\left[\int_{0}^{T\wedge\tau_{n}}\left((1-2\gamma)|Y_{t}|^{2}+|\nabla\frac{\delta F}{\delta p}(m_{T-t},X_{t})|^{2}\right)dt\right].\]
Since we have proved \(\mathbb{E}\left[\sup_{t\leq T}|Y_{t}|^{2}\right]<\infty\), by monotone and dominated convergence theorem, we obtain
\[\mathbb{E}\left[\int_{0}^{T}|Z_{t}|^{2}dt\right]\leq\mathbb{E}[|Y_{T}|^{2}]+\mathbb{E}\left[\int_{0}^{T}\left((1-2\gamma)|Y_{t}|^{2}+|\nabla\frac{\delta F}{\delta p}(m_{T-t},X_{t})|^{2}\right)dt\right]<\infty.\]
Therefore, we have \(\|(Y,Z)\|_{\mathcal{D}}<\infty\).
(iii). It is known (see e.g. [17, Theorem I.5.1]) that there exists \(\delta>0\) only depending on \(\underline{\kappa},\ \overline{\kappa},\ \overline{\eta}\) such that for \(T\leq\delta\) the process \((Y,Z)\) here is the unique solution to the FBSDE such that \(\|(Y,Z)\|_{\mathcal{D}}<\infty\). Moreover, by standard a priori estimate (again see [17, Theorem I.5.1]) we may find a constant \(C\geq 0\) only depending on \(\underline{\kappa},\ \overline{\kappa},\ \overline{\eta}\) such that for \((Y^{\prime},Z^{\prime})\) solution to the FBSDE above starting from \(X_{0}=x^{\prime}\) we have
\[\|(Y,Z)-(Y^{\prime},Z^{\prime})\|_{\mathcal{D}}\leq C|x-x^{\prime}|,\quad \text{for}\quad T\leq\delta.\]
In particular, it implies that
\[\left|Y_{0}-Y^{\prime}_{0}\right|=\left|\nabla u(T,x)-\nabla u(T,x^{\prime}) \right|\leq C|x-x^{\prime}|,\]
so that \(\sup_{T\leq\delta}\|\nabla^{2}u(T,\cdot)\|_{\infty}<\infty\).
**Proposition 3.6**.: _Let \(v\) be the classical solution to eq. (3.4). It holds:_
* _The Hessian of_ \(v\) _has a lower bound_ \(\theta_{t}\)_, i.e.,_ \(\nabla^{2}v\left(t,\cdot\right)\geq\theta_{t}I_{d}\)_, such that_ \[\frac{d\theta_{t}}{dt}=\underline{\kappa}-\gamma\theta_{t}-\sigma^{2}\theta_{ t}^{2},\qquad\theta_{0}=\underline{\eta}.\] (3.10) _In particular,_ \(v\) _is strictly convex uniformly w.r.t._ \(t\geq 0\) _and_ \(m\)_._
* _The Hessian of_ \(v\) _is bounded uniformly w.r.t._ \(t>0\) _and_ \(m\)_._
**Remark 3.7**.: _The convexity constant \(\theta_{t}\) for \(v_{t}\) satisfies the Riccati equation (3.10). Let_
\[\theta^{*}:=\frac{\sqrt{\gamma^{2}+4\sigma^{2}\underline{\kappa}}-\gamma}{2\sigma^{2}}\]
_be the positive "equilibrium" position. We note that for any initial condition \(\theta_{0}>0\) the solution \(\theta_{t}\) converges monotonically to \(\theta^{*}\) when \(t\to\infty\) and thus \(\theta_{t}\geq\min(\theta^{*},\theta_{0})=:\underline{\theta}\) for all \(t\geq 0\)._
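The convergence claimed in this remark is easy to confirm numerically; the following sketch (not used in the proofs) integrates (3.10) by explicit Euler for a few initial conditions, with illustrative values for \(\underline{\kappa},\gamma,\sigma\).

```python
# Solutions of the Riccati equation (3.10) converge monotonically to theta*.
import numpy as np

kappa_, gamma, sigma = 1.0, 0.5, 1.0     # stand-ins for kappa-underline, gamma, sigma
theta_star = (np.sqrt(gamma**2 + 4 * sigma**2 * kappa_) - gamma) / (2 * sigma**2)

dt, T = 1e-3, 10.0
for theta0 in (0.1, theta_star, 3.0):
    theta = theta0
    for _ in range(int(T / dt)):         # explicit Euler on (3.10)
        theta += dt * (kappa_ - gamma * theta - sigma**2 * theta**2)
    print(f"theta_0 = {theta0:.3f} -> theta_T = {theta:.6f} (theta* = {theta_star:.6f})")
```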
Proof.: We divide the following discussion into 3 steps.
(i). We first prove the strict convexity of the solution \(v\) on a short horizon. Fix \(T:=\delta\) small enough so that, thanks to Lemma 3.5, \(\nabla^{2}v\) is uniformly bounded on \((0,T]\). We shall prove not only that \(\nabla^{2}v\) has a positive lower bound, but also that the bound does not depend on \(T\).
As in Step (i) of the proof of Lemma 3.5, we may define the strong solution
\[X_{t}=x-\int_{0}^{t}\frac{\sigma^{2}}{2}\nabla v(T-s,X_{s})ds+\sigma W_{t}.\]
Further define \(Y_{t}:=\nabla v(T-t,X_{t})\) and \(Z_{t}:=\sigma\nabla^{2}v(T-t,X_{t})\) so that \(\|(Y,Z)\|_{\mathcal{D}}<\infty\) and \((Y,Z)\) is the unique solution to the FBSDE on the short horizon \([0,T]\):
\[\left\{\begin{aligned} dX_{t}&=-\frac{\sigma^{2}}{2}Y_ {t}dt+\sigma dW_{t},& X_{0}=x\\ dY_{t}&=\big{(}\gamma Y_{t}-\nabla g(X_{t})\big{)} dt+Z_{t}dW_{t},& Y_{T}=\nabla v(0,X_{T}).\end{aligned}\right.\]
Define \((X^{\prime},Y^{\prime},Z^{\prime})\) similarly with \(X^{\prime}_{0}=x^{\prime}\), and further denote by \(\delta X_{t}:=X_{t}-X^{\prime}_{t},\ \delta Y_{t}:=Y_{t}-Y^{\prime}_{t},\ \delta Z_{t}:=Z_{t}-Z^{ \prime}_{t}\). Note that due to the uniqueness of the solution to the FBSDE, we have \(\delta X_{t}=\delta Y_{t}=\delta Z_{t}=0\) for \(t\geq\tau:=\inf\{t\geq 0:\ \delta X_{t}=0\}\). By Ito's formula, it is easy to verify that
\[d\frac{\delta X_{t}\cdot\delta Y_{t}}{|\delta X_{t}|^{2}}=\left( -\frac{\sigma^{2}|\delta Y_{t}|^{2}}{2|\delta X_{t}|^{2}}+\gamma\frac{\delta X _{t}\cdot\delta Y_{t}}{|\delta X_{t}|^{2}}-\frac{\delta X_{t}\cdot\big{(} \nabla g(X_{t})-\nabla g(X^{\prime}_{t})\big{)}}{|\delta X_{t}|^{2}}+\sigma^{2 }\frac{|\delta X_{t}\cdot\delta Y_{t}|^{2}}{|\delta X_{t}|^{4}}\right)dt\\ +\frac{\delta X_{t}\cdot\delta Z_{t}dW_{t}}{|\delta X_{t}|^{2}}.\]
Therefore, the pair \((\widehat{Y}_{t},\widehat{Z}_{t}):=\left(\frac{\delta X_{t}\cdot\delta Y_{t}}{ |\delta X_{t}|^{2}},\frac{\delta X_{t}^{\top}\delta Z_{t}}{|\delta X_{t}|^{2}}\right)\) solves the backward SDE:
\[d\widehat{Y}_{t}=\left(-\frac{\sigma^{2}|\delta Y_{t}|^{2}}{2|\delta X_{t}|^{ 2}}+\gamma\widehat{Y}_{t}-\frac{\delta X_{t}\cdot\big{(}\nabla g(X_{t})- \nabla g(X^{\prime}_{t})\big{)}}{|\delta X_{t}|^{2}}+\sigma^{2}\widehat{Y}_{ t}^{2}\right)dt+\widehat{Z}_{t}dW_{t}.\]
According to Lemma 3.5, the process \(\widehat{Y}\) is bounded on \([0,T]\) and so is the coefficient in front of \(dt\) above. By the Ito's isometry, we clearly have \(\mathbb{E}[\int_{0}^{T}|\widehat{Z}_{t}|^{2}dt]<\infty\). We aim at providing a lower bound for \(\widehat{Y}\). Introduce the Riccati equation (3.10) with solution \(\theta\) and note that \((\theta_{t})\) is bounded on \([0,\infty)\). Define \(\widehat{\theta}_{t}:=\theta_{T-t}\) for \(t\leq T\) so that
\[d\widehat{\theta}_{t}=(-\underline{\kappa}+\gamma\widehat{\theta}_{t}+\sigma^ {2}\widehat{\theta}_{t}^{2})dt,\quad\widehat{\theta}_{T}\leq\widehat{Y}_{T}.\]
Since \(g\) is \(\underline{\kappa}\)-convex, we have
\[d(\widehat{Y}_{t}-\widehat{\theta}_{t}) =\left(-\frac{\sigma^{2}|\delta Y_{t}|^{2}}{2|\delta X_{t}|^{2}}- \frac{\delta X_{t}\cdot\big{(}\nabla g(X_{t})-\nabla g(X^{\prime}_{t})\big{)} }{|\delta X_{t}|^{2}}+\underline{\kappa}+\gamma(\widehat{Y}_{t}-\widehat{ \theta}_{t})+\sigma^{2}(\widehat{Y}_{t}^{2}-\widehat{\theta}_{t}^{2})\right)dt +\widehat{Z}_{t}dW_{t}\] \[\leq\left(\gamma(\widehat{Y}_{t}-\widehat{\theta}_{t})+\sigma^{2} (\widehat{Y}_{t}+\widehat{\theta}_{t})(\widehat{Y}_{t}-\widehat{\theta}_{t}) \right)dt+\widehat{Z}_{t}dW_{t}\]
Since \(\widehat{Y}_{t},\ \widehat{\theta}_{t}\) are both bounded and \(\mathbb{E}[\int_{0}^{T}|\widehat{Z}_{t}|^{2}dt]<\infty\), it follows from the comparison principle for standard backward SDEs that \(\widehat{Y}_{t}-\widehat{\theta}_{t}\geq 0\), that is, the function \(v(t,\cdot)\) is \(\theta_{t}\)-convex for \(t\in[0,T]\).
(ii). We shall improve the bound of \(|\nabla^{2}v|\) to get a bound independent of the size of the horizon \(T=\delta\). Note that \(\nabla v\) satisfies the equation
\[\partial_{t}\nabla v=\frac{\sigma^{2}}{2}\Delta\nabla v-\frac{\sigma^{2}}{2} \nabla^{2}v\nabla v+\nabla g-\gamma\nabla v,\]
and has the probabilistic representation
\[\nabla v(t,x)=\mathbb{E}\left[\int_{0}^{t}e^{-\gamma s}\nabla g(X_{s})ds+e^{- \gamma t}\nabla v(0,X_{t})\right],\]
with
\[X_{s}=x-\int_{0}^{s}\frac{\sigma^{2}}{2}\nabla v(t-r,X_{r})dr+\sigma W_{s}.\]
Since \(\nabla g,\ \nabla v(0,\cdot)\) are both Lipschitz continuous, we have
\[\left|\nabla v(t,x)-\nabla v(t,x^{\prime})\right|\leq C\mathbb{E}\left[\int_{0 }^{t}|X_{s}-X_{s}^{\prime}|ds+|X_{t}-X_{t}^{\prime}|\right], \tag{3.11}\]
where \(X_{s}^{\prime}=x^{\prime}-\int_{0}^{s}\frac{\sigma^{2}}{2}\nabla v(t-r,X_{r}^ {\prime})dr+\sigma W_{s}\). Now recall that we have proved in Step (i) that the function \(v(s,\cdot)\) is \(\theta_{s}\)-convex for \(s\in[0,t]\) so that
\[\frac{1}{2}d\left|X_{s}-X_{s}^{\prime}\right|^{2} =\left(X_{s}-X_{s}^{\prime}\right)\cdot\left(dX_{s}-dX_{s}^{ \prime}\right)\] \[=-\frac{\sigma^{2}}{2}\left(X_{s}-X_{s}^{\prime}\right)\cdot \left(\nabla v\left(t-s,X_{s}\right)-\nabla v\left(t-s,X_{s}^{\prime}\right) \right)ds\] \[\leq-\frac{\sigma^{2}\theta_{t-s}}{2}\left|X_{s}-X_{s}^{\prime} \right|^{2}ds,\]
Furthermore we observe that \(\underline{\theta}:=\inf_{s\geq 0}\theta_{s}=\underline{\eta}\wedge\theta^{*}>0\) by Remark 3.7 so that
\[|X_{s}-X_{s}^{\prime}|\leq e^{-\frac{\sigma^{2}\underline{\theta}s}{2}}|x-x^{\prime}|. \tag{3.12}\]
Together with eq. (3.11), we obtain
\[|\nabla v(t,x)-\nabla v(t,x^{\prime})|\leq C\left(1+\frac{2}{\sigma^{2} \underline{\theta}}\right)|x-x^{\prime}|.\]
Therefore \(|\nabla^{2}v(t,\cdot)|\leq C\left(1+\frac{2}{\sigma^{2}\underline{\theta}}\right)\), in particular the bound does not depend on \(T=\delta\).
(iii). By the result of Step (ii), we know that \(\nabla^{2}v(\delta,\cdot)\) is bounded and the bound does not depend on \(\delta\). Together with Lemma 3.5, we conclude that \(\nabla^{2}v\) is bounded on \([\delta,2\delta]\), and further deduce that \(v\) is \(\theta_{t}\)-convex and \(\nabla^{2}v\) has a \(\delta\)-independent bound again on \([\delta,2\delta]\) thanks to the results of Step (i), (ii). Therefore the desired result of the proposition follows from induction.
**Proposition 3.8**.: _Let \(w\) be the classical solution to eq. (3.5). Then the functions \(x\mapsto w(t,x)\) are Lipschitz continuous uniformly w.r.t. \(t\in[0,\infty)\) and \(m\)._
Proof.: We consider the following stochastic control problem. Let \((\Omega,\mathcal{F},\mathbb{P},\mathbb{F})\) be a filtered probability space, and \(W\) be a \((\mathbb{P},\mathbb{F})\)-Brownian motion. Denote by \(\mathcal{A}\) the collection of admissible control processes, _i.e._, processes \(\alpha\) that are progressively measurable with \(\mathbb{E}\left[\int_{0}^{T}|\alpha_{t}|^{2}dt\right]<\infty\). Then it follows from standard dynamic programming arguments that
\[w(T,x)=\inf_{\alpha\in\mathcal{A}}\mathbb{E}\left[\int_{0}^{T}e^{-\gamma s} \left(G\left(m_{T-s},X_{s}^{\alpha}\right)+\frac{\sigma^{2}}{4}|\alpha_{s}|^{ 2}\right)ds+e^{-\gamma T}w\left(0,X_{T}^{\alpha}\right)\right],\]
where \(X^{\alpha}\) stands for the strong solution to
\[dX_{s}^{\alpha}=-\frac{\sigma^{2}}{2}\left(\nabla v\left(T-s,X_{s}^{\alpha} \right)+\alpha_{s}\right)ds+\sigma dW_{s},\quad X_{0}^{\alpha}=x.\]
Denote by \(Y^{\alpha}\) the solution to the SDE above with \(Y_{0}^{\alpha}=y.\) Then it holds
\[\left|w\left(T,y\right)-w\left(T,x\right)\right|\leq\sup_{\alpha}\mathbb{E} \left[L_{G}\int_{0}^{T}|Y_{s}^{\alpha}-X_{s}^{\alpha}|\,ds+\left\|\nabla w_{0} \right\|_{\infty}|Y_{T}^{\alpha}-X_{T}^{\alpha}|\right]. \tag{3.13}\]
Using the convexity of \(v(t,\cdot)\) proved in Proposition 3.6, we obtain by the same argument as eq. (3.12) that
\[\left|Y_{s}^{\alpha}-X_{s}^{\alpha}\right|\leq e^{-\frac{\sigma^{2}\underline{\theta}s}{2}}\left|y-x\right|.\]
Together with eq. (3.13), we can find a \(T\)-independent constant \(L_{w}\) such that
\[\left|w\left(T,y\right)-w\left(T,x\right)\right|\leq L_{w}\left|y-x\right|.\]
Given that the functions \(x\mapsto w(t,x)\) are Lipschitz continuous uniformly in \(t\geq 0\), we shall also prove for later use that the Hessian of \(u\) is uniformly bounded, which is clearly an improvement over Lemma 3.5.
**Lemma 3.9**.: _Let \(u\) be the classical solution to eq. (3.3). Then the Hessian of \(u\) is uniformly bounded, that is,_
\[\sup_{t\geq 0}\|\nabla^{2}u(t,\cdot)\|_{\infty}\leq C,\]
_where the constant \(C\) is independent of \(m\)._
Proof.: Since \(u\) is the classical solution to eq. (3.3) and \(u\in C(\bar{Q}_{T})\cap C^{3}(Q_{T})\), its gradient \(\nabla u\) is the classical solution to
\[\partial_{t}\nabla u=\frac{\sigma^{2}}{2}\Delta\nabla u-\frac{\sigma^{2}}{2} \nabla^{2}u\nabla u+\nabla\frac{\delta F}{\delta p}(m_{t},\cdot)-\gamma\nabla u.\]
By Feynman-Kac's formula, \(\nabla u\) has the probabilistic representation
\[\nabla u(t,x)=\mathbb{E}\left[\int_{0}^{t}e^{-\gamma s}\Big{(}\nabla G(m_{t-s},X_{s})+\nabla g(X_{s})\Big{)}ds+e^{-\gamma t}\nabla u(0,X_{t})\right], \tag{3.14}\]
with
\[X_{s}=x-\frac{\sigma^{2}}{2}\int_{0}^{s}\nabla u(t-r,X_{r})dr+\sigma W_{s}.\]
Let us prove \(x\mapsto\nabla u(t,x)\) is Lipschitz continuous and the Lipschitz constant is independent of \(t\) and \(m\). Denote by
\[Y_{s}=y-\frac{\sigma^{2}}{2}\int_{0}^{s}\nabla u(t-r,Y_{r})dr+\sigma W_{s}.\]
Note that \(\nabla u=\nabla v+\nabla w\) and that \(v\) is uniformly strictly convex by Proposition 3.6 and \(\nabla w\) is bounded by Proposition 3.8. In view of Remark A.6 in Appendix, it follows that the function \(\nabla u\) satisfies Assumption A.5 and thus we can apply the reflection coupling Theorem A.7 to obtain for \(p_{s}^{X}:=\mathcal{L}(X_{s})\) and \(p_{s}^{Y}:=\mathcal{L}(Y_{s})\),
\[\mathcal{W}_{1}(p_{s}^{X},p_{s}^{Y})\leq Ce^{-c\sigma^{2}s}\mathcal{W}_{1}(p_{ 0}^{X},p_{0}^{Y}),\quad\text{for all $s\geq 0$}.\]
Together with eq. (3.14) and the fact that \(\nabla g\), \(\nabla u_{0}\) and \(\nabla G(p,\cdot)\) are uniformly Lipschitz, we have by Kantorovich duality that
\[\left|\nabla u(t,x)-\nabla u(t,y)\right|\leq C\left(\int_{0}^{t}\mathcal{W}_{1 }(p_{s}^{X},p_{s}^{Y})ds+\mathcal{W}_{1}(p_{t}^{X},p_{t}^{Y})\right)\leq C|x- y|,\]
where the constant \(C\) does not depend on \(t\) and \(m\).
### Proof of Theorem 2.7
Proof of Theorem 2.7.: In view of Proposition 3.2, it is enough to show that the mapping (3.2) \((m_{t})_{t\in[0,T]}\mapsto(p_{t})_{t\in[0,T]}\) is a contraction for \(T\) small enough, where \(p_{t}=e^{-u_{t}}/\int e^{-u_{t}}\) with \(u\) the solution to eq. (3.3). This contraction property relies essentially on a reflection coupling argument established in Appendix A.3, which follows from the decomposition of \(u\) as the sum of a convex and a Lipschitz function.
(i). Let \((m^{\prime}_{t})_{t\in[0,T]}\) be another flow of probability measures satisfying Assumption 3.1, and use it to define the function \(u^{\prime}\) as in eq. (3.3). Denote by \(\delta u:=u-u^{\prime}\). Using the stability result for the HJB equation (3.3) proved in Lemma 3.10 below, we obtain
\[\|\nabla\delta u(t,\cdot)\|_{\infty}\leq TC_{T}\sup_{s\leq T}\mathcal{W}_{1}(m_{s},m^{\prime}_{s}),\quad\text{for all }t\leq T. \tag{3.15}\]
(ii). Further define the probability density \(p^{\prime}_{t}=e^{-u^{\prime}_{t}}/\int e^{-u^{\prime}_{t}}\). Note that \(p_{t}\) and \(p^{\prime}_{t}\) are the invariant measures of the diffusion processes
\[dX_{s}=-\nabla u(t,X_{s})ds+\sqrt{2}dW_{s},\qquad dX^{\prime}_{s}=-\nabla u^{ \prime}(t,X^{\prime}_{s})ds+\sqrt{2}dW_{s},\]
respectively. Denote by \(p_{t,s}:=\mathcal{L}(X_{s})\) and \(p^{\prime}_{t,s}:=\mathcal{L}(X^{\prime}_{s})\) the marginal distributions, and assume that \(p_{t,0}=p^{\prime}_{t,0}=p_{0}\). By Proposition 3.2 and Remark A.6, we may apply the reflection coupling in Theorem A.7 in Appendix and obtain
\[\mathcal{W}_{1}\big{(}p_{t,s},p^{\prime}_{t,s}\big{)}\leq Ce^{-cs}\int_{0}^{s} e^{cr}\|\nabla\delta u(t,\cdot)\|_{\infty}dr.\]
Let \(s\to\infty\) on both sides. Since \(\lim_{s\to\infty}\mathcal{W}_{1}(p_{t,s},p_{t})=0\) and \(\lim_{s\to\infty}\mathcal{W}_{1}(p^{\prime}_{t,s},p^{\prime}_{t})=0\), we have
\[\mathcal{W}_{1}\big{(}p_{t},p^{\prime}_{t}\big{)}\leq C\|\nabla\delta u(t, \cdot)\|_{\infty}.\]
(iii). Together with eq. (3.15), we finally obtain
\[\sup_{t\leq T}\mathcal{W}_{1}\big{(}p_{t},p^{\prime}_{t}\big{)}\leq TC_{T} \sup_{t\leq T}\mathcal{W}_{1}\big{(}m_{t},m^{\prime}_{t}\big{)}.\]
Therefore, given \(T\) small enough, the mapping \((m_{t})_{t\leq T}\mapsto(p_{t})_{t\leq T}\) is a contraction under the metric \(\sup_{t\leq T}\mathcal{W}_{1}(\cdot_{t},\cdot_{t})\).
The following lemma shows that the gradient \(\nabla u\) of the solution to the HJB equation (3.3) is stable with respect to \((m_{t})_{t\in[0,T]}\) as needed for the proof of Theorem 2.7 above, as well as with respect to \(\nabla u(0,\cdot)\) for later use.
**Lemma 3.10**.: _Let \(u\) be the classical solution to eq. (3.3), while \(\tilde{u}\) is the classical solution to_
\[\partial_{t}\tilde{u}=\frac{\sigma^{2}}{2}\Delta\tilde{u}-\frac{\sigma^{2}}{4 }\left|\nabla\tilde{u}\right|^{2}+\frac{\delta F}{\delta p}\left(\tilde{m}_{t},\cdot\right)-\gamma\tilde{u},\]
_with the initial value \(\tilde{u}(0,\cdot)\) satisfying Assumption 2.6 and \(\tilde{m}\) satisfying Assumption 3.1. Then, we have the following stability results:_
* _If_ \(\nabla u(0,\cdot)=\nabla\tilde{u}(0,\cdot)\)_, then_ \(\|\nabla\delta u(t,\cdot)\|_{\infty}\leq C_{t}\int_{0}^{t}\mathcal{W}_{1}(m_{s },\tilde{m}_{s})ds\)_._
* _Otherwise_ \(\|\nabla\delta u(t,\cdot)\|_{(2)}:=\sup_{x\in\mathbb{R}^{d}}\frac{|\nabla \delta u(t,x)|}{1+|x|^{2}}\leq C_{t}\left(\int_{0}^{t}\mathcal{W}_{1}(m_{s}, \tilde{m}_{s})ds+\|\nabla\delta u(0,\cdot)\|_{(2)}\right).\)__
Proof.: Similar to eq. (3.14), it follows from the Feynman-Kac formula that
\[\nabla u(t,x) =\mathbb{E}\left[\int_{0}^{t}e^{-\gamma s}\nabla\frac{\delta F}{ \delta p}(m_{t-s},X_{s})ds+e^{-\gamma t}\nabla u(0,X_{t})\right],\] \[\nabla\tilde{u}(t,x) =\mathbb{E}\left[\int_{0}^{t}e^{-\gamma s}\nabla\frac{\delta F}{ \delta p}(\tilde{m}_{t-s},\tilde{X}_{s})ds+e^{-\gamma t}\nabla\tilde{u}(0, \tilde{X}_{t})\right],\]
with
\[dX_{s} =-\frac{\sigma^{2}}{2}\nabla u(t-s,X_{s})ds+\sigma dW_{s},\quad X_ {0}=x,\] \[d\tilde{X}_{s} =-\frac{\sigma^{2}}{2}\nabla\tilde{u}(t-s,\tilde{X}_{s})ds+ \sigma dW_{s},\quad\tilde{X}_{0}=x.\]
By Proposition 3.2 and Remark A.6, we may apply the reflection coupling in Theorem A.7 in Appendix to compare the marginal distribution of \(X\) and \(\tilde{X}\), denoted by \(p\) and \(\tilde{p}\) respectively. We obtain
\[\mathcal{W}_{1}(p_{s},\tilde{p}_{s})\leq\ell e^{-c\sigma^{2}s}\int_{0}^{s}e^{ c\sigma^{2}r}\mathbb{E}\big{[}|\nabla\delta u(t-r,X_{r})|\big{]}dr.\]
Further by the Lipschitz continuity of \(\nabla\frac{\delta F}{\delta p}\) and \(\nabla\tilde{u}(0,\cdot)\) we have
\[|\nabla\delta u(t,x)|\leq C\mathbb{E}\Big{[}\int_{0}^{t}\int_{0} ^{s}\ell e^{-\gamma s-c\sigma^{2}(s-r)}\big{|}\nabla\delta u(t-r,X_{r})\big{|} drds+\int_{0}^{t}e^{-\gamma s}\mathcal{W}_{1}(m_{t-s},\tilde{m}_{t-s})ds\\ +\int_{0}^{t}\ell e^{-\gamma t-c\sigma^{2}(t-s)}|\nabla\delta u(t -s,X_{s})|ds+e^{-\gamma t}|\nabla\delta u(0,X_{t})|\Big{]},\]
which implies that
\[|\nabla\delta u(t,x)|\leq C\mathbb{E}\Big{[}\int_{0}^{t}\big{|}\nabla\delta u (t-s,X_{s})\big{|}ds+\int_{0}^{t}\mathcal{W}_{1}(m_{s},\tilde{m}_{s})ds+| \nabla\delta u(0,X_{t})|\Big{]}. \tag{3.16}\]
Recall the decomposition of the solution established in Proposition 3.2:
\[u=v+w,\quad\tilde{u}=\tilde{v}+\tilde{w},\]
where \(v,\tilde{v}\) are strictly convex and \(w,\tilde{w}\) are Lipschitz. We divide the following discussion into two cases.
(i). We assume \(\nabla\delta u(0,\cdot)=0\). Note that in this case \(\nabla v=\nabla\tilde{v}\) (because \(v,\tilde{v}\) are not influenced by \(m\) or \(\tilde{m}\)) and that \(\nabla\delta u=\nabla w-\nabla\tilde{w}\) is bounded. It follows from the eq. (3.16) that
\[\|\nabla\delta u(t,\cdot)\|_{\infty}\leq C\left(\int_{0}^{t}\|\nabla\delta u(s,\cdot)\|_{\infty}ds+\int_{0}^{t}\mathcal{W}_{1}(m_{s},\tilde{m}_{s})ds\right).\]
Finally by the Gronwall inequality we obtain
\[\|\nabla\delta u(t,\cdot)\|_{\infty}\leq C_{t}\int_{0}^{t}\mathcal{W}_{1}(m_{s },\tilde{m}_{s})ds.\]
(ii). We consider the general case. Recall that both \(\nabla v\) and \(\nabla\tilde{v}\) are Lipschitz, and both \(\nabla w\) and \(\nabla\tilde{w}\) are bounded, so we have \(\|\nabla\delta u(t,\cdot)\|_{(2)}<\infty\). Further it follows from eq. (3.16) that
\[|\nabla\delta u(t,x)| \leq C\Big{(}\int_{0}^{t}\|\nabla\delta u(t-s,\cdot)\|_{(2)}(1+ \mathbb{E}[|X_{s}|^{2}])ds\] \[\qquad\qquad+\int_{0}^{t}\mathcal{W}_{1}(m_{s},\tilde{m}_{s})ds+ \|\nabla\delta u(0,\cdot)\|_{(2)}(1+\mathbb{E}[|X_{t}|^{2}])\Big{)}\] \[\leq Ce^{Ct}\Big{(}\int_{0}^{t}\|\nabla\delta u(t-s,\cdot)\|_{(2) }(1+|x|^{2})ds\] \[\qquad\qquad+\int_{0}^{t}\mathcal{W}_{1}(m_{s},\tilde{m}_{s})ds+ \|\nabla\delta u(0,\cdot)\|_{(2)}(1+|x|^{2})\Big{)}.\]
Finally, by the Gronwall inequality we obtain
\[\|\nabla\delta u(t,\cdot)\|_{(2)}\leq C_{t}\left(\int_{0}^{t}\mathcal{W}_{1}(m_{s},\tilde{m}_{s})ds+\|\nabla\delta u(0,\cdot)\|_{(2)}\right).\]
### Properties of Mean-Field Schrodinger Dynamics
It follows from the decomposition of \(u\) solution to the HJB equation (3.1) that the probability \(p\) solution to the mean-field Schrodinger equation (2.6) admits Gaussian bounds.
**Proposition 3.11**.: _For any \(T>0,\) there exist \(\underline{c},\overline{c},\underline{C},\overline{C}>0,\) such that for all \(t\in[0,T],\)\(x\in\mathbb{R}^{d},\)_
\[\underline{C}e^{-\underline{c}|x|^{2}}\leq p_{t}(x)\leq\overline{C}e^{- \overline{c}|x|^{2}}.\]
_In particular, \(p_{t}\in\mathcal{P}_{H}\) for all \(t\geq 0.\)_
Proof.: The Gaussian bounds follow immediately from Lemma A.4 in Appendix, whose assumptions are satisfied according to Proposition 3.2. Then we observe
\[\left|\nabla\sqrt{p_{t}}\right|^{2}=\frac{1}{4}\left|\nabla u_{t}\right|^{2}p _{t}\leq C_{T}(1+|x|^{2})p_{t},\]
where the last inequality follows from Corollary 3.3. Thus \(\nabla\sqrt{p_{t}}\in L^{2}\) and \(p_{t}\in\mathcal{P}_{H}.\)
Then we establish a stability result for the mean-field Schrodinger dynamics (2.6).
**Proposition 3.12**.: _For \(n\in\mathbb{N}\), let \(p^{n}\) (resp. \(p\)) be the mean-field Schrodinger dynamics (2.6) starting from \(p_{0}^{n}\) (resp. \(p_{0}\)), where \(p_{0}^{n}\) (resp. \(p_{0}\)) satisfy Assumption 2.6. If \(\nabla u_{0}^{n}\) converges to \(\nabla u_{0}\) in \(\|\cdot\|_{(2)},\) then \((p_{t}^{n},\nabla u^{n}(t,\cdot))\) converges to \((p_{t},\nabla u(t,\cdot))\) in \(\mathcal{W}_{1}\otimes\|\cdot\|_{(2)}\) for all \(t>0\)._
Proof.: Denote by \(\delta u:=u^{n}-u\). By the stability result of the HJB equation (3.3) proved in Lemma 3.10, we have
\[\|\nabla\delta u(T,\cdot)\|_{(2)}\leq C_{T}\left(\int_{0}^{T} \mathcal{W}_{1}(p_{t}^{n},p_{t})dt+\|\nabla\delta u(0,\cdot)\|_{(2)}\right). \tag{3.17}\]
As in the proof of Theorem 2.7, note that \(p_{t}^{n}\) and \(p_{t}\) are the invariant measures of the diffusions:
\[dX_{s}^{n}=-\nabla u^{n}(t,X_{s}^{n})ds+\sqrt{2}dW_{s},\qquad dX_ {s}=-\nabla u(t,X_{s})ds+\sqrt{2}dW_{s},\]
respectively. Denote by the marginal distributions \(p_{t,s}^{n}:=\mathcal{L}(X_{s}^{n})\) and \(p_{t,s}:=\mathcal{L}(X_{s}),\) and assume that \(p_{t,0}^{n}=p_{t,0}\). Using the reflection coupling, we obtain the estimate from Theorem A.7 that
\[\mathcal{W}_{1}(p_{t,s}^{n},p_{t,s})\leq Ce^{-cs}\int_{0}^{s}e^{ cr}\mathbb{E}\big{[}|\nabla\delta u(t,X_{r})|\big{]}dr.\]
By letting \(s\to\infty\) on both sides, it follows from Proposition 3.11 that
\[\mathcal{W}_{1}(p_{t}^{n},p_{t})\leq C\int_{\mathbb{R}^{d}}| \nabla\delta u(t,x)|p_{t}(x)dx\leq C_{T}\|\nabla\delta u(t,\cdot)\|_{(2)}.\]
Together with eq. (3.17), by the Gronwall's inequality we obtain
\[\|\nabla\delta u(T,\cdot)\|_{(2)}\leq C_{T}e^{TC_{T}}\|\nabla \delta u(0,\cdot)\|_{(2)}\]
as well as
\[\mathcal{W}_{1}(p_{T}^{n},p_{T})\leq C_{T}e^{TC_{T}}\|\nabla \delta u(0,\cdot)\|_{(2)}.\]
## 4 Convergence towards the Minimizer
### First Order Condition of Free Energy
Recall that \(\mathfrak{F}^{\sigma,\gamma}\) is defined by eq. (2.3) with parameters \(\sigma>0\) and \(\gamma\geq 0\).
**Proposition 4.1**.: _The function \(\mathfrak{F}^{\sigma,\gamma}\) is convex on \(\mathcal{P}_{H}.\) Additionally, if it admits a minimizer \(p^{*}\in\mathcal{P}_{H}\) such that \(\frac{1}{p^{*}}\in L^{\infty}_{\text{loc}},\) then it is unique._
Proof.: It is an immediate consequence of the convexity of \(F\), of \(x\mapsto x\log(x)\), and of Lemma 4.2 below.
**Lemma 4.2**.: _Let \(p,q\in\mathcal{P}_{H}\) and \(\alpha,\beta>0\). Then we have_
\[I\left(\alpha p+\beta q\right)\leq\alpha I\left(p\right)+\beta I\left(q\right).\]
_If in addition \(1/p\in L^{\infty}_{\text{loc}},\) then the equality holds if and only if \(p=q\)._
Proof.: Let \(\varphi=\sqrt{p},\psi=\sqrt{q}\). We have by using the Cauchy-Schwarz inequality
\[I\left(\alpha p+\beta q\right) =\int\left|\nabla\sqrt{\alpha\varphi^{2}+\beta\psi^{2}}\right|^{2}=\int\frac{\left|\alpha\varphi\nabla\varphi+\beta\psi\nabla\psi\right|^{2}}{\alpha\varphi^{2}+\beta\psi^{2}}\] \[\leq\int\frac{\left(\alpha\varphi^{2}+\beta\psi^{2}\right)\left(\alpha\left|\nabla\varphi\right|^{2}+\beta\left|\nabla\psi\right|^{2}\right)}{\alpha\varphi^{2}+\beta\psi^{2}}=\alpha I\left(p\right)+\beta I\left(q\right).\]
The equality holds if and only if \(\varphi\nabla\psi=\psi\nabla\varphi\). If in addition \(1/p\in L^{\infty}_{\text{loc}},\) then \(\frac{1}{\varphi}\in L^{\infty}_{\text{loc}}\) and \(\frac{\psi}{\varphi}\in L^{1}_{\text{loc}}\), which is a distribution in the sense of Schwartz. Its derivative satisfies
\[\nabla\left(\frac{\psi}{\varphi}\right)=\frac{\varphi\nabla\psi-\psi\nabla \varphi}{\varphi^{2}}=0.\]
Therefore \(\frac{\psi}{\varphi}\) is constant a.e., namely, \(p\) and \(q\) are proportional, and hence equal since both are probability densities.
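The inequality of Lemma 4.2 is also easy to observe numerically; the following check (illustration only, with arbitrary Gaussian parameters) compares the Fisher information of a mixture of two Gaussians with the corresponding convex combination.

```python
# Numerical illustration of I(alpha p + beta q) <= alpha I(p) + beta I(q).
import numpy as np

x = np.linspace(-12.0, 12.0, 40001)
dx = x[1] - x[0]

def gauss(m, s):
    return np.exp(-(x - m)**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

def I(p):                                # Fisher information, definition (2.2)
    return (np.gradient(np.sqrt(p), dx)**2).sum() * dx

p, q, alpha = gauss(-1.0, 0.8), gauss(2.0, 1.5), 0.3
print(I(alpha * p + (1 - alpha) * q))    # strictly below the convex combination
print(alpha * I(p) + (1 - alpha) * I(q))
```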
**Proposition 4.3**.: _If a probability measure \(p\in\mathcal{P}_{H}\) satisfies \(p\in C^{2}\) and_
\[\underline{C}e^{-\underline{\varepsilon}|x|^{2}}\leq p(x)\leq\overline{C}e^{-\overline{\varepsilon}|x|^{2}}, \tag{4.1}\] \[\frac{|\nabla p|}{p}\leq C(1+|x|),\quad\left|\nabla\cdot\frac{\nabla p}{p}\right|\leq C(1+|x|^{2}). \tag{4.2}\]
_then the following first-order inequality holds for all \(q\in\mathcal{P}_{H},\)_
\[\mathfrak{F}^{\sigma,\gamma}(q)-\mathfrak{F}^{\sigma,\gamma}(p)\geq\int_{ \mathbb{R}^{d}}\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p,x)\left(q (x)-p(x)\right)\,dx.\]
_In particular, if \(\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p,\cdot)=0,\) then \(p\) is the unique minimizer of the free energy \(\mathfrak{F}^{\sigma,\gamma}.\)_
Proof.: We have \(\mathfrak{F}^{\sigma,\gamma}(p)=F(p)+\sigma^{2}I(p)+\gamma H(p).\) We deal with each of these three terms separately. Adding the three subsequent inequalities gives the desired result. Throughout the proof, we denote \(\varphi:=q-p\) and \(p_{t}:=p+t\varphi\) for \(t\in[0,1]\).
(i). Since \(F\) is \(\mathcal{C}^{1},\) it holds
\[\lim_{t\to 0+}\frac{F(p_{t})-F(p)}{t}=\int_{\mathbb{R}^{d}}\frac{\delta F}{ \delta p}(p,\cdot)\left(q-p\right).\]
In addition, by convexity of \(F,\) it holds
\[F(p_{t})-F(p)\leq(1-t)F(p)+tF(q)-F(p)=t\left(F(q)-F(p)\right).\]
We deduce that
\[F\left(q\right)-F\left(p\right)\geq\int_{\mathbb{R}^{d}}\frac{\delta F}{\delta p }\left(p,\cdot\right)\left(q-p\right).\]
(ii). Denote \(I_{K}\left(p\right):=\frac{1}{4}\int_{K}\frac{\left|\nabla p\right|^{2}}{p}\) for \(K\subset\mathbb{R}^{d}\) compact and \(p\in\mathcal{P}_{H}\). We note that \(\lim_{K\uparrow\mathbb{R}^{d}}I_{K}\left(p\right)=I\left(p\right)\), by the identity \(\frac{|\nabla p|^{2}}{p}=4|\nabla\sqrt{p}|^{2}\). We have
\[I_{K}\left(p_{t}\right)-I_{K}\left(p\right)=\frac{1}{4}\int_{K}\left(\frac{\left|\nabla p\right|^{2}}{p+t\varphi}-\frac{\left|\nabla p\right|^{2}}{p}\right)+\frac{t}{2}\int_{K}\frac{\nabla p\cdot\nabla\varphi}{p+t\varphi}+\frac{t^{2}}{4}\int_{K}\frac{\left|\nabla\varphi\right|^{2}}{p+t\varphi}.\]
Assume first that \(q\) is bounded and compactly supported. Then it holds on \(K\)
\[p+t\varphi\geq p-t\left|\varphi\right|\geq p-\frac{1}{2}\inf_{K}p\geq\frac{p} {2},\qquad\forall\,t\leq\frac{1}{2\left\|\varphi\right\|_{\infty}}\inf_{K}p,\]
so that
\[\frac{1}{t}\left|\frac{\left|\nabla p\right|^{2}}{p+t\varphi}-\frac{\left| \nabla p\right|^{2}}{p}\right|\leq 2\frac{\left|\nabla p\right|^{2}}{p^{2}}| \varphi|,\quad\frac{\left|\nabla p\cdot\nabla\varphi\right|}{p+t\varphi}\leq 2 \frac{\left|\nabla p\cdot\nabla\varphi\right|}{p},\quad\frac{\left|\nabla \varphi\right|^{2}}{p+t\varphi}\leq 2\frac{\left|\nabla\varphi\right|^{2}}{p}.\]
We notice that the r.h.s. is integrable on \(K\) as \(\frac{1}{p}\in L_{\text{loc}}^{\infty}\), \(\frac{\left|\nabla p\right|^{2}}{p}=4\left|\nabla\sqrt{p}\right|^{2}\in L^{1}\) and \(\nabla q=2\sqrt{q}\nabla\sqrt{q}\in L^{\infty}\cdot L^{2}\subset L^{2}.\) Thus we can apply the dominated convergence theorem and show that \(I_{K}\left(p_{t}\right)\) has derivative at \(t=0+\):
\[\left.\frac{dI_{K}\left(p_{t}\right)}{dt}\right|_{t=0+}=-\frac{1}{4}\int_{K}\frac{\left|\nabla p\right|^{2}}{p^{2}}\varphi+\frac{1}{2}\int_{K}\frac{\nabla p\cdot\nabla\varphi}{p}.\]
We deduce from convexity of \(I_{K}\) established in Lemma 4.2 that
\[I_{K}\left(q\right)-I_{K}\left(p\right)\geq-\frac{1}{4}\int_{K}\frac{\left|\nabla p\right|^{2}}{p^{2}}\varphi+\frac{1}{2}\int_{K}\frac{\nabla p\cdot\nabla\varphi}{p}.\]
Next we take the limit \(K\uparrow\mathbb{R}^{d}\) and we observe that the r.h.s. converges as eq. (4.2) holds for the first term and \(\frac{\left|\nabla p\right|^{2}}{p}=4\left|\nabla\sqrt{p}\right|^{2}\in L^{1}\), \(\nabla q\) is compactly supported for the second term. Using further integration by parts, since \(q\) is compactly supported, we obtain
\[I\left(q\right)-I\left(p\right)\geq-\frac{1}{4}\int_{\mathbb{R}^{d}}\left(\frac{\left|\nabla p\right|^{2}}{p^{2}}+2\nabla\cdot\left(\frac{\nabla p}{p}\right)\right)\left(q-p\right).\]
To conclude it remains to deal with the general case \(q\in\mathcal{P}_{H}\) not necessarily bounded and compactly supported. Given \(M>0\), we consider the distribution \(q_{M}\propto\mathbf{1}_{\left|x\right|\leq M}q\wedge M\) and we apply the inequality above to \(q_{M}.\) Taking the limit \(M\rightarrow\infty\), it is clear that \(I(q_{M})\) converges to \(I(q)\) for the l.h.s. while we can deal with the r.h.s. by dominated convergence since \(q\in\mathcal{P}_{2}\) and eq. (4.2) holds.
(iii). Denote \(H_{K}\left(p\right):=\int_{K}\log(p)p\) for \(K\subset\mathbb{R}^{d}\) compact and \(p\in\mathcal{P}_{H}\). We note that \(\lim_{K\uparrow\mathbb{R}^{d}}H_{K}\left(p\right)=H\left(p\right)\). We have
\[H_{K}\left(p_{t}\right)-H_{K}\left(p\right)=\int_{K}\left(\log(p+t\varphi)- \log(p)\right)p+t\int_{K}\varphi\log(p+t\varphi).\]
Assume first that \(q\) is bounded. Then it holds for \(t\in\left[0,\frac{1}{2}\right]\) that \(\frac{1}{2}\inf_{K}p\leq p+t\varphi\leq\|p\|_{\infty}\vee\|q\|_{\infty}\) so that
\[\frac{1}{t}\left|\left(\log(p+t\varphi)-\log(p)\right)p\right|\leq\left| \varphi\right|,\quad\left|\varphi\log(p+t\varphi)\right|\leq C.\]
Thus we can apply the dominated convergence theorem and show that \(H_{K}\left(p_{t}\right)\) has derivative at \(t=0+\):
\[\frac{dH_{K}\left(p_{t}\right)}{dt}\bigg{|}_{t=0+}=\int_{K}\left(1+\log(p) \right)\varphi.\]
We deduce from convexity of \(H_{K}\) that
\[H_{K}\left(q\right)-H_{K}\left(p\right)\geq\int_{K}\left(1+\log(p)\right)\varphi.\]
Next we take the limit \(K\uparrow\mathbb{R}^{d}\) and we observe that the r.h.s. converges as \(p,q\in\mathcal{P}_{2}\) and eq. (4.1) holds. We obtain
\[H\left(q\right)-H\left(p\right)\geq\int_{\mathbb{R}^{d}}\left(1+\log(p)\right)\left(q-p\right)=\int_{\mathbb{R}^{d}}\log(p)(q-p),\] where the last equality uses \(\int_{\mathbb{R}^{d}}(q-p)=0.\)
To conclude it remains to deal with the general case \(q\in\mathcal{P}_{H}\) not necessarily bounded. Given \(M>0\), we consider the distribution \(q_{M}\propto q\wedge M\) and we apply the inequality above to \(q_{M}\in L^{\infty}\). Taking the limit \(M\to\infty\), it is clear that \(H(q_{M})\) converges to \(H(q)\) for the l.h.s. while we can deal with the r.h.s. by dominated convergence since \(q\in\mathcal{P}_{2}\) and eq. (4.1) holds.
### Decrease of Free Energy along Schrödinger Dynamics
**Proposition 4.4**.: _The generalized free energy function decreases along \((p_{t})_{t\geq 0}\) solution to eq. (2.6), more precisely, we have_
\[\frac{d}{dt}\mathfrak{F}^{\sigma,\gamma}(p_{t})=-\int_{\mathbb{R}^{d}}\left| \frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t},x)\right|^{2}p_{t}( x)dx, \tag{4.3}\]
_where \(\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}\) is defined in eq. (2.4)._
Proof.: By Proposition 4.3 we have
\[\mathfrak{F}^{\sigma,\gamma}(p_{t+h})-\mathfrak{F}^{\sigma, \gamma}(p_{t}) \geq\int_{\mathbb{R}^{d}}\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{ \delta p}(p_{t},x)(p_{t+h}-p_{t})(x)dx\] \[=-\int_{\mathbb{R}^{d}}\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{ \delta p}(p_{t},x)\int_{t}^{t+h}\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{ \delta p}(p_{s},x)p_{s}(x)dsdx\]
Similarly we have
\[\mathfrak{F}^{\sigma,\gamma}(p_{t+h})-\mathfrak{F}^{\sigma,\gamma}(p_{t}) \leq-\int_{\mathbb{R}^{d}}\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p }(p_{t+h},x)\int_{t}^{t+h}\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p} (p_{s},x)p_{s}(x)dsdx.\]
The conclusion then follows from the dominated convergence theorem. Indeed, in view of Lemma 4.5 below, \(t\mapsto\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t},x)\) is continuous and \(\sup_{t\leq T}\left|\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t},x)\right|\leq C_{T}(1+|x|^{2})\) for any \(T>0.\) In addition, Proposition 3.11 ensures that \(\int|x|^{4}\sup_{t\leq T}p_{t}(x)dx<\infty.\)
**Lemma 4.5**.: _The mapping \(t\mapsto\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t},x)\) is continuous and, for any \(T>0,\) there exists \(C_{T}>0\) such that_
\[\left|\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t},x)\right|\leq C _{T}(1+|x|^{2}),\qquad\forall\,t\in[0,T]. \tag{4.4}\]
Proof.: Let \(u\) be the solution to eq. (3.1). We have
\[\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t},\cdot)=\frac{\delta F }{\delta p}\left(p_{t},\cdot\right)+\frac{\sigma^{2}}{2}\Delta u_{t}-\frac{ \sigma^{2}}{4}\left|\nabla u_{t}\right|^{2}+\gamma u_{t}.\]
Then we observe that the mapping \(t\mapsto p_{t}\) is \(\mathcal{W}_{1}\)-continuous by construction and so \(t\mapsto\frac{\delta F}{\delta p}\left(p_{t},x\right)\) is continuous in view of Assumption 2.5. We deduce that \(t\mapsto\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t},x)\) is also continuous as \(u\in C^{2}\). In addition, according to Assumption 2.5,
\[\left|\frac{\delta F}{\delta p}\left(p_{t},x\right)\right|\leq C|x|+\left|\frac {\delta F}{\delta p}\left(p_{t},0\right)\right|\leq C_{T}(1+|x|),\]
where the last inequality follows from the continuity of \(t\mapsto\frac{\delta F}{\delta p}\left(p_{t},0\right).\) We conclude that eq. (4.4) holds in view of Corollary 3.3 and Lemma 3.9.
The dissipation of energy yields that the second moment and the Fisher information of the mean-field Schrödinger dynamics \(p\) are actually uniformly bounded in time. This allows us to extend the previous bounds from \([0,T]\) to \([0,\infty)\), which is crucial to study the asymptotic behavior of \(p\).
**Lemma 4.6**.: _It holds_
\[\sup_{t>0}\left\{\int_{\mathbb{R}^{d}}|x|^{2}p_{t}(x)dx+\int_{\mathbb{R}^{d}} |\nabla\sqrt{p_{t}}(x)|^{2}dx\right\}<+\infty \tag{4.5}\]
Proof.: First we observe that, denoting by \(q\) the Gaussian density with variance \(\upsilon^{2}\),
\[H(p)=H(p\,|\,q)+\int\log(q(x))p(x)\,dx\geq-\frac{d}{2}\log(2\pi\upsilon^{2})- \frac{1}{2\upsilon^{2}}\int_{\mathbb{R}^{d}}|x|^{2}p(x)dx\]
Then it follows from Assumption 2.3 by choosing \(\upsilon\) sufficiently large that there exist \(C,c>0\) such that
\[\mathfrak{F}^{\sigma,\gamma}(p_{t})\geq-C+c\int_{\mathbb{R}^{d}}|x|^{2}p_{t}( x)dx+\sigma^{2}\int_{\mathbb{R}^{d}}|\nabla\sqrt{p_{t}}(x)|^{2}dx,\qquad\forall\,t \geq 0.\]
Since the free energy is decreasing according to Proposition 4.4, we deduce that
\[\sup_{t\geq 0}\left\{c\int_{\mathbb{R}^{d}}|x|^{2}p_{t}(x)dx+\sigma^{2}\int_{ \mathbb{R}^{d}}|\nabla\sqrt{p_{t}}(x)|^{2}dx\right\}\leq C+\mathfrak{F}^{ \sigma,\gamma}(p_{0}).\]
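The Gaussian comparison used at the start of this proof is also easy to sanity-check numerically; a minimal one-dimensional sketch (the variance grids below are illustrative choices, not part of the argument):

```python
import numpy as np

# For p = N(0, s2) in dimension d = 1, H(p) = int p log p = -(1/2) log(2 pi e s2),
# and the bound from the proof reads
#   H(p) >= -(1/2) log(2 pi v2) - (1/(2 v2)) * int x^2 p(x) dx,
# which reduces to s2/v2 - log(s2/v2) >= 1 (true since x - log x >= 1).
for s2 in [0.1, 0.5, 1.0, 4.0]:
    H = -0.5 * np.log(2 * np.pi * np.e * s2)
    for v2 in [0.2, 1.0, 10.0]:
        assert H >= -0.5 * np.log(2 * np.pi * v2) - s2 / (2 * v2) - 1e-12
```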
**Lemma 4.7**.: _Let \(u\) be the solution to eq. (3.1). It holds_
\[\sup_{t\geq 0}|\nabla u(t,x)|\leq C(1+|x|).\]
Proof.: It follows from Lemma 3.9 that \(\nabla u(t,\cdot)\) is Lipschitz uniformly in time. We deduce that
\[4\int_{\mathbb{R}^{d}}|\nabla\sqrt{p_{t}}(x)|^{2}dx=\int_{\mathbb{R}^{d}}| \nabla u(t,x)|^{2}p_{t}(x)dx\geq\frac{1}{2}|\nabla u(t,0)|^{2}-L^{2}_{\nabla u }\int_{\mathbb{R}^{d}}|x|^{2}p_{t}(x)dx.\]
We conclude by Lemma 4.6 that \(\sup_{t\geq 0}|\nabla u(t,0)|<\infty.\)
Using Lemma 4.7, it is straightforward to extend the Gaussian bounds of Proposition 3.11 and the quadratic bound of Lemma 4.5 from \([0,T]\) to \(\mathbb{R}_{+}\).
**Corollary 4.8**.: _There exist \(\underline{c},\overline{c},\underline{C},\overline{C},C>0,\) such that for all \(t\geq 0,\,x\in\mathbb{R}^{d},\)_
\[\underline{C}e^{-\underline{c}|x|^{2}}\leq p_{t}(x)\leq\overline{C}e^{- \overline{c}|x|^{2}},\qquad\left|\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{ \delta p}(p_{t},x)\right|\leq C(1+|x|^{2}).\]
### Proof of Theorem 2.8
Proof of Theorem 2.8.: We start by observing that the family \((p_{t})_{t\geq 0}\) is relatively compact for the uniform norm on \(C(\mathbb{R}^{d}).\) This property follows easily from the Arzelà–Ascoli theorem as
\[p_{t}(x)\leq Ce^{-c|x|^{2}},\qquad|\nabla p_{t}(x)|=|\nabla u_{t}(x)|p_{t}(x) \leq C(1+|x|)e^{-c|x|^{2}},\]
by Lemma 4.7 and Corollary 4.8. Let \(p^{*}\) be an arbitrary cluster point of \((p_{t})_{t\geq 0},\)_i.e._, \(p_{t_{k}}\) converges uniformly to \(p^{*}\) for some sequence \(t_{k}\uparrow\infty.\) Note that, in view of the Gaussian bound above, the convergence also occurs in \(\mathcal{W}_{p}\) for any \(p\geq 1.\) The aim of the proof is to show that \(p^{*}\) is the unique minimizer of \(\mathfrak{F}^{\sigma,\gamma}.\)
(i). Let us show first that, for almost all \(h>0,\)
\[\liminf_{k\to\infty}\int_{\mathbb{R}^{d}}\left|\frac{\delta\mathfrak{F}^{ \sigma,\gamma}}{\delta p}(p_{t_{k}+h},x)\right|^{2}p(t_{k}+h,x)dx=0. \tag{4.6}\]
Indeed, suppose by contradiction that there exists \(h>0\) such that
\[0 <\int_{0}^{h}\liminf_{k\to\infty}\left\{\int_{\mathbb{R}^{d}} \left|\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t_{k}+s},x) \right|^{2}p_{t_{k}+s}(x)dx\right\}ds\] \[\leq\liminf_{k\to\infty}\int_{0}^{h}\left\{\int_{\mathbb{R}^{d}} \left|\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t_{k}+s},x) \right|^{2}p_{t_{k}+s}(x)dx\right\}ds,\]
where the last inequality is due to Fatou's lemma. This would lead to a contradiction as
\[\mathfrak{F}^{\sigma,\gamma}(p_{t_{k+1}})-\mathfrak{F}^{\sigma, \gamma}(p_{t_{0}}) =\sum_{j=0}^{k}\mathfrak{F}^{\sigma,\gamma}(p_{t_{j+1}})-\mathfrak{F }^{\sigma,\gamma}(p_{t_{j}})\] \[=-\sum_{j=0}^{k}\int_{0}^{t_{j+1}-t_{j}}\int_{\mathbb{R}^{d}} \left|\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t_{j}+s},x) \right|^{2}p_{t_{j}+s}(x)dxds\]
where the l.h.s. is bounded from below and the r.h.s. diverges to \(-\infty.\)
(ii). From now on denote by \(t_{k}^{h}:=t_{k}+h\) where \(h>0\) is chosen so that eq. (4.6) holds. Let \(q\) be an arbitrary probability measure in \(\mathcal{P}_{H}.\) Due to the first order inequality established in Proposition 4.3, we have
\[\mathfrak{F}^{\sigma,\gamma}(q)-\mathfrak{F}^{\sigma,\gamma}(p_{t_{k}^{h}})\ \geq\ \int_{\mathbb{R}^{d}}\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t_ {k}^{h}},x)(q-p_{t_{k}^{h}})(x)dx.\]
In view of Corollary 4.8, we have
\[\sup_{t\geq 0}\left|\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t}, x)\right|\leq C(1+|x|^{2}),\qquad\sup_{t\geq 0}\int_{\mathbb{R}^{d}}|x|^{2}p_{t}(x)dx<\infty.\]
Hence, for any \(\varepsilon>0,\) we can find \(K\) big enough such that for all \(k\in\mathbb{N},\)
\[\mathfrak{F}^{\sigma,\gamma}(p_{t_{k}^{h}})\leq\mathfrak{F}^{\sigma,\gamma}(q )-\int_{|x|\leq K}\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t_{k} ^{h}},x)(q-p_{t_{k}^{h}})(x)dx+\varepsilon.\]
Further it follows from the Cauchy–Schwarz inequality that
\[\left|\int_{|x|\leq K}\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t _{k}^{h}},x)(q-p_{t_{k}^{h}})(x)dx\right|\leq\left(\int_{\mathbb{R}^{d}}\left| \frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p_{t_{k}^{h}},x)\right|^{2 }p_{t_{k}^{h}}(x)dx\int_{|x|\leq K}\frac{|q-p_{t_{k}^{h}}|^{2}}{p_{t_{k}^{h}}} (x)dx\right)^{\frac{1}{2}}.\]
Assume first that \(q\) is bounded and note that the second term on the r.h.s. is also bounded as \(\inf_{k,h,|x|\leq K}p_{t_{k}^{h}}(x)>0\) by Corollary 4.8. Thus we deduce by taking the limit \(k\to\infty\) that
\[\liminf_{k\to\infty}\mathfrak{F}^{\sigma,\gamma}(p_{t_{k}^{h}})\leq\mathfrak{F }^{\sigma,\gamma}(q),\]
for any \(q\in\mathcal{P}_{H}\) bounded. If \(q\in\mathcal{P}_{H}\) is not necessarily bounded, this inequality also holds as it holds for the distribution \(q_{M}\propto q\wedge M\) and \(\mathfrak{F}^{\sigma,\gamma}(q_{M})\to\mathfrak{F}^{\sigma,\gamma}(q)\) as \(M\to\infty\).
(iii). Denote by \((p_{t}^{*})_{t\geq 0}\) the solution to eq. (2.6) starting from \(p_{0}^{*}=p^{*}.\) We observe by Lemma 4.9 below that \(p_{t_{k}^{h}}\) and \(\nabla\log(p_{t_{k}^{h}})\) converge pointwise to \(p_{h}^{*}\) and \(\nabla\log(p_{h}^{*})\) respectively. In view of Lemma 4.7 and Corollary 4.8, it follows easily by the dominated convergence theorem that \(\lim_{k\to\infty}F(p_{t_{k}^{h}})=F(p_{h}^{*})\) as \(p_{t_{k}^{h}}\to p_{h}^{*}\) in \(\mathcal{W}_{2}\),
\[\lim_{k\to\infty}H(p_{t_{k}^{h}})=\lim_{k\to\infty}\int\log(p_{t_{k}^{h}})p_{t _{k}^{h}}=\int\log(p_{h}^{*})p_{h}^{*}=H(p_{h}^{*})\]
and
\[\lim_{k\to\infty}I(p_{t_{k}^{h}})=\lim_{k\to\infty}\int|\nabla\log(p_{t_{k}^{h }})|^{2}p_{t_{k}^{h}}=\int|\nabla\log(p_{h}^{*})|^{2}p_{h}^{*}=I(p_{h}^{*})\]
We deduce that
\[\lim_{k\to\infty}\mathfrak{F}^{\sigma,\gamma}(p_{t_{k}^{h}})=\mathfrak{F}^{ \sigma,\gamma}(p_{h}^{*}).\]
Hence \(p_{h}^{*}\) is a minimizer of \(\mathfrak{F}^{\sigma,\gamma}.\) In view of Proposition 4.1, this minimizer is unique and thus \(p_{h}^{*}\) does not depend on \(h\) and coincides with its (pointwise) limit \(p_{0}^{*}=p^{*}\) when \(h\to 0\).
(iv). As a byproduct, we observe that \(p^{*}\) is a stationary solution to eq. (2.6) and thus it satisfies
\[\frac{\delta\mathfrak{F}^{\sigma,\gamma}}{\delta p}(p^{*},\cdot)=0.\]
**Lemma 4.9**.: _Using the notations above, \(p_{t_{k}^{h}}\) converges uniformly to \(p_{h}^{*}\) and \(\nabla\log p_{t_{k}^{h}}\) converges to \(\nabla\log p_{h}^{*}\) in \(\|\cdot\|_{(2)}\) as \(k\to\infty.\)_
Proof.: (i). Let us show first that \(\nabla\log p_{t_{k}}\) converges to \(\nabla\log p^{*}\) in \(\|\cdot\|_{(2)}\). According to Lemma 3.9 and Lemma 4.7, \((\nabla\log p_{t_{k}})_{k\in\mathbb{N}}\) lives in a \(\|\cdot\|_{(2)}\)-compact set. Consequently, there is a subsequence and a Lipschitz continuous function \(f\) such that \(\lim_{k\to\infty}\|\nabla\log p_{t_{k}}-f\|_{(2)}=0.\) Therefore, we have for almost all \(x,y\in\mathbb{R}^{d}\),
\[\log p^{*}(x)-\log p^{*}(y) =\lim_{k\to\infty}\Big{(}\log p_{t_{k}}(x)-\log p_{t_{k}}(y)\Big{)}\] \[=\lim_{k\to\infty}\int_{0}^{1}\nabla\log p_{t_{k}}(\lambda x+(1- \lambda)y)\cdot(x-y)d\lambda\] \[=\int_{0}^{1}f(\lambda x+(1-\lambda)y)\cdot(x-y)d\lambda.\]
So \(f=\nabla\log p^{*}\) and the desired result follows.
(ii). Next we show that \((p_{t_{k}^{h}},\nabla\log p_{t_{k}^{h}})\) converges to \((p_{h}^{*},\nabla\log p_{h}^{*})\) in \(\mathcal{W}_{1}\otimes\|\cdot\|_{(2)}.\) This follows immediately from Proposition 3.12 as \(\nabla\log p_{t_{k}}\) converges to \(\nabla\log p^{*}\) in \(\|\cdot\|_{(2)}.\)
(iii). It remains to prove that \(p_{t_{k}^{h}}\) converges uniformly to \(p_{h}^{*}.\) This is a consequence of Lemma 1 in [2] which establishes conditions under which convergence in law implies (uniform) convergence of the density distributions.
### Proof of Theorem 2.11
**Theorem 4.10**.: _Let \(p(dx)=e^{-u(x)}dx\) satisfy a Poincaré inequality with constant \(C_{P}\), i.e. for all \(f\in H^{1}(p)\) such that \(\int fdp=0\) we have_
\[\int f^{2}dp\leq C_{P}\int|\nabla f|^{2}dp. \tag{4.7}\]
_Assume that \(u\in C^{1}\) and define the operator_
\[\mathcal{L}:=\Delta-\nabla u\cdot\nabla.\]
_Then for all functions \(f\in C^{2}\cap W^{2,2}(p)\) such that \(f^{2}\in W^{2,1}(p)\) and \(\mathcal{L}f\in L^{2}(p),\) we have_
\[C_{P}^{-1}\left(\int_{\mathbb{R}^{d}}f(x)p(dx)\right)^{2}\int_{ \mathbb{R}^{d}}|\nabla f(x)|^{2}p(dx)\\ \leq\int_{\mathbb{R}^{d}}f(x)^{2}p(dx)\int_{\mathbb{R}^{d}}\big{(} \mathcal{L}f(x)\big{)}^{2}p(dx)-\left(\int_{\mathbb{R}^{d}}f(x)\mathcal{L}f(x )p(dx)\right)^{2}. \tag{4.8}\]
**Remark 4.11**.: _Note that it follows from integration by parts that_
\[\int_{\mathbb{R}^{d}}\mathcal{L}f(x)p(dx)=0,\quad\int_{\mathbb{R}^{d}}|\nabla f (x)|^{2}p(dx)=-\int_{\mathbb{R}^{d}}f(x)\mathcal{L}f(x)p(dx).\]
_Moreover, if \(p^{f}(dx)=f(x)^{2}p(dx)\) is a probability measure then the right hand side of the inequality is equal to the variance of \(\frac{\mathcal{L}f}{f}\) under \(p^{f}\), namely \(\mathrm{Var}_{p^{f}}(\frac{\mathcal{L}f}{f})\)._
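For orientation, we also recall the classical Gaussian example (a standard fact, not needed in the proofs below): the centered Gaussian measure \(p(dx)\propto e^{-|x|^{2}/(2\upsilon^{2})}dx\) satisfies eq. (4.7) with the sharp constant \(C_{P}=\upsilon^{2}\), and equality is attained by the linear functions \(f(x)=a\cdot x\).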
Proof of Theorem 4.10.: Let \(f=f_{0}+\bar{f}\), where \(\bar{f}=\int fdp\) is the mean. For the right-hand side of the inequality (4.8) we observe that
\[\int_{\mathbb{R}^{d}}f^{2}dp\int_{\mathbb{R}^{d}}\big{(}\mathcal{ L}f\big{)}^{2}dp-\left(\int_{\mathbb{R}^{d}}f\mathcal{L}fdp\right)^{2}\\ =\bar{f}^{2}\int(\mathcal{L}f)^{2}dp+\int f_{0}^{2}dp\int(\mathcal{ L}f_{0})^{2}dp-\left(\int f_{0}\mathcal{L}f_{0}dp\right)^{2}\ \geq\ \bar{f}^{2}\int(\mathcal{L}f)^{2}dp,\]
by using the Cauchy–Schwarz inequality. Meanwhile for the left-hand side, by integration by parts, the Cauchy–Schwarz inequality and the Poincaré inequality we obtain
\[\int|\nabla f|^{2}dp=-\int f\mathcal{L}fdp=-\int f_{0}\mathcal{L }fdp\\ \leq\left(\int f_{0}^{2}dp\right)^{1/2}\left(\int(\mathcal{L}f)^ {2}dp\right)^{1/2}\leq C_{P}^{1/2}\left(\int|\nabla f|^{2}dp\int( \mathcal{L}f)^{2}dp\right)^{1/2}.\]
We obtain the desired inequality by combining the estimates above.
**Proposition 4.12**.: _If \(u:\mathbb{R}^{d}\to\mathbb{R}\) decomposes as \(u=v+w\) with \(v,w\in C^{2}\), \(\nabla^{2}v\geq\eta\) and \(|\nabla w|\leq M\), then there exists a constant \(C_{P}=C(\eta,M,d)\) such that the Poincaré inequality (4.7) holds._
Proof.: This is a direct consequence of Corollary 1.6 (1) in [1].
Proof of Theorem 2.11.: Recall that \(p_{t}\) is the classical solution to the mean-field Schrödinger dynamics (2.7). For each \(t>0\), denote \(F_{t}:=\frac{\delta F}{\delta p}(p_{t},\cdot)\) and let \(\hat{p}_{t}(dx)=\exp(-\hat{u}_{t})dx\) be the minimizer of the linearized optimization problem
\[\hat{p}_{t}=\operatorname*{argmin}_{p\in\mathcal{P}_{H}}\int F_{t}dp+\frac{ \sigma^{2}}{4}I(p). \tag{4.9}\]
We recognize that it is the minimizer of the mean-field optimization problem obtained by replacing \(F(p)\) with \(\int F_{t}dp\). According to Theorem 2.8, \(\hat{p}_{t}=e^{-\hat{u}_{t}}\) satisfies \(\hat{u}_{t}=\hat{v}_{t}+\hat{w}_{t}\) with \(\nabla^{2}\hat{v}_{t}\geq c\) and \(|\nabla\hat{w}_{t}|\leq C\) for all \(t>0\). Thus \(\hat{p}_{t}\) verifies a Poincaré inequality with a constant \(C_{P}\) independent of time by Proposition 4.12. Note also that
\[\frac{\sigma^{2}}{2}\Delta\hat{u}_{t}-\frac{\sigma^{2}}{4}|\nabla\hat{u}_{t}|^ {2}+F_{t}-\hat{\lambda}_{t}=0, \tag{4.10}\]
with
\[\hat{\lambda}_{t}=\int\left(\frac{\sigma^{2}}{2}\Delta\hat{u}_{t}-\frac{\sigma ^{2}}{4}|\nabla\hat{u}_{t}|^{2}+F_{t}\right)\hat{p}_{t}=\int\left(\frac{\sigma ^{2}}{4}|\nabla\hat{u}_{t}|^{2}+F_{t}\right)\hat{p}_{t}.\]
The desired result follows by applying the functional inequality (4.8) with the distribution \(\hat{p}_{t}\), the operator \(\mathcal{L}_{t}:=\Delta-\nabla\hat{u}_{t}\cdot\nabla\) and the function \(f_{t}:=\sqrt{p_{t}/\hat{p}_{t}}.\) First we observe by direct computation using \(f_{t}=\exp((\hat{u}_{t}-u_{t})/2)\) that
\[\frac{\mathcal{L}_{t}f_{t}}{f_{t}}=\frac{1}{2}\Delta\hat{u}_{t}-\frac{1}{4}| \nabla\hat{u}_{t}|^{2}-\left(\frac{1}{2}\Delta u_{t}-\frac{1}{4}|\nabla u_{t} |^{2}\right). \tag{4.11}\]
Then it follows from eq. (4.10) that
\[\frac{\mathcal{L}_{t}f_{t}}{f_{t}}=\sigma^{-2}\hat{\lambda}_{t}-\sigma^{-2} \left(\frac{\sigma^{2}}{2}\Delta u_{t}-\frac{\sigma^{2}}{4}|\nabla u_{t}|^{2} +F_{t}\right).\]
Thus the right-hand side of eq. (4.8) corresponds to
\[\frac{d\mathfrak{F}^{\sigma}(p_{t})}{dt} =-\int\left|\frac{\sigma^{2}}{2}\Delta u_{t}-\frac{\sigma^{2}}{4} |\nabla u_{t}|^{2}+F_{t}-\lambda_{t}\right|^{2}dp_{t}\] \[=-\int\left|\frac{\sigma^{2}}{2}\Delta u_{t}-\frac{\sigma^{2}}{4 }|\nabla u_{t}|^{2}+F_{t}-\hat{\lambda}_{t}\right|^{2}dp_{t}+\left(\hat{ \lambda}_{t}-\lambda_{t}\right)^{2}\] \[=-\sigma^{4}\text{Var}_{p_{t}}\left(\frac{\mathcal{L}_{t}f_{t}}{f _{t}}\right).\]
As for the left-hand side, we have for the first term
\[\int f_{t}d\hat{p}_{t}=\int\sqrt{p_{t}\hat{p}_{t}}dx\geq C>0\]
by using the Gaussian bounds provided in Corollary 4.8. Regarding the second term, it holds by using (4.11),
\[\int|\nabla f_{t}|^{2}d\hat{p}_{t}=-\int f_{t}\mathcal{L}_{t}f_{t}d\hat{p}_{t }=\sigma^{-2}\int\left(\frac{\sigma^{2}}{2}\Delta u_{t}-\frac{\sigma^{2}}{4}| \nabla u_{t}|^{2}+F_{t}\right)dp_{t}-\sigma^{-2}\hat{\lambda}_{t}\]
Using further the definition of \(\hat{\lambda}_{t}\) and integration by parts, we obtain
\[\sigma^{2}\int|\nabla f_{t}|^{2}d\hat{p}_{t} =\int\left(\frac{\sigma^{2}}{4}|\nabla u_{t}|^{2}+F_{t}\right)dp _{t}-\int\left(\frac{\sigma^{2}}{4}|\nabla\hat{u}_{t}|^{2}+F_{t}\right)d\hat{p }_{t}\] \[=\int F_{t}(dp_{t}-d\hat{p}_{t})+\frac{\sigma^{2}}{4}(I(p_{t})-I( \hat{p}_{t}))\] \[\geq\int F_{t}(dp_{t}-dp^{*})+\frac{\sigma^{2}}{4}(I(p_{t})-I(p^{ *}))\]
where the last inequality follows from the optimality of \(\hat{p}_{t}\) in eq. (4.9).
By Theorem 4.10 and the above computations, we deduce that
\[\frac{d\mathfrak{F}^{\sigma}(p_{t})}{dt} \leq-\frac{(C\sigma)^{2}}{C_{P}}\left(\int F_{t}(dp_{t}-dp^{*})+ \frac{\sigma^{2}}{4}(I(p_{t})-I(p^{*}))\right)\] \[\leq-\frac{(C\sigma)^{2}}{C_{P}}\left(\mathfrak{F}^{\sigma}(p_{t} )-\mathfrak{F}^{\sigma}(p^{*})\right),\]
where the last inequality is due to the convexity of \(F\). Therefore, the exponential convergence of the free energy (2.8) follows with the constant \(c=\frac{(C\sigma)^{2}}{C_{P}}\).
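Spelling out the last step: writing \(g(t):=\mathfrak{F}^{\sigma}(p_{t})-\mathfrak{F}^{\sigma}(p^{*})\geq 0\), the inequality above reads \(g^{\prime}(t)\leq-cg(t)\), so Grönwall's lemma gives

\[\mathfrak{F}^{\sigma}(p_{t})-\mathfrak{F}^{\sigma}(p^{*})\leq e^{-ct}\left( \mathfrak{F}^{\sigma}(p_{0})-\mathfrak{F}^{\sigma}(p^{*})\right),\qquad t\geq 0.\]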
In order to obtain the exponential convergence of the relative Fisher information, define \(f_{t}^{*}:=\sqrt{\frac{p_{t}}{p^{*}}}\), \(\mathcal{L}^{*}:=\Delta-\nabla u^{*}\cdot\nabla\), and repeat the previous computation:
\[I(p_{t}|p^{*}) =4\int|\nabla f_{t}^{*}|^{2}dp^{*}=-4\int f_{t}^{*}\mathcal{L}^{ *}f_{t}^{*}dp^{*}\] \[=4\sigma^{-2}\left(\int\frac{\delta F}{\delta p}(p^{*},x)(p_{t}- p^{*})dx+\frac{\sigma^{2}}{4}(I(p_{t})-I(p^{*}))\right)\] \[\leq 4\sigma^{-2}\left(\mathfrak{F}^{\sigma}(p_{t})-\mathfrak{F }^{\sigma}(p^{*})\right),\]
where the last inequality is again due to the convexity of \(F\).
## 5 Gradient Flow with Relative Entropy
Let \(p_{i}^{h}\) be defined in eq. (2.10). The proof of Theorem 2.13 essentially relies on applying the Arzelà–Ascoli theorem to the (continuous interpolation of the) family \(((t,x)\mapsto p_{[t/h]}^{h}(x))_{h>0}.\) To this end, we need to ensure equicontinuity and boundedness in the two subsequent sections. In the sequel, we fix a time horizon \(T<\infty\) and we denote \(N_{h}:=[T/h]\). Additionally, \(C,c>0\) stand for a big and a small constant respectively, independent of \(h,i,j,k,\ell\) with the restriction \(i,j,k,\ell\leq N_{h}\), which may change from line to line.
### Equicontinuity in Space of the Discrete Flow
The goal of this section is to obtain uniform Gaussian bounds of the family \((p_{i}^{h})_{h,i\leq N_{h}}\) as in Proposition 3.11 and to deduce equicontinuity in space of the discrete flow.
**Proposition 5.1**.: _For some \(\underline{C},\overline{C},\underline{c},\overline{c}>0\) we have for all \(h>0,i\leq N_{h},x\in\mathbb{R}^{d},\)_
\[\underline{C}e^{-\underline{c}|x|^{2}}\leq p_{i}^{h}(x)\leq\overline{C}e^{- \overline{c}|x|^{2}}.\]
_In addition, it holds_
\[\sup_{h,i\leq N_{h}}\|\nabla p_{i}^{h}\|_{\infty}<+\infty.\]
Proof.: The Gaussian bounds are a direct consequence of Lemma A.4, whose assumptions are satisfied according to Lemmas 5.2-5.6 below. As for the second part, it follows from the identity \(\nabla p_{i}^{h}=p_{i}^{h}\nabla\log(p_{i}^{h})\) by using the Gaussian bounds above and the fact that \(|\nabla\log(p_{i}^{h}(x))|\leq C(1+|x|)\) according to Lemmas 5.5 and 5.6 below.
Recall that \(p_{i}^{h}\) is a solution to the stationary mean-field Schrödinger equation (2.11). In other words, if we denote \(u_{i}^{h}:=-\log(p_{i}^{h})\), it holds
\[\frac{\sigma^{2}}{2}\Delta u_{i}^{h}-\frac{\sigma^{2}}{4}\left|\nabla u_{i}^{ h}\right|^{2}+\frac{\delta F}{\delta p}\left(p_{i}^{h},\cdot\right)+h^{-1}u_{i-1} ^{h}-h^{-1}u_{i}^{h}=\lambda_{i}^{h}, \tag{5.1}\]
with \(\lambda_{i}^{h}\) given by eq. (2.12). The key point is to observe that we have the decomposition \(u_{i}^{h}=v_{i}^{h}+w_{i}^{h}\) with \(v_{i}^{h}\) uniformly convex and \(w_{i}^{h}\) uniformly Lipschitz. This follows from arguments similar to those of Section 3.1. In this setting there is a slight ambiguity in the definition of \(v_{i}^{h}\) (and thus \(w_{i}^{h}\)) due to the normalizing constant. Following Remark 3.4, we define \(v_{i}^{h}\) as the solution to
\[\frac{\sigma^{2}}{2}\Delta v_{i}^{h}-\frac{\sigma^{2}}{4}\left|\nabla v_{i}^{h} \right|^{2}+g+h^{-1}v_{i-1}^{h}-h^{-1}v_{i}^{h}=\bar{\lambda}_{i}^{h},\]
where
\[\bar{\lambda}_{i}^{h}=\int_{\mathbb{R}^{d}}\left(\frac{\sigma^{2}}{2}\Delta v _{i}^{h}-\frac{\sigma^{2}}{4}\left|\nabla v_{i}^{h}\right|^{2}+g+h^{-1}v_{i-1 }^{h}-h^{-1}v_{i}^{h}\right)p_{i}^{h}.\]
**Lemma 5.2**.: _The functions \((v_{i}^{h})_{h,i\leq N_{h}}\) are uniformly \(\kappa^{*}\)-convex for some \(\kappa^{*}>0\)._
Proof.: Due to Remark 3.7, \(v_{i+1}^{h}\) is \(\theta_{i+1}^{h}\)-convex with
\[\theta_{i+1}^{h} :=\frac{\sqrt{h^{-2}+4\sigma^{2}\Big{(}\underline{\kappa}+h^{-1} \min\left(\theta_{i}^{h},\sqrt{\underline{\kappa}}/\sigma\right)\Big{)}}-h^{ -1}}{2\sigma^{2}}\] \[=\frac{\sqrt{4\sigma^{2}\underline{\kappa}-4\sigma^{4}\min\left( \theta_{i}^{h},\sqrt{\underline{\kappa}}/\sigma\right)^{2}+\left(h^{-1}+2 \sigma^{2}\min\left(\theta_{i}^{h},\sqrt{\underline{\kappa}}/\sigma\right) \right)^{2}}-h^{-1}}{2\sigma^{2}}\] \[\geq\min\big{(}\theta_{i}^{h},\sqrt{\underline{\kappa}}/\sigma \big{)}.\]
Recall that \(\theta_{0}^{h}=\underline{\eta}\). Finally we obtain that \(v_{i+1}^{h}\) is \(\min\big{(}\underline{\eta},\sqrt{\underline{\kappa}}/\sigma\big{)}\)-convex.
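The chain of inequalities above is elementary but easy to get wrong; as a sanity check, here is a minimal numerical sketch over a random grid of purely illustrative parameter values:

```python
import numpy as np

# Check that theta_{i+1} >= min(theta_i, sqrt(kappa)/sigma) for the recursion of
# Lemma 5.2, where m denotes min(theta_i, sqrt(kappa)/sigma).
rng = np.random.default_rng(1)
for _ in range(10_000):
    h, sigma, kappa, theta = rng.uniform(0.01, 10.0, size=4)
    m = min(theta, np.sqrt(kappa) / sigma)
    nxt = (np.sqrt(h**-2 + 4 * sigma**2 * (kappa + m / h)) - 1 / h) / (2 * sigma**2)
    assert nxt >= m - 1e-7
```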
**Lemma 5.3**.: _The Hessians \((\nabla^{2}v_{i}^{h})_{h,i\leq N_{h}}\) are uniformly bounded._
Proof.: As in Proposition 3.6, we may obtain the probabilistic representation of \(\nabla v_{i+1}^{h}\):
\[\nabla v_{i+1}^{h}(x)=\mathbb{E}\left[\int_{0}^{t}e^{-s/h}\Big{(}\nabla g(X_{s })+h^{-1}\nabla v_{i}^{h}(X_{s})\Big{)}ds+e^{-t/h}\nabla v_{i+1}^{h}(X_{t}) \right],\]
with
\[X_{s}=x-\int_{0}^{s}\frac{\sigma^{2}}{2}\nabla v_{i+1}^{h}(X_{r})dr+\sigma W_ {s}.\]
Let \(X^{\prime}\) satisfy the same SDE as \(X\) but with initial value \(x^{\prime}\). Since \(v_{i+1}^{h}\) is \(\kappa^{*}\)-convex, it follows from the same arguments as eq. (3.12) that
\[\left|X_{t}-X^{\prime}_{t}\right|\leq e^{-\frac{\sigma^{2}\kappa^{*}t}{2}} \left|x-x^{\prime}\right|.\]
Further we obtain
\[\left|\nabla v_{i+1}^{h}(x)-\nabla v_{i+1}^{h}(x^{\prime})\right|\] \[\leq \mathbb{E}\left[\int_{0}^{t}e^{-s/h}(C+h^{-1}\|\nabla^{2}v_{i}^{ h}\|_{\infty})\left|X_{s}-X^{\prime}_{s}\right|ds+e^{-t/h}\|\nabla^{2}v_{i+1}^{h} \|_{\infty}\left|X_{t}-X^{\prime}_{t}\right|\right]\] \[\leq \int_{0}^{t}e^{-\left(h^{-1}+\frac{\sigma^{2}\kappa^{*}}{2} \right)s}(C+h^{-1}\|\nabla^{2}v_{i}^{h}\|_{\infty})\left|x-x^{\prime}\right|ds +e^{-\left(h^{-1}+\frac{\sigma^{2}\kappa^{*}}{2}\right)t}\|\nabla^{2}v_{i+1}^{ h}\|_{\infty}\left|x-x^{\prime}\right|.\]
Letting \(t\to\infty\), we get \(\|\nabla^{2}v_{i+1}^{h}\|_{\infty}\leq\frac{Ch+\|\nabla^{2}v_{i}^{h}\|_{ \infty}}{1+\frac{\sigma^{2}\kappa^{*}h}{2}}\). Iterating this recursion and summing the resulting geometric series, we therefore obtain

\[\|\nabla^{2}v_{i+1}^{h}\|_{\infty}\ \leq\ \frac{2C}{\sigma^{2}\kappa^{*}}\left(1-\frac{1}{\left(1+\frac{\sigma^{2}\kappa^{*}h}{2}\right)^{i+1}}\right)+\frac{\|\nabla^{2}v_{0}^{h}\|_{\infty}}{\left(1+\frac{\sigma^{2}\kappa^{*}h}{2}\right)^{i+1}}\ \leq\ \frac{2C}{\sigma^{2}\kappa^{*}}+\overline{\eta}.\]
**Lemma 5.4**.: _The gradients \((\nabla w_{i}^{h})_{h,i\leq N_{h}}\) are uniformly bounded._
Proof.: As in Proposition 3.8, we observe that \(w_{i+1}^{h}\) is the value function of the stochastic control problem:
\[w_{i+1}^{h}(x)=\inf_{\alpha}\mathbb{E}\,\Big{[}\int_{0}^{T}e^{-s/ h}\left(G\left(p_{i+1}^{h},X_{s}^{\alpha}\right)+h^{-1}w_{i}^{h}(X_{s}^{\alpha})+ \frac{\sigma^{2}}{4}|\alpha_{s}|^{2}+\bar{\lambda}_{i}^{h}-\lambda_{i}^{h} \right)ds\\ +e^{-T/h}w_{i+1}^{h}\left(X_{T}^{\alpha}\right)\Big{]},\]
with
\[dX_{s}^{\alpha}=-\frac{\sigma^{2}}{2}\left(\nabla v_{i+1}^{h}\left(X_{s}^{ \alpha}\right)+\alpha_{s}\right)ds+\sigma dW_{s},\quad X_{0}^{\alpha}=x.\]
Further as in eq. (3.13), we may estimate
\[\Big{|}w_{i+1}^{h}(x)-w_{i+1}^{h}(x^{\prime})\Big{|}\leq\int_{0}^ {T}e^{-\left(h^{-1}+\frac{\sigma^{2}\kappa^{*}}{2}\right)s}(C+h^{-1}\|\nabla w _{i}^{h}\|_{\infty})|x-x^{\prime}|\\ +e^{-\left(h^{-1}+\frac{\sigma^{2}\kappa^{*}}{2}\right)T}\|\nabla w _{i+1}^{h}\|_{\infty}|x-x^{\prime}|.\]
Letting \(T\to\infty\), we obtain \(\|\nabla w_{i+1}^{h}\|_{\infty}\leq\frac{Ch+\|\nabla w_{i}^{h}\|_{\infty}}{1+ \frac{\sigma^{2}\kappa^{*}h}{2}}\). Iterating as in Lemma 5.3 yields
\[\|\nabla w_{i+1}^{h}\|_{\infty}\ \leq\ \frac{2C}{\sigma^{2}\kappa^{*}}+\| \nabla w_{0}\|_{\infty}.\]
**Lemma 5.5**.: _The Hessians \((\nabla^{2}u_{i}^{h})_{h,i\leq N_{h}}\) are uniformly bounded._
Proof.: As in the proof of Lemma 3.9, the Feynman-Kac formula ensures that
\[\nabla u_{i+1}^{h}(x)=\mathbb{E}\left[\int_{0}^{\infty}e^{-t/h} \left(\nabla\frac{\delta F}{\delta p}(p_{i+1}^{h},X_{t})+h^{-1}\nabla u_{i}^{ h}(X_{t})\right)dt\right],\]
with
\[X_{t}=x-\frac{\sigma^{2}}{2}\int_{0}^{t}\nabla u_{i+1}^{h}(X_{s})ds+\sigma W_ {t}.\]
Let \(Y\) satisfy the same SDE starting from \(y\). By the reflection coupling in Theorem A.7, we have
\[\mathcal{W}_{1}\left(p_{t}^{X},p_{t}^{Y}\right)\leq Ce^{-ct}|x-y|,\]
where \(p^{X},\ p^{Y}\) are the marginal distributions of \(X,\ Y\) respectively. Then observe that
\[\big{|}\nabla u_{i+1}^{h}(x)-\nabla u_{i+1}^{h}(y)\big{|} \leq\int_{0}^{\infty}Ce^{-t/h-ct}|x-y|dt+\int_{0}^{\infty}e^{-t/h}h^ {-1}\mathbb{E}\left[\big{|}\nabla u_{i}^{h}(X_{t})-\nabla u_{i}^{h}(Y_{t}) \big{|}\right]dt\] \[=\frac{Ch}{1+ch}|x-y|+\int_{0}^{\infty}e^{-t/h}h^{-1}\mathbb{E} \left[\big{|}\nabla u_{i}^{h}(X_{t})-\nabla u_{i}^{h}(Y_{t})\big{|}\right]dt.\]
Next we apply the same estimate to \(\big{|}\nabla u_{i}^{h}(X_{t})-\nabla u_{i}^{h}(Y_{t})\big{|}\) and obtain
\[\big{|}\nabla u_{i+1}^{h}(x)-\nabla u_{i+1}^{h}(y)\big{|} \leq\frac{2Ch}{1+ch}|x-y|\\ +\int_{0}^{\infty}e^{-t_{0}/h}h^{-1}\int_{0}^{\infty}e^{-t_{1}/h}h ^{-1}\mathbb{E}\left[\big{|}\nabla u_{i-1}^{h}(X_{t_{0}+t_{1}}^{(t_{0})})- \nabla u_{i-1}^{h}(Y_{t_{0}+t_{1}}^{(t_{0})})\big{|}\right]dt_{1}dt_{0},\]
with
\[X_{0}^{(t_{0})}=x,\quad dX_{t}^{(t_{0})}=\left\{\begin{aligned} -\frac{\sigma^{2}}{2}\nabla u_{i+1}^{h}(X_{t}^{(t_{0})})dt+ \sigma dW_{t},&\text{for $t\in[0,t_{0})$}\\ -\frac{\sigma^{2}}{2}\nabla u_{i}^{h}(X_{t}^{(t_{0})})dt+\sigma dW_{t },&\text{for $t\geq t_{0}$}.\end{aligned}\right.\]
By repeating the procedure, we eventually obtain for \(i\geq 1\)
\[\left|\nabla u_{i+1}^{h}(x)-\nabla u_{i+1}^{h}(y)\right|\leq\frac{ (i+1)Ch}{1+ch}|x-y|\\ +\int_{0}^{\infty}\cdots\int_{0}^{\infty}e^{-\sum_{j=0}^{i}t_{j}/ h}h^{-(i+1)}\mathbb{E}\left[|\nabla u_{0}(X_{\sum_{j=0}^{i}t_{j}}^{(t_{0}, \cdots,t_{i-1})})-\nabla u_{0}(Y_{\sum_{j=0}^{i}t_{j}}^{(t_{0},\cdots,t_{i-1}) })|\right]dt_{i}\cdots dt_{0},\]
with
\[X_{0}^{(t_{0},\cdots,t_{i-1})}=x,\ \ dX_{t}^{(t_{0},\cdots,t_{i-1})}=-\frac{ \sigma^{2}}{2}\nabla u_{j}^{h}(X_{t}^{(t_{0},\cdots,t_{i-1})})dt+\sigma dW_{t },\ \ \text{for $t\in[t_{i-j},t_{i+1-j})$}.\]
Again it follows from the reflection coupling that
\[\mathcal{W}_{1}\left(p_{t}^{X^{(t_{0},\cdots,t_{i-1})}},p_{t}^{Y^{(t_{0}, \cdots,t_{i-1})}}\right)\leq Ce^{-ct}|x-y|,\]
where \(p^{X^{(t_{0},\cdots,t_{i-1})}},\ p^{Y^{(t_{0},\cdots,t_{i-1})}}\) are the marginal distributions of \(X^{(t_{0},\cdots,t_{i-1})},\ Y^{(t_{0},\cdots,t_{i-1})}\) respectively. In particular, the constants \(c,\ C\) do not depend on \((t_{0},\cdots,t_{i-1})\) by Lemmas 5.2-5.4. Finally we get
\[\left|\nabla u_{i+1}^{h}(x)-\nabla u_{i+1}^{h}(y)\right| \leq\frac{(i+1)Ch}{1+ch}|x-y|\] \[\quad+C\int_{0}^{\infty}\cdots\int_{0}^{\infty}e^{-\sum_{j=0}^{i }t_{j}(h^{-1}+c)}h^{-(i+1)}|x-y|dt_{i}\cdots dt_{0}\] \[\leq C(T+1)|x-y|,\]
and the desired result follows.
**Lemma 5.6**.: _The vectors \(\left(\nabla u_{i}^{h}(0)\right)_{h,i\leq N_{h}}\) are uniformly bounded._
Proof.: By positivity of the relative entropy and the definition of the variational problem (2.10), we have for all \(i\geq 0\) that
\[\mathfrak{F}^{\sigma}(p_{i+1}^{h})\leq\mathfrak{F}^{\sigma}(p_{i+1}^{h})+h^{- 1}H(p_{i+1}^{h}|p_{i}^{h})\leq\mathfrak{F}^{\sigma}(p_{i}^{h})+h^{-1}H(p_{i}^ {h}|p_{i}^{h})=\mathfrak{F}^{\sigma}(p_{i}^{h}).\]
In addition it follows from Assumption 2.3 that
\[\lambda\int_{\mathbb{R}^{d}}|x|^{2}p_{i}^{h}(x)dx+\sigma^{2}\int_{\mathbb{R}^ {d}}|\nabla\sqrt{p_{i}^{h}}(x)|^{2}dx\leq\mathfrak{F}^{\sigma}(p_{i}^{h}).\]
Therefore we have
\[\sup_{i\leq N_{h}}\left\{\lambda\int_{\mathbb{R}^{d}}|x|^{2}p_{i}^{h}(x)dx+ \sigma^{2}\int_{\mathbb{R}^{d}}|\nabla\sqrt{p_{i}^{h}}(x)|^{2}dx\right\}\leq \mathfrak{F}^{\sigma}(p_{0}).\]
Since we have proved \(\sup_{h,i\leq N_{h}}\|\nabla^{2}u_{i}^{h}\|_{\infty}<\infty\), we obtain
\[4\int_{\mathbb{R}^{d}}|\nabla\sqrt{p_{i}^{h}}(x)|^{2}dx=\int_{\mathbb{R}^{d}}| \nabla u_{i}^{h}(x)|^{2}p_{i}^{h}(x)dx\geq\frac{1}{2}|\nabla u_{i}^{h}(0)|^{2} -C\int_{\mathbb{R}^{d}}|x|^{2}p_{i}^{h}(x)dx.\]
Finally we obtain \(\sup_{h,i\leq N_{h}}|\nabla u_{i}^{h}(0)|<\infty\).
### Equicontinuity in Time of the Discrete Flow
We aim to show the equicontinuity in time of the family \((p^{h})_{h>0}\) as established in the proposition below. As a preliminary step, and for later use, we also demonstrate that the family of functions \((t\mapsto\lambda_{\lfloor t/h\rfloor}^{h})_{h>0}\) defined by (2.12) is bounded and, up to a linear interpolation, equicontinuous.
**Proposition 5.7**.: _There exist constants \(C,c>0\) such that for all \(h>0\), \(i<j\leq N_{h}\) and \(x\in\mathbb{R}^{d}\),_

\[|p_{j}^{h}(x)-p_{i}^{h}(x)|\leq C\exp(-c|x|^{2})(j-i)h.\]
_Additionally, the sequence \((\lambda_{i}^{h})_{h,i\leq N_{h}}\) is uniformly bounded,_
\[\sup_{h,i\leq N_{h}}|\lambda_{i}^{h}|<+\infty,\]
_and there exists a modulus of continuity (m.o.c.) \(\varpi:\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that_
\[\sup_{h,i<j\leq N_{h}}|\lambda_{j}^{h}-\lambda_{i}^{h}|\leq\varpi((j-i)h).\]
**Remark 5.8**.: _W.l.o.g. we can assume that \(\varpi\) is continuous and monotone. Then we can define_
\[\tilde{\varpi}(t)=t\sup_{t\leq s\leq T}\frac{\varpi(s)}{s}.\]
_By definition, it satisfies \(\tilde{\varpi}\geq\varpi\) and \(\frac{\tilde{\varpi}(s)}{s}\geq\frac{\tilde{\varpi}(t)}{t}\) for all \(s\leq t.\) It is also a m.o.c., as we observe that \(\lim_{t\to 0+}\tilde{\varpi}(t)=0\) by distinguishing the two cases \(\sup_{0<s\leq T}\frac{\varpi(s)}{s}<+\infty\) and \(=+\infty\). By this construction, if a discrete sequence \((a_{i}^{h})_{i\leq N_{h}}\) satisfies \(\sup_{i<j\leq N_{h}}|a_{i}^{h}-a_{j}^{h}|\leq\varpi((j-i)h)\), then its linear interpolation \(\tilde{a}^{h}\) admits \(\tilde{\varpi}\) as m.o.c., i.e., \(\sup_{s<t\leq T}|\tilde{a}^{h}(t)-\tilde{a}^{h}(s)|\leq\tilde{\varpi}(t-s)\)._
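The construction of \(\tilde{\varpi}\) is easy to experiment with; a minimal numerical sketch (the particular \(\varpi\) below is a hypothetical choice, for illustration only):

```python
import numpy as np

T = 1.0
varpi = lambda s: np.sqrt(s) + s          # an illustrative continuous, monotone m.o.c.

def varpi_tilde(t, n_grid=10_000):
    """varpi_tilde(t) = t * sup_{t <= s <= T} varpi(s) / s, as in Remark 5.8."""
    s = np.linspace(t, T, n_grid)
    return t * np.max(varpi(s) / s)

ts = np.linspace(1e-4, T, 200)
vals = np.array([varpi_tilde(t) for t in ts])
assert np.all(vals >= varpi(ts) - 1e-9)     # varpi_tilde dominates varpi
assert np.all(np.diff(vals / ts) <= 1e-9)   # t -> varpi_tilde(t) / t is nonincreasing
```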
Proof.: _Formulas for \(\lambda_{k}^{h}\)._ The normalization condition for \(u_{k}^{h},k\leq N_{h}\) reads
\[1=\int\exp(-u_{k}^{h})=\int\exp(-u_{k-1}^{h})\exp\left(-h\frac{ u_{k}^{h}-u_{k-1}^{h}}{h}\right)\\ =\int p_{k-1}^{h}\exp\left(-h\left(\frac{\sigma^{2}}{2}\Delta u_{ k}^{h}-\frac{\sigma^{2}}{4}|\nabla u_{k}^{h}|^{2}+\frac{\delta F}{\delta p}(p_{k}^{h },\cdot)-\lambda_{k}^{h}\right)\right).\]
This allows us to obtain the following formula for \(\lambda_{k}^{h}\):
\[\lambda_{k}^{h}=-\frac{1}{h}\log\int p_{k-1}^{h}\exp\left(-h\left( \frac{\sigma^{2}}{2}\Delta u_{k}^{h}-\frac{\sigma^{2}}{4}|\nabla u_{k}^{h}|^{2 }+\frac{\delta F}{\delta p}(p_{k}^{h},\cdot)\right)\right)\\ =:-\frac{1}{h}\log\int p_{k-1}^{h}\exp(-hB_{k}^{h}). \tag{5.2}\]
By writing the normalization in the backward way,
\[1=\int\exp(-u_{k-1}^{h})=\int\exp(-u_{k}^{h})\exp\left(h\frac{u_{k}^{h}-u_{k- 1}^{h}}{h}\right)=\int\exp(-u_{k}^{h})\exp(h(B_{k}^{h}-\lambda_{k}^{h})),\]
we obtain a similar formula
\[\lambda_{k}^{h}=\frac{1}{h}\log\int p_{k}^{h}\exp(hB_{k}^{h}). \tag{5.3}\]
We apply Jensen's inequality to eq. (5.2) and eq. (5.3) to obtain
\[\int p_{k}^{h}B_{k}^{h}\leq\lambda_{k}^{h}\leq\int p_{k-1}^{h}B_{k}^{h}. \tag{5.4}\]
The estimates from Lemma 5.5 and Lemma 5.6 give us the bound \(\sup_{x,h,k\leq N_{h}}\frac{|B_{k}^{h}(x)|}{1+|x|^{2}}<+\infty\). Then an application of Proposition 5.1 proves the first claim \(\sup_{h,k\leq N_{h}}|\lambda_{k}^{h}|<+\infty\).
_Time regularity of \(p_{k}^{h}\)._ We note that according to the (discretized) stationary HJB equation,
\[|u_{j}^{h}(x)-u_{i}^{h}(x)|=\left|h\left(\sum_{s=i+1}^{j}B_{s}^{h}-\sum_{s=i+1} ^{j}\lambda_{s}^{h}\right)\right|\leq C(j-i)h(1+|x|^{2})+C(j-i)h\leq C(j-i)h(1+ |x|^{2}), \tag{5.5}\]
and using the bound from Proposition 5.1, we obtain
\[|p_{j}^{h}(x)-p_{i}^{h}(x)|=|\exp(-u_{j}^{h}(x))-\exp(-u_{i}^{h}( x))|\leq\max\{p_{j}^{h}(x),p_{i}^{h}(x)\}|u_{j}^{h}(x)-u_{i}^{h}(x)|\\ \leq C\exp(-c|x|^{2})\cdot C(j-i)h(1+|x|^{2})\leq C\exp(-c|x|^{2}) (j-i)h, \tag{5.6}\]
which is our second claim. This implies the \(\mathcal{W}_{1}\)-time regularity of \(p_{k}^{h}\)
\[\mathcal{W}_{1}(p_{j}^{h},p_{i}^{h})=\sup_{||\varphi||_{\mathrm{ Lip}}\leq 1}\left|\int\varphi(p_{j}^{h}-p_{i}^{h})\right|=\sup_{||\varphi||_{ \mathrm{Lip}}\leq 1}\left|\int(\varphi-\varphi(0))(p_{j}^{h}-p_{i}^{h})\right|\\ \leq\int|x||p_{j}^{h}(x)-p_{i}^{h}(x)|dx\leq\int|x|\cdot C\exp(-c| x|^{2})(j-i)h\leq C(j-i)h. \tag{5.7}\]
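As an aside, the duality bound behind eq. (5.7) is easy to sanity-check numerically; a minimal sketch with two hypothetical one-dimensional Gaussian densities (using scipy's one-dimensional Wasserstein routine):

```python
import numpy as np
from scipy.stats import wasserstein_distance

# W_1(p, q) <= int |x| |p(x) - q(x)| dx, the bound used in eq. (5.7),
# checked for N(0, 1) against N(0.3, 1) on a grid.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
q = np.exp(-(x - 0.3)**2 / 2) / np.sqrt(2 * np.pi)

w1 = wasserstein_distance(x, x, u_weights=p, v_weights=q)  # weights renormalized internally
bound = np.sum(np.abs(x) * np.abs(p - q)) * dx
assert w1 <= bound
print(w1, bound)  # w1 is (about) the mean shift 0.3; the bound is cruder but finite
```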
_Uniform continuity of \(\frac{\delta F}{\delta p}\)._ Thanks to the estimate in Proposition 5.1, \(\{p_{k}^{h}\}_{h,k\leq N_{h}}\) forms a relatively compact set in \(\mathcal{W}_{1}\), and the \(\mathcal{W}_{1}\)-continuity of \(p\mapsto\frac{\delta F}{\delta p}(p,0)\) becomes uniform. That is, there exists a m.o.c. \(\omega_{0}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that
\[\left|\frac{\delta F}{\delta p}(p_{k}^{h},0)-\frac{\delta F}{\delta p}(p_{ \ell}^{h},0)\right|\leq\omega_{0}(\mathcal{W}_{1}(p_{k}^{h},p_{\ell}^{h})), \quad\forall h>0,\forall k\leq N_{h},\forall\ell\leq N_{h}.\]
Integrating along the straight line from \(0\) to any \(x\in\mathbb{R}^{d}\) and using the assumptions on \(\nabla\frac{\delta F}{\delta p}\), we obtain
\[\left|\frac{\delta F}{\delta p}(p_{k}^{h},x)-\frac{\delta F}{ \delta p}(p_{\ell}^{h},x)\right|\\ \leq\left|\frac{\delta F}{\delta p}(p_{k}^{h},0)-\frac{\delta F}{ \delta p}(p_{\ell}^{h},0)\right|+\int_{0}^{1}\left|x\cdot\left(\nabla\frac{ \delta F}{\delta p}(p_{k}^{h},tx)-\nabla\frac{\delta F}{\delta p}(p_{\ell}^{h},tx) \right)\right|dt\\ \leq\omega_{0}(\mathcal{W}_{1}(p_{k}^{h},p_{\ell}^{h}))+L|x| \mathcal{W}_{1}(p_{k}^{h},p_{\ell}^{h})\leq C(1+|x|)\omega(\mathcal{W}_{1}(p_{ k}^{h},p_{\ell}^{h})), \tag{5.8}\]
for another m.o.c. \(\omega(y)=y+\omega_{0}(y)\).
_Estimating the difference._ We first note that thanks to eq. (5.4) we can approximate \(\lambda_{k}^{h}\) by \(\int p_{k}^{h}B_{k}^{h}\), up to a uniform \(O(h)\) error. More precisely,
\[|r_{k}^{h}|:=\left|\lambda_{k}^{h}-\int p_{k}^{h}B_{k}^{h}\right| \leq\left|\int(p_{k}^{h}-p_{k-1}^{h})B_{k}^{h}\right|\\ \leq\int C\exp(-c|x|^{2})h\cdot|B_{k}^{h}|\leq\int C\exp(-c|x|^{2})h \cdot C(1+|x|^{2})\leq Ch,\]
where we used eq. (5.6) in the second inequality and uniform bounds on \(B_{k}^{h}\) in the last one. It suffices then to study the difference
\[\int p_{j}^{h}B_{j}^{h}-p_{i}^{h}B_{i}^{h}=\int p_{i}^{h}(B_{j}^{h}-B_{i}^{h})+ \int(p_{j}^{h}-p_{i}^{h})B_{j}^{h}=:\delta+\delta^{\prime}.\]
We bound the second part, using again the time regularity result eq. (5.6),
\[|\delta^{\prime}|\leq\int(j-i)hC\exp(-c|x|^{2})|B_{j}^{h}|\leq C(j-i)h\int\exp( -c|x|^{2})C(1+|x|^{2})\leq C(j-i)h.\]
As for the first part, we decompose it into three terms, each of which we treat separately:
\[\delta=\frac{\sigma^{2}}{2}\int p_{i}^{h}(\Delta u_{j}^{h}-\Delta u _{i}^{h})-\frac{\sigma^{2}}{4}\int p_{i}^{h}(|\nabla u_{j}^{h}|^{2}-|\nabla u_{i }^{h}|^{2})+\int p_{i}^{h}\left(\frac{\delta F}{\delta p}(p_{j}^{h},\cdot)- \frac{\delta F}{\delta p}(p_{i}^{h},\cdot)\right)\\ =:\delta_{1}+\delta_{2}+\delta_{3}.\]
We apply integration by parts to the first term, using the previous estimates on \(\nabla u_{i}^{h},p_{i}^{h}\) and the time regularity result of \(\nabla u_{i}^{h}\) from Lemma 5.9 below,
\[|\delta_{1}|=\frac{\sigma^{2}}{2}\left|\int p_{i}^{h}\nabla u_{i}^{h}\cdot( \nabla u_{j}^{h}-\nabla u_{i}^{h})\right|\leq C\int p_{i}^{h}(1+|x|)^{2}((j-i) h)^{1/2}\leq C((j-i)h)^{1/2}.\]
The second term is treated in the same way:
\[|\delta_{2}|\leq\frac{\sigma^{2}}{4}\int p_{i}^{h}(|\nabla u_{j}^{h}|+|\nabla u _{i}^{h}|)\cdot|\nabla u_{j}^{h}-\nabla u_{i}^{h}|\leq C((j-i)h)^{\frac{1}{2}}.\]
Combining eq. (5.7) and eq. (5.8), we can then bound
\[|\delta_{3}|\leq\int p_{i}^{h}\left|\frac{\delta F}{\delta p}(p_{j}^{h},\cdot) -\frac{\delta F}{\delta p}(p_{i}^{h},\cdot)\right|\leq C\int p_{i}^{h}(1+|x|) \omega(\mathcal{W}_{1}(p_{j}^{h},p_{i}^{h}))\leq C\omega(C(j-i)h).\]
Collecting the bounds on \(r,\delta^{\prime},\delta\), we derive finally
\[|\lambda_{j}^{h}-\lambda_{i}^{h}|\leq\sum_{n=1}^{3}|\delta_{n}|+| \delta^{\prime}|+|r_{j}^{h}|+|r_{i}^{h}| \leq 2C((j-i)h)^{\frac{1}{2}}+C\omega(C(j-i)h)+C(j-i)h+2Ch\] \[\leq C(((j-i)h)^{\frac{1}{2}}+\omega(C(j-i)h))=:\varpi((j-i)h).\]
**Lemma 5.9**.: _There exists a constant \(C\) such that for all \(h\in(0,1),i<j\leq N_{h}\), we have_
\[|\nabla u_{j}^{h}(x)-\nabla u_{i}^{h}(x)|\leq C((j-i)h)^{\frac{1}{2}}(1+|x|), \quad\forall x\in\mathbb{R}^{d}.\]
Proof.: By taking spatial derivatives of the HJB equation (5.1), we see the following is satisfied for \(i<k\leq j\)
\[\frac{1}{h}(\nabla u_{k}^{h}-\nabla u_{k-1}^{h})=\frac{\sigma^{2}}{2}\Delta \nabla u_{k}^{h}-\frac{\sigma^{2}}{2}\nabla^{2}u_{k}^{h}\cdot\nabla u_{k}^{h}+ \nabla\frac{\delta F}{\delta p}(p_{k}^{h},\cdot)=:\frac{\sigma^{2}}{2}\Delta \nabla u_{k}^{h}+A_{k}^{h}, \tag{5.9}\]
where by estimates in Lemma 5.5 and Lemma 5.6 we know that
\[\sup_{h,i\leq N_{h}}|A_{i}^{h}(x)|\leq C(1+|x|),\quad\forall x\in\mathbb{R}^{d}.\]
The solution to eq. (5.9) admits the following representation
\[\nabla u_{k}^{h}=\int_{0}^{\infty}e^{-h^{-1}t}(P_{\sigma^{2}t}A_{k}^{h}+\frac {1}{h}P_{\sigma^{2}t}\nabla u_{k-1}^{h})dt,\]
where \(P_{t}\) is the heat kernel generated by \(\frac{1}{2}\Delta\). Iterating this procedure with descending \(k\), we obtain
\[\nabla u_{j}^{h}=\sum_{n=1}^{j-i}h^{-(n-1)}\int_{t_{1},\ldots,t_{n }\geq 0}e^{-h^{-1}(t_{1}+\ldots+t_{n})}P_{\sigma^{2}(t_{1}+\cdots+t_{n})}A_{ j+1-n}^{h}dt_{1}\cdots dt_{n}\\ +h^{-(j-i)}\int_{t_{1},\ldots,t_{j-i}\geq 0}e^{-h^{-1}(t_{1}+ \ldots+t_{j-i})}P_{\sigma^{2}(t_{1}+\cdots+t_{j-i})}\nabla u_{i}^{h}dt_{1} \cdots dt_{j-i}.\]
Here we used the semigroup property of the heat kernel. Denoting \(\gamma(x;n,\beta)=\frac{x^{n-1}e^{-\beta x}\beta^{n}}{\Gamma(n)}\) the gamma distribution density, we have equivalently
\[\nabla u_{j}^{h}=h\sum_{n=1}^{j-i}\int_{0}^{\infty}\gamma(t;n,h^{-1})P_{\sigma^ {2}t}A_{j+1-n}^{h}dt+\int_{0}^{\infty}\gamma(t;j-i,h^{-1})P_{\sigma^{2}t}\nabla u _{i}^{h}dt.\]
Subtracting \(\nabla u_{i}^{h}\), we obtain
\[|\nabla u_{j}^{h}(x)-\nabla u_{i}^{h}(x)|\\ \leq h\sum_{n=1}^{j-i}\int_{0}^{\infty}\gamma(t;n,h^{-1})|P_{\sigma ^{2}t}A_{j+1-n}^{h}(x)|dt+\int_{0}^{\infty}\gamma(t;j-i,h^{-1})(P_{\sigma^{2}t }\nabla u_{i}^{h}(x)-\nabla u_{i}^{h}(x))dt\\ \leq h\sum_{n=1}^{j-i}\int_{0}^{\infty}\gamma(t;n,h^{-1})\cdot C( 1+(\sigma^{2}t)^{1/2}+|x|)dt+\int_{0}^{\infty}\gamma(t;j-i,h^{-1})(\sigma^{2} t)^{1/2}||\nabla^{2}u_{i}^{h}||_{\infty}dt\\ \leq Ch^{3/2}\sum_{n=1}^{j-i}\frac{\Gamma(n+\frac{1}{2})}{\Gamma (n)}+C(j-i)h(1+|x|)+Ch^{1/2}\frac{\Gamma(j-i+\frac{1}{2})}{\Gamma(j-i)}\\ \leq C((j-i)h)^{3/2}+C(j-i)h\cdot(1+|x|)+C((j-i)h)^{1/2}.\]
In the second inequality we used the following properties of the heat kernel: \(P_{t}|\cdot|(x)\leq c_{d}\sqrt{t}+|x|\), \(||P_{t}f-f||_{\infty}\leq\sqrt{t}||f||_{\rm Lip}\). In the last inequality we used the log-convexity of the gamma function along the positive real line: \(\Gamma(x+\frac{1}{2})\leq\sqrt{\Gamma(x)\Gamma(x+1)}=\sqrt{x}\Gamma(x)\) for \(x>0\).
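Both gamma-function facts used in the last inequality are easy to verify; a minimal numerical sketch (the parameter values are illustrative):

```python
import numpy as np
from math import gamma, sqrt

# (1) Log-convexity bound: Gamma(x + 1/2) <= sqrt(x) * Gamma(x) for x > 0.
for xv in [0.5, 1.0, 2.0, 5.0, 25.0]:
    assert gamma(xv + 0.5) <= sqrt(xv) * gamma(xv)

# (2) Gamma moment: E[sqrt(t)] = sqrt(h) * Gamma(n + 1/2) / Gamma(n)
#     for t ~ gamma(t; n, h^{-1}), i.e. shape n and rate 1/h.
rng = np.random.default_rng(0)
n, h = 3, 0.1
t = rng.gamma(shape=n, scale=h, size=10**6)
assert abs(np.sqrt(t).mean() - sqrt(h) * gamma(n + 0.5) / gamma(n)) < 1e-3
```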
### Proof of Theorem 2.13
Proof of Theorem 2.13.: To regain continuity in time, we define the linear interpolation flow
\[f^{h}(t)=(1-s)f_{i}^{h}+sf_{i+1}^{h},\quad\text{for }t=(i+s)h,s\in[0,1),\quad f =p,\lambda.\]
From Proposition 5.1 and Proposition 5.7 together with Remark 5.8, it holds for all \(h>0\), \(t,s\in[0,T]\), \(x,y\in\mathbb{R}^{d}\),
\[\left|\lambda^{h}(t)\right|\leq C,\quad\left|\lambda^{h}(t)-\lambda ^{h}(s)\right|\leq\tilde{\varpi}(|t-s|), \tag{5.10}\] \[\underline{C}e^{-\underline{c}|x|^{2}}\leq p^{h}(t,x)\leq\overline{C} e^{-\overline{c}|x|^{2}},\qquad\left|p^{h}(t,x)-p^{h}(s,y)\right|\leq C \left(|t-s|+|x-y|\right). \tag{5.11}\]
Thus we can apply the Arzelà–Ascoli theorem to ensure that the families of functions \((p^{h})_{h}\) and \((\lambda^{h})_{h}\) are relatively compact in \(C([0,T]\times\mathbb{R}^{d})\) and \(C([0,T])\) respectively. Define also \(\psi^{h}:=\sqrt{p^{h}}\) and notice that, by using the elementary inequality \(|\sqrt{a}-\sqrt{b}|\leq\sqrt{|a-b|}\),
\[\underline{C}e^{-\underline{c}|x|^{2}}\leq\psi^{h}(t,x)\leq\overline{C}e^{- \overline{c}|x|^{2}},\qquad\left|\psi^{h}(t,x)-\psi^{h}(s,y)\right|\leq C\left( |t-s|^{\frac{1}{2}}+|x-y|^{\frac{1}{2}}\right). \tag{5.12}\]
Let \(p\) and \(\lambda\) be cluster points, _i.e._, there exists \(h_{n}\downarrow 0\) such that \(p^{h_{n}}\to p\) and \(\lambda^{h_{n}}\to\lambda\) uniformly. Note that \(\psi^{h_{n}}\) also converges to \(\psi:=\sqrt{p}\) uniformly on \([0,T]\times\mathbb{R}^{d}\) and that \((p,\psi,\lambda)\) satisfies (5.10)-(5.12) as well.
Now we verify that the limit \((p,\psi,\lambda)\) solves the mean-field Schrödinger equation in the weak sense, _i.e._, for all \(\varphi\in C_{c}^{2}(\mathbb{R}^{d})\), we have for all \(t\in[0,T]\),
\[\int(\psi(t,x)-\psi(0,x))\varphi(x)dx\\ =\int_{0}^{t}\int\frac{\sigma^{2}}{2}\psi(s,x)\Delta\varphi(x)- \frac{1}{2}\frac{\delta F}{\delta p}(p_{s},x)\psi(s,x)\varphi(x)+\frac{1}{2} \lambda(s)\psi(s,x)\varphi(x)dxds. \tag{5.13}\]
From the construction of \(u_{i}^{h},\lambda_{i}^{h},\) we know the following holds for \(i\leq N_{h},\)
\[\int\sum_{k=1}^{i}\log\frac{\psi^{h}(kh,x)}{\psi^{h}((k-1)h,x)}\psi ^{h}(kh,x)\varphi(x)dx\] \[=h\sum_{k=1}^{i}\int\frac{\sigma^{2}}{2}\psi^{h}(kh,x)\Delta \varphi(x)-\frac{1}{2}\frac{\delta F}{\delta p}(p^{h}(kh),x)\psi^{h}(kh,x) \varphi(x)+\frac{1}{2}\lambda^{h}(kh)\psi^{h}(kh,x)\varphi(x)dx. \tag{5.14}\]
Let \(i=\lfloor t/h\rfloor\) be the unique integer such that \(t\in[ih,(i+1)h)\) and denote the difference between the left and right hand sides of eqs. (5.13) and (5.14) by \(\delta^{\ell}(h),\delta^{r}(h)\) respectively. We wish to show that both \(\delta^{\ell}(h_{n}),\delta^{r}(h_{n})\) converge to zero when \(n\rightarrow\infty,\) so that eq. (5.13) is proved. For the left hand side we have \(\delta^{\ell}(h)=\sum_{n=1}^{3}\delta_{n}^{\ell}(h)\) with
\[\delta_{1}^{\ell}(h) =\int(\psi(t,x)-\psi(ih,x))\varphi(x)dx,\] \[\delta_{2}^{\ell}(h) =\int(\psi(ih,x)-\psi^{h}(ih,x))\varphi(x)dx,\] \[\delta_{3}^{\ell}(h) =\int\sum_{k=1}^{i}\left(\psi^{h}(kh,x)-\psi^{h}((k-1)h,x)-\log \frac{\psi^{h}(kh,x)}{\psi^{h}((k-1)h,x)}\psi^{h}(kh,x)\right)\varphi(x)dx.\]
The first part is of order \(h^{1/2},\) thanks to the time regularity of \(\psi.\) The second part converges to \(0\) along the sequence \(h_{n}\) by uniform convergence \(\psi^{h_{n}}\rightarrow\psi\). For the third part we note that, by using eq. (5.5),
\[\left|\psi^{h}(kh,x)-\psi^{h}((k-1)h,x)-\log\frac{\psi^{h}(kh,x)} {\psi^{h}((k-1)h,x)}\psi^{h}(kh,x)\right|\] \[\qquad\qquad\qquad=\left|\exp(-u_{k}^{h}(x)/2)-\exp(-u_{k-1}^{h}( x)/2)+\frac{1}{2}\exp(-u_{k}^{h}(x)/2)(u_{k}^{h}(x)-u_{k-1}^{h}(x))\right|\] \[\qquad\qquad\qquad\leq\max\{\psi_{k}^{h}(x),\psi_{k-1}^{h}(x)\} |u_{k}^{h}(x)-u_{k-1}^{h}(x)|^{2}\leq C\exp(-c|x|^{2})h^{2},\]
so that \(\delta_{3}^{\ell}(h)\leq Ch\int\exp(-c|x|^{2})\varphi(x)dx\leq Ch\). For the right hand side difference we also have \(\delta^{r}(h)=\sum_{n=1}^{3}\delta_{n}^{r}(h)\) with
\[\delta_{1}^{r}(h) =\int_{ih}^{t}\int\frac{\sigma^{2}}{2}\psi(s,x)\Delta\varphi(x)- \frac{1}{2}\frac{\delta F}{\delta p}(p_{s},x)\psi(s,x)\varphi(x)+\frac{1}{2} \lambda(s)\psi(s,x)\varphi(x)dxds,\] \[\delta_{2}^{r}(h) =\int\int_{0}^{ih}\frac{\sigma^{2}}{2}(\psi-\psi^{h})(s,x)\Delta \varphi(x)\] \[\qquad\quad-\frac{1}{2}\left(\frac{\delta F}{\delta p}(p_{\cdot},\cdot)\psi-\frac{\delta F}{\delta p}(p_{\cdot}^{h},\cdot)\psi^{h}-\lambda \psi+\lambda^{h}\psi^{h}\right)(s,x)\varphi(x)dxds,\] \[\delta_{3}^{r}(h) =\frac{\sigma^{2}}{2}\int\left(\int_{0}^{ih}\psi^{h}(s,x)ds-h \sum_{k=1}^{i}\psi^{h}(kh,x)\right)\Delta\varphi(x)dx\] \[\qquad\quad-\frac{1}{2}\int\left(\int_{0}^{ih}\frac{\delta F}{ \delta p}(p^{h}(s),x)\psi^{h}(s,x)ds-h\sum_{k=1}^{i}\frac{\delta F}{\delta p}( p^{h}(kh),x)\psi^{h}(kh,x)\right)\varphi(x)dx\] \[\qquad\quad+\frac{1}{2}\int\left(\int_{0}^{ih}\lambda^{h}(s)\psi ^{h}(s,x)ds-h\sum_{k=1}^{i}\lambda^{h}(kh)\psi^{h}(kh,x)\right)\varphi(x)dx.\]
The first part satisfies \(|\delta_{1}^{r}(h)|\leq Ch,\) thanks to the bounds on \(p,\psi,\lambda\). The second part goes to zero along the sequence \(h_{n}\) by uniform convergence. The last part goes to zero when \(h\to 0,\) thanks to the uniform bounds and time regularity estimates on \((p^{h},\psi^{h},\lambda^{h}).\)
Now we prove that the weak solution is actually a classical solution. As mentioned before, the limit \((p,\psi,\lambda)\) inherits the bounds and regularity estimates on \((p^{h},\psi^{h},\lambda^{h})\). In particular, if one sets \(D(t,x)=-\frac{\delta F}{\delta p}(p_{t},x)\psi(t,x)+\lambda(t)\psi(t,x)\), one realizes that \(D\) is uniformly continuous on \([0,T]\times\mathbb{R}^{d}\). Take a mollified sequence \(\psi_{\varepsilon}=\psi\star\rho_{\varepsilon},D_{\varepsilon}=D\star\rho_{\varepsilon}\), with
\[\rho\in D_{c}^{\infty}((-\infty,0)\times\mathbb{R}^{d}),\quad\rho\geq 0, \quad\int\rho=1,\quad\rho_{\varepsilon}=\varepsilon^{-d-2}\rho(\varepsilon^{-2 }\cdot,\varepsilon^{-1}\cdot),\]
and consider the heat equation satisfied by \(\psi_{\varepsilon}\); we obtain

\[\psi_{\varepsilon}(t,x)=P_{\sigma^{2}t}\psi_{\varepsilon}(0,x)+\frac{1}{2}\int_{0}^{t}P_{ \sigma^{2}(t-s)}(D_{\varepsilon}(s,\cdot))(x)ds.\]
One can show that in the limit \(\varepsilon\to 0\) we recover the Duhamel formula,
\[\psi(t,x)=P_{\sigma^{2}t}\psi_{0}(x)+\frac{1}{2}\int_{0}^{t}P_{\sigma^{2}(t-s) }\left(-\frac{\delta F}{\delta p}(p_{s},\cdot)\psi(s,\cdot)+\lambda(s)\psi(s, \cdot)\right)(x)ds.\]
One then readily verifies that \(\partial_{t}\psi,\nabla^{2}\psi\) exist everywhere and are continuous by the dominated convergence theorem so that \(\psi\) is a classical solution to
\[\partial_{t}\psi=\frac{\sigma^{2}}{2}\Delta\psi-\frac{1}{2}\left(\frac{\delta F }{\delta p}(p_{t},\cdot)-\lambda_{t}\right)\psi.\]

That is to say, \(p\) is a classical solution to the mean-field Schrödinger equation.
By uniqueness of the classical solution, we conclude that \((p^{h},\lambda^{h})\to(p,\lambda)\) in the sense of \(C([0,T]\times\mathbb{R}^{d})\times C([0,T])\). Finally, the step flow \((t,x)\mapsto p^{h}_{[t/h]}(x)\) converges to the same limit as \(h\to 0\) by the time regularity estimates in eq. (5.11).
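To build intuition for the Duhamel formula used in the proof above, here is a minimal one-dimensional splitting scheme for the flow \(\partial_{t}\psi=\frac{\sigma^{2}}{2}\Delta\psi-\frac{1}{2}(V-\lambda_{t})\psi\), where the potential \(V(x)=x^{2}/2\) is a hypothetical linear stand-in for \(\frac{\delta F}{\delta p}(p_{t},\cdot)\) (no interaction term), and \(\lambda_{t}\) is implemented by renormalizing \(\int\psi^{2}=1\) at each step; this is a sketch for intuition only, not part of the proof:

```python
import numpy as np

sigma, L, n, dt, n_steps = 1.0, 20.0, 256, 1e-3, 20_000
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
V = 0.5 * x**2
heat = np.exp(-0.5 * sigma**2 * k**2 * dt)   # Fourier multiplier of the heat step P_{sigma^2 dt}
psi = np.exp(-x**2)                          # arbitrary positive initial datum

for _ in range(n_steps):
    psi *= np.exp(-0.25 * dt * V)                     # half step of d_t psi = -(1/2) V psi
    psi = np.fft.ifft(heat * np.fft.fft(psi)).real    # heat semigroup step
    psi *= np.exp(-0.25 * dt * V)                     # second half potential step
    psi /= np.sqrt(np.sum(psi**2) * dx)               # renormalization plays the role of lambda_t

p = psi**2  # for sigma = 1 this converges to a centered Gaussian of variance 1/sqrt(2)
print(np.sum(x**2 * p) * dx)  # second moment, ~ 0.707
```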
## Appendix A
### Regularity of Solution to HJB Equation
Throughout this section, we assume that Assumptions 2.5, 2.6 and 3.1 hold. Let \(u\) be the unique viscosity solution to the HJB equation (3.3). We start by establishing upper and lower bounds on \(u\).
**Lemma A.1**.: _The function \(u\) satisfies for all \(t\in[0,T],\,x\in\mathbb{R}^{d}\)_
\[-C_{T}\leq u(t,x)\leq C_{T}(1+|x|^{2}).\]
Proof.: Under Assumptions 2.5 and 3.1 we have \(-C_{T}\leq\frac{\delta F}{\delta p}(m_{t},x)\leq C_{T}(1+|x|^{2})\). On the other hand, under Assumption 2.6 the initial value satisfies \(-C\leq u(0,x)\leq C(1+|x|^{2})\). The desired bound on \(u\) follows from the comparison principle.
To show existence and uniqueness of the classical solutions to HJB equation (3.3), it is convenient to consider the change of variable \(\psi:=e^{-\frac{1}{2}u}.\) Note that due to Lemma A.1, it holds for all \(t\in[0,T],\)
\[e^{-C_{T}(1+|x|^{2})}\leq\psi(t,x)\leq C_{T}.\]
**Lemma A.2**.: _The function \(\psi\) is the viscosity solution to_
\[\partial_{t}\psi=\frac{\sigma^{2}}{2}\Delta\psi-\frac{1}{2}\left(\frac{\delta F }{\delta p}(m_{t},\cdot)-\gamma u\right)\psi,\quad\psi(0,x)=e^{-\frac{1}{2}u( 0,x)}.\] (A.1)
_Moreover,_
\[\psi(t,x):=\mathbb{E}\left[e^{-\frac{1}{2}\int_{0}^{t}\left(\frac{\delta F}{ \delta p}(m_{t-s},x+\sigma W_{s})-\gamma u(t-s,x+\sigma W_{s})\right)ds}\psi(0,x+\sigma W_{t})\right].\] (A.2)
Proof.: Since the function \(x\mapsto e^{-\frac{1}{2}x}\) is monotone, \(\psi\) is a viscosity solution to eq. (A.1) if and only if \(u\) is a viscosity solution to eq. (3.3). By the bound of \(u\) in Lemma A.1, we have
\[\mathbb{E}\left[e^{\frac{\gamma}{2}\int_{0}^{t}u(t-s,x+\sigma W_{s})ds}\right]\leq \frac{1}{t}\int_{0}^{t}\mathbb{E}\left[e^{\frac{\gamma tC_{T}}{2}(1+|x+ \sigma W_{s}|^{2})}\right]ds<\infty,\]
for all \(t\leq\delta\) with \(\delta\) small enough. Also note that \(\big{(}-\frac{\delta F}{\delta p}(m_{t},\cdot)\big{)}_{t\in[0,T]}\) and \(\psi(0,\cdot)\) are bounded from above. So for \(t\leq\delta\) we may well define
\[\tilde{\psi}(t,x):=\mathbb{E}\left[e^{-\frac{1}{2}\int_{0}^{t}\big{(}\frac{ \delta F}{\delta p}(m_{t-s},x+\sigma W_{s})-\gamma u(t-s,x+\sigma W_{s})\big{)} ds}\psi(0,x+\sigma W_{t})\right].\]
It is easy to verify that \(\tilde{\psi}\) is a viscosity solution to eq. (A.1), so equal to \(\psi\) on \([0,\delta]\). Also note that \(\psi=e^{-\frac{1}{2}u}\leq C_{T}\) thanks to Lemma A.1. So we may further define for \(t\in(\delta,2\delta]\)
\[\tilde{\psi}(t,x) :=\mathbb{E}\left[e^{-\frac{1}{2}\int_{0}^{t-\delta}\big{(}\frac{ \delta F}{\delta p}(m_{t-s},x+\sigma W_{s})-\gamma u(t-s,x+\sigma W_{s})\big{)} ds}\tilde{\psi}(\delta,x+\sigma W_{t-\delta})\right]\] \[=\mathbb{E}\left[e^{-\frac{1}{2}\int_{0}^{t}\big{(}\frac{\delta F }{\delta p}(m_{t-s},x+\sigma W_{s})-\gamma u(t-s,x+\sigma W_{s})\big{)}ds} \psi(0,x+\sigma W_{t})\right].\]
Therefore the desired probabilistic representation (A.2) follows by induction.
**Proposition A.3**.: _The function \(\psi=e^{-\frac{u}{2}}\) is the unique classical solution to eq. (A.1). Moreover, \(\psi\) belongs to \(C^{3}(Q_{T})\) for all \(T>0\) and the gradient \(\nabla\psi\) satisfies the growth condition \(|\nabla\psi(t,x)|\leq C_{T}(1+|x|^{2})\)._
Proof.: Thanks to Lemma A.2, it is enough to verify that \(\psi\in C^{3}(Q_{T})\cap C(\bar{Q}_{T})\). Also it follows from Lemma A.2 that
\[\bigg{(}e^{-\frac{1}{2}\int_{0}^{s}\big{(}\frac{\delta F}{\delta p}(m_{t-r},x +\sigma W_{r})-\gamma u(t-r,x+\sigma W_{r})\big{)}dr}\psi(t-s,x+\sigma W_{s}) \bigg{)}_{s\in[0,t]}\]
is a continuous martingale. By the martingale representation theorem and Itô's formula, we have for all \(0\leq r\leq t\) that
\[\psi(t,x)=\mathbb{E}\Big{[}-\frac{1}{2}\int_{0}^{t-r}\Big{(}\frac {\delta F}{\delta p}(m_{t-s},x+\sigma W_{s})-\gamma u(t-s,x+\sigma W_{s}) \Big{)}\psi(t-s,x+\sigma W_{s})ds\\ +\psi(r,x+\sigma W_{t-r})\Big{]}.\] (A.3)
Recall that \(|\frac{\delta F}{\delta p}(m_{t},x)|+|u(t,x)|\leq C_{T}(1+|x|^{2})\) on \([0,T]\times\mathbb{R}^{d}\), so for all \(t\leq T\) we have
\[\int_{0}^{t}\mathbb{E}\left[\Big{|}\Big{(}\frac{\delta F}{\delta p}(m_{t-s},x +\sigma W_{s})-\gamma u(t-s,x+\sigma W_{s})\Big{)}\psi(t-s,x+\sigma W_{s}) \frac{\sigma^{-1}W_{s}}{s}\Big{|}\right]ds<\infty.\]
As a result \(\nabla\psi\) exists and is equal to
\[\nabla\psi(t,x)=\mathbb{E}\Big{[}-\frac{1}{2}\int_{0}^{t}\Big{(} \frac{\delta F}{\delta p}(m_{t-s},x+\sigma W_{s})-\gamma u(t-s,x+\sigma W_{s}) \Big{)}\psi(t-s,x+\sigma W_{s})\frac{\sigma^{-1}W_{s}}{s}ds\\ +\nabla\psi(0,x+\sigma W_{t})\Big{]}.\]
Therefore we obtain \(|\nabla\psi(t,x)|\leq C_{T}(1+|x|^{2})\), and
\[|\nabla u(t,x)|=2\left|\frac{\nabla\psi(t,x)}{\psi(t,x)}\right|\leq C_{T}(1+| x|^{2})e^{C_{T}(1+|x|^{2})}.\]
In particular we have \(\mathbb{E}\big{[}|\nabla u(t,x+\sigma W_{s})|^{2}\big{]}<\infty\) for \(s\) small enough. So for \(r<t\) and \(r\) close enough to \(t\) we have
\[\nabla\psi(t,x)=\mathbb{E}\Big{[}-\frac{1}{2}\int_{0}^{t-r}\nabla\Big{(}\big{(} \frac{\delta F}{\delta p}(m_{t-s},\cdot)-\gamma u\big{)}\psi(t-s,\cdot)\Big{)} (x+\sigma W_{s})ds+\nabla\psi(r,x+\sigma W_{t-r})\Big{]}.\]
Further note that
\[\int_{0}^{t-r}\mathbb{E}\left[\left|\nabla\Big{(}\big{(}\frac{\delta F}{\delta p}(m_{ t-s},\cdot)-\gamma u\big{)}\psi(t-s,\cdot)\Big{)}(x+\sigma W_{s})\frac{ \sigma^{-1}W_{s}}{s}\right|\right]ds<\infty,\]
so \(\nabla^{2}\psi\) exists and is equal to
\[\nabla^{2}\psi(t,x)=\mathbb{E}\Big{[}-\frac{1}{2}\int_{0}^{t-r} \nabla\Big{(}\big{(}\frac{\delta F}{\delta p}(m_{t-s},\cdot)-\gamma u\big{)} \psi(t-s,\cdot)\Big{)}(x+\sigma W_{s})\frac{\sigma^{-1}W_{s}}{s}ds\\ +\nabla\psi(r,x+\sigma W_{t-r})\frac{\sigma^{-1}W_{t-r}}{t-r} \Big{]}.\]
Further, in order to compute the time partial derivative, recall eq. (A.3). Since we have already proved that \(x\mapsto\psi(t,x)\) belongs to \(C^{2}\), it follows from Itô's formula that
\[\psi(t,x)-\psi(r,x)=\mathbb{E}\Big{[}\int_{0}^{t-r}\Big{(}\frac{ \sigma^{2}}{2}\Delta\psi(r,x+\sigma W_{s})\\ -\frac{1}{2}\Big{(}\frac{\delta F}{\delta p}(m_{t-s},x+\sigma W_{ s})-\gamma u(t-s,x+\sigma W_{s})\Big{)}\psi(t-s,x+\sigma W_{s})\Big{)}ds \Big{]}.\]
Then clearly \(\partial_{t}\psi\) exists and \(\psi\) satisfies eq. (A.1). Moreover, using the same argument we can easily show that \(\nabla^{3}\psi\) and \(\partial_{t}\nabla\psi\) exist and are continuous on \(Q\).
### Gaussian Bounds
The aim of this section is to establish a crucial technical result which ensures that if a family of probability densities is of the form \(e^{-(v+w)}\) with \(v\) uniformly convex and \(w\) uniformly Lipschitz, then it admits uniform Gaussian bounds.
**Lemma A.4**.: _Let \(\mathcal{T}\) be an index set. For \(t\in\mathcal{T}\), we assume that the family of probability measures \(p_{t}=e^{-(v_{t}+w_{t})}\) satisfies the following conditions:_
1. _For some_ \(C>c>0\) _we have_ \(cI_{d}\leq\nabla^{2}v_{t}\leq CI_{d}\) _for all_ \(t\in\mathcal{T}\)_._
2. _The vectors_ \(\nabla v_{t}(0)\) _are uniformly bounded, i.e.,_ \(\sup_{t\in\mathcal{T}}|\nabla v_{t}(0)|<\infty\)_._
3. _The gradients_ \(\nabla w_{t}\) _are uniformly bounded, i.e.,_ \(\sup_{t\in\mathcal{T}}\|\nabla w_{t}\|_{\infty}<\infty\)_._
_Then there exist \(\underline{c},\overline{c},\underline{C},\overline{C}>0,\) such that for all \(t\in\mathcal{T},\)\(x\in\mathbb{R}^{d},\)_
\[\underline{C}e^{-\underline{c}|x|^{2}}\leq p_{t}(x)\leq\overline{C}e^{- \overline{c}|x|^{2}}\]
Proof.: As the potential decomposes as \(v_{t}+w_{t}\), we decompose the probability measure \(p_{t}=q_{t}r_{t}\) with \(q_{t}=\exp\left(-v_{t}\right)/Z_{1,t}\) and \(r_{t}=\exp\left(-w_{t}\right)/Z_{2,t}\), where the normalizing constants are chosen such that \(\int q_{t}=1\) and \(\int q_{t}r_{t}=1\).
We first derive some estimates on \(v\) and the corresponding measure \(q\). From Assumption (i), the following inequalities hold:

\[|\nabla v_{t}(x)-\nabla v_{t}(0)|\,|x|\geq(\nabla v_{t}(x)-\nabla v_{t}(0)) \cdot x\geq c\,|x|^{2}.\]
For each \(t\in\mathcal{T}\), let \(x_{t}\) be the unique \(x\) such that \(\nabla v_{t}\left(x\right)=0\), _i.e._, \(v_{t}\) is minimized at \(x_{t}\). Plugging \(x_{t}\) into the inequality above, we obtain \(\left|\nabla v_{t}\left(0\right)\right|\left|x_{t}\right|\geq c\left|x_{t} \right|^{2}\). Thus, in view of Assumption (ii), \(x_{t}\) is bounded in \(\mathbb{R}^{d}\), _i.e._,
\[\sup_{t\in\mathcal{T}}\left|x_{t}\right|<+\infty.\] (A.4)
Denote \(\tilde{v}_{t}\left(x\right)=v_{t}\left(x\right)-v_{t}\left(x_{t}\right)\) and \(\tilde{Z}_{1,t}=\int\exp\left(-\tilde{v}_{t}\right)\). We have by definition \(q_{t}=\exp\left(-\tilde{v}_{t}\right)/\tilde{Z}_{1,t}\) and \(\tilde{v}_{t}\left(x_{t}\right)=0\) as well as \(\nabla\tilde{v}_{t}\left(x_{t}\right)=0\). From the estimates on second-order derivatives, it is immediate that
\[\frac{1}{2}c\left|x-x_{t}\right|^{2}\leq\tilde{v}_{t}\left(x \right) \leq\frac{1}{2}C\left|x-x_{t}\right|^{2},\] \[\left(\frac{c}{2\pi}\right)^{d/2}\leq\tilde{Z}_{1,t}^{-1} \leq\left(\frac{C}{2\pi}\right)^{d/2}\]
so that
\[\left(\frac{c}{2\pi}\right)^{d/2}\exp\left(-\frac{C}{2}\left|x-x_{t}\right|^{2 }\right)\leq q_{t}\leq\left(\frac{C}{2\pi}\right)^{d/2}\exp\left(-\frac{c}{2} \left|x-x_{t}\right|^{2}\right).\] (A.5)
Now we estimate the \(r\) part. Denote \(\tilde{w}_{t}\left(x\right)=w_{t}\left(x\right)-w_{t}\left(x_{t}\right)\) and \(\tilde{Z}_{2,t}=\int q_{t}\exp\left(-\tilde{w}_{t}\right)\). We have by definition \(r_{t}=\exp\left(-\tilde{w}_{t}\right)/\tilde{Z}_{2,t}\) and \(\tilde{w}_{t}\left(x_{t}\right)=0\). Thanks to Assumption (iii), we know that \(\nabla w_{t}=\nabla\tilde{w}_{t}\) is uniformly bounded by some constant, denoted \(L\). Therefore it holds
\[-L\left|x-x_{t}\right|\leq\tilde{w}_{t}\left(x\right)\leq L\left|x-x_{t}\right|\] \[\int q_{t}\exp\left(-L\left|x-x_{t}\right|\right)\leq\tilde{Z}_{2,t}\leq\int q_{t}\exp\left(L\left|x-x_{t}\right|\right)\]
In particular, in view of eq. (A.5) and eq. (A.4), it holds \(c\leq\tilde{Z}_{2,t}\leq C\) for some \(c,C>0.\) We obtain that
\[C^{-1}\exp\left(-L\left|x-x_{t}\right|\right)\leq r_{t}\leq c^{-1}\exp\left(L \left|x-x_{t}\right|\right).\] (A.6)
Since \(p_{t}=q_{t}r_{t}\), the conclusion follows immediately from eq. (A.4), eq. (A.5) and eq. (A.6).
### Reflection Coupling
In the section we recall the reflection coupling technique developped in [7, 8] and use it to estimate the \(\mathcal{W}_{1}\)-distance between the marginal laws of two diffusion processes with drift \(b\) and \(b+\delta b\).
**Assumption A.5**.: _The drifts \(b\) and \(\delta b\) satisfy_
* \(b\) _and_ \(\delta b\) _are Lipschitz in_ \(x\)_, i.e., there is a constant_ \(L>0\) _such that_ \[\left|b(t,x)-b(t,y)\right|+\left|\delta b(t,x)-\delta b(t,y)\right|\leq L|x-y |,\quad\text{for all }t\in[0,T],\,x,y\in\mathbb{R}^{d};\]
* _there exists a continuous function_ \(\kappa:(0,\infty)\to\mathbb{R}\) _such that_ \(\limsup_{r\to\infty}\kappa(r)<0\)_,_ \(\int_{0}^{1}r\kappa^{+}(r)dr<\infty\) _and_ \[(x-y)\cdot\left(b(t,x)-b(t,y)\right)\leq\kappa(|x-y|)|x-y|^{2},\quad\text{for all }t\in[0,T]\,\,x,y\in\mathbb{R}^{d}.\]
**Remark A.6**.: _If \(b(t,x)=-(\alpha(t,x)+\nabla\beta(t,x))\) with \(\alpha\) bounded and \(\beta\) strictly convex in \(x\) uniformly on \(t,\) i.e.,_
\[(\nabla\beta(t,x)-\nabla\beta(t,y))\cdot(x-y)\geq\eta\left|x-y\right|^{2},\]
_then the function \(b\) satisfies Assumption A.5 (ii) with \(\kappa(r)=\frac{2\|\alpha\|_{\infty}}{r}-\eta.\)_
**Theorem A.7**.: _Let Assumption A.5 hold. Consider the following two diffusion processes_
\[dX_{t}=b(t,X_{t})dt+\sigma dW_{t},\qquad dY_{t}=(b+\delta b)(t,Y_{t})dt+\sigma dW_ {t}^{\prime},\]
_and denote their marginal distributions by \(p_{t}^{X}:=\mathcal{L}(X_{t})\) and \(p_{t}^{Y}:=\mathcal{L}(Y_{t})\). Then we have_
\[\mathcal{W}_{1}(p_{t}^{X},p_{t}^{Y})\leq\ell e^{-c\sigma^{2}t}\Big{(} \mathcal{W}_{1}(p_{0}^{X},p_{0}^{Y})+\int_{0}^{t}e^{c\sigma^{2}s}\mathbb{E} \big{[}|\delta b(s,Y_{s})|\big{]}ds\Big{)},\quad\text{for all }t\in[0,T],\] (A.7)
_where the constants \(\ell\) and \(c\) only depend on the function \(\kappa(\cdot)/\sigma^{2}\)._
Proof.: We first recall the reflection-synchronuous coupling introduced in [8]. Introduce Lipschitz functions \(\mathrm{rc}:\mathbb{R}^{d}\times\mathbb{R}^{d}\mapsto[0,1]\) and \(\mathrm{sc}:\mathbb{R}^{d}\times\mathbb{R}^{d}\mapsto[0,1]\) satisfying
\[\mathrm{rc}^{2}(x,y)+\mathrm{sc}^{2}(x,y)=1.\]
Fix a small constant \(\eta>0\). We impose that \(\mathrm{rc}(x,y)=1\) whenever \(|x-y|>\eta\) and \(\mathrm{rc}(x,y)=0\) if \(|x-y|\leq\eta/2\). The so-called reflection-synchronuous coupling is the strong solution to the following SDE system:
\[dX_{t} =b(t,X_{t})dt+\mathrm{rc}(X_{t},Y_{t})\sigma dW_{t}^{1}+\mathrm{ sc}(X_{t},Y_{t})\sigma dW_{t}^{2}\] \[dY_{t} =(b+\delta b)(t,Y_{t})dt+\mathrm{rc}(X_{t},Y_{t})(I-2e_{t}\langle e _{t},\cdot\rangle)\sigma dW_{t}^{1}+\mathrm{sc}(X_{t},Y_{t})\sigma dW_{t}^{2},\]
where \(W^{1},W^{2}\) are \(d\)-dimension independent standard Brownian motion and
\[e_{t}=\frac{X_{t}-Y_{t}}{|X_{t}-Y_{t}|}\quad\text{for }X_{t}\neq Y_{t},\quad \text{and}\quad e_{t}=u\quad\text{for }X_{t}=Y_{t}\]
with \(u\in\mathbb{R}^{d}\) a fixed arbitrary unit vector. We denote by \(\mathrm{rc}_{t}:=\mathrm{rc}(X_{t},Y_{t})\) and define \(r_{t}:=|X_{t}-Y_{t}|\). Observe that
\[dr_{t}=\langle e_{t},b(t,X_{t})-b(t,Y_{t})-\delta b(t,Y_{t})\rangle dt+2 \mathrm{rc}_{t}\sigma dW_{t}^{\circ},\]
where \(W^{\circ}\) is a one-dimension standard Brownian motion, see [7] for details.
Next we construct an important auxiliary function \(f\) as in [8, Section 5.3]. First define two constants:
\[R_{1} =\inf\{R\geq 0:\ \kappa(r)\leq 0,\text{ for all }r\geq R\}\] \[R_{2} =\inf\{R\geq R_{1}:\ \kappa(r)R(R-R_{1})\leq-4\sigma^{2},\text{ for all }r\geq R\}.\]
Further define
\[\varphi(r)=e^{-\frac{1}{2\sigma^{2}}\int_{0}^{r}u\kappa^{+}(u)du},\quad\Phi(r )=\int_{0}^{r}\varphi(u)du,\quad g(r)=1-\frac{c}{2}\int_{0}^{r}\Phi(u)\varphi( u)^{-1}du,\]
where the constant \(c=\left(\int_{0}^{R_{2}}\Phi(r)\varphi(r)^{-1}\right)^{-1}\), and eventually define the auxiliary function
\[f(r)=\int_{0}^{r}\varphi(u)g(u\wedge R_{2})du.\]
One easily checks that
\[r\varphi(R_{1})\leq\Phi(r)\leq 2f(r)\leq 2\Phi(r)\leq 2r,\quad\text{for all }r>0.\]
Note also that \(f\) is increasing and concave. In addition, \(f\) is linear on \([R_{2},+\infty)\), twice continuously differentiable on \((0,R_{2})\) and satisfies
\[2\sigma^{2}f^{\prime\prime}(r)\leq-r\kappa^{+}(r)f^{\prime}(r)-c\sigma^{2}f( r),\quad\text{for all }r\in(0,\infty)\backslash\{R_{2}\}.\]
This inequality follows easily by direct computation on \([0,R_{2})\) and we refer to [8] for a detailed justification on \((R_{2},+\infty)\). Then we have by Ito-Tanaka formula that
\[df(r_{t})\leq\left(f^{\prime}_{-}(r_{t})\langle e_{t},b(t,X_{t})-b(t,Y_{t})- \delta b(t,Y_{t})\rangle+2\sigma^{2}\mathrm{rc}_{t}^{2}f^{\prime\prime}(r_{t}) \right)dt+2\mathrm{rc}_{t}f^{\prime}_{-}(r_{t})\sigma dW_{t}^{\circ}.\]
Further note that \(f^{\prime}_{-}(r)\in[\varphi(R_{1}),1]\) and
\[\langle e_{t},b(t,X_{t})-b(t,Y_{t})\rangle\leq 1_{r_{t}<\eta}|b|_{Lip}\eta+1_{r _{t}\geq\eta}r_{t}\kappa^{+}(r_{t}).\]
Therefore we have
\[de^{c\sigma^{2}t}f(r_{t})\leq e^{c\sigma^{2}t}\Big{(}2\mathrm{rc }_{t}f^{\prime}_{-}(r_{t})\sigma dW_{t}^{\circ}+|\delta b(t,Y_{t})|dt+1_{r_{t} <\eta}(c\sigma^{2}f(r_{t})+|b|_{Lip}\eta)dt\\ +1_{r_{t}\geq\eta}\big{(}c\sigma^{2}f(r_{t})+r_{t}\kappa^{+}(r_{t })f^{\prime}(r_{t})+2\sigma^{2}f^{\prime\prime}(r_{t})\big{)}dt\Big{)}.\]
It follows from appendix A.3 that
\[de^{c\sigma^{2}t}f(r_{t})\leq e^{c\sigma^{2}t}\left(2\mathrm{rc}_{t}f^{\prime} _{-}(r_{t})\sigma dW_{t}^{\circ}+\Big{(}|\delta b(t,Y_{t})|+(c\sigma^{2}+|b|_{ Lip})\eta\Big{)}dt\right).\]
Taking expectation on both sides, we obtain
\[\mathbb{E}[e^{c\sigma^{2}t}f(r_{t})-f(r_{0})]\leq\int_{0}^{t}e^{c\sigma^{2}s} \Big{(}\mathbb{E}\big{[}|\delta b(s,Y_{s})|\big{]}+(c\sigma^{2}+|b|_{Lip}) \eta\Big{)}ds.\]
Again due to the construction of \(f\) we have
\[\mathcal{W}_{1}(p_{t}^{X},p_{t}^{Y}) \leq\mathbb{E}[r_{t}]\leq 2\varphi(R_{1})^{-1}\mathbb{E}[f(r_{t})]\] \[\leq 2\varphi(R_{1})^{-1}e^{-c\sigma^{2}t}\left(\mathbb{E}[f(r_{0 })]+\int_{0}^{t}e^{c\sigma^{2}s}\Big{(}\mathbb{E}\big{[}|\delta b(s,Y_{s})| \big{]}+(c\sigma^{2}+|b|_{Lip})\eta\Big{)}ds\right)\] \[\leq 2\varphi(R_{1})^{-1}e^{-c\sigma^{2}t}\left(\mathcal{W}_{1}(p _{0}^{X},p_{0}^{Y})+\int_{0}^{t}e^{c\sigma^{2}s}\Big{(}\mathbb{E}\big{[}| \delta b(s,Y_{s})|\big{]}+(c\sigma^{2}+|b|_{Lip})\eta\Big{)}ds\right).\]
By passing the limit \(\eta\to 0\), we finally obtain the estimate (A.7).
|
2304.10705 | Graph based Label Enhancement for Multi-instance Multi-label learning | Multi-instance multi-label (MIML) learning is widely applicated in numerous
domains, such as the image classification where one image contains multiple
instances correlated with multiple logic labels simultaneously. The related
labels in existing MIML are all assumed as logical labels with equal
significance. However, in practical applications in MIML, significance of each
label for multiple instances per bag (such as an image) is significant
different. Ignoring labeling significance will greatly lose the semantic
information of the object, so that MIML is not applicable in complex scenes
with a poor learning performance. To this end, this paper proposed a novel MIML
framework based on graph label enhancement, namely GLEMIML, to improve the
classification performance of MIML by leveraging label significance. GLEMIML
first recognizes the correlations among instances by establishing the graph and
then migrates the implicit information mined from the feature space to the
label space via nonlinear mapping, thus recovering the label significance.
Finally, GLEMIML is trained on the enhanced data through matching and
interaction mechanisms. GLEMIML (AvgRank: 1.44) can effectively improve the
performance of MIML by mining the label distribution mechanism and show better
results than the SOTA method (AvgRank: 2.92) on multiple benchmark datasets. | Houcheng Su, Jintao Huang, Daixian Liu, Rui Yan, Jiao Li, Chi-man Vong | 2023-04-21T02:24:49Z | http://arxiv.org/abs/2304.10705v1 | # Graph based Label Enhancement for Multi-instance Multi-label learning
###### Abstract
Multi-instance multi-label (MIML) learning ignores label significance, such as the image classification where one image contains multiple semantics as instances, correlated with multiple labels of different significance in real are reduced to the logical labels. Ignoring labeling significance will greatly lose the semantic information of the object, so that MIML is not applicable in complex scenes with a poor learning performance. However, existing LE methods focuses on single instance tasks in which the inter-instance information in MIML is ignored, numerous feature spaces of MIML makes traditional LE difficult to mine implicit information simultaneously. To this end, this paper proposed a novel MIML framework based on graph label enhancement, namely GLEMIML, to improve the classification performance of MIML by leveraging label significance. GLEMIML first recognizes the correlations among instances by establishing the graph and then migrates the implicit information mined from the feature space to the label space via nonlinear mapping, thus recovering the label significance. Finally, GLEMIML is trained on the enhanced data through matching and interaction mechanisms. GLEMIML (AvgRank: 1.44) can effectively improve the performance of MIML by mining the label distribution mechanism and show better results than the SOTA method (AvgRank: 2.92) on multiple benchmark datasets.
1Institute of Collaborative Innovation, University of Macau, Macau S.A.R
2Faculty of Science and Technology, University of Macau, Macau S.A.R
3College of Information Engineering, Sichuan Agricultural University, China
{mc25695, hjt.violler }@connect.um.edu.mo, {202105787,
202105891,202005852}@stu.sicau.edu.cn,[email protected]
## 1 Introduction
Objects in the real world are often polysemantic, consisting of multiple instances associated with multiple labels [20]. Traditional supervised learning can be regarded as the degradation of objects with complex semantics, where useful information is lost at the representation stage. Nevertheless, in the multi-instance multi-label learning (MIMLL) framework, the objects can correspond to a bag of instances with a set of labels. In MIML, numerous practical problems can be properly formalized [23]. For example, multi-structural domain multifunctional proteins exist in nature that are formed by aggregating multiple structural domains. Structural domains may perform their functions independently or perform multiple functions in concert with neighbouring structural domains [26]. In the MIML, different structural domains can be divided into different instances and form the multifunctionality of a protein into a set of functional labels that can more effectively represent real-world problems.
In MIML tasks, simplified logical labels such as {0,1} are often used for labeling, thus losing more abundant semantic information. As shown in Figure 1, the significance of logical labels in each bag differs significantly. The above situations abound in practical applications. If ignoring the influence of labeling significance, MIML will be inaccurate and ineffective. To this end, it is urgent to leverage the labeling significance with richer semantic information from the existing logical labels of MIML.
Figure 1: An example of label enhancement for MIML learning
Label enhancement (LE) is a tool to efficiently mine label significance (i.e., also known as label distribution[14]). Nevertheless, existing LE methods mainly focus on single instance multiple labels (SIML) tasks [13]. If applying LE directly to MIML, a downgrade strategy may be required to convert MIML tasks to SIML tasks, thus using LE. However, this will lead to a significant loss of important implicit information based on inter-instance correlations. In addition, there may be numerous redundant features in MIML, resulting in traditional label enhancement with noisy information, thus cannot effectively utilize the recovered label distribution information. With difficulty in label quantification, it is challenging to construct the label distribution data artificially. Consequently, it is imminent to mine the potential label distribution information of MIML effectively.
To this end, this paper applies the features embedding for in-bag instances to mine the inter-instances correlation via Laplace matrices. To cope with the large and indeterminate feature space, the topological information of the feature space is mined by mapping it to a common subspace. Subsequently, the migrated information is used to recover the label distribution, and the inter-label correlation information is mined using Laplace matrices in the label distribution space. Finally, to avoid the label space recovered using the large feature space not being effectively exploited by the MIML classifier, a matching interaction mechanism is used to match LE with a multi-instance label distribution classifier through a hybrid label loss function. With the lack of a multi-instances label distribution dataset, we apply this to the MIML dataset and compare it with various approaches.
The main contributions of this paper are as follows: 1) a new MIML label enhancement method, GLEMIML, is proposed, to provide new insights into label enhancement in a multi-instance framework. 2) A new method formining inter-instances correlation information is proposed to mine different instance by projecting them in a subspace utilizing successive embeddings.
## 2 Related Work
### Multi-instance Multi-label learning
MIML learning has been extensively studied in recent years. Compared with traditional frameworks, MIML can solve problems more naturally in complex objects with multiple-semantics [15]. At the same time, MIMLBOOST and MIMLSVM algorithms based on degradation strategy and D-MIMSVM algorithm based on regularization framework are proposed for the first time. After MIML framework was proposed, a MIML classification network was proposed by using MultiLayer Perceptrons (MLP) and optimized by using Back-Propagation (BP), which was called MIMLNN [13]. By improving on MIMLNN, combined with three Hausdorff distance metric optimizations, it is proposed that EnMILNN can show excellent performance in various protein data sets [21]. DeepMIML [12] utilizes deep learning research to automatically locate key input forms that trigger labels while retaining MIML's instance-label relationship discovery capabilities. By combining CNN with MIML, a deep multimodal CNN for MIML image classification is proposed. By using the CNN architecture to automatically generate MIML instance representations, labeling is grouped by subsequent layers to achieve label correlation. Combined with label group context multiple Modal instances come from campuses to distinguish different groups of visually similar objects [20].
### Label Enhancement
Label enhancement aims to recover the label distribution from logical labels in the training set to guide classifiers' learning effectively. Graph Laplacian label enhancement (GLLE) [17] exploits the general topological information of the feature space and the correlation between labels to mine the hidden labeling significance. TMV-LE [13] utilizes the factorization of tensors to adopt general representation with multi-view joint mining for a more comprehensive topology, which is to obtain the joint subspace of multi-view and migrate it to the label space. FLEM [13] designed a matching and interaction mechanism, which completed the label distribution and prediction model correspondence with an integrated training process of LE. Label enhancement with Sample Correlations (LESC) uses the sample low-rank representation of the feature space. Generalized Label Enhancement with Sample Correlations (gLESC) uses tensor multi-rank minimization to explore the sample correlation in the feature space and label space [13]. The Label Distribution based Confidence Estimation (LDCE) [10] estimates the confidence of the observed label, which cleans the boundary between the label and the noise label so that the reliable label can recover the label distribution.
## 3 Methodology
### Notations Definition And Overall Framework
The related concepts and notations used in this paper are listed as follows. Given a MIML dataset consisting of the global feature space of all bags that are denoted as \(X\). The feature space of \(i\)-_th_ bag can be denoted as \(x_{p}\) each bag has a different number of instances, \(k\)-_th_ instance of \(i\)-_th_ bag can be denoted as \(x_{i}^{k}\). The feature space of \(i\)-_th_ bag can be denoted as \(x_{i}\)=\(x_{i}^{l}\),...,\(x_{i}^{k}\), so the global feature space of all bags \(X\) also can be expressed as \(\bar{X}\)={\(l^{\prime}_{1}\),...,\(x_{i}^{k}\)}...{\(x_{i}^{l}\)}. Meanwhile, the original logical label space corresponding to the whole is \(L\), the label space of \(i\)-_th_ bag can be denoted as \(l_{i}\), \(t\)-_th_ label of \(i\)-_th_ bag can be denoted as \(l^{\prime}_{i}\), the logical label space of the \(i\)-_th_ bag can be denoted as \(l_{i}\)=\(l^{\prime}_{i}l^{\prime}_{i}\),\(l^{\prime}_{i}\),\(\ldots\),\(l^{\prime}_{i}\), therefore, the global logical label space of all bags also can be expressed as \(L\)={\(l^{\prime}_{i}\),...,\(l^{\prime}_{i}\)}...{\(l^{\prime}_{i}\)}...{\(l^{\prime}_{i}\)}. The global label distribution space of all bags is D, where label distribution space of the \(i\)-_th_ bag can be denoted as \(d_{p}\), therefore, the global label distribution space of all bags can also be represented as \(D\)={\(d^{\prime}_{1}\),...,\(d^{\prime}_{1}\)}...{\(d^{\prime}_{1}\)}...{\(d^{\prime}_{n}\)}.
The GLEMIL model is divided into three parts. The first part is a graph-based label enhancement. The second part
is a MIML classifier with a simple two-layer fully connected network. Moreover, the third part is an interaction loss optimization for MIML enhancers and MIML Classifiers. To make the results of label enhancement from being effectively utilized by classifiers, GLEMIML uses the interactive loss optimization framework to guide the training of label enhancement using a MIML classifier. The flow chart of the model is shown in Figure 2.
### Label Enhancement
In MIML, with difficulty and cost in quantifying the label distribution, a large number of objects cannot be separated from instances in practical applications, such as protein molecules, complex images, etc., so one bag is often assigned with logic labels. The label distribution information can be recovered by mining relevant information in the feature space. In GLEMIML, label distribution information is recovered from logical labels by mining the correlation between labels[11], the correlation between instances and the topology information of feature space.
**Instance Relevance Mining**
In MIML tasks, many scenarios require multiple instances acting on same labels [13]. Compared with single instance multiple labelstask, the sample of MIML may consist of multiple instances, Multiple instances often have correlations. However, traditional LE algorithms do not have the concept of multiple instances, and the correlation information between instances cannot be mined effectively.
At the same time, multiple instances often bring a large and redundant feature space, and directly mining the correlation of instances in the original feature space will bring a huge amount of computation and noise interference.
In order to effectively mine the correlation between instances, we first use the bag\(x_{i}\) to project the data into a low-dimensional space in units of instances, which can be expressed as
\[\sigma(x_{i})\!-\!\!\left\langle\sigma(x_{i}^{k}),\sigma(x_{i}^{2}),...,\sigma (x_{i}^{k})\right\rangle \tag{1}\]
where \(\sigma(x_{i})\) is a nonlinear projection that sequentially projects the instances in the bag into a low-dimensional feature space. In this way, according to the smoothing assumption [15], two instances that are close to each other in the low-dimensional feature space may have correlation to the same labels. Therefore, define \(\sigma(x_{i}^{k})\)to be a nonlinear mapping of the \(k\)-_th_ instance of the \(i\)-_th_ bag. If \(\sigma(x_{i}^{k})\) is the K-nearest neighbor of \(\sigma(x_{i}^{m})\) and \(\sigma(x_{i}^{m})\) also is the K-nearest neighbor (KNN) of \(\sigma(x_{i}^{k})\), then \(\sigma(x_{i}^{m})\) is connected to \(\sigma(x_{i}^{k})\). Therefore, any instances in the bag can be expressed as
\[a_{km}\!=\!\!\left\{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
instance; \(\omega_{2}\big{(}W(\mathcal{X})\big{)}\) is to mine the graph structure between instances and map it nonlinear to mine the correlation between instances; \(\omega_{3}\)_(Y)_ is the non-linear mapping of logical labels used to mine the correlation of label space. Finally, it is fused in the high dimensional space and migrated to the label space to recover the label distribution space.
**Label Correlation In Label Distribution**
Label correlation is also an implicit important information. In logical labels, the simplification is for {0,1} labels, and the correlation between labels is also simplified. For example, two labels may show different significance in the label distribution space, but in logical label space, they might all show up as 1, and their correlation is strengthened because of the simplification of logical labels. Thus, the correlation of relevant labels that havesimilar significance is diluted. Therefore, only mining label correlation in logical label space cannoeffectively mine label correlation.
And when the labels are recovered from logical labels to label distribution, the inter-labels correlation information is also recovered. The closer the two labels are, the closer the descriptiveness corresponding to the labels should be. Therefore, the label distribution can be labeled as
\[D\!=\!T(D)\!+\!\omega_{1}\big{(}\mathcal{X})\!+\!\omega_{2}\big{(}W(\mathcal{ X})\big{)}\!+\!\omega_{3}\big{(}\mathcal{Y}\big{)} \tag{5}\]
Among them _T(D)_, is to find the closest K related labels in the label distribution through KNN, and establish a connected graph between label.
### Optimization Framework
It is important to avoid the separation of the classifier and label enhancer, because in this way, the label distribution recovered by the label enhancer may not be an effective guide to MIML classification. Not only that, the prediction information of the classifier guides the training of the label enhancer.
In order to avoid the separation of label enhancer and classifier in the training process, we use matching and interaction mechanism to realize the interaction between label enhancement and classifier.
**Label Enhancement Optimization Framework**
To recover a more accurate label distribution, the recovered label distribution can help the MIML classifier simultaneously. We construct a hybrid loss function optimizer. The label enhancement loss function of the label enhancer is represented by the prediction loss of the MIML classifier, bag correlation loss, and the loss caused by the threshold strategy.
Inspired by the asymmetric focal loss [11][10], we use the same interaction label loss to construct the prediction loss of the MIML classifier to guide the optimization of label enhancement.
\[L_{CL}\!=\!\frac{l}{k}\!\sum_{j=l}^{k}\!\begin{cases}\big{(}I\!-\!p_{j}\big{)} ^{j+}\log\big{(}p_{j}^{*}\big{)}j\in\!\Omega_{pos}\\ \Big{(}p_{j}\Big{)}^{j^{*}}\log\big{(}I\!-\!p_{j}^{*}\big{)}j\in\!\Omega_{neg} \end{cases} \tag{6}\]
where \(p_{j}\) is the predicted output of the prediction model via sigmoid, and \(p_{j}^{*}\) is the predicted output of the label distribution loss estimate via the sigmoid operation. \(\Omega_{pos}\) is the set of related samples, and \(\Omega_{neg}\) is the set of uncorrelated samples. \(\gamma+\) and \(\gamma\)- are two hyperparameters used to control the loss function.
For label enhancement, according to the smoothness assumption, we assume that two models with similar feature spaces can perform similar label spaces.
\[Z_{g}\!=\!sim\!\big{(}x_{i}x_{j}\big{)} \tag{7}\]
_sim(a,b)_ is a metric similarity function, i.e., cosine similarity.
With similar feature spaces and topologies in two bags, the label distribution spaces of them could maintain similar distributions.
\[A_{g}\!=\!sim\!\big{(}d_{i}d_{j}\big{)} \tag{8}\]
Where, \(d_{i}\) and \(d_{j}\) are the corresponding label distribution values of \(x_{i}\) and \(x_{j}\).
Therefore, the loss function for bag correlation can be expressed as
\[L_{Sim}=\Big{(}\frac{\sum_{i}^{m}\sum(j_{i}\gamma^{\prime}d_{i})}{Z}\Big{)}^{2} \tag{9}\]
where \(Z\) is the number of bags.
Because the label enhancement is data preprocessing rather than data noise detection and correction, we assume that the labeling significance of relevant labels in logical labels must be greater than that of irrelevant labels. To avoid when after label enhancement, the contiguous values of relevant labels are lower than those of irrelevant labels. Therefore, in the training process, we must set a threshold to make relevant labels more descriptive than irrelevant labels.
The loss function based on the threshold strategy can be expressed as
\begin{table}
\begin{tabular}{c c c c c} Dataset & instances & labels & Instances per bag & Labels per bag \\ & & & (Mean\(\pm\) std.) & (Mean\(\pm\) std.) \\ \hline Text Data For MIML(Text) & 2000 & 7 & \(1.15\pm 0.37\) & \(3.56\pm 2.71\) \\ Image Data for MIML(Image) & 2000 & 5 & \(15.00\pm 0.00\) & \(1.24\pm 0.17\) \\ Geobacter Sulfurreducens(GS) & 379 & 320 & \(3.20\pm 1.21\) & \(3.14\pm 3.33\) \\ Azotobacter Yinelandiii(AV) & 407 & 340 & \(3.07\pm 1.16\) & \(4.00\pm 6.97\) \\ Haloarcula Marismortui(HM) & 304 & 234 & \(3.13\pm 1.09\) & \(3.25\pm 3.02\) \\ Pyrococcus Furiosus(PF) & 425 & 321 & \(3.10\pm 1.09\) & \(4.48\pm 6.33\) \\ Saccharomyces Cerevisiae(SC) & 3509 & 1566 & \(1.86\pm 1.36\) & \(5.89\pm 11.52\) \\ Caenorhabditis Elegans(CE) & 2512 & 940 & \(3.39\pm 4.20\) & \(6.07\pm 11.25\) \\ Drosophila Melanogaster(DM) & 2605 & 1035 & \(3.51\pm 3.49\) & \(6.02\pm 10.24\) \\ \hline \end{tabular}
\end{table}
Table 1: Statistics of the nine datasets.
\[L_{threshold}\!\!-\!\!\frac{l}{M}\!\!\sum_{i=l}^{M}max(max(d_{i}^{nese})\!-min(d_{i}^{ pos}),0) \tag{10}\]
among them, \(d_{i}^{pos}\) refers to the set of label distribution values corresponding to all relevant labels in the \(i\)-_th_ bag, and \(d_{i}^{neg}\) refers to the set of label distribution values corresponding to all irrelevant labels in the \(i\)-_th_ bag.
Therefore, our final optimization framework can be expressed as
\[L_{CLE}\!\!=\!\!\beta_{i}L_{CL}\!\!+\!\!\beta_{2}L_{Sim}\!\!+\!\beta_{3}L_{ threshold}\!\!-\!\!\sum_{i=l}\beta_{i}\!-\!I \tag{11}\]
among them, \(\beta_{1}\), \(\beta_{2}\) and \(\beta_{3}\) are a set of hyperparameters, which are used to optimize the ratio of the three strategies of classifier, threshold and similarity between bags.
**Classifier Optimization Framework**
While using the classification output to guide the training of label enhancement, it is also possible to apply the label distribution to guide the learning of the classifier. Compared with MIMIL's algorithm, leveraging label distribution can learn relevant features more effectively.
For the loss function that the MIML classifier uses label distribution to guide classification, we define it as follows
\[L_{DC}\!\!-\!\!\frac{l}{n}\!\sum_{i=l}^{n}\!(\sum_{j-l}^{k}d_{i}^{l}logd_{i}^ {l}\!\left(e^{\Sigma_{n-l}^{k}s^{l}\!-\!\!s_{i}^{l}}\right)) \tag{12}\]
among them, \(n\) is the number of bags, and \(k\) is the number of labels. \(s_{i}^{l}\) is the logical prediction output of the classifier for the \(i\)-_th_ bag. The uniform label distribution loss \(L_{DC}\) is a generalized form of cross-entropy with the upper bound of the cross-entropy loss, which can effectively help the model to achieve better convergence [22].
Additionally, the training mechanism of matching interaction makes the output of enhanced labeling cannot effectively guide the classifier learning at the beginning, so the loss of classification output and logical labeling is introduced to guide the learning of the classifier. Here we choose a binary cross-entropy loss function.
\[L_{LC}\!\!-\!\!\frac{l}{c}\!\sum_{j=l}^{c}\!\!\begin{cases}log\left(p_{j} \right)j\in\!\!\!\Omega_{pos}\\ log\left(l\!-\!p_{j}\right)j\in\!\!\!\Omega_{neg}\end{cases} \tag{13}\]
The final loss function of our proposed method can be obtained as follows
\begin{table}
\begin{tabular}{c|c|c c|c c c c} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Metrics} & \multicolumn{2}{c|}{LE} & \multicolumn{4}{c}{MIML} \\ \cline{3-8} & & GEEMMLML & MI-FLEM & MIMLN & MIMLSVM & EnMIMLNN & WEL & KASIR \\ \hline \multirow{2}{*}{Image} & HL\(\downarrow\) & **0.1650(1)** & 0.2480(5) & 0.2252(4) & 0.3408(7) & 0.1736(2) & 0.3275(6) & 0.1867(3) \\ & RL\(\downarrow\) & **0.1590(1)** & 0.2635(5) & 0.2513(4) & 0.4723(7) & 0.1900(3) & 0.4698(6) & 0.1780(2) \\ & mAP \(\uparrow\) & 0.7091(3) & 0.4938(7) & 0.7085(4) & 0.5084(6) & 0.7652(2) & 0.6132(5) & **0.8012(1)** \\ & Ma-Fl \(\uparrow\) & **0.6747(1)** & 0.4693(7) & 0.6111(4) & 0.6441(3) & 0.5975(5) & 0.5434(6) & 0.6617(2) \\ \hline \multirow{2}{*}{Text} & HL\(\downarrow\) & **0.0235(1)** & 0.0332(2) & 0.0384(3) & 0.1753(7) & 0.0457(5) & 0.1143(6) & 0.0420(4) \\ & RL\(\downarrow\) & **0.0097(1)** & 0.0156(2) & 0.0261(3) & 0.2273(7) & 0.0372(5) & 0.2130(6) & 0.02884(4) \\ & mAP \(\uparrow\) & **0.9467(1)** & 0.8980(5) & 0.9203(4) & 0.6668(7) & 0.9390(3) & 0.8568(6) & 0.9418(2) \\ & Ma-Fl \(\uparrow\) & **0.9380(1)** & 0.9119(3) & 0.8804(4) & 0.9201(2) & 0.8484(6) & 0.8371(7) & 0.8674(5) \\ \hline \multirow{2}{*}{GS} & HL\(\downarrow\) & **0.0089(1)** & 0.0098(3) & 0.0118(6) & 0.0111(5) & 0.0097(2) & 0.0126(7) & 0.0099(4) \\ & RL\(\downarrow\) & **0.2847(1)** & 0.3309(4) & 0.3415(5) & 0.2946(2) & 0.3175(3) & 0.7167(6) & 0.8724(7) \\ & mAP \(\uparrow\) & 0.2353(3) & 0.0911(6) & 0.2443(2) & 0.2251(4) & **0.3397(1)** & 0.2094(5) & 0.0221(7) \\ & Ma-Fl \(\uparrow\) & **0.0750(1)** & 0.0272(4) & 0.0004(7) & 0.0050(6) & 0.0669(2) & 0.04283(3) & 0.0147(5) \\ \hline \multirow{2}{*}{AV} & HL\(\downarrow\) & **0.0088(1)** & 0.0114(3) & 0.0126(4) & 0.0147(5) & 0.0107(2) & 0.0148(6) & 0.0150(7) \\ & RL\(\downarrow\) & 0.3271(2) & 0.3947(4) & 0.4095(5) & **0.2408(1)** & 0.3329(3) & 0.7655(6) & 0.9129(7) \\ & mAP & **0.2628(1)** & 0.1069(6) & 0.1805(5) & 0.2612(3) & 0.2623(2) & 0.1909(4) & 0.0330(7) \\ & Ma-Fl & **0.0789(1)** & 0.0645(2) & 0.0053(7) & 0.0064(6) & 0.0475(3) & 0.0445(4) & 0.0157(5) \\ \hline \multirow{2}{*}{HM} & HL\(\downarrow\) & **0.0109(1)** & 0.0128(3) & 0.0153(6) & 0.0142(4) & 0.0119(2) & 0.0162(7) & 0.0147(5) \\ & RL\(\downarrow\) & **0.2209(1)** & 0.3524(5) & 0.2834(4) & 0.2352(2) & 0.2644(3) & 0.6060(6) & 0.8529(7) \\ & mAP \(\uparrow\) & 0.3236(2) & 0.2434(6) & 0.2577(5) & 0.2890(3) & **0.4201(1)** & 0.2679(4) & 0.0541(7) \\ & Ma-Fl \(\uparrow\) & **0.1585(1)** & 0.1126(3) & 0.0072(7) & 0.0239(6) & 0.0732(4) & 0.1149(2) & 0.0430(5) \\ \hline \multirow{2}{*}{PF} & HL\(\downarrow\) & **0.0084(1)** & 0.0129(2) & 0.136(5) & 0.0154(4) & 0.0160(6) & 0.0187(7) & 0.0154(4) \\ & RL\(\downarrow\) & **0.2407(1)** & 0.3073(4) & 0.3493(5) & 0.2886(3) & 0.2687(2) & 0.6006(6) & 0.8420(7) \\ & mAP \(\uparrow\) & 0.2690(2) & 0.1159(6) & 0.2364(4) & 0.2384(3) & **0.3714(1)** & 0.2132(5) & 0.0657(7) \\ & Ma-Fl \(\uparrow\) & **0.1138(1)** & 0.0430(4) & 0.0062(7) & 0.0080(6) & 0.0734(2) & 0.0268(5) & 0.0468(3) \\ \hline \multirow{2}{*}{SC} & HL\(\downarrow\) & **0.0035(1)** & 0.0039(5) & 0.0041(5) & 0.0041(5) & 0.0036(2) & NA/A7 & 0.0039(3) \\ & RL\(\downarrow\) & **0.2151
\[L_{C}\!\!=\!\!\rho L_{LC}\!+\!\!(\textit{1-\rho})L_{DC} \tag{14}\]
where \(\rho\) is a hyperparameter that balances the logistic loss and label distribution loss.
## 4 Experiments
### Datasets & Features
In this section, nine real-world MIML datasets are used for experimental comparisons, including one image dataset for MIML [1], one text dataset for MIML [1], and other seven protein MIML datasets [1]. We divide the data set into the training set, test set and verification set according to 7:2:1, and the statistics of the final data set are given in Table 1.
### Comparing Algorithms And Evaluation
Five state-of-the-art MIML algorithms include MIMLNN [3], MIMLSVM [1], MIMLWL [11], KASIR[12], EnMIMLNN [21] are selected for comparisons. Additionally, since there are no related MIML algorithms considering label enhancement, to prove the effectiveness of our proposed GLEMIML, one representative label enhancement algorithm namely FLEM [1] is also used for comparison, which is trained by the matching interaction mechanisms will expanding all examples into one-dimensional vectors in order as input, thus making FLEM can achieve MIML using label enhancement with namely MI-FLEM. All the parameters are set by default parameters of the original paper.
Due to the limitations, the four most commonly used evaluation metrics in label enhancement are selected for experimental analysis, including Hamming Loss (HL), Ranking Loss (RL), macro-Average Precision (mAP), and Ma-Fl [11][1][1], The smaller value the first two metrics, the better performances can be obtained. And the larger value of the latter two metrics, the better performance.
### Comparing Experimental Results
As shown in Table 2, the best performance is marked in bold. Meanwhile, we also show the results of the average rank of each comparing method. By analyzing the experimental results, we can get the following conclusions. 1) Among the 36 cases (9 datasets \(\times\) 4 evaluation metrics), our model has reached the first rank in the overall ranking, and it outperforms the other comparing algorithms in most cases 2) Compared with the traditional MIML algorithm, the two MIML algorithms by considering label enhancement have reached the first or third place in the comprehensive ranking, which shows that effectiveness and significance of leveraging label significances in the MIML task. 3) Nevertheless, compared with the FLEM, our proposed model shows better performance. With a more complex feature space in MIML, FLEM cannot fully and effectively mine the hidden information from the feature space, while GLEMIML fully exploits the latent information to supply a more effective and accurate model of label enhancement, thus achieving more satisfactory performances.
### Ablation Study
Ablation experiments of GLEMIML on two representative datasets are further conducted to demonstrate the effectiveness of each step in our model. The experimental results are shown in Table 3.
GLEMIML-A is a model that chooses a single-layer fully connected neural network as a classifier; GLEMIML-B is a model that chooses a three-layer fully connected neural network as a classifier; GLEMIML-Ccancels the interaction between instances in the model. Through the comparison of GLEMIML with GLEMIML-A and GLEMIML-B, although the selection of a single-layer fully connected classifier can also guide the learning of MIML, simple linear classification cannot achieve more complex classification tasks. When the number of layers of the classifier exceeds two layers, although multiple layers may bring relatively better extraction results, there are no longer better benefits while with a cost of converging not easily. Compared with GLEMIML-C without considering the instance's correlation, GLEMIML fully mined the correlation between instances, thus improving the effectiveness of label enhancement for MIML tasks.
## 5 Conclusions
This paper proposes a novel joint MIML framework based on graph label enhancement, namely GLEMIML, to unravel the problems in which ignoring labeling significance easily results in ineffective learning problems in MIML. GLEMIL uses the correlation graph structure to mine the correlation between instances to realize the label distribution by migrating the feature topology information to the label space. Subsequently, by utilizing a MIML classifier to guide the training of LE through the matching and interaction mechanism, GLEMIML can achieve more effective and accurate performances by leveraging label distribution. The experimental
\begin{table}
\begin{tabular}{c|c|c c} \hline \hline Scenario & Metrics & Image & SC \\ \hline GLEMIML & HL\(\downarrow\) & 0.1650 & 0.0035 \\ & RL\(\downarrow\) & 0.1590 & 0.2150 \\ & mAP\(\uparrow\) & 0.7091 & 0.0428 \\ & Ma-F1\(\uparrow\) & 0.6747 & 0.0123 \\ \hline GLEMIML-A & HL\(\downarrow\) & 0.2650 & 0.0037 \\ & RL\(\downarrow\) & 0.3072 & 0.2503 \\ & mAP\(\uparrow\) & 0.4451 & 0.0147 \\ & Ma-F1\(\uparrow\) & 0.4156 & 0.0004 \\ \hline GLEMIML-B & HL\(\downarrow\) & 0.1700 & 0.0032 \\ & RL\(\downarrow\) & 0.1735 & 0.2261 \\ & mAP\(\uparrow\) & 0.6930 & 0.0492 \\ & Ma-F1\(\uparrow\) & 0.6388 & 0.0030 \\ \hline GLEMIML-C & HL\(\downarrow\) & 0.1845 & 0.0037 \\ & RL\(\downarrow\) & 0.1877 & 0.2429 \\ & mAP\(\uparrow\) & 0.6338 & 0.0380 \\ & Ma-F1\(\uparrow\) & 0.5125 & 0.0089 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation experiment
results on multiple datasets prove the superiority of GLEMIML on the MIML task. GLEMIML (AvgRank: 1.44) performed much better in the data set than MI-FLEM(Avg. Rank: 4.19) compared to improved SML markup enhancement. Compared with the multi-example multi-label algorithm, GLEMIML(AvgRank: 1.44) is also much higher than the optimal model EnMIMLNN(Avg. Rank: 2.92).
|
2305.18070 | Forensic Video Steganalysis in Spatial Domain by Noise Residual
Convolutional Neural Network | This research evaluates a convolutional neural network (CNN) based approach
to forensic video steganalysis. A video steganography dataset is created to
train a CNN to conduct forensic steganalysis in the spatial domain. We use a
noise residual convolutional neural network to detect embedded secrets since a
steganographic embedding process will always result in the modification of
pixel values in video frames. Experimental results show that the CNN-based
approach can be an effective method for forensic video steganalysis and can
reach a detection rate of 99.96%. Keywords: Forensic, Steganalysis, Deep
Steganography, MSU StegoVideo, Convolutional Neural Networks | Mart Keizer, Zeno Geradts, Meike Kombrink | 2023-05-29T13:17:20Z | http://arxiv.org/abs/2305.18070v1 | # Forensic Video Steganalysis in Spatial Domain by Noise Residual Convolutional Neural Network
###### Abstract
This research evaluates a convolutional neural network (CNN) based approach to forensic video steganalysis. A video steganography dataset is created to train a CNN to conduct forensic steganalysis in the spatial domain. We use a noise residual convolutional neural network to detect embedded secrets since a steganographic embedding process will always result in the modification of pixel values in video frames. Experimental results show that the CNN-based approach can be an effective method for forensic video steganalysis and can reach a detection rate of 99.96%.
keywords: Forensic, Steganalysis, Deep Steganography, MSU StegoVideo, Convolutional Neural Network +
Footnote †: journal: Forensic Science International: Digital Investigation
## 1 Introduction
Steganography is the hiding of secret information inside innocent looking (digital) objects, whereas, steganalysis is the science of detecting steganography (Li et al., 2011). This research relates to forensic steganalysis, which refers to a level of steganalysis that attempts to obtain further in-depth knowledge about a hidden message (Chutani and goyal, 2019). Steganography and (forensic) steganalysis have become an important field of interest in forensic science,
due to the rising popularity of steganography among criminals and terrorists (Choudhary, [2012]) (European Commission, [202]). A reason for the rise in popularity is that "stand-alone" encryption methods have proved to be insecure. This is mainly a result of the successful and continuous effort of law enforcement agencies to compromise encrypted criminal communications, such as with the "EncroChat Hack" in [2020] (O'Rourke, [202]). Steganography can also be used along with encryption, which provides more security and robustness (Taha et al., [2019]), because one first needs to find the encrypted secret and extract it from the carrier before it can be decrypted.
An object in which a secret is hidden is called a cover. Many covers can be used for steganography (e.g., text files, images, audio files), but in this paper, we focus on video steganography, i.e., steganography with video files as cover. This is a popular research area within steganography due to the high secret hiding potential of video files compared to, for example, images and audio files (Lu et al., [2019]). Videos are in fact a moving stream of images and sounds and, therefore, any distortions in a video file might go by unobserved by humans. Additionally, video files are generally larger in size and, therefore, have more potential bytes to alter to embed a secret.
Criminals and terrorists can use video steganography to hide information and to communicate with each other (Garcia, [2018]). There exist numerous real-world examples where steganography has been used with bad intentions (Dalal and Juneja, [2021]). US officials claimed, for example, that Al-Qaeda used steganography to plan the 9/11 attack. Also in several cases, steganography was used to hide child pornography. Therefore, specialized tools need to be developed by law enforcement agencies to detect such criminal content on, for example, confiscated computers. Criminals can also communicate with video steganography by, for example, uploading an innocent looking video with an embedded message to a popular online video sharing platform. The intended receiver of the message only needs to know where to look to extract the video and its
hidden content. This is very difficult to detect because of the vast number of video files on such platforms. Unfortunately, the research on video steganalysis is lacking behind the research on video steganography. Therefore, forensic video steganalysis requires more interest from forensic researchers.
One possible approach for forensic steganalysis is to use a convolutional neural network (CNN). A CNN is a special type of artificial neural network that is most commonly applied to analyze images, therefore one can use a CNN to conduct forensic steganalysis from the perspective of the spatial domain, where the modification of pixel values are analyzed to detect a hidden secret.
This study evaluates a CNN and its video steganalysis capabilities (i.e., its ability to detect video steganography) in spatial domain. The Noise Residual Convolutional Neural Network (NR-CNN), proposed by (Liu and Li, 2020), is trained on a video steganography dataset to detect two steganography tools: Deep Video Steganography (Sathyan, 2019) and MSU StegoVideo (Moscow State University) 2006). The remainder of this paper is organized as follows. Section 2 describes the related research papers from this research. Section 3 describes how the steganography dataset is created and how the NR-CNN is trained and evaluated. The results are presented and discussed in section 4 and conclusions and future directions are in section 5.
## 2 Related work
### Deep Steganography
Deep Steganography refers to steganography implemented with a deep neural network, which is an artificial neural network with multiple layers between the input and output layers. A Deep Steganography method was proposed by (Baluja, 2017) to hide an image inside an other image of the same size. The proposed Deep Steganography network consists of three parts: a preparation network, a hiding network and a reveal network, which are trained together on
an image dataset. The input of the network is a cover image and a secret image and the output is the target cover image and the target secret image after the embedding and extracting process. During training the difference between the input and the output images are used as error terms. In this way, the network can learn how to hide images inside each other with the lowest number of noticeable differences. Once the network is trained, it is split into an encoding and decoding network, which can then be used as a image steganography tool. In addition, since videos basically consists of a number of combined images, this tool can also be transformed into a video steganography tool.
### MSU StegoVideo
MSU StegoVideo1 is a video steganographic tool developed by (Moscow State University, 2009). The tool can be used to hide any file inside a video, with small video distortions. It has two important settings which are the data redundancy, which determines the amount of data that is hidden in each frame, and a noise setting, which determines the amount of added noise. The tool also requires a password to embed and extract the hidden secret. The Moscow State University does not provide any source code for MSU StegoVideo.
Footnote 1: [https://www.compression.ru/video/stego](https://www.compression.ru/video/stego) yidco/index _gn.html
### Noise Residual Convolutional Neural Network
A universal steganalysis method was proposed by (Liu and Li, 2020) to detect both intraprediction mode and motion vector based steganography. Since, both steganography methods eventually lead to the modification of pixel values in decoded frames, they designed a Noise Residual Convolutional Neural Network (NR-CNN) from the perspective of the spatial domain. In a data-driven manner the network automatically learned features and implemented classification. The trained NR-CNN reached a detection accuracy from 59.82% up to 99.74% for different embedding rates.
3 Method
Avideo dataset is created with a mix of regular videos and two types of steganography videos. The videos from that dataset along with their corresponding class labels are used to train a convolutional neural network. Specifically, it is trained to classify frames from the videos into three categories: regular, deep-stego and MSU-stego. Thus, the trained network should be able to determine if a video is a regular video or a video that is embedded with one of the two steganographic embedding techniques.
### Data
The VLOG dataset (Fouhey et al., 2018) is used to obtain video material for the experiments. This dataset consists of 2.747 video files with a total duration of 314 hours and over 30 million frames, therefore, we do not require the whole dataset for our experiment. However, we need to generalize our dataset as much as possible, to make sure the trained network works well on unseen data. Therefore, the final dataset should contain video frames from as much different videos as possible. Hence, the VLOG dataset is transformed into a "10-sec" dataset by splitting the videos into videos of around 10 seconds each. Now, when we randomly pick a video from the 10-sec dataset we obtain a random section from a random video from the VLOG dataset. Next, we define a parameter \(k\), which represents the number of 10-second videos we want in the final steganography dataset (stego-dataset). This stego-dataset, therefore, will contain \(k/3\) Deep Steganography (DVS) videos, \(k/3\) MSU StegoVideo (MSU) videos, and \(k/3\) regular videos. The regular videos are created by simply copying \(k/3\) videos from the 10-sec dataset into the stego-dataset and removing them from the 10-sec dataset.
#### 3.1.1 Deep Video Steganography dataset
The deep neural network hiding technique, proposed by (Baluja, 2017) and discussed in section2.1, embeds an image into another image of the same size. In figure 1, an example is shown. This technique was implemented by (Sathyan,
2019) for usage with video frames and made publicly available in the Deep Video Steganography (DVS) repository3. This repository is used to create the DVS videos for the stego-dataset, which is done by picking two distinct random videos from the 10-sec dataset and hiding one of those videos inside the other using DVS. The resulting video is added to the stego-dataset and the two randomly picked videos are removed from the 10-sec dataset. We repeat this \(k/3\) times.
Footnote 3: [https://github.com/anilsathyan7/Deep-Video-Steganography-Hiding-Videos-in-Plain-Sight](https://github.com/anilsathyan7/Deep-Video-Steganography-Hiding-Videos-in-Plain-Sight)
Although the DVS technique is visually difficult to detect, the number of bits required to encode a secret frame inside a cover frame of the same size is between 1 and 4 bits per pixel (bpp). Given that previous studies (Xuan et al., 2005)(Zou et al., 2006) have demonstrated that bit rates of 0.1 bbp can already be discovered by statistical analysis, it is expected that the DVS videos can also be detected by a CNN-based approach.
Figure 1: Example of the deep neural network hiding technique, with a picture of a cat as secret image and a picture of a dog as cover image.
#### 3.1.2 MSU StegoVideo dataset
MSU StegoVideo1[Moscow State University, 2006], discussed in section 4.2 is a public video steganographic tool that can hide a secret file inside the frames of a video. Figure 2 shows an example of a secret text message embedded into a frame from a landscape video. We use MSU StegoVideo to create the MSU videos for the stego-dataset. This is done by randomly picking and concatenating \(k/3\) distinct videos from the 10-sec dataset into a large video file. Next, we hide a text file containing Lorem Ipsum text inside the concatenated video using MSU StegoVideo, with data redundancy set to 2 and noise set to 100. The video with the embedded secret is then split into 10-second videos again, now all containing a part of the secret.
Footnote 1: [https://www.compression.ru/video/stego](https://www.compression.ru/video/stego) _video/index _en.html
MSU StegoVideo is not open source and is, therefore, difficult to analyze. However, we approximated that the embedding rate of the secret text file is 0.1 bits per pixel (bpp), which is at least 10 times lower compared to the DVS technique. This might result in a lower detection rate compared to DVS.
Figure 2: Example of the MSU StegoVideo tool, with a frame from a landscape video as cover.
### Network Architecture
We chose to use the Noise Residual Convolutional Neural Network (NR-CNN), as proposed by (Liu and Li, 2020) and discussed in section 2.3. They showed that the NR-CNN can detect both intraprediction mode and motion vector-based steganography by detecting the modification of pixel values in decoded frames. The steganography methods we aim to detect are not based on intraprediction modes or motion vectors but do also modify pixel values, therefore, the prospect is that the NR-CNN will also be effective in detecting both the DVS and MSU embedding approach.
The network architecture of the NR-CNN is shown in figure 1. The first layer is the residual convolutional layer "ResConv" whose role is to compute the steganographic noise residual from the video frames. It consists of 34 custom-made filters showed in figure 2.30 high-pass filters and 4 global filters. These filters will be automatically optimized during training. The input of the "ResConv" layer is single channel 224x224 image data (gray-scaled video frames). The
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline _tt_ & Layer & Input & Output & Kernel Size & Notes \\ \hline
**1** & Residual convolutional layer & 1channel & 34 channels & (\(\mathbb{S}^{*}\)5) & Stride = 1, Batch Normalization, PTLU \\ \hline
2 & Convolutional layer 1 & 34 channels & 34 channels & (\(\mathbb{M}^{*}\)3) & Stride = 1, Batch Normalization, ReLu \\ \hline
3 & Convolutional layer 2 & 34 channels & 34 channels & (\(\mathbb{M}^{*}\)3) & Stride = 1, Batch Normalization, ReLu \\ \hline
4 & Convolutional layer 3 & 34 channels & 34 channels & (\(\mathbb{M}^{*}\)3) & Stride = 1, Batch Normalization, ReLu \\ \hline
5 & Average pooling layer 1 & 34 channels & 34 channels & (\(\mathbb{M}^{*}\)2) & Stride = 2 \\ \hline
6 & Steganalysis residual block 1 & 34 channels & 34 channels & (\(\mathbb{M}^{*}\)3) & \\ \hline
7 & Average pooling layer 2 & 34 channels & 34 channels & (\(\mathbb{M}^{*}\)3) & Stride = 2 \\ \hline
8 & Steganalysis residual block 2 & 34 channels & 34 channels & (\(\mathbb{M}^{*}\)3) & \\ \hline
9 & Average pooling layer 3 & 34 channels & 34 channels & (\(\mathbb{M}^{*}\)3) & Stride = 2 \\ \hline
10 & Convolutional layer 4 & 34 channels & 32 channels & (\(\mathbb{M}^{*}\)3) & Stride = 1, Batch Normalization, ReLu \\ \hline
**11** & Average pooling layer 4 & 32 channels & 32 channels & (\(\mathbb{M}^{*}\)2) & Stride = 2 \\ \hline
12 & Convolutional layer 5 & 32 channels & 16 channels & (\(\mathbb{M}^{*}\)3) & Stride = 1, Batch Normalization, ReLu \\ \hline
13 & Convolutional layer 6 & 16 channels & 16 channels & (\(\mathbb{M}^{*}\)3) & Stride = 3, Batch Normalization, ReLu \\ \hline
14 & Fully connected layer & 16\({}^{3}\)3\({}^{*}\)3 features & 3 features & NA & \\ \hline
15 & Softmax layer & 3 features & 3 outcomes & NA & \(\alpha\): regular, 1: deep stego, 2: MSU stego \\ \hline \end{tabular}
\end{table}
Table 1: The architecture of the Noise Residual Convolutional Neural Network (Liu and Li, 2020). The first layer computes the residual, the twelve subsequent layers perform the feature extraction and the final two layers complete the classification.
feature extraction part of the architecture consists of 6 convolutional layers with batch normalization, 4 average pooling layers and 2 steganalysis residual blocks. The steganalysis residual blocks are based on ResNet (He et al., 2016), but are improved for steganalysis by changing the final mapping from \(F(x)+x\) to \(x-F(x)\) to retain the steganographic residual signal. The classification part of the network consists of a fully connected (linear) layer and a softmax layer with three outputs representing the three possible classes: regular, deep-stego and MSU-stego.
### Experimental setup
The final steganography dataset contains \(k=3555\) videos, which are split into a training set, a validation set and a test set with respectively 1700, 174, and 1681 videos. The training set is used to train the NR-CNN and the validation set is used to validate the network during training. The test set is used to validate the network after training.
Before training, each video in the training set is split into 5 segments, i.e., 2-second clips. From each segment, a random frame is picked to train the network with. This avoids training the network on nearly identical consecutive frames, which could result in a network that only works on the training set, i.e., overfitting. The frames are resized to 224×224 pixels and transformed into single-channel gray-scale images. During the final evaluation of the network, we apply a similar pre-processing step, but the videos from the test set are split into 20 segments instead of 5, so that the trained network is eventually evaluated on more frames.

Figure 3: The 34 custom-made filters from the residual convolution layer, of which the first 30 are high-pass filters and the last 4 are global filters. The filters are further optimized during training.
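A minimal sketch of this pre-processing step is given below, assuming each video is available as a list of PIL frames; the helper name `sample_training_frames` is hypothetical.

```python
import random
from torchvision import transforms

# Resize to 224x224 and convert to a single-channel gray-scale tensor.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=1),
    transforms.ToTensor(),
])

def sample_training_frames(frames, n_segments=5):
    """Split a video (a list of PIL frames) into n_segments equal parts and
    pick one random frame per segment, avoiding nearly identical consecutive
    frames; use n_segments=20 at evaluation time."""
    seg_len = len(frames) // n_segments
    return [preprocess(frames[random.randrange(s * seg_len, (s + 1) * seg_len)])
            for s in range(n_segments)]
```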
The NR-CNN is implemented in the deep learning framework PyTorch and is trained on 8,000 frames with a batch size of 20. The rest of the training setup is similar to the setup used by (Liu and Li, 2020) for the NR-CNN. Hence, the number of training iterations is 150 epochs, and the network optimizer is AdaDelta with learning rate \(lr=0.4\), decay rate \(\rho=0.95\), a weight decay of \(5\times 10^{-4}\), and epsilon parameter \(eps=1\times 10^{-8}\). During training, we minimize the cross-entropy loss.
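A minimal PyTorch sketch of this training setup follows; the stand-in model and the synthetic data are placeholders for the NR-CNN and the real frame dataset, which are not reproduced here.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 3))  # stand-in for the NR-CNN

# Synthetic stand-in for the frame dataset (batch size 20, 3 classes).
frames = torch.randn(100, 1, 224, 224)
labels = torch.randint(0, 3, (100,))
train_loader = DataLoader(TensorDataset(frames, labels), batch_size=20, shuffle=True)

# AdaDelta with the hyper-parameters stated above; cross-entropy loss.
optimizer = torch.optim.Adadelta(model.parameters(), lr=0.4, rho=0.95,
                                 eps=1e-8, weight_decay=5e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(150):  # 150 training epochs
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```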
The complete source code is available on GitHub.
## 4 Results & Discussion
In this section, we evaluate the effectiveness of the trained NR-CNN on the detection of the Deep Video Steganography and MSU StegoVideo tools. The results are shown in Tables 2 and 3, which list the predictions from the network on the frames of the training set and the test set, respectively. The training set with 34,000 frames has an accuracy of 99.99% with 3 wrong predictions, and the test set with 33,620 frames has an accuracy of 99.96% with 13 wrong predictions. Inspecting the wrongly predicted frames revealed a few possible causes for the faulty classifications. Out of the 16 frames, 6 were almost completely black, which may indicate that black frames are difficult to classify. Another 6 wrongly classified frames contained footage from a CCTV camera, probably due to the visible horizontal lines that are typical of CCTV camera footage. This conjecture is supported by two other wrongly predicted frames, which also had horizontal lines in their content. The two remaining frames did not show any clear possible cause of misclassification.
These results show that the NR-CNN is able to detect both the DVS and MSU embedding approaches with remarkably high accuracy. One reason for these high detection rates could be that both steganography techniques leave behind a clearly distinguishable watermark on the video frames. Such watermarks may be relatively easy to detect for convolutional neural networks, since CNNs are designed to classify shapes that share a number of similar features, which is the case for watermarks. It is, however, expected that every video steganography method leaves behind some sort of watermark, and that makes the CNN-based steganalysis approach very promising. Another reason for the high accuracy on both techniques could be the high embedding rate, which is between 1 and 4 bits per pixel for DVS and approximately 0.1 bits per pixel for the MSU technique. However, if the embedding rate had a significant impact on the detection rate, we would have expected to obtain more faulty
| Train labels \ Predictions | regular | deep-stego | MSU-stego |
| --- | --- | --- | --- |
| regular | 11580 | 0 | 0 |
| deep-stego | 1 | 10817 | 2 |
| MSU-stego | 0 | 0 | 11600 |

Table 2: The predictions from the NR-CNN per label class of the training set. The values represent the number of frames.
| Test labels \ Predictions | regular | deep-stego | MSU-stego |
| --- | --- | --- | --- |
| regular | 11620 | 0 | 0 |
| deep-stego | 2 | 10512 | 6 |
| MSU-stego | 0 | 5 | 11475 |

Table 3: The predictions from the NR-CNN per label class of the test set. The values represent the number of frames.
predictions for the MSU-stego class than for the deep-stego class, which is not the case.
## 5 Conclusions & Future Directions
The high accuracy of 99.99% and 99.96% on the train and test set, respectively, shows that the Noise Residual Convolutional Neural Network (NR-CNN) from (Liu and Li, 2020) is not limited to detecting intraprediction mode and motion vector-based steganography, but can also be trained to accurately detect and classify the Deep Video Steganography (DVS) and MSU StegoVideo (MSU) methods. Given that the NR-CNN is also able to detect the DVS and MSU steganographic methods, which are completely different in their embedding techniques compared to the intraprediction mode and motion vector-based methods, we can start considering the NR-CNN a promising general forensic video steganalysis network. However, it should be noted that, for now, the detection capabilities of the NR-CNN have only been tested on steganography methods that were present in the training set. Future studies should, therefore, investigate whether the NR-CNN is also able to detect steganography methods that it has not seen before.
The resulting trained NR-CNN from this research can be used to accurately identify and classify steganographic videos created with the Deep Video Steganography and MSU StegoVideo tools. Furthermore, the created dataset could be expanded to train the network to detect and classify additional steganographic methods. However, in order to create the videos for the dataset, the corresponding tools for these steganographic methods have to be available. This is the case for publicly available steganography methods, but it is, for example, also possible that a new steganographic tool is found on a confiscated computer from a person of interest. This tool could then be utilized to expand the created steganography dataset by producing additional steganography videos from a video dataset and self-fabricated hidden secrets. Subsequently, the network has to be retrained on the expanded dataset to also detect and classify that new tool in the future.
Experimental results show that a convolutional neural network-based approach to detecting and classifying video steganography can perform exceptionally well. Future research should investigate whether the NR-CNN can be trained for use as a general forensic steganalysis tool.
|
2310.13393 | Optimal Best Arm Identification with Fixed Confidence in Restless
Bandits | We study best arm identification in a restless multi-armed bandit setting
with finitely many arms. The discrete-time data generated by each arm forms a
homogeneous Markov chain taking values in a common, finite state space. The
state transitions in each arm are captured by an ergodic transition probability
matrix (TPM) that is a member of a single-parameter exponential family of TPMs.
The real-valued parameters of the arm TPMs are unknown and belong to a given
space. Given a function $f$ defined on the common state space of the arms, the
goal is to identify the best arm -- the arm with the largest average value of
$f$ evaluated under the arm's stationary distribution -- with the fewest number
of samples, subject to an upper bound on the decision's error probability
(i.e., the fixed-confidence regime). A lower bound on the growth rate of the
expected stopping time is established in the asymptote of a vanishing error
probability. Furthermore, a policy for best arm identification is proposed, and
its expected stopping time is proved to have an asymptotic growth rate that
matches the lower bound. It is demonstrated that tracking the long-term
behavior of a certain Markov decision process and its state-action visitation
proportions are the key ingredients in analyzing the converse and achievability
bounds. It is shown that under every policy, the state-action visitation
proportions satisfy a specific approximate flow conservation constraint and
that these proportions match the optimal proportions dictated by the lower
bound under any asymptotically optimal policy. The prior studies on best arm
identification in restless bandits focus on independent observations from the
arms, rested Markov arms, and restless Markov arms with known arm TPMs. In
contrast, this work is the first to study best arm identification in restless
bandits with unknown arm TPMs. | P. N. Karthik, Vincent Y. F. Tan, Arpan Mukherjee, Ali Tajer | 2023-10-20T10:04:05Z | http://arxiv.org/abs/2310.13393v2 | # Optimal Best Arm Identification with Fixed Confidence in Restless Bandits
###### Abstract
We study best arm identification in a _restless_ multi-armed bandit setting with finitely many arms. The discrete-time data generated by each arm forms a homogeneous Markov chain taking values in a common, finite state space. The state transitions in each arm are captured by an _ergodic_ transition probability matrix (TPM) that is a member of a single-parameter exponential family of TPMs. The real-valued parameters of the arm TPMs are _unknown_ and belong to a given space. Given a function \(f\) defined on the common state space of the arms, the goal is to identify the best arm--the arm with the largest average value of \(f\) evaluated under the arm's stationary distribution--with the fewest number of samples, subject to an upper bound on the decision's error probability (i.e., the _fixed-confidence_ regime). A lower bound on the growth rate of the expected stopping time is established in the asymptote of a vanishing error probability. Furthermore, a policy for best arm identification is proposed, and its expected stopping time is proved to have an asymptotic growth rate that matches the lower bound. It is demonstrated that tracking the long-term behavior of a certain Markov decision process and its state-action visitation proportions are the key ingredients in analyzing the converse and achievability bounds. It is shown that under every policy, the state-action visitation proportions satisfy a specific approximate flow conservation constraint and that these proportions match the optimal proportions dictated by the lower bound under any asymptotically optimal policy. The prior studies on best arm identification in restless bandits focus on _independent observations_ from the arms, _rested_ Markov arms, and restless Markov arms with _known_ arm TPMs. In contrast, this work is the first to study best arm identification in restless bandits with unknown arm TPMs.
## 1 Introduction
Multi-armed bandits constitute an effective probabilistic model for sequential decision-making under uncertainty. In the canonical multi-armed bandit models, each arm is assumed to yield random rewards generated by an unknown reward distribution. The arms are selected sequentially over time to optimize a pre-specified reward measure. The two common frameworks to formalize bandit algorithms are _regret minimization_ and _pure exploration_. In regret minimization, the objective is to have an arm selection policy that minimizes the difference between the expected reward realized and the maximum reward achievable by an oracle that knows the true reward distributions. Minimizing such regret measures captures the inherent _exploration-exploitation_ trade-off that specifies the balance between the desire to choose the arms with high expected rewards (exploitation) against the need to explore other arms to acquire better information discrimination (exploration). In this context, there exists a wide range of algorithms for different settings based on the notions of Upper Confidence Bound (UCB) [1, 2] and Thompson Sampling [3]. An in-depth analysis of these algorithms and a detailed survey of other studies on regret minimization can be found in [4].
The pure exploration framework, on the other hand, focuses on identifying one or a group of arms with specified properties using the fewest samples. Pure exploration disregards the reward regret incurred and is closely related to the literature on sequential hypothesis testing [5, 6]. In pure exploration, algorithm design involves forming optimal data-adaptive sampling decisions and characterizing optimal stopping times.
A practical instance of a pure exploration problem is _best arm identification_ (BAI), which entails finding the best arm--the arm with the largest mean reward--as quickly and accurately as possible. Broadly, BAI is studied in two complementary regimes: the _fixed-budget_ regime, in which the objective is to use a pre-specified number of arm sampling rounds to identify the best arm with minimal error probability, and the _fixed-confidence_ regime, where the goal is to minimize the number of arm sampling rounds required to find the best arm with a pre-specified decision accuracy level. In this paper, we focus on BAI in a multi-armed bandit with restless Markov arms and focus on the fixed-confidence regime. In the rest of this section, we specify the problem framework and the technical contributions.
### Problem Description
We consider a _restless_ multi-armed bandit setting with finitely many arms. In restless bandits, each arm has a finite number of states that evolve over time according to a homogeneous Markov chain taking values in a common, finite state space. We assume that the transition probability matrix (TPM) governing the state transitions in each arm belongs to a single-parameter exponential family of ergodic TPMs. Hence, a restless bandit setting with \(K\) arms can be specified by \(K\) TPMs. The real-valued parameters of the TPMs are unknown and belong to a given parameter space. The vector of TPM parameters specifies the problem instance. The TPM of each arm, being ergodic, is associated with a unique stationary distribution.
In this setting, we adopt a non-constant _reward_ function defined on the common state space of the arms. Accordingly, the _best arm_ is defined as the arm with the largest average reward computed under the arm's stationary distribution. The learner is unaware of the underlying arm parameters and is faced with the task of identifying the best arm. The learner selects the arms sequentially, one at a time.1 Upon selecting an arm, the learner observes the current state of the arm. At the same time, the unobserved arms are _restless_ and _continue_ undergoing state transitions. Given a pre-specified confidence level \(\delta\in(0,1)\), the learner's goal is to minimize the expected number of arm selections required to find the best arm while ensuring that the terminal error probability does not exceed \(\delta\).
Footnote 1: For simplicity in presentation, we assume that the learner selects only one arm at each time instant. The results of this paper can be easily extended to the case when the learner samples a subset of arms at each time instant.
### Key Analytical Challenges
The continuous evolution of the unobserved arms necessitates that the learner, at each time instance, maintains a record of (a) each arm's _delay_, which is defined as the time elapsed since an arm was last selected, and (b) each arm's _last observed state_, which is the state of each arm as observed at the last instance that it was selected. Keeping track of each arm's delay and the last observed state provides the learner with a historical perspective on how each arm performed or behaved during its previous selection. This information serves as a reference point for understanding an arm's characteristics or potential changes, helping the learner assess the arm's current state relative to its past behavior. The existing studies on restless bandits establish that the arm delays and the last observed states collectively form a _controlled Markov chain_, with the arm selections serving as the controls and thereby influencing the overall behavior of the system (e.g., [7, Section 5]). In other words, we are in the setting of a _Markov decision process_ (MDP) in which the state space is the space of all arm delays and last observed states, and the action space is the set of arms. We remark that such an MDP has potentially a _countably infinite_ state space induced by the arm delays that may progressively increase with time. We write \(\mathcal{M}\) as a shorthand representation for the above MDP.
**MDP ergodicity.** Previous studies on restless arms, such as [7, 8, 9], have emphasized the importance of considering the ergodicity properties of the MDP \(\mathcal{M}\) in their analysis. These studies typically establish some form of convergence of empirical functionals (e.g., reward, cost, and state-action visitations) to their respective true values, relying on the ergodicity/communication properties of \(\mathcal{M}\). The task of proving such convergence is exacerbated when dealing with countable state MDPs (such as \(\mathcal{M}\)). Prior studies on countable-state MDPs reveal that guaranteeing the desired ergodicity properties relies on various regularity conditions. For example, [10] and [11] assume that the countable-state MDPs therein are ergodic under _every_ stationary control policy. This condition is met in [9, 12] under a so-called "trembling hand" model. However, imposing similar conditions in our work has significant implications. It restricts the learner's choice of allowable policies to only those that make \(\mathcal{M}\) ergodic; as such, \(\mathcal{M}\) is merely communicating (see Lemma 3.1), a weaker property than ergodicity [13, Section 8.3.1]. One central challenge in this paper is devising a policy under which the MDP \(\mathcal{M}\) has "near-ergodicity" properties and yet is amenable to analysis.
**Tracking the proportions of state-action visitations.** Prior studies on BAI in the fixed-confidence regime have established problem-dependent lower bounds on the expected time required to find the best arm [14, 15]. Characterizing these bounds involves solving sup-inf optimization problems, where the outer supremum is with respect to all probability distributions on the arms, while the inner infimum accounts for alternative problem instances with varying best arm locations. The key to achieving such lower bounds is tracking the proportions of arm selections with time
and ensuring that these proportions match the unique optimal ("sup"-attaining) proportion in the long run. These are the principles in the design of, for instance, "C-tracking" and "D-tracking" algorithms in [14]. In contrast to these known results, when dealing with restless Markov arms, the lower bounds are characterized not by the proportions of arm selections but rather by the proportions of _state-action visitations_ of the MDP \(\mathcal{M}\). Achieving such lower bounds necessitates ensuring that the proportion of visits to each state-action pair in the long term aligns with the optimal proportion specified by the lower bound. In particular, _merely matching the long-term proportions of action visitations (arm selections) with the optimal arm selection proportions may not lead to achieving the lower bound_. The primary challenge here is that while the learner can directly control the arm selections and the associated visitations, the learner lacks control over the state evolution of the MDP and, thereby, the state visitation proportions. Consequently, devising a policy that inherently guarantees the correct visitation proportion for each state-action pair is pivotal to achieving the lower bound. In this paper, we provide a comprehensive solution to this complex challenge.
### Our Contributions
We highlight the key contributions of the paper and how we address the challenges outlined in the previous section.
1. **Maximum-delay constraint.** As mentioned earlier, the customary ergodicity assumptions of prior works, critical for analytical tractability, do not apply directly to our specific setting. As a solution to render the countable-state MDP \(\mathcal{M}\) amenable to analysis, we constrain the _maximum delay_ of each arm to be equal to a fixed and large positive integer denoted by \(R\). This reduces the MDP's countably infinite state space to a finite state space. Despite this reduction, we show that the communication properties of the finite-state MDP with max-delay equal to \(R\) (denoted \(\mathcal{M}_{R}\)) and the unconstrained MDP \(\mathcal{M}\) are identical (Lemma 3.2), thereby not compromising our results significantly. We note that while it is computationally prohibitive to realize the countable-state MDP \(\mathcal{M}\) on a machine with finite memory, the finite-state MDP \(\mathcal{M}_{R}\) can indeed be realized on a machine with finite memory.
2. **Instance-dependent lower bound.** Given a problem instance specified by a vector of arm parameters, we establish a problem-dependent lower bound on the limiting growth rate of the expected number of arm selections (or simply the expected stopping time) required to find the best arm (Proposition 4.1). This growth rate is captured by the solution to a sup-inf optimization problem. In this problem, the outer supremum is over the _polytope_ of all state-action distributions satisfying the flow constraint and the maximum delay constraint, and the inner infimum is over all alternative problem instances with the best arm distinct from the best arm in the true problem instance. Furthermore, the set over which the supremum is evaluated _depends_ on the true problem instance. This is in contrast to the existing literature on BAI [14, 15, 16]. Consequently, it is unclear if this supremum is attained by a unique element in the set. Notably, the uniqueness of the sup-attaining solution in [14, 15, 16] significantly simplifies the subsequent analysis.
3. **Sup-inf optimization.** We show that when \(R\) is the maximum delay of each arm, the objective function appearing in the sup-inf optimization of the lower bound contains Kullback-Leibler divergence terms that are functions of _powers_ of TPMs up to order \(R\). The presence of second- and higher-order TPM powers further hinders simplifying the inner infimum, unlike in [14, 15, 16] where the inner infimum may be simplified further and cast as a minimum over finitely many non-best arms. Notwithstanding this, we employ a version of Berge's maximum theorem for non-compact sets [17, Theorem 1.2] to show that the inner infimum expression is a _continuous_ function in its arguments despite the non-compactness of the set of alternative problem instances. We use this result to show that the potential _set_ of sup-attaining solutions is convex, compact, and upper-hemicontinuous in the arm parameters.
4. **Policy design.** We design a policy that selects the arms according to a certain time-dependent probability distribution on the arms, _conditional_ on the current state of the MDP \(\mathcal{M}_{R}\), while respecting the maximum delay constraint. This is in contrast to the explicit selection of arms under the C-tracking and D-tracking algorithms in [14]. We show that under this policy, the MDP \(\mathcal{M}_{R}\) is "near-ergodic" in the following sense: if the probability distribution for selecting the arms at any given time \(n\) were to be frozen and used to select the arms for all subsequent times \(t\geq n\), then the MDP \(\mathcal{M}_{R}\) becomes ergodic, admits a unique stationary distribution (on the space of state-action pairs), and consequently every state-action pair is visited infinitely often (Lemma 6.1).
5. **Convergence of state-action visitations and asymptotic optimality.** We compute the empirical _state-action-state_ transition probabilities and use this to design a test statistic that mimics the form of the inner infimum term in the lower bound expression (see (38)). We employ this test statistic in conjunction with a random, time-dependent _threshold_ that is a function of state-action visitations, and stop further selection of arms whenever the test statistic exceeds the threshold. We show that this leads to stopping in finite time almost surely and declaring the best arm correctly with the desired accuracy (Proposition 6.4). Furthermore, we show that the limiting growth rate of the expected stopping time satisfies an upper bound that matches the lower bound (Proposition 6.6). Our proof of the upper bound relies on showing the convergence of the empirical state-action visitation proportions to the _set_ of sup-attaining state-action probability distributions governing the lower bound (Lemma 6.3).
### Overview of Prior Studies
**Prior works on BAI.** BAI falls within the active sequential hypothesis testing framework of Chernoff [5] and Albert [6], and has since been studied in a plethora of contexts. [18] studies fixed-confidence BAI and provides a successive elimination algorithm for finding the best arm, proving an upper bound on its stopping time that only holds with high probability. For a similar setting as in [18], [14] presents (a) a sup-inf lower bound on the limiting growth rate of the expected stopping time using change-of-measure arguments, and (b) two algorithms for tracking the proportions of arm selections (C-tracking and D-tracking), along with upper bounds on stopping times that hold almost surely and in expectation for both algorithms. While the optimal solution to the lower bound in [14] was shown to be unique, [19] investigates the case when the optimal solution is potentially non-unique and/or the set of all optimal solutions is non-convex. The paper [20] investigates fixed-confidence BAI in _linear_ bandits with finitely/uncountably many arms and provides nearly-matching lower and upper bounds on the limiting growth rate of the expected stopping time. While the algorithms in the aforementioned studies explicitly compute the sup-attaining solution(s) at every time step for an empirical problem instance arising from empirical arm means, the recent study [21] proposes a computationally efficient policy that circumvents the computation of the sup-attaining solution(s). In another direction, [22] investigates fixed-budget BAI, proposes a _successive-rejects_ algorithm, and obtains an error probability upper bound for the same. While problem-dependent lower bounds are commonplace in the studies on fixed-confidence BAI, deriving such bounds for the fixed-budget regime is often challenging. Instead, the studies on fixed-budget BAI characterize _minimax lower bounds_ on the error probability; such a lower bound dictates that there exists a problem under which every policy incurs an error probability with the minimum value given by the lower bound. In this space, the paper [23] obtains a minimax lower bound on the error probability of fixed-budget BAI, along with an upper bound that is order-wise tight in the exponent of the error probability. Yang and Tan [24] investigate fixed-budget BAI in linear bandits and propose an algorithm based on the idea of G-optimal designs. They prove a minimax lower bound on the error probability, similar to [23], and obtain an upper bound on the error probability of their algorithm.
**Prior works on pure exploration in Markov bandits.** While Markov bandits have been extensively explored in the context of regret minimization [25, 26, 27, 7, 8, 28], they have not been explored as well in the context of pure exploration. [29] studies fixed-confidence odd arm identification in nested Markov bandits (where the unobserved arms do not exhibit state transitions, and the goal is to find the anomalous or odd arm). The studies in [9, 12] extend the results of [29] to the setting of restless arms, using a trembling-hand model for arms selection inspired by cognitive neuroscience. [16] investigates BAI in rested Markov bandits under a parametric model for arm TPMs and _hidden_ Markov observations from the arms, proposes a sup-inf lower bound on the limiting growth rate of the expected stopping time, and proposes a D-tracking rule similar to [14]. The setting of rested arms can be viewed as a special case of the setting of restless arms in which the arm delays are always equal to \(1\). Hence, the Kullback-Leibler divergence terms appearing in the lower bound of [16] are not functions of the second and higher order powers of TPMs. As a result, the inner infimum expression of the lower bound therein may be simplified further and cast as a minimum over finitely many non-best arms, as in [14]. This simplification may be exploited further to demonstrate the uniqueness of the optimal solution to the lower bound, thus greatly simplifying the achievability analysis. However, the presence of higher-order powers of TPMs in the Kullback-Leibler divergence terms in our setting do not permit further simplification of the inner infimum expression, thereby forcing us to work with a _set_ of optimal solutions to the lower bound and its associated analytical challenges (e.g., upper-hemicontinuity instead of continuity). [30] investigates BAI in restless bandits when the arm TPMs are known up to a permutation. Our work studies BAI in restless bandits with unknown TPMs.
**Related works on MDPs.** The paper [31] studies the problem of identifying the best policy in MDPs-- the one that maximizes the expected sum of discounted rewards over an infinite time horizon--in the fixed-confidence regime, when the learner has access to the next state _and_ action at every time instant (generative model). In a follow-up work [32], the results in [31] are extended to the case when the learner can access only the next action but not the next state (as in our work). Both studies [31, 32] present lower and upper bounds on the limiting growth rate of the time to identify the best policy. They propose a relaxation to their lower bounds by leveraging the structure of the MDP reward function, leading to a discrepancy of factor \(2\) between the upper and lower bounds. However, a similar relaxation of the lower bound as in [31, 32] is not possible in our work. This is because the notion of MDP rewards is void in our work since the central problem we address is BAI and not reward maximization (or regret minimization), which is typical of MDPs. For an in-depth review of MDPs, see [13]. For more related works on MDPs, see [32] and the references therein.
### Paper Organisation
In Section 2, we introduce the single-parameter exponential family of TPMs and the central objective of our paper. In Section 3, we introduce the countable-state MDP of arm delays and last observed states, outline its flow conservation
property, and describe a reduction of its countably infinite state space to a finite state space via a constraint on the maximum delay of each arm. In Section 4, we present a lower bound on the asymptotic growth rate of the expected stopping time, the first main result of the paper. In Section 5, we present our policy for BAI. In Section 6, we present results on the performance of our policy. In Section 7, we include a short discussion, provide concluding remarks, and outline future directions. The detailed proofs of all the results stated in the paper are presented in the appendices.
## 2 Preliminaries
Let \(\mathbb{N}\coloneqq\{1,2,\ldots\}\) denote the set of positive integers. All vectors are column vectors unless stated otherwise.
### Restless Bandit Model
We consider a _restless_ multi-armed bandit setting with \(K\geq 2\) arms in which each arm has a finite number of states that temporally evolve according to a discrete-time homogeneous Markov process taking values in a common, finite state space \(\mathcal{S}=\{1,\ldots,|\mathcal{S}|\}\). To formalize the transitions of the Markov processes of different arms on \(\mathcal{S}\), we define a parameterized family of transition probability matrices (TPMs) as \(\mathcal{P}(\Theta)\coloneqq\{P_{\theta}\ :\ \theta\in\Theta\}\), where \(\Theta\subset\mathbb{R}\) is a fixed and known parameter space, and for each \(\theta\in\Theta\), \(P_{\theta}\) is a valid TPM. We denote the parameter of arm \(a\in[K]\coloneqq\{1,\ldots,K\}\) by \(\theta_{a}\), and assume that the evolution of states on arm \(a\) is governed by the TPM \(P_{\theta_{a}}\). Accordingly, we define \(\mathbf{\theta}\coloneqq[\theta_{1},\ldots,\theta_{K}]^{\top}\in\Theta^{K}\) and refer to \(\mathbf{\theta}\) as a _problem instance_. \(\mathbb{P}_{\mathbf{\theta}}\) and \(\mathbb{E}_{\mathbf{\theta}}\), respectively, denote the probability measure and the associated expectation induced by instance \(\mathbf{\theta}\). We assume that each arm's temporal evolution is independent of the rest.
### Single-Parameter Exponential Family of TPMs
We assume that the TPMs are generated according to a single-parameter exponential family studied in [16]. The model studied here is a generalization of the single-parameter exponential family model for independent observations from the arms studied in [33]. Fix an irreducible2 TPM \(P\) on \(\mathcal{S}\). We call \(P\) the _generator_ of the family. Let \(f:\mathcal{S}\to\mathbb{R}\) be a known function. Given \(P\) and \(f\), define \(\tilde{P}_{\theta}\) for any \(\theta\in\Theta\) such that
Footnote 2: This means that the whole state space \(\mathcal{S}\) constitutes a single communicating class.
\[\tilde{P}_{\theta}(j|i)=P(j|i)\ \exp(\theta\cdot f(j))\,\qquad\forall i,j\in \mathcal{S}\, \tag{1}\]
where \(P(j|i)\) and \(\tilde{P}_{\theta}(j|i)\) denote the \((i,j)\)-th entry of \(P\) and \(\tilde{P}_{\theta}\), respectively. The rows of \(\tilde{P}_{\theta}\) do not necessarily sum up to 1. Hence, \(\tilde{P}_{\theta}\) is not necessarily a valid TPM. Nevertheless, we can normalize (1) suitably to obtain a valid TPM in the following manner. For each \(\theta\in\Theta\), let \(\rho(\theta)\) be the Perron-Frobenius eigenvalue of \(\tilde{P}_{\theta}\). From the Perron-Frobenius theorem [34, Theorem 8.8.4], we know that there exist unique left and right eigenvectors associated with the eigenvalue \(\rho(\theta)\), say \(\mathbf{u}_{\theta}=[\mathbf{u}_{\theta}(i):i\in\mathcal{S}]^{\top}\) and \(\mathbf{v}_{\theta}=[\mathbf{v}_{\theta}(i):i\in\mathcal{S}]^{\top}\), respectively, such that \(\mathbf{u}_{\theta}(i)>0,\mathbf{v}_{\theta}(i)>0\) for all \(i\in\mathcal{S}\), and \(\sum_{i\in\mathcal{S}}\mathbf{u}_{\theta}(i)\,\mathbf{v}_{\theta}(i)=1\). Subsequently, the single-parameter exponential family with generator \(P\) is defined as \(\mathcal{P}_{\theta}=\{P_{\theta}\ :\ \theta\in\Theta\}\), where for each \(\theta\in\Theta\), \(P_{\theta}\) is specified by
\[P_{\theta}(j|i)=\frac{\mathbf{v}_{\theta}(j)}{\rho(\theta)\,\mathbf{v}_{\theta }(i)}\ \tilde{P}_{\theta}(j|i)\,\qquad i,j\in\mathcal{S}. \tag{2}\]
It can be readily verified that (2) specifies a valid TPM since
\[\sum_{j\in\mathcal{S}}P_{\theta}(j|i)=\frac{1}{\rho(\theta)\,\mathbf{v}_{ \theta}(i)}\sum_{j\in\mathcal{S}}\mathbf{v}_{\theta}(j)\tilde{P}_{\theta}(j|i )=1\,\qquad\forall i\in\mathcal{S},\ \forall\theta\in\Theta. \tag{3}\]
Furthermore, for each \(\theta\in\Theta\), the matrix \(P_{\theta}\) is irreducible and positive recurrent. Hence, \(P_{\theta}\) has a unique stationary distribution, which we denote by \(\mu_{\theta}=[\mu_{\theta}(i):i\in\mathcal{S}]^{\top}\). Note that \(P_{0}=P\).
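To make the construction in (1)-(2) concrete, here is a minimal NumPy sketch that builds \(P_{\theta}\) from a generator \(P\), a function \(f\), and a parameter \(\theta\); the 3-state generator and the function values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def make_P_theta(P, f, theta):
    """Tilt the generator P entrywise by exp(theta * f(j)) as in (1), then
    renormalize via the Perron-Frobenius eigenvalue rho(theta) and its right
    eigenvector v_theta as in (2)."""
    P_tilde = P * np.exp(theta * f)[None, :]   # column j scaled by exp(theta f(j))
    eigvals, eigvecs = np.linalg.eig(P_tilde)
    k = np.argmax(eigvals.real)                # Perron-Frobenius eigenvalue of P_tilde
    rho = eigvals[k].real
    v = np.abs(eigvecs[:, k].real)             # strictly positive right eigenvector
    return (v[None, :] / (rho * v[:, None])) * P_tilde

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])               # an irreducible 3-state generator
f = np.array([0.0, 1.0, 2.0])
P_theta = make_P_theta(P, f, theta=0.5)
print(P_theta.sum(axis=1))                    # each row sums to 1: a valid TPM
```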
Next, similar to [16], we impose mild assumptions on \(P\). For this purpose, define \(M_{f}=\max_{i\in\mathcal{S}}f(i)\) and \(m_{f}=\min_{i\in\mathcal{S}}f(i)\). Accordingly, define the sets
\[\mathcal{S}_{M_{f}}=\{i\in\mathcal{S}:f(i)=M_{f}\}\,\qquad\text{and}\qquad \mathcal{S}_{m_{f}}=\{i\in\mathcal{S}:f(i)=m_{f}\}. \tag{4}\]
**Assumption 2.1**.: _We assume that \(P\) satisfies the following properties._
* A\({}_{1}\)_: The submatrix of_ \(P\) _with rows and columns in_ \(S_{M_{f}}\) _is irreducible._
* A\({}_{2}\)_: For every_ \(i\in\mathcal{S}\setminus\mathcal{S}_{M_{f}}\)_, there exists_ \(j\in\mathcal{S}_{M_{f}}\) _such that_ \(P(j|i)>0\)_._
* \(\mathrm{A}_{3}\)_: The submatrix of_ \(P\) _with rows and columns in_ \(S_{m_{f}}\) _is irreducible._
* \(\mathrm{A}_{4}\)_: For every_ \(i\in\mathcal{S}\setminus\mathcal{S}_{m_{f}}\)_, there exists_ \(j\in\mathcal{S}_{m_{f}}\) _such that_ \(P(j|i)>0\)_._
These assumptions, collectively, are mild and cover a wide range of models. For instance, when \(P\) has strictly positive entries, it satisfies all of the above assumptions. In Remark 3, later in the paper, we elaborate on the crucial role of the above parametric model in our study.
For any integer \(d\geq 1\) and TPM \(Q\in\mathcal{P}(\Theta)\), let \(Q^{d}\) denote the matrix obtained by multiplying \(Q\) with itself \(d\) times. Also, for any \(i,j\in\mathcal{S}\) and \(d\geq 1\), let \(Q^{d}(j|i)\) denote the \((i,j)\)-th entry of \(Q^{d}\), and let \(Q^{d}(\cdot|i)\) denote the \(i\)-th row of \(Q^{d}\).
### Best Arm Identification
Corresponding to the reward function \(f:\mathcal{S}\to\mathbb{R}\) and an instance \(\boldsymbol{\theta}=[\theta_{a}:a\in[K]]^{\top}\), we define
\[\eta_{\theta_{a}}\coloneqq\sum_{i\in\mathcal{S}}\;f(i)\,\mu_{\theta_{a}}(i)\;,\qquad a\in[K]\;, \tag{5}\]
as the _mean_ of arm \(a\). Notice that (5) specifies the average value of \(f\) under the stationary distribution of arm \(a\). Accordingly, we define the _best arm_\(a^{\star}(\boldsymbol{\theta})\) under the instance \(\boldsymbol{\theta}\) as
\[a^{\star}(\boldsymbol{\theta})\coloneqq\arg\max_{a\in\mathcal{A}}\;\eta_{ \theta_{a}}=\arg\max_{a\in\mathcal{A}}\;\sum_{i\in\mathcal{S}}\;f(i)\,\mu_{ \theta_{a}}(i)\;. \tag{6}\]
We assume that \(a^{\star}(\boldsymbol{\theta})\) is unique for all \(\boldsymbol{\theta}\in\Theta^{K}\). In fixed-confidence BAI, a learner who does not have any prior knowledge of the instance \(\boldsymbol{\theta}\), wishes to identify \(a^{\star}(\boldsymbol{\theta})\) with the fewest number of arm selections (on the average) such that the decision error probability is confined below a pre-specified confidence level (a more formal specification of the problem objective is deferred until Section 3.5). To distinguish the best arm from the rest, we write \(\textsc{Alt}(\boldsymbol{\theta})\) to denote the set of all problem instances _alternative_ to \(\boldsymbol{\theta}\), i.e., those instances under which the best arm differs from the one under \(\boldsymbol{\theta}\). Hence,
\[\textsc{Alt}(\boldsymbol{\theta})\coloneqq\{\boldsymbol{\lambda}\in\Theta^{K }:\exists\;a\neq a^{\star}(\boldsymbol{\theta})\;\text{such that}\;\eta_{ \lambda_{a}}>\eta_{\lambda_{a^{\star}(\boldsymbol{\theta})}}\}\;. \tag{7}\]
The Perron-Frobenius eigenvalue of \(\tilde{P}_{\theta}\), \(\rho(\theta)\), is pivotal in analyzing the properties of \(\textsc{Alt}(\boldsymbol{\theta})\). To formalize the connection, define \(A(\theta)\coloneqq\log\rho(\theta)\), \(\theta\in\Theta\). An important property of the family in (2) is that \(A\) is differentiable, and \(\hat{A}=\frac{\mathrm{d}A}{\mathrm{d}\theta}\) is a strictly increasing and bijective map, as noted in the following lemma (see [16] for a proof).
**Lemma 2.2**.: _[_16_, Lemma 2]_ _Let \(P\) be an irreducible TPM on the finite state space \(\mathcal{S}\) satisfying Assumptions \(\mathrm{A}_{1}\)-\(\mathrm{A}_{4}\). Let \(f:\mathcal{S}\to\mathbb{R}\) be a non-constant function. Consider the single-parameter exponential family of TPMs defined in (2), with \(\tilde{P}_{\theta}\) as defined in (1). Let \(A(\theta)=\log\rho(\theta)\) denote the log Perron-Frobenius eigenvalue of \(\tilde{P}_{\theta}\). Then, the following properties hold._
1. \(\theta\mapsto A(\theta)\) _is analytic._
2. \(P_{\theta}\) _is irreducible and positive recurrent, and hence admits a unique stationary distribution, say_ \(\mu_{\theta}\)_._
3. \(\hat{A}(\theta)=\eta_{\theta}=\sum_{i\in\mathcal{S}}f(i)\,\mu_{\theta}(i)\)_._
4. \(\hat{A}\) _is strictly increasing._
5. _Let_ \(\mathcal{M}\coloneqq\{\eta\in\mathbb{R}:\eta=\eta_{\theta}\text{ for some }\theta\in\Theta\}\)_. Then, the map_ \(\theta\mapsto\hat{A}(\theta)\) _is a bijection between_ \(\Theta\) _and_ \(\mathcal{M}\)_._
The fact that \(\hat{A}:\Theta\to\mathcal{M}\) is a strictly increasing bijection implies that
\[\textsc{Alt}(\boldsymbol{\theta}) =\{\boldsymbol{\lambda}\in\Theta^{K}:\exists\;a\neq a^{\star}( \boldsymbol{\theta})\;\text{such that}\;\eta_{\lambda_{a}}>\eta_{\lambda_{a^{\star}( \boldsymbol{\theta})}}\}\] \[=\{\boldsymbol{\lambda}\in\Theta^{K}:\exists\;a\neq a^{\star}( \boldsymbol{\theta})\;\text{such that}\;\hat{A}(\lambda_{a})>\hat{A}(\lambda_{a^{ \star}(\boldsymbol{\theta})})\}\] \[=\{\boldsymbol{\lambda}\in\Theta^{K}:\exists\;a\neq a^{\star}( \boldsymbol{\theta})\;\text{such that}\;\lambda_{a}>\lambda_{a^{\star}( \boldsymbol{\theta})}\}. \tag{8}\]
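Continuing the illustration, the following is a short sketch of how the arm means in (5) and the best arm in (6) could be computed from the arm TPMs; the helper `stationary_distribution` and the two example TPMs are hypothetical.

```python
import numpy as np

def stationary_distribution(P):
    """Unique stationary distribution of an ergodic TPM (the left eigenvector
    of P associated with eigenvalue 1, normalized to sum to 1)."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(eigvals - 1.0))
    mu = np.abs(eigvecs[:, k].real)
    return mu / mu.sum()

def best_arm(tpms, f):
    """Index of the arm with the largest mean of f under its stationary law, as in (6)."""
    means = [f @ stationary_distribution(P_a) for P_a in tpms]
    return int(np.argmax(means))

f = np.array([0.0, 1.0, 2.0])
P1 = np.array([[0.6, 0.3, 0.1], [0.3, 0.5, 0.2], [0.2, 0.4, 0.4]])
P2 = np.array([[0.2, 0.3, 0.5], [0.1, 0.3, 0.6], [0.1, 0.2, 0.7]])
print(best_arm([P1, P2], f))  # 1, since P2 concentrates mass on high-f states
```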
### Best Arm Identification Policy
To find the best arm, the learner selects the arms sequentially, one at each time \(n\in\mathbb{N}\cup\{0\}\). Let \(A_{n}\in[K]\) be the arm selected at time \(n\), and let \(\bar{X}_{n}\in\mathcal{S}\) be the state of arm \(A_{n}\) observed by the learner. We assume that the arms
are _restless_, i.e., the unobserved arms continue to undergo state transitions even though they are not selected. Let \((A_{0:n},\bar{X}_{0:n})\coloneqq(A_{0},\bar{X}_{0},\ldots,A_{n}\,\bar{X}_{n})\) denote the history of all the arm selections and states observed up to time \(n\), generating the filtration
\[\mathcal{F}_{n}\coloneqq\sigma(A_{0:n},\bar{X}_{0:n})\;,\quad n\geq 0\;. \tag{9}\]
A BAI policy can be specified by three decision rules: (i) arm selection rule \(\pi_{n}\) that is \(\mathcal{F}_{n-1}\)-measurable and specifies the arm to be selected at time \(n\); (ii) a stopping rule adapted to \(\{\mathcal{F}_{n}\ :\ n\geq 0\}\) that specifies the (random) time \(\tau\) at which to terminate the arm selection process; and (iii) a terminal decision rule that is \(\mathcal{F}_{\tau}\)-measurable and specifies a candidate best arm \(a\in[K]\) at the stopping time. Writing \(\pi=\{\pi_{n}:n\geq 0\}\), we denote a generic BAI policy by the tuple \((\pi,\tau,a)\). Finally, for a pre-specified error tolerance level \(\delta\in(0,1)\), we define
\[\Pi(\delta)\coloneqq\{(\pi,\tau,a):\;\mathbb{P}_{\boldsymbol{\theta}}(\tau<+ \infty)=1\;,\;\mathbb{P}_{\boldsymbol{\theta}}(a\neq a^{*}(\boldsymbol{ \theta}))\leq\delta\;,\quad\forall\;\boldsymbol{\theta}\in\Theta^{K}\}\;, \tag{10}\]
as the collection of all policies that (a) stop in finite time almost surely, and (b) have an error probability no greater than the prescribed tolerance \(\delta\) under _every_ instance \(\boldsymbol{\theta}\in\Theta^{K}\). The canonical BAI definition entails identifying a policy in \(\Pi(\delta)\) that has the smallest average stopping time. We will show that in the restless bandit setting of interest, there needs to be an additional constraint, leading to a collection of policies that form a subset of \(\Pi(\delta)\). We will discuss the necessary details in Section 3 and provide the exact BAI formulation in Section 3.5.
**Remark 1**.: _In order to be precise, it is essential to express \(\mathbb{P}_{\boldsymbol{\theta}}\) and \(\mathbb{E}_{\boldsymbol{\theta}}\) more explicitly as \(\mathbb{P}_{\boldsymbol{\theta}}^{\pi}\) and \(\mathbb{E}_{\boldsymbol{\theta}}^{\pi}\) respectively under policy \(\pi\), as these are contingent on the specific policy \(\pi\). Nevertheless, for the sake of brevity, we omit the subscript \(\pi\), and urge the reader to bear the dependence on \(\pi\) in mind._
## 3 Delays, Last Observed States, and a Markov Decision Process
The continued evolution of the unobserved arms necessitates the learner to maintain, at each time instance, a record of (a) each arm's _delay_, which is defined as the time elapsed since an arm was last selected, and (b) each arm's _last observed state_, which is the state of each arm at the last instance that it was selected. Keeping track of each arm's delay and the last observed state provides the learner with a historical perspective on how each arm performed or behaved during its previous selection. This information serves as a reference point for understanding an arm's characteristics or potential changes, helping the learner assess the arm's current state relative to its past behavior. The notion of arm delays is a key distinguishing feature of the setting of restless arms and is superfluous when each arm yields independent and identically distributed (i.i.d.) observations or when the unobserved arms do not evolve (_rested_ arms).
Without loss of generality, we assume that every policy initially uses the first \(K\) time slots to sequentially select and collect samples from arms \(1\) through \(K\), with \(A_{0}=1,A_{1}=2,\ldots,A_{K-1}=K\). This ensures that each arm is observed at least once. For \(n\geq K\), let \(d_{a}(n)\) and \(i_{a}(n)\), respectively, denote the delay and the last observed state of arm \(a\) at time \(n\). Let \(\mathbf{d}(n)\coloneqq(d_{1}(n),\ldots,d_{K}(n))\) and \(\mathbf{i}(n)\coloneqq(i_{1}(n),\ldots,i_{K}(n))\) denote the vectors of arm delays and the last observed states at time \(n\). We set \(\mathbf{d}(K)=(K,K-1,\ldots,1)\), noting that with reference to \(n=K\), arm \(1\) was last observed \(K\) time instants earlier (i.e., at \(n=0\)), arm \(2\) was last observed \(K-1\) time instants earlier (i.e., at \(n=1\)), and so on. The following rule specifies how \(d_{a}(n)\) and \(i_{a}(n)\) can be updated recursively. When arm \(a^{\prime}\in[K]\) is selected at time \(n\), i.e., \(A_{n}=a^{\prime}\), we have
\[d_{a}(n+1)=\begin{cases}d_{a}(n)+1,&a\neq a^{\prime}\;,\\ 1,&a=a^{\prime}\;,\end{cases}\qquad\text{and}\qquad i_{a}(n+1)=\begin{cases}i_ {a}(n),&a\neq a^{\prime}\;,\\ \bar{X}_{n},&a=a^{\prime}\;.\end{cases} \tag{11}\]
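A minimal sketch of the bookkeeping in (11) is given below; the function name and the list-based representation of \((\mathbf{d},\mathbf{i})\) are illustrative choices.

```python
def update(d, i, a_selected, x_observed):
    """One step of (11): reset the selected arm's delay to 1 and record its
    observed state; every other arm's delay grows by one and its last
    observed state persists."""
    K = len(d)
    d_next = [1 if a == a_selected else d[a] + 1 for a in range(K)]
    i_next = [x_observed if a == a_selected else i[a] for a in range(K)]
    return d_next, i_next

# Example with K = 3 at n = K, so d = (3, 2, 1); select arm 0 and observe state 2.
print(update([3, 2, 1], [0, 1, 0], a_selected=0, x_observed=2))
# ([1, 3, 2], [2, 1, 0])
```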
Note that \(d_{a}(n)\geq 1\) for all \(n\geq K\), with \(d_{a}(n)=1\) if and only if \(A_{n-1}=a\). Also note that \((A_{0:n-1},\bar{X}_{0:n-1})\equiv(A_{0:n-1},\{(\mathbf{d}(s),\mathbf{i}(s))\}_{s=K}^{n})\). It is clear that the process \(\{(\mathbf{d}(n),\mathbf{i}(n))\}_{n=K}^{\infty}\) takes values in a subset \(\mathbb{S}\) of the _countably infinite_ set \(\mathbb{N}^{K}\times\mathcal{S}^{K}\). The subset \(\mathbb{S}\) is formed based on the constraint that at any time \(n\geq K\), exactly one component of \(\mathbf{d}(n)\) is equal to \(1\), and all the other components are strictly greater than \(1\). Given \(\boldsymbol{\theta}\in\Theta^{K}\), we note that
\[\mathbb{P}_{\boldsymbol{\theta}}\Big{(}\mathbf{d}(n+1),\mathbf{i}(n+1)\mid\{( \mathbf{d}(s),\mathbf{i}(s))\}_{s=K}^{n},A_{0:n-1},A_{n}\Big{)}=\mathbb{P}_{ \boldsymbol{\theta}}\Big{(}\mathbf{d}(n+1),\mathbf{i}(n+1)\mid(\mathbf{d}(n), \mathbf{i}(n)),A_{n}\Big{)}\;,\;\forall n\geq K. \tag{12}\]
This indicates that the evolution of the process \(\{(\mathbf{d}(n),\mathbf{i}(n))\}_{n=K}^{\infty}\) is _controlled_ by the sequence \(\{A_{n}\}_{n=K}^{\infty}\) of arm selections. Alternatively, \(\{(\mathbf{d}(n),\mathbf{i}(n))\}_{n=K}^{\infty}\) is a _controlled Markov chain_, with \(\{A_{n}\}_{n=K}^{\infty}\) being the sequence of controls.3 In other words, we are in the setting of a _Markov decision process_ (MDP) whose state space, action space, and the associated transition probabilities can be specified as follows:
Footnote 3: The phrase “controlled Markov chain” is borrowed from [11].
* _State space:_ The state space of the MDP is \(\mathbb{S}\), with \((\mathbf{d}(n),\mathbf{i}(n))\) being the state at time \(n\).
* _Action space_: The action space of the MDP is \([K]\), with action \(A_{n}\) at time \(n\) being \(\mathcal{F}_{n-1}\)-measurable.
* _Transition probabilities:_ The transition probabilities under the instance \(\mathbf{\theta}\) are given by \[\mathbb{P}_{\mathbf{\theta}}(\mathbf{d}(n+1)=\mathbf{d}^{\prime},\mathbf{i}(n+1)=\mathbf{i}^{\prime}\mid\mathbf{d}(n)=\mathbf{d},\mathbf{i}(n)=\mathbf{i},A_{n}=a)=\begin{cases}P_{\theta_{a}}^{d_{a}}(i_{a}^{\prime}\mid i_{a}),&\text{if }d_{a}^{\prime}=1,\ d_{\tilde{a}}^{\prime}=d_{\tilde{a}}+1\ \forall\,\tilde{a}\neq a,\\ &\text{and }i_{\tilde{a}}^{\prime}=i_{\tilde{a}}\ \forall\,\tilde{a}\neq a,\\ 0,&\text{otherwise}.\end{cases}\tag{13}\]
Note that the right-hand side of (13) is independent of \(n\) and, therefore, it is stationary. Subsequently, we define
\[Q_{\mathbf{\theta}}(\mathbf{d}^{\prime},\mathbf{i}^{\prime}\mid \mathbf{d},\mathbf{i},a)\coloneqq\mathbb{P}_{\mathbf{\theta}}(\mathbf{d}(n+1)= \mathbf{d}^{\prime},\mathbf{i}(n+1)=\mathbf{i}^{\prime}\mid\mathbf{d}(n)= \mathbf{d},\mathbf{i}(n)=\mathbf{i},A_{n}=a)\;. \tag{14}\]
Let \(\mathcal{M}_{\mathbf{\theta}}\) denote the MDP with state space \(\mathbb{S}\), action space \([K]\), and transition probabilities given by \(Q_{\mathbf{\theta}}\).
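The transition law (13) is straightforward to simulate, since the selected arm's new observed state is drawn from the \(d_{a}\)-step transition probabilities of its TPM. A small NumPy sketch follows; the function `step` and its argument layout are assumptions for illustration.

```python
import numpy as np

def step(d, i, a, tpms, rng):
    """One transition of the MDP as in (13): arm a's new observed state is
    drawn from row i[a] of tpms[a] raised to the power d[a]; all other
    delays increment and their last observed states persist."""
    P_pow = np.linalg.matrix_power(tpms[a], d[a])     # d_a-step transition law
    x = rng.choice(tpms[a].shape[0], p=P_pow[i[a]])   # new observed state of arm a
    d_next = tuple(1 if b == a else d[b] + 1 for b in range(len(d)))
    i_next = tuple(x if b == a else i[b] for b in range(len(i)))
    return d_next, i_next

rng = np.random.default_rng(0)
P = np.array([[0.7, 0.3], [0.4, 0.6]])                # one illustrative 2-state TPM
print(step((2, 1), (0, 1), a=0, tpms=[P, P], rng=rng))
```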
### Reduction from Countable State Space to Finite State Space
The existing studies on countable-state MDPs (and more generally controlled Markov chains) impose additional regularity conditions on the transition probabilities of the MDP to facilitate tractable analysis. One commonly used regularity condition is that "under every stationary policy for choosing the actions, the MDP is ergodic"; see, for instance, [11, Section II, pp. 58] and [10, Assumption A4]. Imposing a similar regularity condition in our setting in order to make the MDP \(\mathcal{M}_{\mathbf{\theta}}\) ergodic implies restricting the space of all possible policies of the learner significantly. As such, the MDP \(\mathcal{M}_{\mathbf{\theta}}\) is only _communicating_ (a property much weaker than ergodicity [13, Section 8.3.1]) as demonstrated in the result below.
**Lemma 3.1**.: _For every \(\mathbf{\theta}\in\Theta^{K}\), the MDP \(\mathcal{M}_{\mathbf{\theta}}\) is communicating, i.e., for all \((\mathbf{d},\mathbf{i}),(\mathbf{d}^{\prime},\mathbf{i}^{\prime})\in\mathbb{S}\), there exists \(N\geq 1\) (possibly depending on \((\mathbf{d},\mathbf{i})\) and \((\mathbf{d}^{\prime},\mathbf{i}^{\prime})\)) and a policy \(\pi\) such that under the policy \(\pi\),_
\[\mathbb{P}_{\mathbf{\theta}}(\mathbf{d}(n+N)=\mathbf{d}^{\prime}, \mathbf{i}(n+N)=\mathbf{i}^{\prime}\mid\mathbf{d}(n)=\mathbf{d},\mathbf{i}(n) =\mathbf{i})>0\;,\qquad\forall n\geq K\;. \tag{15}\]
As an alternative to imposing the customary regularity conditions, to facilitate further analysis in our work, we reduce the countable state space \(\mathbb{S}\) of the MDP to a finite state space by constraining the delay of each arm to be no more than a finite and positive integer, say \(R\). Under this constraint, once the delay of an arm reaches \(R\) at any given time, this arm is forcefully selected at the next time instant. We refer to this constraint on arm delays as the _\(R\)-max-delay constraint_.
Let \(\mathbb{S}_{R}\subset\mathbb{S}\) denote the subset of all arm delays and last observed states in which the delay of each arm is at most \(R\). Furthermore, let \(\mathbb{S}_{R,a}\subset\mathbb{S}_{R}\) denote the subset of all arm delays and last observed states in which the delay of arm \(a\) is equal to \(R\). The modified transition probabilities for the MDP \(\mathcal{M}_{\mathbf{\theta}}\) under the \(R\)-max-delay constraint are as follows:
* _Case 1:_\((\mathbf{d},\mathbf{i})\notin\bigcup_{a=1}^{K}\;\mathbb{S}_{R,a}\). In this case, the transition probabilities are as in (13).
* _Case 2:_\((\mathbf{d},\mathbf{i})\in\mathbb{S}_{R,a}\) for some \(a\in\mathcal{A}\). In this case, when \(A_{n}=a\), \[\mathbb{P}_{\mathbf{\theta}}(\mathbf{d}(n+1)=\mathbf{d}^{\prime},\mathbf{i}(n+1)=\mathbf{i}^{\prime}\mid\mathbf{d}(n)=\mathbf{d},\mathbf{i}(n)=\mathbf{i},A_{n}=a)=\begin{cases}P_{\theta_{a}}^{R}(i_{a}^{\prime}\mid i_{a}),&\text{if }d_{a}^{\prime}=1,\ d_{\tilde{a}}^{\prime}=d_{\tilde{a}}+1\text{ for all }\tilde{a}\neq a,\\ &\text{and }i_{\tilde{a}}^{\prime}=i_{\tilde{a}}\text{ for all }\tilde{a}\neq a,\\ 0,&\text{otherwise},\end{cases}\tag{16}\] and when \(A_{n}\neq a\), the transition probabilities are undefined. Noting that the right-hand side of (16) is independent of \(n\), we define \[Q_{\mathbf{\theta},R}(\mathbf{d}^{\prime},\mathbf{i}^{\prime}\mid\mathbf{d},\mathbf{i},a)\coloneqq\mathbb{P}_{\mathbf{\theta}}(\mathbf{d}(n+1)=\mathbf{d}^{\prime},\mathbf{i}(n+1)=\mathbf{i}^{\prime}\mid\mathbf{d}(n)=\mathbf{d},\mathbf{i}(n)=\mathbf{i},A_{n}=a)\;.\tag{17}\]
Going forward, we write \(\mathcal{M}_{\mathbf{\theta},R}\) to denote the finite-state MDP with state space \(\mathbb{S}_{R}\), action space \([K]\), and transition probabilities \(Q_{\mathbf{\theta},R}\). The following analogue of Lemma 3.1 shows that despite the finite-state space reduction described above, the MDP \(\mathcal{M}_{\mathbf{\theta},R}\) is still communicating. A proof of this follows along the same lines as the proof of Lemma 3.1 and is omitted for brevity.
**Lemma 3.2**.: _Fix \(R\geq K\). For every \(\mathbf{\theta}\in\Theta^{K}\), the MDP \(\mathcal{M}_{\mathbf{\theta},R}\) is communicating._
### MDP Transition Kernel
It is convenient to view a policy as a (randomized) rule for mapping any given \((\mathbf{d},\mathbf{i})\in\mathbb{S}_{R}\) to an action \(a\in\mathcal{A}\). Given a policy \(\pi=\{\pi(a\mid\mathbf{d},\mathbf{i}):(\mathbf{d},\mathbf{i},a)\in\mathbb{S}_{R }\times[K]\}\) and \(\boldsymbol{\theta}\in\Theta^{K}\), where \(\pi(a\mid\mathbf{d},\mathbf{i})\) is the probability of choosing action \(a\) when the MDP \(\mathcal{M}_{\boldsymbol{\theta},R}\) is in state \((\mathbf{d},\mathbf{i})\), we define \(Q_{\boldsymbol{\theta},\pi}\) as the _transition kernel_ of the MDP \(\mathcal{M}_{\boldsymbol{\theta},R}\) under \(\pi\). Formally,
\[Q_{\boldsymbol{\theta},\pi}(\mathbf{d}^{\prime},\mathbf{i}^{ \prime},a^{\prime}\mid\mathbf{d},\mathbf{i},a)\coloneqq Q_{\boldsymbol{ \theta},R}(\mathbf{d}^{\prime},\mathbf{i}^{\prime}\mid\mathbf{d},\mathbf{i},a )\cdot\pi(a^{\prime}\mid\mathbf{d}^{\prime},\mathbf{i}^{\prime})\;,\quad \forall(\mathbf{d},\mathbf{i},a),(\mathbf{d}^{\prime},\mathbf{i}^{\prime},a^{ \prime})\in\mathbb{S}_{R}\times[K]. \tag{18}\]
For any \(r\in\mathbb{N}\), we write \(Q^{r}_{\boldsymbol{\theta},\pi}\) to denote the \(r\)-fold self-product of \(Q_{\boldsymbol{\theta},\pi}\). Note that (18) represents the probability of transitioning from the state-action \((\mathbf{d},\mathbf{i},a)\) to the state-action \((\mathbf{d}^{\prime},\mathbf{i}^{\prime},a^{\prime})\) in a single time step under \(\pi\) and under the instance \(\boldsymbol{\theta}\). Also, when there is no ambiguity, we write \(Q_{\boldsymbol{\theta},\pi}(\mathbf{d}^{\prime},\mathbf{i}^{\prime}\mid \mathbf{d},\mathbf{i})\) to denote the probability of transitioning from the state \((\mathbf{d},\mathbf{i})\) to the state \((\mathbf{d}^{\prime},\mathbf{i}^{\prime})\) in a single time step under \(\pi\) and under the instance \(\boldsymbol{\theta}\). We mask the dependence of \(Q_{\boldsymbol{\theta},\pi}\) on \(R\) for notational clarity and ask the reader to bear this dependence in mind.
### A Uniform Arm Selection Policy and Ergodicity of the Transition Kernel
For later use, we record here a uniform arm selection policy that, while respecting the \(R\)-max-delay constraint, selects the arms uniformly at random at every time instant. We denote this policy by \(\pi^{\text{unif}}\). Formally, for all \((\mathbf{d},\mathbf{i},a)\in\mathbb{S}_{R}\times[K]\),
\[\pi^{\text{unif}}(a\mid\mathbf{d},\mathbf{i})=\begin{cases}\frac{1}{K},&(\mathbf{d},\mathbf{i})\notin\bigcup_{a^{\prime}=1}^{K}\mathbb{S}_{R,a^{\prime}},\\ 1,&(\mathbf{d},\mathbf{i})\in\mathbb{S}_{R,a},\\ 0,&(\mathbf{d},\mathbf{i})\in\bigcup_{a^{\prime}\neq a}\mathbb{S}_{R,a^{\prime}}.\end{cases}\tag{19}\]
Note that \(\pi^{\text{unif}}\) is a stationary policy, i.e., the probabilities in (19) do not depend on time. The following lemma demonstrates that under \(\pi^{\text{unif}}\), the MDP transition kernel is ergodic for every \(\boldsymbol{\theta}\in\Theta^{K}\).
**Lemma 3.3**.: _Fix \(R\geq K\). The transition kernel \(Q_{\boldsymbol{\theta},\pi^{\text{unif}}}\) is ergodic for all \(\boldsymbol{\theta}\in\Theta^{K}\)._
While the above ergodicity property naturally emerges within the framework of our paper, it is pragmatically _assumed_ to hold in [32]; see, for instance, [32, Assumption 2, p.9]. As we shall see, the ergodicity property of Lemma 3.3 shall play an important role in the analysis of the BAI policy that we propose later in the paper.
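As an illustration of (19), here is a small sketch of how \(\pi^{\text{unif}}\) could be implemented; since exactly one arm is selected per time step, the arm delays are always distinct, so at most one arm can have delay \(R\) at any given time.

```python
import random

def pi_unif(d, R):
    """Uniform arm selection respecting the R-max-delay constraint (19): an
    arm whose delay has reached R is selected forcefully; otherwise an arm
    is drawn uniformly at random."""
    forced = [a for a, delay in enumerate(d) if delay == R]
    if forced:
        return forced[0]  # delays are distinct, so at most one arm is forced
    return random.randrange(len(d))
```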
### State-Action Visitations and Flow Conservation
Given \(n\geq K\) and \((\mathbf{d},\mathbf{i},a)\in\mathbb{S}_{R}\times[K]\), let
\[N(n,\mathbf{d},\mathbf{i},a)\coloneqq\sum_{t=K}^{n}\mathbf{1}_{ \{\mathbf{d}(t)=\mathbf{d},\,\mathbf{i}(t)=\mathbf{i},\,A_{t}=a\}}\;,\quad \text{and}\quad N(n,\mathbf{d},\mathbf{i})\coloneqq\sum_{a=1}^{K}\,N(n, \mathbf{d},\mathbf{i},a)\;, \tag{20}\]
denote, respectively, the number of times the state-action pair \((\mathbf{d},\mathbf{i},a)\) and state \((\mathbf{d},\mathbf{i})\) are visited up to time \(n\). We refer to these as the _state-action visitations_ and _state visitations_ up to time \(n\). The next result shows that the expected values of these visitations satisfy an approximate _flow-conservation_ property.
**Lemma 3.4** (Flow conservation).: _Fix \(R\geq K\), \(\boldsymbol{\theta}\in\Theta^{K}\), and \((\mathbf{d}^{\prime},\mathbf{i}^{\prime},a)\in\mathbb{S}_{R}\times[K]\). Under every policy \(\pi\),_
\[\left|\mathbb{E}_{\boldsymbol{\theta}}[N(n,\mathbf{d}^{\prime}, \mathbf{i}^{\prime})]-\sum_{(\mathbf{d},\mathbf{i})\in\mathbb{S}_{R}}\;\sum_{a= 1}^{K}\mathbb{E}_{\boldsymbol{\theta}}[N(n,\mathbf{d},\mathbf{i},a)]\,Q_{ \boldsymbol{\theta},R}(\mathbf{d}^{\prime},\mathbf{i}^{\prime}|\mathbf{d}, \mathbf{i},a)\right|\leq 1\;,\qquad\forall n\geq K\;. \tag{21}\]
In (21), the first term on the left-hand side may be interpreted as the total _outward flow_ from the state \((\mathbf{d}^{\prime},\mathbf{i}^{\prime})\) at time \(n\), whereas the second term may be interpreted as the total _inward flow_ into state \((\mathbf{d}^{\prime},\mathbf{i}^{\prime})\) at time \(n\). Then, (21) dictates that the outward flow for \((\mathbf{d}^{\prime},\mathbf{i}^{\prime})\) almost matches its inward flow for all times and for all \((\mathbf{d}^{\prime},\mathbf{i}^{\prime})\in\mathbb{S}_{R}\). In this sense, (21) may be regarded as an approximate flow conservation property for the process \(\{(\mathbf{d}(n),\mathbf{i}(n)):n\geq K\}\).
We note here that the \(R\)-max-delay constraint may be expressed in terms of the state-action visitations and the state visitations as follows. For all \((\mathbf{d},\mathbf{i},a)\in\mathbb{S}_{R}\times[K]\) and \(n\geq K\),
\[N(n,\mathbf{d},\mathbf{i},a)=\begin{cases}N(n,\mathbf{d},\mathbf{i }),&(\mathbf{d},\mathbf{i})\in\mathbb{S}_{R,a},\\ 0,&(\mathbf{d},\mathbf{i})\in\bigcup_{a^{\prime}\neq a}\mathbb{S}_{R,a^{\prime}},\\ \text{unaltered},&(\mathbf{d},\mathbf{i})\notin\bigcup_{a^{\prime}=1}^{K}\mathbb{S}_{R,a^{ \prime}}.\end{cases} \tag{22}\]
In (22), the first line on the right-hand side depicts the scenario when \((\mathbf{d},\mathbf{i})\in\mathbb{S}_{R,a}\), i.e., \(d_{a}=R\). In this case, because arm \(a\) is forcefully selected following every occurrence of \((\mathbf{d},\mathbf{i})\), it follows that \(N(n,\mathbf{d},\mathbf{i},a)=N(n,\mathbf{d},\mathbf{i})\) for all \(n\geq K\). On the other hand, if \((\mathbf{d},\mathbf{i})\in\bigcup_{a^{\prime}\neq a}\mathbb{S}_{R,a^{\prime}}\), then there exists \(a^{\prime}\neq a\) such that \(d_{a^{\prime}}=R\), and therefore arm \(a^{\prime}\) is forcefully selected following every occurrence of \((\mathbf{d},\mathbf{i})\), thereby implying that \(N(n,\mathbf{d},\mathbf{i},a)=0\) for all \(n\geq K\). The last line on the right-hand side of (22) depicts the scenario when \(d_{a}<R\) for all \(a\in[K]\).
### \(R\)-max-constrained BAI
Given an error probability threshold \(\delta\in(0,1)\) and \(R\geq K\), based on the definition of \(\Pi(\delta)\) in (10) we define
\[\Pi_{R}(\delta)\coloneqq\{(\pi,\tau,a)\in\Pi(\delta)\ :\ (\pi,\tau,a)\text{ satisfies }R \text{-max-delay constraint}\}\, \tag{23}\]
which is the collection of all policies that stop in finite time almost surely, satisfy an error probability that is no greater than \(\delta\) under _every_ instance \(\boldsymbol{\theta}\in\Theta^{K}\), and respect the \(R\)-max-delay constraint. We anticipate from similar results in the literature that \(\inf_{\pi\in\Pi_{R}(\delta)}\mathbb{E}_{\boldsymbol{\theta}}[\tau_{\pi}]=\Omega(\log(1/\delta))\), where the asymptotics is as \(\delta\downarrow 0\). Our objective in this paper is to precisely characterize the value of
\[\lim_{\delta\downarrow 0}\inf_{\pi\in\Pi_{R}(\delta)}\frac{\mathbb{E}_{\boldsymbol{\theta}}[\tau_{\pi}]}{\log(1/\delta)} \tag{24}\]
in terms of \(\boldsymbol{\theta}\) and \(R\). For the remainder of the paper, we fix \(R\geq K\).
## 4 Lower Bound
In this section, we present an instance-dependent lower bound for (24). Throughout the analysis, given two probability mass functions \(p\) and \(q\) with identical support, we define \(D_{\text{KL}}(p\|q)\) as the Kullback-Leibler (KL) divergence between \(p\) and \(q\). Given \(\boldsymbol{\theta}\in\Theta^{K}\), let \(\Sigma_{R}(\boldsymbol{\theta})\) denote the space of all probability mass functions \(\nu\) satisfying
\[(\text{Flow conservation})\quad\sum_{a=1}^{K}\nu(\mathbf{d}^{ \prime},\mathbf{i}^{\prime},a)=\sum_{(\mathbf{d},\mathbf{i})\in\mathbb{S}_{R} }\sum_{a=1}^{K}\nu(\mathbf{d},\mathbf{i},a)\,Q_{\boldsymbol{\theta},R}( \mathbf{d}^{\prime},\mathbf{i}^{\prime}\mid\mathbf{d},\mathbf{i},a)\,\ \ \ \ \ \forall(\mathbf{d}^{ \prime},\mathbf{i}^{\prime})\in\mathbb{S}_{R}\, \tag{25}\] \[(R\text{-max-delay constraint})\quad\nu(\mathbf{d},\mathbf{i},a) =\sum_{a^{\prime}=1}^{K}\nu(\mathbf{d},\mathbf{i},a^{\prime})\,\ \ \ \forall(\mathbf{d},\mathbf{i})\in\mathbb{S}_{R,a},\ a\in[K]. \tag{26}\]
Let \(Q_{\boldsymbol{\theta},R}(\cdot\mid\mathbf{d},\mathbf{i},a)\coloneqq[Q_{ \boldsymbol{\theta},R}(\mathbf{d}^{\prime},\mathbf{i}^{\prime}\mid\mathbf{d}, \mathbf{i},a)\,:\ (\mathbf{d}^{\prime},\mathbf{i}^{\prime})\in\mathbb{S}_{R}]^{\top}\). The following proposition gives a lower bound on (24).
**Proposition 4.1**.: _For any \(\boldsymbol{\theta}\in\Theta^{K}\),_
\[\liminf_{\delta\downarrow 0}\inf_{\pi\in\Pi_{R}(\delta)}\frac{\mathbb{E}_{\boldsymbol{\theta}}[\tau_{\pi}]}{\log(1/\delta)}\geq\frac{1}{T^{*}_{R}(\boldsymbol{\theta})}\;, \tag{27}\]
_where \(T^{*}_{R}(\boldsymbol{\theta})\) in (27) is given by_
\[T^{*}_{R}(\boldsymbol{\theta})=\sup_{\nu\in\Sigma_{R}(\boldsymbol{\theta})}\inf_{\boldsymbol{\lambda}\in\textsc{Alt}(\boldsymbol{\theta})}\sum_{(\mathbf{d},\mathbf{i})\in\mathbb{S}_{R}}\sum_{a=1}^{K}\nu(\mathbf{d},\mathbf{i},a)\,D_{\text{KL}}(Q_{\boldsymbol{\theta},R}(\cdot\mid\mathbf{d},\mathbf{i},a)\|Q_{\boldsymbol{\lambda},R}(\cdot\mid\mathbf{d},\mathbf{i},a)). \tag{28}\]
_In (28), the KL divergence is computed on the vectorized forms of the distributions \(Q_{\boldsymbol{\theta},R}(\cdot\mid\mathbf{d},\mathbf{i},a)\) and \(Q_{\boldsymbol{\lambda},R}(\cdot\mid\mathbf{d},\mathbf{i},a)\) viewed as conditional probability distributions on \(\mathbb{S}_{R}\), conditioned on \((\mathbf{d},\mathbf{i},a)\)._
Recalling (16), we note that \(Q_{\boldsymbol{\theta},R}(\cdot\mid\mathbf{d},\mathbf{i},a)\) and \(Q_{\boldsymbol{\lambda},R}(\cdot\mid\mathbf{d},\mathbf{i},a)\) are functions of \(P^{d_{a}}_{\theta_{a}}\) and \(P^{d_{a}}_{\lambda_{a}}\), respectively, where \(1\leq d_{a}\leq R\). That is, the KL divergence in (28) is a function of _powers_ of arm TPMs of order up to \(R\). Because of the presence of TPM powers, the inner infimum expression in (28) cannot be simplified any further. This is in contrast to the inner infimum expressions appearing in prior works on BAI dealing with i.i.d. observations from the arms [14] (where the arm delays are inconsequential because of the i.i.d. nature of observations) or rested Markov arms [16] (where \(d_{a}\equiv 1\) for all \(a\)). Furthermore, the supremum in (28) is over the _instance-dependent_ set \(\Sigma_{R}(\boldsymbol{\theta})\), which is in contrast to the prior works on BAI [14, 16] in which the supremum is over the _instance-independent_ simplex of arm distributions. The constant \(T^{*}_{R}(\boldsymbol{\theta})\) measures the "hardness" of problem instance \(\boldsymbol{\theta}\) in the following sense: the closer the arm TPMs are to one another in the KL divergence sense, the smaller the value of \(T^{*}_{R}(\boldsymbol{\theta})\), and therefore the larger the stopping time.
The next result shows that the supremum in (28) can be replaced by a maximum, i.e., the supremum in (28) is attained for some \(\nu\in\Sigma_{R}(\boldsymbol{\theta})\).
**Lemma 4.2**.: _Let_
\[\psi(\nu,\mathbf{\theta})=\inf_{\mathbf{\lambda}\in\textsc{Alt}(\mathbf{\theta})}\;\sum_{(\mathbf{ \mathbf{d}},\mathbf{i})\in\mathbb{S}_{R}}\;\sum_{a=1}^{K}\;\nu(\mathbf{\mathbf{d}}, \mathbf{i},a)\,D_{\textsc{KL}}(Q_{\mathbf{\theta},R}(\cdot\mid\mathbf{\mathbf{d}}, \mathbf{i},a)\|Q_{\mathbf{\lambda},R}(\cdot\mid\mathbf{\mathbf{d}},\mathbf{i},a)), \quad\nu\in\Sigma_{R}(\mathbf{\theta}),\;\mathbf{\theta}\in\Theta^{K}\;. \tag{29}\]
_Then, \(\psi\) is continuous under the topology induced by the sup-norm metric on \(\Sigma_{R}(\mathbf{\theta})\times\mathbb{R}^{K}\). Consequently, the supremum in (28) may be replaced by a maximum. Furthermore, the mapping \(\mathbf{\theta}\mapsto T^{*}_{R}(\mathbf{\theta})\) is continuous, and the set-valued mapping \(\mathbf{\theta}\mapsto\mathcal{W}^{*}(\mathbf{\theta})\), with_
\[\mathcal{W}^{*}(\mathbf{\theta})\coloneqq\{\nu\in\Sigma_{R}(\mathbf{\theta}):\psi( \nu,\mathbf{\theta})=T^{*}_{R}(\mathbf{\theta})\}\;, \tag{30}\]
_is upper-hemicontinuous and compact-valued._
From (8), it is evident that \(\textsc{Alt}(\mathbf{\theta})\) is non-compact for each \(\mathbf{\theta}\in\Theta^{K}\). To establish the continuity of \(\psi\), we rely on a version of Berge's maximum theorem [17, Theorem 1.2] for non-compact sets. Our proof of Lemma 4.2 is an adaptation of the proof of [19, Theorem 4], taking into account the dependence of \(\Sigma_{R}(\mathbf{\theta})\) on the problem instance \(\mathbf{\theta}\). In [19], the counterpart of \(\Sigma_{R}(\mathbf{\theta})\) is the simplex of all probability distributions on the arms--an instance-independent set.
**Remark 2**.: _Although we keep \(R\) fixed throughout the paper, we note here the following monotonicity property: \(T^{*}_{R}(\mathbf{\theta})\leq T^{*}_{R+1}(\mathbf{\theta})\) for all \(R\). Indeed, writing \(\psi\) and \(\mathcal{W}^{*}\) more explicitly as \(\psi_{R}\) and \(\mathcal{W}^{*}_{R}\) to emphasize their dependence on \(R\), it is straightforward to see that (a) the larger the value of \(R\), the larger the cardinality of \(\mathbb{S}_{R}\), and (b) for any \(\nu\in\mathcal{W}^{*}_{R}(\mathbf{\theta})\), defining \(\tilde{\nu}\) via \(\tilde{\nu}(\mathbf{\mathbf{d}},\mathbf{i},a)=\nu(\mathbf{\mathbf{d}},\mathbf{i},a)\, \mathbf{1}_{\{(\mathbf{d},\mathbf{i})\in\mathbb{S}_{R}\}}\) for all \((\mathbf{\mathbf{d}},\mathbf{i},a)\in\mathbb{S}_{R+1}\times[K]\), we have \(\tilde{\nu}\in\Sigma_{R+1}(\mathbf{\theta})\). Therefore, it follows that_
\[T^{*}_{R+1}(\mathbf{\theta})\geq\psi_{R+1}(\tilde{\nu},\mathbf{\theta})=\psi_{R}(\nu,\mathbf{\theta})=T^{*}_{R}(\mathbf{\theta})\;,\qquad\forall R\in\mathbb{N}\;. \tag{31}\]
_Hence, \(\lim_{R\to\infty}T^{*}_{R}(\mathbf{\theta})\) exists. See Section 7 for further discussions._
From (27) and (29), it is evident that to achieve the lower bound in (27), it is critical to control the values of the empirical state-action visitation proportions \(\{N(n,\mathbf{\mathbf{d}},\mathbf{i},a)/n:(\mathbf{\mathbf{d}},\mathbf{i},a)\in \mathbb{S}_{R}\times[K]\}\), and ensure that these long-term fractions converge to the set \(\mathcal{W}^{\star}(\mathbf{\theta})\) under the instance \(\mathbf{\theta}\). In particular, merely ensuring that the empirical arm selection proportions converge to their respective optimal proportions given by the lower bound _does not_ suffice for achievability.
**Remark 3**.: _The single-parameter exponential family of TPMs outlined in Section 2 serves a specific and critical purpose in our paper. Given unknown TPMs \(\{P_{k}:k\in[K]\}\) with no structural constraints on their entries as in (2), suppose that \(T^{*}_{R}(P_{1},\ldots,P_{K})\) (the analogue of \(T^{*}_{R}(\boldsymbol{\theta})\) in the absence of the parametric model) is the constant appearing in the corresponding lower bound expression. To achieve this lower bound, as outlined above, it is critical to ensure that the long-term state-action visitation proportions converge to \(\mathcal{W}^{\star}(P_{1},\ldots,P_{K})\) (the analogue of \(\mathcal{W}^{\star}(\boldsymbol{\theta})\) in (30) in the absence of the parametric model). However, the TPMs \(\{P_{k}:k\in[K]\}\) are not known beforehand and must be estimated along the way using arm observations characterized by delays; this is a fundamentally challenging task. It is noteworthy that the estimated matrices are not guaranteed to be ergodic. Furthermore, even after the TPM estimates are obtained, it is the estimates of the arm means that ultimately enable identifying the best arm. Consequently, a critical need arises for a continual alternation between estimating arm means and estimating the arm TPMs. This alternation is facilitated by the adoption of the parametric model in our study, by virtue of the one-to-one correspondence between the set of arm means and the set of parameters (see item 5 under Lemma 2.2). A similar alternation is facilitated by the parametric models adopted in [14, 16, 21]. Estimating \(\boldsymbol{\theta}=[\theta_{1},\ldots,\theta_{K}]^{\top}\) allows us to estimate the TPMs and the arm means simultaneously._
## 5 Achievability: A Policy for Best Arm Identification
In this section, we propose a policy for BAI that works with the _set_ of optimal solutions (30) at each time, and ensures that the long-term state-action visitation proportions converge to the "correct" set of optimal proportions.
### Parameter Estimates
We start by forming estimates for the unknown parameters of the arms. Noting the one-to-one correspondence between \(\theta\in\Theta\) and \(\eta_{\theta}\in(m_{f},M_{f})\) from Lemma 2.2, it suffices to estimate \(\eta_{\theta_{a}}\) for each \(a\in[K]\). For all \(n\) and \(a\in[K]\), let \(N_{a}(n)=\sum_{(\mathbf{d},\mathbf{i})\in\mathbb{S}_{R}}N(n,\mathbf{d},\mathbf{i},a)\) denote the number of times arm \(a\) is selected up to time \(n\), where \(N(n,\mathbf{d},\mathbf{i},a)\) is as defined in (20). Subsequently, our estimates \(\widehat{\boldsymbol{\eta}}(n)\coloneqq[\widehat{\eta}_{1}(n),\ldots,\widehat{\eta}_{K}(n)]^{\top}\) at time \(n\) are given by
\[\widehat{\eta}_{a}(n)=\begin{cases}0,&N_{a}(n)=0\;,\\ \frac{1}{N_{a}(n)}\sum_{t=0}^{n}\mathbf{1}_{\{A_{t}=a\}}\,f(\bar{X}_{t}),&N_{a}(n)>0\;.\end{cases} \tag{32}\]
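A minimal sketch of the estimator (32) follows, with bookkeeping of our own design; \(f\) is the function mapping arm states to reals that defines the arm means.

```python
import numpy as np

class MeanEstimator:
    """Running estimates eta_hat_a(n) of Eq. (32)."""

    def __init__(self, K, f):
        self.f = f                            # maps an observed arm state to a real
        self.counts = np.zeros(K, dtype=int)  # N_a(n): pulls of each arm so far
        self.sums = np.zeros(K)               # running sums of f over pulls of a

    def update(self, arm, obs):
        self.counts[arm] += 1
        self.sums[arm] += self.f(obs)

    def eta_hat(self):
        out = np.zeros_like(self.sums)        # 0 for arms never pulled, as in (32)
        pulled = self.counts > 0
        out[pulled] = self.sums[pulled] / self.counts[pulled]
        return out
```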
The next step in the design of our policy, a crucial step, is the construction of an arms selection rule under which almost surely, (a) the above estimates converge to their true values, and (b) the state-action visitation proportions inherently converge to the correct set of optimal proportions.
### Arms Selection Rule
Recall the uniform arm selection policy \(\pi^{\text{unif}}\) defined in (19). From Lemma 3.3, we know that the controlled Markov chain \(\{(\mathbf{d}(n),\mathbf{i}(n))\}_{n=K}^{\infty}\) is, in fact, an ergodic Markov chain under the policy \(\pi^{\text{unif}}\). Let \(\mu_{\boldsymbol{\theta}}^{\text{unif}}=[\mu_{\boldsymbol{\theta}}^{\text{unif}}(\mathbf{d},\mathbf{i}):(\mathbf{d},\mathbf{i})\in\mathbb{S}_{R}]^{\top}\) denote the corresponding stationary distribution when the underlying instance is \(\boldsymbol{\theta}\). Let
\[\nu_{\boldsymbol{\theta}}^{\text{unif}}(\mathbf{d},\mathbf{i},a)\coloneqq\mu_{ \boldsymbol{\theta}}^{\text{unif}}(\mathbf{d},\mathbf{i})\cdot\pi^{\text{unif} }(a|\mathbf{d},\mathbf{i})\;,\quad\forall(\mathbf{d},\mathbf{i},a)\in\mathbb{ S}_{R}\times[K]\;. \tag{33}\]
Fix \(\eta\in(0,1)\). Let \(\widehat{\theta}_{a}(n)=\dot{A}^{-1}(\widehat{\eta}_{a}(n))\) for each \(a\in[K]\), and let \(\widehat{\boldsymbol{\theta}}(n)=[\widehat{\theta}_{1}(n),\ldots,\widehat{\theta}_{K}(n)]^{\top}\) denote the vector of estimated arm parameters at time \(n\). Choose an arbitrary \(\nu_{n}^{\star}\in\mathcal{W}^{\star}(\widehat{\boldsymbol{\theta}}(n))\), and let
\[\pi_{\widehat{\boldsymbol{\theta}}(n)}^{\eta}(a\mid\mathbf{d},\mathbf{i})\coloneqq\frac{\eta\,\nu_{\widehat{\boldsymbol{\theta}}(n)}^{\text{unif}}(\mathbf{d},\mathbf{i},a)+(1-\eta)\,\nu_{n}^{\star}(\mathbf{d},\mathbf{i},a)}{\eta\,\mu_{\widehat{\boldsymbol{\theta}}(n)}^{\text{unif}}(\mathbf{d},\mathbf{i})+(1-\eta)\,\sum_{a^{\prime}=1}^{K}\nu_{n}^{\star}(\mathbf{d},\mathbf{i},a^{\prime})}\;,\quad(\mathbf{d},\mathbf{i},a)\in\mathbb{S}_{R}\times[K]\;. \tag{34}\]
Let \(\{\varepsilon_{n}\}_{n=1}^{\infty}\) be a sequence such that \(\varepsilon_{n}>0\) for all \(n\) and \(\varepsilon_{n}\to 0\) as \(n\to\infty\). Let
\[\pi_{n}=\varepsilon_{n}\,\pi^{\text{unif}}+(1-\varepsilon_{n})\,\pi_{\widehat{\boldsymbol{\theta}}(n-1)}^{\eta}\;,\quad\forall n\geq K\;. \tag{35}\]
Then, for all \(n\geq K\), our arms selection rule is as follows:
\[\Pr(A_{n}=a|A_{0:n-1},\bar{X}_{0:n-1})=\pi_{n}(a|\mathbf{d}(n),\mathbf{i}(n)) \;,\quad a\in[K]\;. \tag{36}\]
Note that (36) defines a _conditional_ probability distribution on the arms, conditional on the arm delays and last observed states. Our recipe for selecting the arms, based on using a _mixture_ with uniform policy as in (35), is inspired by [6, 32] and plays a critical role in proving that the MDP \(\mathcal{M}_{\boldsymbol{\theta},R}\) has "near-ergodicity" properties under the rule in (36) for every \(\boldsymbol{\theta}\in\Theta^{K}\). As we shall shortly see, the latter near-ergodicity property hinges on the fact that \(\pi_{n}(a|\mathbf{d},\mathbf{i})\geq\varepsilon_{n}\,\pi^{\text{unif}}(a| \mathbf{d},\mathbf{i})=\varepsilon_{n}/K>0\) whenever the arm delays are all strictly smaller than \(R\).
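Operationally, (35)-(36) amount to sampling from an \(\varepsilon_{n}\)-mixture. A sketch is given below, reusing the `pi_unif` sketch above and treating the tracking component \(\pi^{\eta}_{\widehat{\boldsymbol{\theta}}(n-1)}\) evaluated at the current state as given (computing it requires an optimizer \(\nu_{n}^{\star}\) of (28)); the \(\varepsilon_{n}\) schedule anticipates Lemma 6.1.

```python
import numpy as np

def sample_arm(n, d, R, pi_eta_probs, S_R, rng):
    """One draw from the mixture rule in (35)-(36); a sketch.

    `pi_eta_probs` stands for pi^eta_{theta_hat(n-1)}(. | d(n), i(n)),
    which we treat as precomputed.  `S_R` is the number of states |S_R|.
    """
    eps_n = n ** (-1.0 / (2.0 * (1.0 + S_R)))  # schedule from Lemma 6.1
    probs = eps_n * pi_unif(d, R) + (1.0 - eps_n) * np.asarray(pi_eta_probs)
    probs = probs / probs.sum()                # guard against round-off
    return rng.choice(len(probs), p=probs)

# Example: rng = np.random.default_rng(0); a = sample_arm(10, d, R, pi_eta, S_R, rng)
```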
**Remark 4** (\(\eta\)-mixture).: _It is unclear whether \(\sum_{a=1}^{K}\nu_{n}^{\star}(\mathbf{d},\mathbf{i},a)>0\) for all \((\mathbf{d},\mathbf{i})\in\mathbb{S}_{R}\). If the preceding property indeed holds, we may simply use \(\pi_{\widehat{\boldsymbol{\theta}}(n)}^{\eta}(a\mid\mathbf{d},\mathbf{i})=\nu_{n}^{\star}(\mathbf{d},\mathbf{i},a)/\sum_{a^{\prime}=1}^{K}\nu_{n}^{\star}(\mathbf{d},\mathbf{i},a^{\prime})\). Recognizing that this property may fail to hold, we design an "\(\eta\)-mixture" of \(\nu_{n}^{\star}\) with \(\nu_{\widehat{\boldsymbol{\theta}}(n)}^{\text{unif}}\), and normalize this mixture to arrive at (34). Observe that the denominator of the right-hand side of (34) is strictly positive for every \((\mathbf{d},\mathbf{i})\in\mathbb{S}_{R}\), and hence (34) is well defined._
### Test Statistic, Stopping Rule, and Recommendation Rule
Let \(S_{R}=|\mathbb{S}_{R}|\) denote the cardinality of the set \(\mathbb{S}_{R}\). Recall from (32) that \(\widehat{\boldsymbol{\eta}}(n)=(\widehat{\eta}_{a}(n):a\in[K])\) denotes the estimates of the arm means at time \(n\). Let \(\widehat{\theta}_{a}(n)=\dot{A}^{-1}(\widehat{\eta}_{a}(n))\) for each \(a\in[K]\), and let \(\widehat{\boldsymbol{\theta}}(n)=(\widehat{\theta}_{a}(n):a\in[K])\). For all \(n\geq K\) and \((\mathbf{d},\mathbf{i},a)\in\mathbb{S}_{R}\times[K]\), let
\[\widehat{Q}_{n}(\mathbf{d}^{\prime},\mathbf{i}^{\prime}|\mathbf{d},\mathbf{i},a )\coloneqq\begin{cases}\frac{1}{N(n,\mathbf{d},\mathbf{i},a)}\sum_{t=K}^{n} \mathds{1}_{\{(\mathbf{d}(t),\mathbf{i}(t))=(\mathbf{d},\mathbf{i}),\,A_{t}=a,\,(\mathbf{d}(t+1),\mathbf{i}(t+1))=(\mathbf{d}^{\prime},\mathbf{i}^{\prime}) \}},&N(n,\mathbf{d},\mathbf{i},a)>0\\ \frac{1}{S_{R}},&N(n,\mathbf{d},\mathbf{i},a)=0\;.\end{cases} \tag{37}\]
Note that \(\sum_{(\mathbf{d}^{\prime},\mathbf{i}^{\prime})\in\mathbb{S}_{R}}\widehat{Q}_{ n}(\mathbf{d}^{\prime},\mathbf{i}^{\prime}|\mathbf{d},\mathbf{i},a)=1\), and hence (37) defines a probability mass function on \(\mathbb{S}_{R}\). Our test statistic at time \(n\), denoted by \(Z(n)\), is then given by
\[Z(n)\coloneqq\inf_{\boldsymbol{\lambda}\in\textsc{Alt}(\widehat{\boldsymbol{\theta}}(n))}\sum_{(\mathbf{d},\mathbf{i})\in\mathbb{S}_{R}}\,\sum_{a=1}^{K}N(n,\mathbf{d},\mathbf{i},a)\,D_{\text{KL}}(\widehat{Q}_{n}(\cdot\mid\mathbf{d},\mathbf{i},a)\|Q_{\boldsymbol{\lambda},R}(\cdot\mid\mathbf{d},\mathbf{i},a))\;, \tag{38}\]
where \(\widehat{Q}_{n}\) is as defined in (37). Furthermore, let
\[\zeta(n,\delta)\coloneqq\log\left(\frac{1}{\delta}\right)+(S_{R}-1)\,\sum_{( \mathbf{d},\mathbf{i})\in\mathbb{S}_{R}}\,\sum_{a=1}^{K}\log\left(e\left[1+ \frac{N(n,\mathbf{d},\mathbf{i},a)}{S_{R}-1}\right]\right)\;. \tag{39}\]
Combining the test statistic in (38) and the threshold in (39), we define our stopping rule as follows:
\[\tau\coloneqq\inf\{n\geq K:Z(n)\geq\zeta(n,\delta)\}\;. \tag{40}\]
At the stopping time, we output the arm with the largest empirical mean value, i.e., \(\arg\max_{a\in[K]}\widehat{\eta}_{a}(\tau)\).
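The threshold (39) and the stopping check (40) are straightforward to transcribe; a sketch is given below, assuming the visit counts are kept in an array. The statistic \(Z(n)\) itself requires a numerical infimum over \(\textsc{Alt}(\widehat{\boldsymbol{\theta}}(n))\) (see Remark 5 below), which we treat as a black box here.

```python
import numpy as np

def zeta(visit_counts, delta, S_R):
    """Threshold of Eq. (39); `visit_counts` holds N(n, d, i, a) for every
    state-action pair, and S_R = |S_R|."""
    c = np.asarray(visit_counts, dtype=float)
    return np.log(1.0 / delta) + (S_R - 1.0) * np.sum(
        np.log(np.e * (1.0 + c / (S_R - 1.0))))

def should_stop(Z_n, visit_counts, delta, S_R):
    """Stopping rule of Eq. (40): stop once Z(n) crosses the threshold.
    Z_n must be computed externally by minimizing (38) over Alt(theta_hat)."""
    return Z_n >= zeta(visit_counts, delta, S_R)
```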
In summary, our policy, which we call _restless D-tracking_ or Rstl-Dtrack in short, takes the following parameters as its inputs: \(R\in\mathbb{N}\), \(K\in\mathbb{N}\), \(\eta\in(0,1)\), and \(\delta\in(0,1)\). To start, the policy selects arm \(1\) at time \(n=0\), arm \(2\) at time \(n=1\), and so on until arm \(K\) at time \(n=K-1\). For all \(n\geq K\), it checks for the validity of the condition \(Z(n)\geq\zeta(n,\delta)\) (with \(Z(n)\) as defined in (38) and \(\zeta(n,\delta)\) as in (39)). If this condition holds, the policy stops and outputs \(\arg\max_{a}\widehat{\eta}_{a}(n)\). If \(Z(n)<\zeta(n,\delta)\), then the policy continues and selects arm \(A_{n+1}\) according to the rule in (36) while respecting the \(R\)-max-delay constraint. We write \(\pi^{\textsc{Rstl-Dtrack}}\) to symbolically denote the policy Rstl-Dtrack. The pseudocode for \(\pi^{\textsc{Rstl-Dtrack}}\) is given in Algorithm 1. In Section 7 later in the paper, we make some remarks on the computational aspects of our policy (specifically on evaluating the infimum expression in (38) at every time step).
```
Require: \(K\in\mathbb{N}\): number of arms. \(R\in\mathbb{N}\): maximum tolerable arm delay. \(\eta\in(0,1)\): mixture parameter. \(\delta\in(0,1)\): confidence level.
Ensure: \(\widehat{a}\): the best arm.
1: Initialise: \(n=0\), \(N_{a}(n)=0\), \(\widehat{\eta}_{a}(n)=0\) for all \(a\in[K]\), \(N(n,\mathbf{d},\mathbf{i},a)=0\) for all \((\mathbf{d},\mathbf{i},a)\in\mathbb{S}_{R}\times[K]\), \(\textsc{stop}=0\).
2:for \(n=0,1,\ldots,K-1\) do
3: Select arm \(A_{n}=n+1\).
4:endfor
5:whilestop\(==0\)do
6: Update \((\mathbf{d}(n),\mathbf{i}(n))\). Update \(\widehat{\eta}_{a}(n)\) for each \(a\in[K]\).
7: Set \(\widehat{\theta}_{a}(n)=\dot{A}^{-1}(\widehat{\eta}_{a}(n))\) for each \(a\in[K]\). Set \(\widehat{\boldsymbol{\theta}}(n)=(\widehat{\theta}_{1}(n),\ldots,\widehat{\theta}_{K}(n))\).
8: Evaluate \(Z(n)\) according to (38).
9:if\(Z(n)\geq\zeta(n,\delta)\)then
10:stop\(=1\).
11:\(\widehat{a}=\arg\max_{a}\widehat{\eta}_{a}(n)\). Resolve ties at random.
12:else
13: Select \(A_{n}\sim\pi_{n}(\cdot|\mathbf{d}(n),\mathbf{i}(n))\), where \(\pi_{n}\) is as defined in (35).
14:\(n\gets n+1\).
15:endif
16:endwhile
17:return\(\widehat{a}\).
```
**Algorithm 1** D-Tracking for BAI in Restless Multi-Armed Bandits (Rstl-Dtrack)
**Remark 5**.: _The definition of \(Z(n)\) in (38) resembles (29) albeit with (a) \(\boldsymbol{\theta}\) replaced with \(\widehat{\boldsymbol{\theta}}(n)\), and (b) \(Q_{\boldsymbol{\theta},R}\) replaced with \(\widehat{Q}_{n}\). In the settings of the prior works [14, 16, 21], (38) specializes to the classical generalized likelihood ratio (GLR) test statistic having simple closed-form expressions. However, (38) does not admit a simple closed-form expression because of the presence of arm delays of order \(2\) or higher (which are absent from [14, 16, 21]). In [32], a simplification to (38) is proposed by relaxing the infimum to a larger set than \(\textsc{Alt}(\widehat{\boldsymbol{\theta}}(n))\) by leveraging the specific structure of rewards therein. A similar simplification is not possible in our setting because the notion of rewards is absent in our work. See Section 7 for a further discussion._
## 6 Theoretical Guarantees
In this section, we provide theoretical guarantees for the proposed Rstl-Dtracking policy. Let \(\mathbb{V}\) denote the set of all _valid_\((\mathbf{d},\mathbf{i},a)\) tuples, i.e., those tuples for which the selection of arm \(a\) in state \((\mathbf{d},\mathbf{i})\) is permissible under the \(R\)-max-delay constraint. That is, for any \((\mathbf{d},\mathbf{i},a)\notin\mathbb{V}\), we have \(N(n,\mathbf{d},\mathbf{i},a)=0\) almost surely for all \(n\geq K\).
The first result below shows that under the proposed arms selection rule in (36), every valid \((\mathbf{d},\mathbf{i},a)\) tuple is visited infinitely often and at a rate of \(\Omega(n^{1/4})\) with high probability.
**Lemma 6.1**.: _Fix \(\boldsymbol{\theta}\in\Theta^{K}\). Let \(S_{R}=|\mathbb{S}_{R}|\)._
1. _The proposed arms selection rule in (_36_) with_ \(\varepsilon_{n}=n^{-\frac{1}{2(1+S_{R})}}\) _satisfies_ \[\mathbb{P}_{\boldsymbol{\theta}}\bigg{(}\forall(\mathbf{d},\mathbf{i},a)\in \mathbb{V},\quad\lim_{n\to\infty}N(n,\mathbf{d},\mathbf{i},a)=+\infty\bigg{)}=1\;.\] (41)
2. _Under the above arms selection rule, for every_ \(\alpha\in(0,1)\)_,_ \[\mathbb{P}_{\boldsymbol{\theta}}\left(\forall(\mathbf{d},\mathbf{i},a)\in \mathbb{V},\;\forall n\geq K,\quad N(n,\mathbf{d},\mathbf{i},a)\geq\left( \frac{n}{\lambda_{\alpha}(\boldsymbol{\theta})}\right)^{1/4}-1\right)\geq 1- \alpha\;,\] (42) _where_ \(\lambda_{\alpha}(\boldsymbol{\theta})=\frac{(1+S_{R})^{2}}{\sigma_{\boldsymbol {\theta}}^{2}}\log^{2}(1+\frac{K\,S_{R}}{\alpha})\)_. Here,_ \(\sigma_{\boldsymbol{\theta}}>0\) _is a constant that depends only on_ \(\boldsymbol{\theta}\)_._
**Remark 6** (Choice of \(\varepsilon_{n}\)).: _Our proof of Lemma 6.1 is an adaptation of a similar proof in [32]. Notice that the "\(\varepsilon_{n}\)-mixture" rule in (35) satisfies the following decomposition property for the transition kernels that facilitates analysis:_
\[Q_{\boldsymbol{\theta},\pi_{n}}=\varepsilon_{n}\,Q_{\boldsymbol{\theta},\pi^{\text{unif}}}+(1-\varepsilon_{n})\,Q_{\boldsymbol{\theta},\pi^{\eta}_{\widehat{\boldsymbol{\theta}}(n-1)}}. \tag{43}\]
_Choosing \(\varepsilon_{n}=n^{-\beta}\) where \(\beta<\frac{1}{1+S_{R}}\) leads to a convenient closed-form expression for \(\lambda_{\alpha}(\boldsymbol{\theta})\). We use \(\beta=\frac{1}{2(1+S_{R})}\), and hence \(\varepsilon_{n}=n^{-\frac{1}{2(1+S_{R})}}\). For additional details, we refer the reader to the proof of Lemma 6.1 in the appendix._
An immediate consequence of Lemma 6.1 is that under the proposed arms selection rule in (36), each arm \(a\in[K]\) is explored at a rate \(\Omega(n^{1/4})\) with high probability (w.h.p.), thereby ensuring that w.h.p., we have \(\widehat{\boldsymbol{\eta}}(n)\to\boldsymbol{\eta}\). This is formalized in the following lemma.
**Lemma 6.2**.: _Given \(\xi>0\) and a positive integer \(N\geq K\), let_
\[C_{N}^{2}(\xi)\coloneqq\bigcap_{n=N^{5}}^{N^{6}}\left\{\|\widehat{ \boldsymbol{\eta}}(n)-\boldsymbol{\eta}\|_{2}\leq\xi\right\}\,. \tag{44}\]
_Consider the non-stopping version of policy \(\pi^{\textsc{Rstl-Dtrack}}\) (with the same parameters as those of \(\pi^{\textsc{Rstl-Dtrack}}\)). Under this policy, for all \(\xi>0\) and \(N\geq K\),_
\[\mathbb{P}_{\boldsymbol{\theta}}\left(\overline{C_{N}^{2}(\xi)}\right)\leq \frac{1}{N^{2}}+\frac{2^{K/2+2}\,K^{K/4}}{\sigma_{\boldsymbol{\theta}}^{K/4}} \,N^{9K/4+7}\,\exp\left(-\frac{\sqrt{\sigma_{\boldsymbol{\theta}}}\,\xi^{2}\,N ^{1/4}}{8\,\sqrt{K}\,(2\,M_{f})}\right)\;, \tag{45}\]
_where \(\sigma_{\boldsymbol{\theta}}\) is the constant from Lemma 6.1, and \(M_{f}=\max_{i\in\mathcal{S}}f(i)\)._
Combining Lemma 6.2 with the upper-hemicontinuity property of the mapping \(\boldsymbol{\lambda}\mapsto\mathcal{W}^{\star}(\boldsymbol{\lambda})\) from Lemma 4.2, we establish a concentration result for the empirical state-action visitation proportions under \(C_{N}^{2}(\xi)\).
**Lemma 6.3**.: _Fix \(\boldsymbol{\theta}\in\Theta^{K}\), \(\nu\in\mathcal{W}^{\star}(\boldsymbol{\theta})\), and \(\eta\in(0,1)\). Let \(\omega_{\boldsymbol{\theta},\nu}^{\star}=\eta\,\nu_{\boldsymbol{\theta}}^{\text{unif}}+(1-\eta)\,\nu\). Consider the non-stopping version of policy \(\pi^{\textsc{Rstl-Dtrack}}\) (with the same parameters as those of \(\pi^{\textsc{Rstl-Dtrack}}\)). Under this policy, for all \(\xi>0\), there exists a time \(N_{\xi}>0\) such that for all \(N\geq N_{\xi}\) and all \(n\geq\sqrt{N}+1\),_
\[\mathbb{P}_{\boldsymbol{\theta}}\left(\exists(\mathbf{d},\mathbf{i},a):\left| \frac{N(n,\mathbf{d},\mathbf{i},a)}{n-K+1}-\omega_{\boldsymbol{\theta},\nu}^{ \star}(\mathbf{d},\mathbf{i},a)\right|>K_{\xi}(\boldsymbol{\theta},\nu)\,\xi \left|C_{N}^{2}(\xi)\right)=O\bigg{(}\exp\left(-n\xi^{2}\right)\bigg{)}\;, \tag{46}\]
_where \(K_{\xi}(\boldsymbol{\theta},\nu)\) is a constant that depends on \(\xi\), \(\boldsymbol{\theta}\) and \(\nu\), and satisfies_
\[\limsup_{\xi\downarrow 0}K_{\xi}(\boldsymbol{\theta},\nu)<+\infty\quad\forall\nu \in\mathcal{W}^{\star}(\boldsymbol{\theta}),\,\boldsymbol{\theta}\in\Theta^{K}\;. \tag{47}\]
Lemma 6.3 is one of the important results of this paper. It establishes that under any instance \(\boldsymbol{\theta}\in\Theta^{K}\), the empirical state-action visitation proportions converge w.h.p. to \(\omega_{\boldsymbol{\theta},\nu}^{\star}\), for every \(\nu\in\mathcal{W}^{\star}(\boldsymbol{\theta})\). Disregarding the scaling factor \(\eta\) in the expression for \(\omega_{\boldsymbol{\theta},\nu}^{\star}\), the above result implies that under the instance \(\boldsymbol{\theta}\), the empirical state-action visitation proportions converge to the desired set \(\mathcal{W}^{\star}(\boldsymbol{\theta})\). This, as we shall soon see, is pivotal to establishing the asymptotic optimality of the policy Rstl-Dtrack. In the proof, we show that under the policy Rstl-Dtrack, the MDP \(\mathcal{M}_{\boldsymbol{\theta},R}\) possesses a "near-ergodicity" property in the following sense: for any fixed \(n\), if \(\pi=\pi_{n}\) is used for selecting the arms at all times, then by virtue of Lemma 3.3, the corresponding transition kernel \(Q_{\boldsymbol{\theta},\pi}\) is ergodic; let its stationary distribution under the instance \(\boldsymbol{\theta}\) be \(\omega_{\boldsymbol{\theta},n}^{\star}\). We find a bound on \(\|\omega_{\boldsymbol{\theta},n}^{\star}-\omega_{\boldsymbol{\theta},\nu}^{\star}\|_{\infty}\) to arrive at the exponential bound in (46).
The next result below demonstrates that any arbitrary arms selection rule, in conjunction with the stopping rule in (40) and the threshold in (39), satisfies the desired error probability constraint.
**Proposition 6.4**.: _Fix \(\mathbf{\theta}\in\Theta^{K}\). For all \(\delta\in(0,1)\),_
\[\mathbb{P}_{\boldsymbol{\theta}}\left(\exists n\geq K:\;\sum_{(\mathbf{d},\mathbf{i})\in\mathbb{S}_{R}}\sum_{a=1}^{K}\;N(n,\mathbf{d},\mathbf{i},a)\,D_{\text{KL}}(\widehat{Q}_{n}(\cdot\mid\mathbf{d},\mathbf{i},a)\|Q_{\boldsymbol{\theta},R}(\cdot\mid\mathbf{d},\mathbf{i},a))>\zeta(n,\delta)\right)\leq\delta\;. \tag{48}\]
_Consequently, for any algorithm with an arbitrary sampling rule, stopping time \(\tau\) given by (40) (with the threshold as in (39)), and best arm recommendation \(\widehat{a}=\arg\max_{a}\widehat{\eta}_{a}(\tau)\), we have_
\[\mathbb{P}_{\mathbf{\theta}}(\tau<\infty,\;\eta_{\widehat{a}}<\eta_{a^{*}(\mathbf{ \theta})})\leq\delta\;. \tag{49}\]
In particular, we note that (49) holds for the proposed arms selection rule in (36). The next result below shows that the stopping time of policy Rstl-Dtrack is finite almost surely, and satisfies an almost-sure asymptotic upper bound that nearly matches with the lower bound in (27).
**Proposition 6.5**.: _Fix \(\eta\in(0,1)\). For all \(\delta\in(0,1)\), the stopping time \(\tau\) of policy \(\pi^{\text{Rstl-Dtrack}}\) is finite almost surely, and hence \(\pi^{\text{Rstl-Dtrack}}\in\Pi(\delta)\). Furthermore,_
\[\mathbb{P}_{\mathbf{\theta}}\left(\limsup_{\delta\downarrow 0}\frac{\tau}{ \log(1/\delta)}\leq\frac{1}{\eta\,T^{*}_{\text{unif}}(\mathbf{\theta})+(1-\eta)\,T ^{*}_{R}(\mathbf{\theta})}\right)=1\;, \tag{50}\]
_where \(T^{*}_{\text{unif}}(\mathbf{\theta})\) in (50) is defined as_
\[T^{*}_{\text{unif}}(\boldsymbol{\theta})\coloneqq\inf_{\boldsymbol{\lambda}\in\textsc{Alt}(\boldsymbol{\theta})}\sum_{(\mathbf{d},\mathbf{i})\in\mathbb{S}_{R}}\sum_{a=1}^{K}\nu^{\text{unif}}_{\boldsymbol{\theta}}(\mathbf{d},\mathbf{i},a)\,D_{\text{KL}}(Q_{\boldsymbol{\theta},R}(\cdot\mid\mathbf{d},\mathbf{i},a)\|Q_{\boldsymbol{\lambda},R}(\cdot\mid\mathbf{d},\mathbf{i},a))\;. \tag{51}\]
The main result of this section, an upper bound on the growth rate of the expected stopping time of our algorithm, is presented next.
**Proposition 6.6**.: _Fix \(\eta\in(0,1)\). For all \(\delta\in(0,1)\), the expected stopping time \(\mathbb{E}_{\mathbf{\theta}}[\tau]\) of policy \(\pi^{\text{Rstl-Dtrack}}\) is finite. Furthermore,_
\[\limsup_{\delta\downarrow 0}\frac{\mathbb{E}_{\mathbf{\theta}}[\tau]}{\log(1/ \delta)}\leq\frac{1}{\eta\,T^{*}_{\text{unif}}(\mathbf{\theta})+(1-\eta)\,T^{*}_{ R}(\mathbf{\theta})}\;. \tag{52}\]
_Consequently, letting \(\eta\downarrow 0\), we have_
\[\limsup_{\eta\downarrow 0}\;\limsup_{\delta\downarrow 0}\frac{\mathbb{E}_{\mathbf{ \theta}}[\tau]}{\log(1/\delta)}\leq\frac{1}{T^{*}_{R}(\mathbf{\theta})}\;. \tag{53}\]
Combining Proposition 6.6 with Proposition 4.1, we see that \(1/T^{*}_{R}(\mathbf{\theta})\) captures the optimal growth rate of the expected stopping time for BAI in restless bandits with problem instance \(\mathbf{\theta}\in\Theta^{K}\), i.e.,
\[\frac{1}{T^{*}_{R}(\mathbf{\theta})}\leq\liminf_{\delta\downarrow 0}\;\inf_{ \pi\in\Pi_{R}(\delta)}\frac{\mathbb{E}_{\mathbf{\theta}}[\tau_{\pi}]}{\log(1/ \delta)}\leq\limsup_{\eta\downarrow 0}\;\limsup_{\delta\downarrow 0}\frac{ \mathbb{E}_{\mathbf{\theta}}[\tau_{\pi^{\text{Rstl-Dtrack}}}]}{\log(1/\delta)}\leq \frac{1}{T^{*}_{R}(\mathbf{\theta})}\;. \tag{54}\]
## 7 Concluding Remarks and Future Directions
In this paper, we have studied BAI in restless multi-armed bandits under the fixed-confidence regime, when the TPM of each arm belongs to a single-parameter exponential family of TPMs and the arm parameters are unknown. We have shown that the restless nature of the arms gives rise to the notion of arm delays and last observed states, the combination of which constitutes an MDP with a countable state space and a finite action space. By constraining the delay of each arm to be at most \(R\) for some fixed, positive integer \(R\), we have reduced the countable state space to a finite set, making the problem amenable to tractable analysis. Under the above \(R\)-max-delay constraint, we have obtained a problem instance-dependent lower bound on the limiting growth rate of the expected stopping time (time required to find the best arm) subject to an upper bound on the error probability, in the limit as the error probability vanishes. We have shown that the lower bound is characterized by the solution to a max-min optimization problem in which the outer 'max' is over the set of all state-action occupancy measures satisfying (a) the \(R\)-max-delay constraint, and (b) a natural flow-conservation constraint. The inner 'min' is over the set of alternative problem instances. We have devised a policy (Rstl-Dtrack) for BAI, based on the idea of D-tracking [14], that first estimates the unknown parameters of the arms, and then samples an arm at any given time according to a conditional probability distribution
on the arms, conditioned on the values of arm delays and last observed states at that time. As for the stopping rule, we have devised a test statistic whose form is akin to that of the inner 'min' expression of the lower bound, but with the true MDP state-action-state transition probabilities replaced with their empirical counterparts. In conjunction with a _random_ threshold that is a function of the desired error probability, we have designed a rule for stopping further selection of arms whenever the test statistic exceeds the threshold. We have shown that our policy stops in finite time almost surely, satisfies the desired error probability, and is asymptotically optimal.
The computational complexity of the Rstl-Dtrack policy is a notable concern, particularly regarding the computation of the infimum in (38) at each time step, which can be quite resource-intensive. Due to the presence of arm delays, simplifying this infimum any further is a formidable challenge, as discussed in Section 5. One potential approach to alleviating this computational burden is to adopt a technique proposed in [21]. Their method involves expressing the inner infimum in the lower bound using "projection measures," which is computationally more tractable, especially for single-parameter exponential families. However, unlike in [21], the projection measures in our specific context will depend on \(\nu\), the variable of optimization in the outer 'sup' expression of the lower bound; this in turn may be attributed to the arm delays in our setting. Resolving this challenge remains an open issue and an intriguing avenue for further research. Additionally, recent studies, such as [35], have demonstrated the promise of Thompson sampling-based policies in reducing the computational complexity of BAI. It could be worthwhile to explore extensions to restless settings, which might offer further computational efficiencies and improve the performance of our policy.
**Future directions:** While we keep \(R\) fixed throughout the paper, it is interesting to note that \(T^{*}_{R}(\boldsymbol{\theta})\), the constant appearing in the lower bound, is monotone increasing in \(R\) and therefore admits a limit as \(R\to\infty\); see Remark 2. It is natural to expect that \(\lim_{R\to\infty}T^{*}_{R}(\boldsymbol{\theta})=T^{*}(\boldsymbol{\theta})\), where \(T^{*}(\boldsymbol{\theta})\) is the constant governing the lower bound without the maximum delay constraint. A cursory examination of the analysis in [30, Section XI] reveals that the above relation indeed holds in the special case when the observations from each arm are i.i.d. However, in the general setting of restless arms, it is unclear whether the above relation holds, and a formal justification of this could be an interesting future direction. While it is natural to expect that \(T^{*}_{R}(\boldsymbol{\theta})\) ought to depend on the _mixing times_ of the arms, our analysis does not bring out this dependence explicitly. Considering a simple \(2\)-armed restless bandit problem with \(\mathcal{S}=\{0,1\}\) in which one arm yields i.i.d. observations according to a \(\text{Ber}(1/2)\) distribution while the other arm is a slowly mixing Markov process, characterizing \(T^{*}_{R}(\boldsymbol{\theta})\) explicitly in terms of the mixing time of the second arm could be an interesting direction to explore. Furthermore, considering a dataset of offline observations from arms with inherent delays, an investigation into how incorporating this offline data affects the overall sample complexity of BAI along the lines of [36] would be insightful. Finally, we note that extensions to _hidden Markov_ observations from the arms, wherein at each time \(n\), the learner observes \(\bar{Y}_{n}=u(\bar{X}_{n})\) for some known/unknown function \(u\), may be of interest. Here, the key technical challenge is that while successive observations \(\bar{X}_{n}\) and \(\bar{X}_{n+1}\) from any given arm possess a Markov dependence, the same may not be said about \(\bar{Y}_{n}\) and \(\bar{Y}_{n+1}\). While [16] considers hidden Markov observations in rested bandits, the lower bound therein does not capture the "hidden" aspect of the observations. It may therefore be worthwhile to first establish a lower bound for BAI with hidden Markov observations in rested bandits and subsequently undertake a formal study of restless hidden Markov bandits.
## Acknowledgements
The primary author, P. N. Karthik, wishes to extend deep gratitude to Prof. Shie Mannor (Technion Israel Institute of Technology, Haifa, Israel), Aymen Al Marjani (ENS Lyon, France), and Dr. Karthikeyan Shanmugam (Google Research India, Bengaluru, India) for the invaluable and enlightening discussions. The author also wishes to express sincere gratitude to Dr. Vrettos Moulos (Google Research, New York) for generously sharing some portions of unpublished research from his doctoral studies and for engaging in extensive discussions. A portion of this research was conducted during the author's tenure as a Visiting Researcher at the Technion.
The work of Arpan Mukherjee and Ali Tajer was supported in part by the RPI-IBM Artificial Intelligence Research Collaboration and in part by the U.S. National Science Foundation award ECCS-193310.
|
2308.09181 | Comparison of saturation rules used for gyrokinetic quasilinear
transport modeling | Theory-based transport modeling has been widely successful and is built on
the foundations of quasilinear theory. Specifically, the quasilinear expression
of the flux can be used in combination with a saturation rule for the toroidal
mode amplitude. Most transport models follow this approach. Saturation rules
are heuristic and difficult to rigorously derive. We compare three common
saturation rules using a fairly accurate quasilinear expression for the fluxes
computed using local linear gyrokinetic simulation. We take plasma parameters
from experimental H-mode profiles and magnetic equilibrium and include
electrons, Deuterium, and Carbon species. We find that the various saturation
rules give qualitatively similar behavior. This may help explain why the
different theory-based transport models can all predict core tokamak profiles
reasonably well. Comparisons with nonlinear local and global gyrokinetic
simulations are also discussed. | Scott E. Parker, Calder Haubrich, Qiheng Cai, Stefan Tirkas, Yang Chen | 2023-08-17T20:43:49Z | http://arxiv.org/abs/2308.09181v1 | # Comparison of saturation rules used for gyrokinetic quasilinear transport modeling
###### Abstract
Theory-based transport modeling has been widely successful and is built on the foundations of quasilinear theory. Specifically, the quasilinear expression of the flux can be used in combination with a saturation rule for the toroidal mode amplitude. Most transport models follow this approach. Saturation rules are heuristic and difficult to rigorously derive. We compare three common saturation rules using a fairly accurate quasilinear expression for the fluxes computed using local linear gyrokinetic simulation. We take plasma parameters from experimental H-mode profiles and magnetic equilibrium and include electrons, Deuterium, and Carbon species. We find that the various saturation rules give qualitatively similar behavior. This may help explain why the different theory-based transport models can all predict core tokamak profiles reasonably well. Comparisons with nonlinear local and global gyrokinetic simulations are also discussed.
gyrokinetic; quasilinear; transport; kinetic; model; plasma; simulation; turbulence
## 1 Introduction
Prediction of turbulent particle and energy transport is critical for improving the performance of a fusion reactor. Much progress has been made with reduced models in the core region, from the pedestal top inwards. Theory-based models, including the trapped gyro-Landau fluid (TGLF) model[1, 2, 3, 4], the multi-mode model (MMM)[5, 6, 7, 8], and the gyrokinetic transport model QuaLiKiz[9, 10, 11, 12] are successful in predicting core density and temperature profiles over a range of tokamak plasma operating conditions. Additionally, quasilinear theory is used widely to compare with both experiment and nonlinear gyrokinetic simulation[13, 14, 15, 16]. Typically one takes the quasilinear expression for the flux and invokes a heuristic saturation rule to obtain the mode amplitude thereby determining the nonlinear flux. While the level of the fluxes obtained using this type of approach may not be accurate, the parametric dependence on wavelength and plasma parameters is often insightful. Even with the successes of the various theory-based transport models for predicting core density and temperature profiles, there is still a need for better understanding. For example, particle transport and associated density build-up is less well understood[17, 18]. Additionally, High-Z impurities, e.g. Tungsten in ITER, will not fully ionize and can produce significant radiative power loss if core concentrations are not well controlled[19, 20].
Here, we further examine the quasilinear transport modeling approach. We compare to local and global nonlinear gyrokinetic simulation which best models the governing equations with relatively few approximations. We will directly compare three widely used saturation rules, two of which come from simple scaling arguments[14, 9], and a third which has been shown to give reasonable parameter dependence for fluxes[21]. While the comparisons we present are rudimentary, we are unaware of such a study of the sensitivity to the saturation rule. We will also compare with the TGLF model which is specifically designed to agree with flux-tube gyrokinetic simulation from the CGYRO code[22, 23]. The goal of this paper is to examine the sensitivity of the choice of saturation rule which is the part of the theory least well understood. Generally, tokamaks operate within regimes that have relatively good confinement and hence it is not unreasonable to assume the turbulence is weak and made up of a number of active linear eigenmodes
that are interacting due to weak nonlinear coupling. Linear calculations with gyrokinetic codes are routine and fast computationally. One can easily obtain linear fluxes from gyrokinetic simulation. Nonlinear simulations are much more compute-intensive. However, no information on the saturation level is available from linear calculations. Therefore, it is common to invoke a "saturation rule" that gives the saturation level of the turbulence as a function of the linear growth rate, wave number, and other parameters. The capability to derive a rigorous saturation rule is elusive. One reasonable approach is to obtain an empirical saturation rule using the scaling of nonlinear gyrokinetic simulation[2]. While the saturation rule is probably the weakest link with regard to rigor, the assumption of a quasilinear expression for the flux and how the quasilinear flux is calculated may also be approximate.
We will discuss results from gyrokinetic simulation using the GENE and GEM codes. The GENE code will be used for linear and nonlinear local calculations, including the calculation of the quasilinear expression for the flux[24; 25]. We use GENE for linear calculations due to its high accuracy, good convergence properties, and comprehensive physics capability. GEM is an efficient tool for nonlinear global simulation due to both its robust behavior over a wide range of parameters and fast performance on parallel computing platforms[26; 27]. For this study, we choose realistic plasma profiles and magnetic equilibrium from a conventional ELMy H-mode DIII-D case (162940) just prior to the onset of an ELM[28]. We include electrons and two ion species, namely, Deuterium (main) and Carbon (impurity). Details will be discussed in Sec. 2. We begin by discussing the plasma parameters for our study in Sec. 2. In Sec. 3, we investigate the linear properties of the selected profile. In Sec. 4, the quasilinear theory is described and the resulting turbulent transport is compared for three different saturation rules. In Sec. 5 we compare to nonlinear fluxes from the GENE and GEM codes.
## 2 Tokamak plasma parameters and assumptions
For comparing the three saturation rules in quasilinear theory, we use DIII-D discharge 162940, an ELMy H-mode case. Magnetic equilibrium and profiles are constructed prior to ELM onset. This particular case has been used recently for electron-temperature-gradient and micro-tearing mode studies[29; 28; 30]; further details can be found in Ref. [28]. We use a Miller equilibrium[31] and obtain the Miller parameters from the EFIT equilibrium on a \(513\times 513\) \((R,Z)\) grid, together with the density and temperature profiles.
The purpose of this work is to directly compare theoretical models and not predict experimental transport levels. We will not include the effect of equilibrium shear flow because simple quasi-linear theory does not take this into account. Including zonal flow and cross-coupling between electron and ion scales continue to be an active research topic[4; 11]. Neglecting shear flow in the quasilinear theory will allow for a more transparent comparison of the various saturation rules. We include realistic collisionality in the linear analysis as well as in the nonlinear gyrokinetic simulations. Gyrokinetic ions and drift-kinetic electrons with electromagnetic fluctuations perpendicular to B (\(\delta B_{\perp}\)) will be used in the linear and nonlinear gyrokinetic simulations. \(\delta B_{\parallel}\) will be neglected. The plasma \(\beta\) is reduced for some nonlinear simulations presented in Sec. 5, and details will be discussed there.
Fig. 1 (a) shows the main ion (Deuterium) and electron density profiles for DIII-D 162940, (b) shows the Carbon impurity density profile, (c) shows the electron temperature profile, and (d) shows the main ion temperature profile. For this study, the impurity temperature is assumed to be equal to the main ion temperature, and we account for only one impurity species, namely Carbon. The Carbon profile is hollow, hence we expect an inward radial flux for the impurity species. We choose the three radial locations shown in Table 1 for our study, where \(\rho=r/a\) and \(r\) is the Miller radial coordinate[31],
\[r=\frac{R_{\text{max}}-R_{\text{min}}}{2}, \tag{1}\]
where \(R_{\text{max}}\) and \(R_{\text{min}}\) are the maximum and minimum major radius of each flux surface, respectively. \(a\) is the value of r, from Eq. (1), at the separatrix.
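In code, Eq. (1) and the normalized radius are one-liners (a trivial sketch, with function names of our own choosing):

```python
def miller_r(R_max, R_min):
    """Miller radial coordinate r of Eq. (1): half the radial extent of a flux surface."""
    return 0.5 * (R_max - R_min)

def rho_norm(R_max, R_min, a):
    """Normalized radius rho = r/a, with `a` the value of r at the separatrix."""
    return miller_r(R_max, R_min) / a
```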
Table 1 gives the local parameters at the three radial locations (\(\rho\)) for our analysis. We begin by examining linear stability at these three radial locations. We note that the quasilinear analysis in Sec. 4 is a local analysis and is based on the local parameters given in Table 1. Physical quantities such as the major radius, \(B\), \(n\) and \(T\) are important for determining collisionality and for conversion to physical (SI) units. \(\frac{R}{L_{T_{i}}}\) and \(\frac{R}{L_{T_{e}}}\) are the normalized temperature gradients of the ions and electrons. We assume the Carbon temperature (and temperature profile) is the same as that of the main ions. \(\frac{R}{L_{n_{i}}}\), \(\frac{R}{L_{n_{e}}}\) and \(\frac{R}{L_{n_{C}}}\) are the normalized density gradients of the ions, electrons, and Carbon. The ratio of the electron and ion temperatures and the impurity concentration (Carbon density relative to electron density) are given by \(\frac{T_{e}}{T_{i}}\) and \(\frac{n_{C}}{n_{e}}\). \(q\) is the safety factor, \(\hat{s}=\frac{\rho}{q}\frac{dq}{d\rho}\) is the magnetic shear parameter, and \(\beta_{e}=\mu_{0}n_{e}T_{e}/B^{2}\). The Miller parameters[31] for elongation, triangularity and squareness are \(\kappa\), \(\delta\) and \(\zeta\).

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \(\rho\) & \(\frac{R}{L_{T_{i}}}\) & \(\frac{R}{L_{T_{e}}}\) & \(\frac{T_{e}}{T_{i}}\) & \(\frac{R}{L_{n_{e}}}\) & \(\frac{R}{L_{n_{i}}}\) & \(\frac{R}{L_{n_{C}}}\) & \(\frac{n_{C}}{n_{e}}[\%]\) & \(q\) & \(\hat{s}\) & \(\beta_{e}[\%]\) & \(\kappa\) & \(\delta\) & \(\zeta\) \\ \hline 0.8 & 6.71 & 7.49 & 0.87 & 1.44 & 2.40 & -0.71 & 5.16 & 2.28 & 1.75 & 1.00 & 1.47 & 0.21 & -0.03 \\ \hline 0.85 & 9.16 & 8.94 & 0.87 & 1.84 & 3.06 & -0.75 & 5.37 & 2.56 & 2.17 & 0.85 & 1.51 & 0.24 & -0.04 \\ \hline 0.9 & 12.49 & 11.17 & 0.88 & 2.36 & 3.98 & -0.79 & 5.65 & 2.97 & 2.94 & 0.67 & 1.55 & 0.28 & -0.05 \\ \hline \end{tabular}
\end{table}
Table 1: Local tokamak plasma parameters at \(\rho\)=0.8, 0.85 and 0.9.
## 3 Linear analysis
We begin by studying the local linear properties of the tokamak plasma parameters (162940) discussed above in Sec. 2, near the pedestal top, and scan \(\rho=0.8,0.85,0.9\). We do initial-value calculations with the GENE code in the flux-tube limit. In Fig. 2, we show the linear growth rate and real frequency for the three radial locations specified in Table 1 versus \(k_{y}\), where \(y\) is the binormal perpendicular coordinate. In the following section, we will also use the linear output from GENE in the form of the particle and energy fluxes and the electrostatic potential linear mode structure to parameterize \(k_{\perp}\). \(\rho_{i}\) is the Deuterium species ion gyroradius and \(v_{th}=\sqrt{T_{i}/m_{i}}\). The "\(i\)" subscript will refer to the main ion Deuterium species throughout the paper. Fig. 2 has qualitative features common to core H-mode plasmas and even the so-called "Cyclone base case"[32]. An ion mode, or ion temperature gradient (ITG) mode, dominates for \(k_{y}\rho_{i}\lesssim 1.4\), and an electron mode, or collisionless trapped-electron mode (CTEM), dominates for \(k_{y}\rho_{i}\gtrsim 1.4\). As one approaches the steep gradient region (\(\rho=\)0.85 and 0.9) in the pedestal, a negative-frequency unstable mode appears at the longest resolved wavelength. This is the micro-tearing mode (MTM), which is often the dominant mode in the pedestal region for these parameters[28]. Global analysis is required to accurately model the long-wavelength MTM.
Fig. 3 shows a linear comparison between TGLF and local GENE at \(\rho=0.85\). In Sec. 4, we will compare quasilinear theory results to TGLF as well. The real frequencies agree very well between GENE and TGLF. We note that TGLF using the SAT1 model[4] predicts higher growth rates than GENE. GENE is a more accurate local linear gyrokinetic calculation, but it could be that the higher growth rate is compensated by the TGLF SAT1 saturation rule so that TGLF still gives accurate fluxes. We note that global effects would typically be stabilizing, so this is another effect, important for realistic modeling, that is not accounted for here.
Figure 1: Profiles for DIII-D 162940 ELMy H-mode just prior to ELM onset. **(a)** Electron and main ion density profiles. **(b)** Impurity density profiles. **(c)** Electron temperature profile. **(d)** Main ion temperature profile.
## 4 Quasilinear theory
### Quasilinear expression of fluxes using linear gyrokinetic simulation
Figure 2: Growth rate and real frequency at the radial locations \(\rho=0.8\), 0.85 and 0.9 in Miller geometry from local GENE linear initial-value simulation.

Figure 3: Comparison of linear frequency and growth rate for GENE and TGLF at \(\rho=0.85\).

In good confinement regimes, core tokamak turbulence fluctuations are small. It is not unreasonable to assume a superposition of a finite number of linear eigenmodes at small amplitude, leading to the validity of the quasilinear expression for the fluxes. The quasilinear flux is quadratic in the mode amplitude. What is more uncertain (or unknown) is the fluctuation amplitude, and we will discuss three plausible saturation rules in the following section. Linear flux-tube gyrokinetic simulation is used to predict the quasilinear fluxes assuming a saturation rule. We follow the prescription of Lapillonne[15] and use the GENE code to obtain linear fluxes. GENE uses field-line-following coordinates \((x,y,z)\), where \(x\) is a radial coordinate, \(y\) is the binormal coordinate, and \(z\) is the coordinate along the field line. We decompose the fluxes in \(k_{y}\) and define a general quasilinear flux quantity \(F^{ql}\), where \(F\) can represent the particle flux \(\Gamma_{\alpha}\) or the energy flux \(Q_{\alpha}\) for species \(\alpha\). The linear flux is proportional to the square of the amplitude
\[F_{k_{y}}^{lin}=\hat{G}_{k_{y}}\left|\hat{\Phi}_{0,k_{y}}(z=0)\right|^{2}, \tag{2}\]
where we assume that the mode amplitude can be parameterized at \(z=0\) (or \(\theta=0\)). It is straightforward to calculate the amplitude normalized linear flux \(\hat{G}_{k_{y}}\) from linear gyrokinetic simulations. Given a saturation rule for the amplitude, the fluxes can then be calculated using
\[F^{ql}=\sum_{k_{y}}A^{2}(k_{y})\hat{G}_{k_{y}}\Delta k_{y}, \tag{3}\]
where \(\Delta k_{y}\) is the \(k_{y}\) spacing, and \(A^{2}(k_{y})\) parameterizes the mode amplitude and will be determined using three simple saturation rules discussed below. The saturation rule used in Lapillonne[15] is
\[A^{2}(k_{y})=A_{0}^{2}\left(\frac{\gamma_{k_{y}}}{\langle k_{\perp}^{2} \rangle}\right)^{2}. \tag{4}\]
where \(\gamma_{k_{y}}\) is the linear growth rate. Eq. (4) is one of the saturation rules we will examine. Some care is taken in determining \(\left\langle k_{\perp}^{2}\right\rangle\) in the denominator of Eq. (4), and we will follow a similar procedure here for consistency with previous work[15; 21]. \(k_{\perp}\) is averaged over the eigenmode envelope \(\hat{\Phi}_{k_{x},k_{y}}(z)\), and given by
\[\left\langle k_{\perp}^{2}\right\rangle=\frac{\sum_{k_{x}}\int(g^{xx}k_{x}^{2 }+2g^{xy}k_{x}k_{y}+g^{yy}k_{y}^{2})\left|\hat{\Phi}_{k_{x}k_{y}}(z)\right|^{2 }Jdz}{\sum_{k_{x}}\int\left|\hat{\Phi}_{k_{x}k_{y}}(z)\right|^{2}Jdz}, \tag{5}\]
where \(J\) is the Jacobian and \(g^{xx}\), \(g^{xy}\), \(g^{yy}\) are geometric coefficients \(g^{\mu\nu}=\nabla\mu\cdot\nabla\nu\) in the field-line following coordinates[15]. Note that \(\left\langle k_{\perp}^{2}\right\rangle\) given in Eq. (5) is a function of \(k_{y}\).
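To make the bookkeeping concrete, here is a minimal Python sketch of Eqs. (3) and (5), assuming the eigenmode amplitudes \(\hat{\Phi}_{k_{x},k_{y}}(z)\), the metric coefficients, the Jacobian, and the normalized linear fluxes \(\hat{G}_{k_{y}}\) have been extracted from linear GENE output; the array layout and function names are our own, and a uniform \(z\) grid is assumed.

```python
import numpy as np

def kperp2_avg(phi, kx, ky, gxx, gxy, gyy, jac, dz):
    """Mode-averaged k_perp^2 of Eq. (5) for a single k_y.

    phi[ikx, iz] holds |Phi_{kx,ky}(z)|; kx is the radial wavenumber grid,
    ky a scalar, and gxx/gxy/gyy/jac live on the z grid.
    """
    w = np.abs(phi) ** 2 * jac[None, :] * dz       # weights J |Phi|^2 dz
    k2 = (gxx[None, :] * kx[:, None] ** 2
          + 2.0 * gxy[None, :] * kx[:, None] * ky
          + gyy[None, :] * ky ** 2)
    return np.sum(k2 * w) / np.sum(w)

def quasilinear_flux(A2, G_hat, dky):
    """Eq. (3): F^ql = sum over k_y of A^2(k_y) * G_hat(k_y) * dk_y."""
    return np.sum(np.asarray(A2) * np.asarray(G_hat) * dky)
```

Given a saturation rule for \(A^{2}(k_{y})\), the subject of the next subsection, `quasilinear_flux` returns the quasilinear estimate of \(\Gamma_{\alpha}\) or \(Q_{\alpha}\).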
### Saturation rules
Admittedly, though there may be validity in the quasilinear expression for the fluxes, the parameter dependence of the fluctuation amplitude is difficult to determine without running a nonlinear gyrokinetic simulation many times over a range of parameters[3]. However, we can gain some knowledge of the transport properties by comparing different saturation rules and the sensitivity of the observed trends. In this paper, we compare three common saturation rules and give simple scaling arguments for their origin where they exist. The first saturation rule given in Eq. (4) can be obtained by balancing linear growth with the \(\mathbf{E}\times\mathbf{B}\) advection. For example, the mode would saturate when \(\frac{\partial\delta n}{\partial t}\) balances \(\mathbf{v}_{E}\cdot\nabla\delta n\), where \(\mathbf{v}_{E}\) is the \(\mathbf{E}\times\mathbf{B}\) drift resulting in
\[\gamma\delta n_{k}\sim\frac{k_{\perp}^{2}}{B}\left|\phi_{k}\right|\left|\delta n _{k}\right|,\]
or
\[\frac{e\left|\phi_{k}\right|}{T}\sim\frac{eB}{T}\frac{\gamma_{k}}{k_{\perp}^{ 2}},\]
which is the scaling in Eq. (4) and commonly used[14; 15; 16]. The saturation rule can also be obtained from wave-particle trapping of resonant particles, and it gives the correct saturation level in slab geometry[33].
The second saturation rule we will use follows from a dimensional argument in which the diffusion coefficient is simply set to \(D=\gamma/k_{\perp}^{2}\). Balancing \(D\nabla n_{0}=\left\langle v_{Ex}\delta n\right\rangle\) then gives
\[\frac{e\left|\phi_{k}\right|^{2}}{T}=A_{0}^{2}\frac{eB}{T}\frac{1}{Lk_{y}} \frac{\gamma_{k}}{k_{\perp}^{2}}, \tag{6}\]
where \(L\) is a gradient scale length; in this argument \(L=L_{n}\), the density gradient scale length. A similar calculation could be made for the thermal diffusivity, so we write \(L\) in Eq. (6) more generally. Eq. (6) is similar to the saturation rule used in QuaLiKiz[9] and in earlier work[13].
Finally, the third saturation rule we will use for comparison is the following
\[\frac{e\left|\phi_{k}\right|^{2}}{T}=A_{0}^{2}\frac{eB}{T}\frac{\gamma_{k}}{k _{\perp}^{2}}, \tag{7}\]
which has the same \(\frac{1}{k_{\perp}^{2}}\) scaling as Eq. (6) but does not diverge as \(k_{y}\) approaches zero. The saturation rule given in Eq. (7) has been used previously for comparison to nonlinear gyrokinetic simulation and experiment[21]. Our goal here is not to develop a transport model that accurately reproduces nonlinear gyrokinetic simulation. Rather, we regard the saturation rule as the weak point of any weak-turbulence model. We therefore present results for the three saturation rules above and, in some sense, "scan" the sensitivity of the fluxes to the saturation rule.
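For concreteness, the three rules and the quasilinear sum of Eq. (3) can be evaluated from linear outputs as in the sketch below (ours; the rule labels follow the naming used for Fig. 4 below, and the names `gamma`, `kperp2`, `Ghat` for the \(k_{y}\)-resolved linear quantities are our assumptions):

```python
import numpy as np

def amp2(rule, ky, gamma, kperp2, A0=1.0, L=1.0):
    """Squared mode amplitude A^2(k_y) for the three saturation rules."""
    if rule == "lapillonne2011":              # Eq. (4)
        return A0**2 * (gamma / kperp2)**2
    if rule == "bourdelle2007":               # Eq. (6); L a gradient scale length
        return A0**2 * gamma / (L * ky * kperp2)
    if rule == "kumar2021":                   # Eq. (7)
        return A0**2 * gamma / kperp2
    raise ValueError(f"unknown rule: {rule}")

def quasilinear_flux(rule, ky, gamma, kperp2, Ghat, **kwargs):
    """Eq. (3): F^ql = sum_ky A^2(k_y) Ghat_ky dk_y (uniform k_y grid assumed)."""
    dky = ky[1] - ky[0]
    return np.sum(amp2(rule, ky, gamma, kperp2, **kwargs) * Ghat * dky)
```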
### Saturation levels and quasilinear fluxes
Fig. 4 shows the three saturation rules, Eqs. (4), (6) and (7), with \(A_{0}=1\) and \(T=T_{i}\), along with the TGLF SAT1 result. GENE gyroBohm units are used, in which \(\phi\) is normalized as \(\frac{R}{\rho_{i}}\frac{e\phi}{T_{i}}\). That is, to convert to SI units [V\({}^{2}\)], take the values presented in Fig. 4 and multiply by \(\left(\frac{\rho_{i}}{R}\frac{T_{i}}{e}\right)^{2}\). The saturation rules Eq. (4), Eq. (6) and Eq. (7) are labeled "Lapillonne(2011)", "Bourdelle(2007)", and "Kumar(2021)", respectively, simply for the convenience of the reader. TGLF SAT1 is labeled "SAT1". No conclusion about the relative validity of the various models should be drawn, since we use the various saturation rules only for comparison. Additionally, the various theory-based transport models calculate the fluxes differently than we do here; namely, we use GENE linear results. The overall level of each saturation rule, i.e., the value of \(A_{0}\), has little meaning, since quasilinear transport models calibrate the saturation rule with a constant coefficient.
The saturation rules show similar trends, peaking at \(k_{y}\rho_{i}\sim\) 0.2-0.3. There is some variation in the width of the spectra. SAT1 gives the narrowest spectrum, and, not surprisingly, Eq. (7) gives a broader spectrum than Eqs. (4) and (6) due to the lower power of the \(\frac{\gamma_{k}}{k_{\perp}^{2}}\) term and the lack of a \(\frac{1}{k_{y}}\) factor. It is interesting that Eq. (6) and SAT1 are somewhat similar.
Next, we compare the quasilinear fluxes obtained from the normalized GENE quasilinear flux and the three saturation rules. The results are shown in Fig. 5. We use GENE gyroBohm units, where \(Q[\text{SI}]=\left(\frac{v_{th}\rho_{i}^{2}n_{e}T_{i}}{R^{2}}\right)Q_{\text{shown}}\), \(\Gamma[\text{SI}]=\left(\frac{v_{th}\rho_{i}^{2}n_{e}}{R^{2}}\right)\Gamma_{\text{shown}}\). The results in Fig. 5 have been normalized by adjusting \(A_{0}\) in the three saturation rules so that the total ion heat flux \(Q_{i}\) matches that predicted by SAT1. The SAT1 result shown in Fig. 5 uses the TGLF model directly.
The carbon flux is directed inward, as expected from the slightly hollow carbon profile. The results from all four models agree qualitatively, again with some variation in the breadth of the spectrum, with TGLF showing more flux at lower \(k_{y}\). The relatively large values of the TGLF fluxes versus \(k_{y}\) are simply an artifact of normalizing the other quasilinear fluxes to TGLF together with the fact that the other models have a broader \(k_{y}\) spectrum. TGLF shows some electron flux at higher \(k_{y}\) that the GENE quasilinear fluxes do not.
Figure 4: The three saturation rules obtained from linear GENE along with TGLF SAT1 in GENE gyroBohm units described in the text.
## 5 Comparison with nonlinear gyrokinetic simulations
Local nonlinear flux-tube simulations were carried out using the GENE code at \(\rho=0.85\) to test the validity of the quasilinear models. The perpendicular flux-tube domain was 167\(\rho_{i}\) (radial) \(\times\) 126\(\rho_{i}\) in size, with 256 radial grid points and 32 toroidal modes with \(k_{y}\rho_{i}\) ranging from \(0.05\) to \(1.60\). The grid resolution in the \(z\), \(v_{\parallel}\), and \(\mu\) dimensions was chosen to be \(32\times 32\times 16\), respectively, with the values found by running linear growth-rate convergence tests. Initial runs with the value of \(\beta_{e}=0.85\%\) taken from Table 1 showed nonlinearly excited lower-\(k_{y}\) micro-tearing modes (MTMs) dominating the transport at earlier times, e.g. \(t\,v_{th}/R\leq 20\), and eventually becoming numerically unstable at late times. Since electromagnetic modes were not observed in the global nonlinear GEM simulations discussed below, \(\beta_{e}\) was reduced to 10% of the original value, and high-quality electrostatic simulation results were obtained. This change
Figure 5: Quasilinear fluxes from GENE versus \(k_{y}\rho_{i}\) at \(\rho=0.85\) for the three saturation rules in GENE gyroBohm units. (**a**) Deuterium heat flux. (**b**) Electron heat flux. (**c**) Carbon heat flux. (**d**) Deuterium particle flux. (**e**) Electron particle flux. (**f**) Carbon particle flux. Fluxes are normalized such that total flux matches SAT1.
Figure 6: Lapillonne QL flux model, NL from GENE, and GEM results versus \(k_{y}\rho_{i}\) at \(\rho=0.85\) in GENE gyroBohm units. GEM fluxes scaled by \(3.49\). A) Deuterium heat flux. B) Electron heat flux. C) Carbon heat flux. D) Deuterium particle flux. E) Electron particle flux. F) Carbon particle flux.
is reasonable since the fluxes are mainly electrostatic in the GEM simulations as nonlocal effects may help to stabilize the low-n electromagnetic modes.
Fig. 6 shows a comparison of the particle and heat fluxes versus \(k_{y}\) between local GENE, quasilinear theory, and global GEM. The quasilinear fluxes, shown in blue and labeled "QL GENE", are in good agreement with the nonlinear GENE results, shown in orange and labeled "NL flux-tube GENE" in Fig. 6, except that the amplitude \(A_{0}\) is normalized using the nonlinear GENE ion heat flux. This is appropriate since the value of \(A_{0}\) is undetermined in the theory. The global GEM results, discussed below, are scaled by a factor of \(3.49\). Nonlocal effects, including profile, \(q\), and magnetic-shear variation, are stabilizing; it is therefore typical that global calculations are more stable and hence produce lower fluxes.
The results labeled "Nonlinear global GEM" in Fig. 6, are nonlocal nonlinear electromagnetic gyrokinetic simulations using the \(\delta f\) particle-in-cell code GEM. For the present study of ion-scale turbulent transport, a fully drift-kinetic electron species is included using the split-weight scheme[27]. To ensure a steady-state turbulence and transport, a numerical heat source is applied to all species[34], and a numerical scheme[35] is used for evaluating the marker distribution which can evolve significantly in later times. The grid resolution is \((N_{x},N_{y},N_{z})=(128,128,64)\), in the radial, binormal and parallel direction, respectively. The particle number is \(32\)/cell for the ion species and \(64\)/cell for electrons. The time step is \(\Omega_{p}\triangle t=1\) where \(\Omega_{p}\) is the proton gyro-frequency. The radial domain of the nonlocal simulation is \(0.65<r/a<0.95\). Attempts to extend the simulation to the separatrix (\(r/a=1.0\)) lead to nonphysical modes near the edge. The cause of this problem is not clear, but we believe part of the reason is the uncertainty in the equilibrium configuration, including the magnetic surface shape and the density/temperature profiles. In particular, strong poloidal variation of the temperature is expected in a region of steep gradients, but such variation is not modeled in the present study since local Maxwellian distributions that vary only in the radial flux coordinate are assumed. The density and temperature gradients are reduced in a boundary layer of \(\triangle r/a\sim 0.05\) near the outer domain, to avoid peaking of the turbulence near the boundary.
In the toroidal direction, the simulation domain is a toroidal wedge which is \(1/8\) of the torus, and the EM fields are filtered to include only the toroidal mode numbers \(n=0,8,16,\ldots,88\). We note that no nonlinear excitation of low-n electromagnetic modes, e.g. MTMs, was present in the global GEM simulations, possibly due to nonlocal stabilization, in contrast to local GENE results. Fig. 7 shows the ion energy fluxes at various radial locations from GEM. The quality of the simulation seems reasonable.
For comparison to GENE and quasilinear theory, the turbulent fluxes are decomposed into toroidal modes. For example, in the formula for the radial heat flux,
\[Q(r)=\frac{1}{\triangle V}\int_{\triangle V}\frac{1}{2}mv^{2}\,\delta f\, \left(\frac{\mathbf{E}\times\mathbf{b}}{B}+v_{\parallel}\frac{\delta\mathbf{ B}_{\perp}}{B}\right)\cdot\frac{\nabla r}{\left|\nabla r\right|}\,d\mathbf{x}d \mathbf{v},\]
if \(\mathbf{E}\) and \(\mathbf{B}\) are replaced by a specific toroidal component, the contribution of that component to the total flux is obtained. Here \(\triangle V\) is a thin toroidal annulus with a radial size of \(\triangle r/a=0.025\). Results from this procedure are shown in
Figure 7: Global GEM ion heat flux versus time in GENE gyroBohm units at multiple radial locations.
Fig. 8. The raw results are shown as solid red triangles, and the solid blue squares are a polynomial fit. The fitted result is shown in Fig. 6. GEM is a particle code and does not evolve the distribution function spectrally in \(k_{y}\) like the GENE calculation. Obtaining \(Q(k_{y})\) involves summing over particle weights for each toroidal mode in GEM, which leads to statistical fluctuations in this quantity. The smooth fit shown in Fig. 6 (solid blue squares in Fig. 8) compares the trends between models more clearly. We apply the same fitting procedure to all the flux quantities in Fig. 6.
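A sketch of this smoothing step (our reconstruction with NumPy; the polynomial degree is an illustrative choice, not the authors' exact setting):

```python
import numpy as np

def smooth_flux(ky, Q_raw, deg=4):
    """Least-squares polynomial fit to the noisy per-mode fluxes Q(k_y)."""
    coeffs = np.polyfit(ky, Q_raw, deg)   # fit in a least-squares sense
    return np.polyval(coeffs, ky)         # evaluate the fit on the k_y grid
```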
## 6 Discussion and future work
In high-confinement tokamak regimes, it is reasonable to assume that the quasilinear expression for the flux is valid, but there is still uncertainty in determining the fluctuation level. Here, we compared three common saturation rules, using gyrokinetic simulation to determine the quasilinear flux expression. To our knowledge, the various common saturation rules have not been compared in this way before. Considerable realism was taken into account, including experimental plasma parameters, collisionality, electromagnetic effects, and drift-kinetic electrons. We show that, with proper normalization to nonlinear gyrokinetic simulation or experiment, the three saturation rules exhibit similar behavior. We also compared with local and global gyrokinetic simulations. The local nonlinear GENE result exhibited nonlinearly excited low-\(k_{y}\) electromagnetic modes that were dominant. However, when \(\beta\) was reduced in the simulation to eliminate the low-\(k_{y}\) modes, nonlinear GENE showed behavior similar to the quasilinear theory. Electromagnetic global GEM showed qualitatively similar behavior (Fig. 6). GEM did not see a low-\(n\) electromagnetic mode, possibly due to nonlocal stabilization, e.g. variation of profiles, \(q\), and magnetic shear in this region. Global GEM also gave lower ion and electron particle fluxes. Nonlinear GENE and GEM showed a higher impurity flux. All calculations presented in this paper are, of course, quasineutral. Further work will include a better understanding of the larger inward particle flux found in the nonlinear simulations.
This study has only scratched the surface, and further investigation is needed to better understand the parametric dependence of the models for a wider variety of plasma equilibria and profiles. Even for this particular case (DIII-D 162940) it would be interesting to examine the effect of including equilibrium shear flow and the radial dependence of the fluxes; these are topics for future work. Detailed comparisons with nonlinear simulation are challenging due to computing resources, whereas comparison between the various quasilinear models is relatively fast, requiring only linear GENE calculations. Additionally, it would be useful to investigate the relative contributions of particle diffusion and convection and the impurity peaking factor[36; 37]. Since, in quasilinear theory, the amplitude cancels out in the determination of the peaking factor, the choice of a particular saturation rule should be less important.
Work supported by the U.S. Department of Energy cooperative agreements DE-SC-000801 and DE-FG02-08ER54954. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231.
Figure 8: GEM Ion heat flux versus \(k_{y}\) and a corresponding smooth polynomial fit.
We thank Gabriele Merlo (University of Texas, Austin) for his continued support with running the GENE code and interpreting results. We thank Emily Belli (GA - General Atomics) and Gary Staebler (Oak Ridge National Laboratory) for access and help with TGLF and GA code modeling software. We also thank Richard Groebner (GA), Brian Grierson (GA), Shawn Haskey (Princeton Plasma Physics Laboratory), and Neeraj Kumar (General Fusion) for providing profiles, magnetic equilibria and useful discussion.
Conceptualization, project administration, supervision, funding acquisition, S. Parker; methodology, S. Parker, C. Haubrich, Q. Cai, S. Tirkas, Y. Chen; software, C. Haubrich, Q. Cai, S. Tirkas, Y. Chen; data curation, C. Haubrich; writing-original draft preparation, S. Parker, C. Haubrich, S. Tirkas, Y. Chen; writing-review and editing, S. Parker, C. Haubrich, Q. Cai, S. Tirkas, Y. Chen. All authors have read and agreed to the published version of the manuscript.
Data and code available on request.
The authors declare no conflict of interest.
|
2307.01358 | Fuchsian differential equations of order 3,...,6 with three singular
points and an accessory parameter I, Equations of order 6 | Fuchsian differential equations $H_j$ of order $j=3,\dots,6$ with three
singular points and one accessory parameter are presented. The shift operators
for $H_6$ are studied, which lead to assign the accessory parameter of $H_6$ a
cubic polynomial of local exponents so that the equations have nice properties:
shift operators and several symmetries. The other equations will be studied in
the forthcoming papers. | Yoshishige Haraoka, Hiroyuki Ochiai, Takeshi Sasaki, Masaaki Yoshida | 2023-07-03T21:15:32Z | http://arxiv.org/abs/2307.01358v2 | Fuchsian differential equations of order 3,...,6 with three singular points and an accessory parameter I, Equations of order 6
###### Abstract
Fuchsian differential equations \(H_{j}\) of order \(j=3,\ldots,6\) with three singular points and one accessory parameter are presented. The shift operators for \(H_{6}\) are studied; they lead us to assign to the accessory parameter of \(H_{6}\) a cubic polynomial of the local exponents so that the equations have nice properties: shift operators and several symmetries. The other equations will be studied in the forthcoming papers.
###### Contents
* 1 Equations \(H_{j},G_{j},E_{j}\)\((j=3,4,5,6)\) and \(E_{2}\)
* 1.1 Equation \(H_{6}\)
* 1.2 Proof of Proposition 1.2
* 1.3 Table of equations \(H_{j}\)\((j=6,5,4,3)\) and \(E_{2}\)
* 1.4 Equations \(G_{j},E_{j}\)\((j=6,5,4,3)\)
* 2 Generalities
* 2.1 Symmetry
* 2.2 \((\theta,\partial)\)-form and \((x,\theta,\partial)\)-form
* 2.3 Spectral type and the number of accessory parameters
* 2.4 Adjoint equations
* 3 Addition and middle convolution
* 3.1 From \(H_{3}\) to \(H_{6},H_{5}\) and \(H_{4}\)
* 3.2 From \(H_{6}\), \(H_{5}\) and \(H_{4}\) to \(H_{3}\)
* 4 Shift operators, shift relations and S-values
* 4.1 The ring of differential operators, left ideals and reducibility
* 4.2 Shift operators and shift relations
* 4.3 S-values
* 4.4 When \(ap\) is a function of \(e\)
* 4.5 Reducibility type and shift operators
* 4.6 Reducibility type and shift operator when \(\mathrm{ord}(P)=1\)
* 4.7 From \(H_{6}\) to \(H_{5}\) and \(H_{3}\) by factorization
* 4.8 Polynomial solutions
* 5 The Gauss hypergeometric equation \(E_{2}\)
* 5.1 Exponents at \(x=0\) and \(x=1\)
* 5.2 Transformation \(x\to 1/x\) and the local exponents at \(x=\infty\)
* 5.3 Adjoint operator of \(E_{2}\)
* 5.4 Differentiation
* 5.5 Shift operators of \(E_{2}\)
* 5.6 S-values and reducibility conditions of \(E_{2}\)
* 5.7 Reducibility conditions and the Euler integral representation
* 5.8 Reducible cases of \(E_{2}\)
* 6 Shift operators of \(H_{6}\)
* 6.1 Inverse shift operators and S-values of \(H_{6}\)
* 6.2 Reducible cases of \(H_{6}\)
* 7 Equation \(G_{6}\)
* 7.1 Definition of the equation \(G_{6}(e,a)\)
* 7.2 Proof of Theorem 7.3
* 7.3 Inverse shift operators and S-values of \(G_{6}\)
* 7.4 Adjoint and the coordinate changes \(x\to 1-x\) and \(x\to 1/x\)
* 8 Equation \(E_{6}:=G_{6}(e,0)\)
* 8.1 Interpolative expression of \(E_{6}\) using \(V\)
* 8.2 Explicit expression of the decomposition [1113] when \(s=2,3,\ldots\)
* 9 Shift operators of \(H_{5}\)
* 9.1 Shift operators of \(H_{5}\), S-values and reducibility conditions
* 9.2 Reducible cases of \(H_{5}\)
* 9.3 Shift operators of \(H_{5}\)
* 10 Shift operators of \(H_{4}\)
* 10.1 A shift operator of \(H_{4}\)
* 10.2 Reducible cases of \(H_{4}\)
**Subjectclass**[2020]: Primary 34A30; Secondary 34M35, 33C05, 33C20, 34M03.
**Keywords**: Fuchsian differential equation, accessory parameters, shift operators, reducibility, factorization, middle convolution, symmetry, hypergeometric differential equation, Dotsenko-Fateev equation.
## Introduction
A Fuchsian ordinary differential equation is called rigid if it is uniquely determined by the local behaviors at the regular singular points. In other words, a Fuchsian ordinary differential equation is rigid if it is free of accessory parameters. For rigid Fuchsian ordinary differential equations, we know how to obtain integral representations of solutions, monodromy representations, shift relations, irreducibility conditions, connection coefficients and so on (cf. [8, 5]). For non-rigid equations, in contrast, no general way to obtain these is known.
In the series of papers: Part I (present one), II ([6]) and III ([7]), we study several Fuchsian equations with three singular points \(\{0,1,\infty\}\). A most naive generalization of the Gauss hypergeometric equation \(E_{2}\) with the Riemann scheme
\[R_{2}:\left(\begin{array}{ccc}x=0:&0&a_{1}\\ x=1:&0&a_{2}\\ x=\infty:&a_{4}&a_{3}\end{array}\right),\quad a_{1}+\cdots+a_{4}=1,\]
would be an equation of order three with the Riemann scheme
\[R_{3}:\left(\begin{array}{ccc}x=0:&0&b_{1}&b_{2}\\ x=1:&0&b_{3}&b_{4}\\ x=\infty:&b_{7}&b_{5}&b_{6}\end{array}\right),\quad b_{1}+\cdots+b_{7}=3,\]
which we denote by \(H_{3}\). This has an expression as
\[H_{3}:x^{2}(x-1)^{2}\partial^{3}+x(x-1)p_{2}\partial^{2}+p_{1}\partial+p_{0},\quad\partial=d/dx\]
where \(p_{2},\ p_{1}\) and \(p_{0}\) are polynomials in \(x\) of degree 1, 2 and 1, respectively. The number of coefficients is 7, and the number of free local exponents is 6, thus one coefficient is not determined by the local exponents. Actually, the constant term of \(p_{0}\) is not determined, which is often called the _accessory parameter_.
\(H_{3}\) is connected via _addition and middle convolution_\({}^{1}\) with equations \(H_{4},H_{5}\) and \(H_{6}\) of order 4, 5 and 6, with respective Riemann schemes:
Footnote 1: There is no \(H_{7},\ldots,\) see §3.1
\[R_{4}:\left(\begin{array}{ccc}x=0:&0&1&c_{1}&c_{2}\\ x=1:&0&1&c_{3}&c_{4}\\ x=\infty:&c_{8}&c_{5}&c_{6}&c_{7}\end{array}\right),\quad R_{5}:\left( \begin{array}{ccc}x=0:&0&1&d_{1}&d_{2}&d_{3}\\ x=1:&0&1&d_{4}&d_{5}&d_{6}\\ x=\infty:&d_{9}&d_{9}+1&d_{9}+2&d_{7}&d_{8}\end{array}\right),\]
\[R_{6}:\left(\begin{array}{ccc}x=0:&0&1&2&e_{1}&e_{2}&e_{3}\\ x=1:&0&1&2&e_{4}&e_{5}&e_{6}\\ x=\infty:&e_{0}&e_{0}+1&e_{0}+2&e_{7}&e_{8}&e_{9}\end{array}\right),\]
where \(c_{8},\ d_{9}\) and \(e_{0}\) are determined by the Fuchs relation. We assume that the local exponents are so generic that these equations have no logarithmic solution at the singular points. \(H_{j}\) (\(j=3,4,5,6\)) has \(j+3\) free local exponents and one accessory parameter.
For example, \(H_{6}\) is obtained from \(H_{3}\) as follows:
(1) Compose \(x(x-1)X\) from the left, and \(X^{-1}\) from the right, where \(X:=x^{g_{0}}(x-1)^{g_{1}}\). Then the head (top-order term) of the equation changes into \(x^{3}(x-1)^{3}\partial^{3}\).
(2) Compose \(\partial^{3}\) from the left to get \((\theta,\partial)\)-form (refer to §2.2), where \(\theta:=x\partial\).
(3) Replace \(\theta\) by \(\theta-u\) (middle convolution with parameter \(u\)).
Then the Riemann scheme of the resulting equation is
\[\left(\begin{array}{cccccc}0&1&2&g_{0}+u&b_{1}+g_{0}+u&b_{2}+g_{0}+u\\ 0&1&2&g_{1}+u&b_{3}+g_{1}+u&b_{4}+g_{1}+u\\ 1-u&2-u&3-u&b_{5}-g_{0}-g_{1}-u&b_{6}-g_{0}-g_{1}-u&b_{7}-g_{0}-g_{1}-u\end{array} \right).\]
We rename the local exponents as in \(R_{6}\), and get the equation \(H_{6}\). The shifts of the three new parameters \(g_{0}\to g_{0}\pm 1,g_{1}\to g_{1}\pm 1\) and \(u\to u\pm 1\) induce the shifts of the local exponents:
\[sh_{1}:(e_{1},e_{2},e_{3})\rightarrow(e_{1}\pm 1,e_{2}\pm 1,e_{3} \pm 1),\] \[sh_{2}:(e_{4},e_{5},e_{6})\rightarrow(e_{4}\pm 1,e_{5}\pm 1,e_{6} \pm 1),\] \[sh_{3}:(e_{1},\ldots,e_{7},e_{8},e_{9})\rightarrow(e_{1}\pm 1, \ldots,e_{6}\pm 1,e_{7}\mp 1,e_{8}\mp 1,e_{9}\mp 1).\]
For these shifts, we present the shift operators explicitly (Theorem 6.1). When the equation is rigid, the construction of shift operators is known ([8] Chapter 11).
Since the equation \(H_{6}\) has an accessory parameter, say \(ap\), writing \(H_{6}=H_{6}(e,ap)\), the shift operators for the shifts \(sh_{i}\) send the solutions of \(H_{6}(e,ap)\) to those of \(H_{6}(sh_{i}(e),ap^{\prime})\) for some \(ap^{\prime}\) not necessarily equal to \(ap\).
We find a polynomial \(f(e,a)\) of the local exponents \(e\) with a set \(a\) of parameters such that, for every shift \(sh_{j}\), the shift operator sends the solution of \(H_{6}(e,f(e,a))\) to those of \(H_{6}(sh_{j}(e),f(sh_{j}(e),a))\) (Theorem 7.3). This is the main theorem in this paper.
We set \(G_{6}(e,a)=H_{6}(e,f(e,a))\). By operating a middle convolution to \(G_{6}(e,a)\), we get the equation \(G_{3}(e,a)\) of order 3. Then via addition and middle convolution, we get \(G_{4}(e,a)\) and \(G_{5}(e,a)\) from \(G_{3}(e,a)\), where the accessory parameters are replaced by polynomials of the local exponents of \(H_{4},H_{5}\) and \(H_{3}\), respectively. Finally, we get \(E_{j}(e):=G_{j}(e,0)\), \((j=3,4,5,6)\).
Though no shift operator is found for the equation \(E_{3}(e)\) with generic local exponents, under several codimension-2 conditions on the local exponents it admits four independent shift operators ([6]). These codimension-2 conditions and the shift operators are lifted to \(E_{6},\ E_{5}\) and \(E_{4}\) ([7]).
This paper is organized as follows. In Section 1, the equation \(H_{6}\) is introduced. We tabulate the equations \(H_{5},H_{4},H_{3}\) and \(G_{j},E_{j}\) (\(j=3,4,5,6\)) without much explanation. This is to show the reader what kind of equations we treat.
In order to define equations and to study shift operators, we need various tools of investigation, which we prepare in Section 2. When a transformation, such as one induced by a coordinate change, is applied to an equation, it may happen that the equation remains the same up to a certain change of parameters. In such a case, the equation is said to be _symmetric_ relative to this transformation. We study the following symmetries
* adjoint symmetry; when the adjoint equation remains the same, with some change of parameters,
* differentiation symmetry; when derivatives of solutions satisfy the same equation, with some change of parameters,
* symmetry relative to the coordinate changes \(x\to 1/x\) and \(x\to 1-x\).
We recall the notion of _accessory parameters_, which plays a central role in this paper. We see that each of \(H_{j}\)\((j=3,4,5,6)\) has one accessory parameter.
In Section 3, we review the notion of addition and middle convolution, which is important to know how the equations are related among them. The explicit procedure for obtaining \(H_{6},H_{5},H_{4}\) from \(H_{3}\), and the inverse procedure, are presented.
In general, for shifts of local exponents \(sh_{\pm}:e\to e_{\pm}\) of a differential equation \(H(e,ap)\), where \(e_{\pm}\) denote the shifted exponents, if a non-zero differential operator \(P_{\pm}=P_{\pm}(e)\) sends solutions of \(H(e,ap)\) to those of \(H(e_{\pm},ap_{\pm})\), we call the operators \(P_{\pm}\) the _shift operators_ of \(H\) for the shift of the local exponents \(e\to e_{\pm}\). These operators are important tools to see the structure of the space of solutions. If such operators \(P_{\pm}\) exist, we define the mapping \(Sv_{e}\) by \(P_{+}(e_{-})\circ P_{-}(e)\), which turns out to be a constant mod \(H(e)\).\({}^{2}\) We call such a constant the _S-value_ for the shifts \(e\to e_{\pm}\). When \(Sv_{e}\) vanishes, \(H(e)\) is _reducible_. These are discussed in Section 4.
Footnote 2: Composition of two differential operators \(P\) and \(Q\) is denoted by \(P\circ Q\); we often write it as \(PQ\).
In Section 5, we first present these procedures for the Gauss equation \(E_{2}\), which serves as the ideal model for our study: we recall well-known properties such as the shift operators, reducibility conditions, explicit decompositions when the equation is reducible, and so on, which will be generalized later for the equations above.
In Section 6, we study shift operators of our main equation \(H_{6}\). We find shift operators for each shift \(sh_{j}\), S-values, and reducibility conditions, and when \(H(\epsilon)\) is reducible for some \(e=\epsilon\), we see how the factorization of \(H(\epsilon)\) is inherited by \(H(sh_{j}(\epsilon))\).
In Section 7, we find cubic polynomials \(S_{10},t_{2i},t_{3i}\)\((i=1,2,3)\) of the local exponents such that if the accessory parameter \(ap\) is assigned as
\[f(e,a)=S_{10}+a_{0}+a_{1}t_{21}(e)+\cdots+a_{6}t_{33}(e),\]
where \(a_{0},\ldots,a_{6}\) are constants, and if we put
\[G_{6}(e,a)=H_{6}(e,f(e,a)),\]
then the shift operator for the shift \(sh_{j}\) sends the solution space of \(G_{6}(e,a)\) to that of \(G_{6}(sh_{j}(e),a)\).
In Section 8, we finally reach the equation \(E_{6}(e)=G_{6}(e,0)\), which enjoys fruitful symmetries (e.g. adjoint, differentiation, the coordinate changes \(x\to 1/x,x\to 1-x\),...).
In Section 9, the shift operators of \(H_{5}\) are given; they are derived from the shift operators \(P_{\pm 00}\) and \(P_{0\pm 0}\) of \(H_{6}\). The S-values and reducibility conditions are given. For the equation \(H_{4}\), we find only one shift operator \(\partial\) and its inverse, which is discussed in Section 10. No shift operator is found for the equation \(H_{3}\).
The equations we treated in the papers Part I, II and III are
Part1 \[H_{j},\ G_{j},\ E_{j},\quad(j=6,5,4,3),\ \mbox{and}\ E_{2},\] Part2 \[H_{3},\ E_{3},\quad S\!E_{3},\ Z_{3},\ E_{3a},\cdots,E_{3d},\] Part3 \[S\!E_{j},\ (j=6,5,4,3),\]
where \(E_{2}\) is the Gauss hypergeometric equation, which is related to all others; They are mutually related as in the following figure
\[\begin{array}{cccccccc}&&\mbox{Part 3}\\ \mbox{Part 1}&H_{6}&\longrightarrow&G_{6}&\longrightarrow&E_{6}&\longrightarrow &\mbox{\em SE}_{6}\\ \downarrow&&\downarrow&&\downarrow&&\downarrow\\ \mbox{Part 1}&H_{5}&\longrightarrow&G_{5}&\longrightarrow&E_{5}&\longrightarrow &\mbox{\em SE}_{5}\\ &\downarrow&&\downarrow&&\downarrow&&\downarrow\\ \mbox{Part 1}&H_{4}&\longrightarrow&G_{4}&\longrightarrow&E_{4}&\longrightarrow &\mbox{\em SE}_{4}\\ &\downarrow&&\downarrow&&\downarrow&&\downarrow\\ \mbox{Part 2}&H_{3}&\longrightarrow&G_{3}&\longrightarrow&E_{3}&\longrightarrow &\mbox{\em SE}_{3},\quad\mbox{a few other specializations}\end{array}\]
Horizontal arrows stand for specializations keeping the spectral type, and vertical lines for factorizations. Every equation except \(E_{2}\) has one accessory parameter.
Acknowledgement: We used the software Maple, especially the DEtools package, for multiplication and division of differential operators. Interested readers may refer to our list of data written in text files of Maple format\({}^{3}\) for the differential equations and the shift operators treated in this document.
We thank N. Takayama for instructing us about computer systems as well as various computational skills, and K. Mimachi for telling us his results on the Dotsenko-Fateev equation.
We previously submitted to a journal a long paper that contains most of the results in the papers Part I, II and III, and then the two referees gave us kind and useful comments. These helped us rewrite the paper to make the reasoning much clearer and the structure more transparent. We deeply appreciate their kindness. To clarify the story, we divided the long paper into three relatively short ones: Part I, II and III.
Footnote 3: [http://www.math.kobe-u.ac.jp/OpenXM/Math/FDEdata](http://www.math.kobe-u.ac.jp/OpenXM/Math/FDEdata)
## 1 Equations \(H_{j},G_{j},E_{j}\)\((j=3,4,5,6)\) and \(E_{2}\)
\begin{tabular}{r l} \hline
**1.1** & **Equation \(H_{6}\)** \\
**1.2** & **Proof of Proposition 1.2** \\
**1.3** & **Table of equations \(H_{j}\)\((j=6,5,4,3)\) and \(E_{2}\)** \\
**1.4** & **Equations \(G_{j},E_{j}\)\((j=6,5,4,3)\)** \\
**1.4.1** & \(G_{6}(e,a)\) \\
**1.4.2** & \(G_{j}(e,a)\)\((j=3,4,5)\) \\
**1.4.3** & \(E_{j}(e)\)\((j=6,5,4,3)\) \\ \hline \end{tabular}
In this section, we introduce Fuchsian ordinary differential equations \(H_{j},G_{j},E_{j}\)\((j=3,4,5,6)\) of order \(3,\ldots,6\), with three singular points \(\{0,1,\infty\}\).
When we are studying a differential operator \(E\), we often call \(E\) a differential equation and speak about the solutions without assigning an unknown.
The _Riemann scheme_ of an equation is the table of local exponents at the singular points. The _Fuchs relation_ says that the sum of all the exponents equals
\[\frac{1}{2}n(n-1)(m-2), \tag{1.1}\]
where \(n\) is the order of the equation, and \(m\) is the number of singular points; for our equations, \(m=3\).
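As an illustrative check (ours), apply (1.1) to the Riemann scheme \(R_{6}\) of §1.1: with \(n=6\) and \(m=3\) the right-hand side equals \(15\), while the exponents sum to

\[(0+1+2)+(0+1+2)+\big(s+(s+1)+(s+2)\big)+\sum_{i=1}^{9}e_{i}=9+3s+\sum_{i=1}^{9}e_{i},\]

so the Fuchs relation is precisely \(e_{1}+\cdots+e_{9}+3s=6\), as recorded in \(R_{6}\).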
When an equation \(E\) of order \(n\) is written as
\[E=p_{n}\partial^{n}+\sum_{i=0}^{n-1}p_{i}\partial^{i},\]
where
\[p_{n}=x^{n_{0}}(x-1)^{n_{1}},\ p_{i}=\sum_{j}p_{ij}x^{j}\ (i=0,\ldots,n-1),\ \partial=d/dx,\]
for some integers \(n_{0}\) and \(n_{1}\), we assume the coefficients \(p_{0},\ldots,p_{n}\) have no common factor. \(p_{n}\partial^{n}\) is often called the _head_ of the equation.
A subset \(ap\) of coefficients \(\{p_{ij}\}\) is called a set of _accessory parameters_, if all other coefficients are uniquely written in terms of \(ap\) and the local exponents. The choice of \(ap\) is not unique, but the cardinality of \(ap\) is unique, which is called the _number of accessory parameters_. For \(H_{j}\), it is \(1\), and we choose one and call it _the_ accessory parameter.
When an equation is determined uniquely by the local exponents, it is said to be _free of accessory parameters_ or _rigid_.
### Equation \(H_{6}\)
We present a Fuchsian differential equation \(H_{6}\) of order \(6\) with \(9\) free local exponents, with \(3\) singular points, and with the Riemann scheme
\[R_{6}:\left(\begin{array}{ccccccc}x=0:&0&1&2&e_{1}&e_{2}&e_{3}\\ x=1:&0&1&2&e_{4}&e_{5}&e_{6}\\ x=\infty:&s&s+1&s+2&e_{7}&e_{8}&e_{9}\end{array}\right),\quad e_{1}+\cdots+e_{9}+3s=6,\]
with spectral type\({}^{4}\)\((3111,3111,3111)\) and with generic local exponents \(e=(e_{1},\ldots,e_{9})\). This is the main equation in this article.
Footnote 4: refer to §2.3
Any equation with Riemann scheme \(R_{6}\) has the following expression
\[T=p_{6}(x)\partial^{6}+\cdots+p_{1}(x)\partial+p_{0}, \tag{1.2}\]
where
\[\begin{array}{llll}p_{6}&=&x^{3}(x-1)^{3},&p_{5}&=&(p_{50}+p_{51}x)x^{2}(x-1)^{ 2},\\ p_{4}&=&(p_{40}+p_{41}x+p_{42}x^{2})x(x-1),&p_{3}&=&p_{30}+p_{31}x+p_{32}x^{2}+p _{33}x^{3},\\ p_{2}&=&p_{20}+p_{21}x+p_{22}x^{2},&p_{1}&=&p_{10}+p_{11}x,\end{array} \tag{1.3}\]
refer to Proposition 2.5 and Corollary 2.6. We call such an expression, in terms of polynomial coefficients in \(x\) and the differentiation \(\partial\), the \((x,\partial)\)-form (refer to §2.2 for related expressions). The indicial polynomial at \(x=0\) is given by
\[\rho(\rho-1)(\rho-2)\{(\rho-3)(\rho-4)(\rho-5)+(\rho-3)(\rho-4)p_{50}+(\rho-3) p_{40}+p_{30}\}.\]
So the coefficients \(p_{50},p_{40}\) and \(p_{30}\) are expressed as polynomials of the local exponents \(\{e_{1},e_{2},e_{3}\}\). Do the same at \(x=1\). Then we find that most of the coefficients (as well as \(p_{31}-p_{32}\)) can be expressed by the local exponents \(e_{1},\ldots,e_{9}\), except the following _four_ coefficients:
\[p_{10},\ p_{20},\ p_{21},\ p_{32}.\]
We next ask for the condition that any solution at \(\infty\) has no logarithmic terms, called the no-logarithmic condition. Applying \(T\) to the expression
\[u(x)=x^{-\rho}\sum_{m=0}^{\infty}u_{m}x^{-m},\]
we see that \(Tu\) is expanded as
\[f(\rho)u_{0}x^{-\rho}+[f(\rho+1)u_{1}+g(\rho)u_{0}]x^{-\rho-1}+[f(\rho+2)u_{2} +g(\rho+1)u_{1}+h(\rho)u_{0}]x^{-\rho-2}+\cdots,\]
where
\[\begin{array}{rl}f(\rho)&=&\rho(\rho+1)\cdots(\rho+5)-p_{51}\rho(\rho+1) \cdots(\rho+4)\\ &\quad+p_{42}\rho(\rho+1)\cdots(\rho+3)-p_{33}\rho(\rho+1)(\rho+2)\\ &\quad+p_{22}\rho(\rho+1)-p_{11}\rho+p_{0}\end{array} \tag{1.4}\]
is the indicial polynomial at infinity and
\[\begin{array}{rl}g(\rho)&=&-3\rho(\rho+1)\cdots(\rho+5)-(p_{50}-2p_{51}) \rho(\rho+1)\cdots(\rho+4)\\ &\quad+(p_{41}-p_{42})\rho\cdots(\rho+3)-p_{32}\rho(\rho+1)(\rho+2)\\ &\quad+p_{21}\rho(\rho+1)-p_{10}\rho,\\ h(\rho)&=&3\rho(\rho+1)\cdots(\rho+5)-(-2p_{50}+p_{51})\rho(\rho+1)\cdots(\rho+ 4)\\ &\quad+(p_{40}-p_{41})\rho\cdots(\rho+3)-p_{31}\rho(\rho+1)(\rho+2)\\ &\quad+p_{20}\rho(\rho+1).\end{array} \tag{1.5}\]
The local exponents at infinity, the roots of \(f(\rho)\), are \(s,s+1,s+2\), and the other three are generic; in particular,
\[f(s+k)\neq 0\quad(k\geq 3). \tag{1.6}\]
When \(\rho=s+2\), \(u_{m}\) (\(m\geq 1\)) is determined by the recurrence relation
\[f(s+2+m)u_{m}=F_{m}(u_{0},u_{1},\ldots,u_{m-1}),\]
for some function \(F_{m}\), thanks to (1.6). When \(\rho=s+1\), the equation for \(u_{1}\) becomes
\[f(s+2)u_{1}+g(s+1)u_{0}=0\]
with \(f(s+2)=0\). Therefore we need \(g(s+1)=0\). Then \(u_{m}\) (\(m\geq 2\)) is determined thanks to (1.6). When \(\rho=s\), the equation for \(u_{1}\) becomes
\[f(s+1)u_{1}+g(s)u_{0}=0\]
with \(f(s+1)=0\), and so we need \(g(s)=0\). Moreover the equation for \(u_{2}\) becomes
\[f(s+2)u_{2}+g(s+1)u_{1}+h(s)u_{0}=0\]
with \(f(s+2)=g(s+1)=0\). So we need \(h(s)=0\).
Hence the no-logarithmic condition is given by the _three_ equations:
\[g(s)=0,\quad g(s+1)=0,\quad h(s)=0 \tag{1.7}\]
for the _four_ coefficients \(p_{10}\), \(p_{20}\), \(p_{21}\), \(p_{32}\). Hence one degree of freedom remains in the choice of the coefficients. So we get
**Proposition 1.1**.: _The differential equation with the Riemann scheme \(R_{6}\) such that any local solution at 0 and 1 does not have logarithmic terms can be written as (1.2) with (1.3). This equation has four free coefficients \(\{p_{10},p_{20},p_{21},p_{32}\}\). Defining three polynomials \(\{f,g,h\}\) by (1.4) and (1.5), the condition that any local solution at \(\infty\) does not have logarithmic terms is given by the system of three equations (1.7)._
**Proposition 1.2**.: _Let_
\[T=T_{0}(\theta)+T_{1}(\theta)\partial+T_{2}(\theta)\partial^{2}+T_{3}(\theta) \partial^{3} \tag{1.8}\]
_be an equation with Riemann scheme \(R_{6}\). Then most of the coefficients can be expressed in terms of the local exponents as_
\[T_{0} = (\theta+2+s)(\theta+1+s)(\theta+s)B_{0},\quad B_{0}=(\theta+e_{7} )(\theta+e_{8})(\theta+e_{9}), \tag{1.9}\] \[T_{1} = (\theta+2+s)(\theta+1+s)B_{1},\quad B_{1}=T_{13}\theta^{3}+T_{12 }\theta^{2}+T_{11}\theta+T_{10},\] (1.10) \[T_{2} = (\theta+2+s)B_{2},\quad B_{2}=T_{23}\theta^{3}+T_{22}\theta^{2}+ T_{21}\theta+T_{20},\] (1.11) \[T_{3} = (-\theta-3+e_{1})(-\theta-3+e_{2})(-\theta-3+e_{3}), \tag{1.12}\]
_where_
\[T_{13} = -3,\quad T_{23}=3,\quad T_{12}=-9+s_{11}-2s_{13},\quad T_{22}=18 +s_{13}-2s_{11},\] \[T_{11} = -8+(s_{11}^{2}+2s_{11}s_{13}-s_{12}^{2}+s_{13}^{2})/3+s_{11}-5s_{ 13}-s_{21}+s_{22}-2s_{23},\] \[T_{21} = 35+(-s_{11}^{2}-2s_{11}s_{13}+s_{12}^{2}-s_{13}^{2})/3-7s_{11}+5 s_{13}+2s_{21}-s_{22}+s_{23},\] \[T_{20} = -T_{10}+19+(s_{11}^{2}s_{13}-s_{11}s_{12}^{2}+s_{11}s_{13}^{2}-s_ {12}^{2}s_{13})/9+(s_{13}^{3}+s_{11}^{3}-2s_{12}^{3})/27\] \[+(-2s_{11}^{2}-4s_{11}s_{13}+s_{11}s_{22}+2s_{12}^{2}+s_{22}s_{12} -2s_{13}^{2}+s_{22}s_{13})/3\] \[-5s_{11}+4s_{13}+3s_{21}-2s_{22}-s_{31}-s_{32}-s_{33},\]
_except \(T_{10}\), which does not affect the local exponents. In this sense, we call this coefficient the accessory parameter. Here \(s_{*}\) are symmetric polynomials of the local exponents:_
\[s_{11} =e_{1}+e_{2}+e_{3},\quad s_{12}=e_{4}+e_{5}+e_{6},\quad s_{13}=e_ {7}+e_{8}+e_{9},\] \[s_{21} =e_{1}e_{2}+e_{1}e_{3}+e_{2}e_{3},\quad s_{22}=e_{4}e_{5}+e_{4}e_{ 6}+e_{5}e_{6}, \tag{1.13}\] \[s_{23} =e_{7}e_{8}+e_{7}e_{9}+e_{8}e_{9},\quad s_{31}=e_{1}e_{2}e_{3}, \quad s_{32}=e_{4}e_{5}e_{6},\] \[s_{33} =e_{7}e_{8}e_{9},\quad s=-(s_{11}+s_{12}+s_{13}-6)/3.\]
**Definition 1.3**.: This equation is denoted by
\[H_{6}=H_{6}(e,T_{10}),\quad e=(e_{1},\dots,e_{9}).\]
### Proof of Proposition 1.2
Since the above operator (1.2): \(T=x^{3}(x-1)^{3}\partial^{6}+\cdots\) can be expressed in \((\theta,\partial)\)-form, we write this equation as (1.8): \(T=T_{0}+T_{1}\partial+\cdots\). Since the head (top order term) of \(T\) is
\[p_{6}\partial^{6}=x^{3}(x-1)^{3}\partial^{6}=x^{6}\partial^{6}-3(x^{5}\partial^{5})\,\partial+3(x^{4}\partial^{4})\,\partial^{2}-(x^{3}\partial^{3})\,\partial^{3},\]
and \(x^{i}\partial^{i}=\theta(\theta-1)\cdots(\theta-i+1)\), the terms \(T_{0}\) and \(T_{3}\) are determined by local exponents at \(x=\infty\) and at \(x=0\), as (1.9) and (1.12), thanks to Propositions 2.2 and 2.3. In addition we have
\[T_{13}=-3,\quad T_{23}=3.\]
We could then transform this into \((x,\partial)\)-form \(p_{6}(x)\partial^{6}+\cdots\), and follow the recipe in Proposition 1.1. Instead, we make a coordinate change \(x\to 1/x\) to this equation. Perform the transformation \(x=1/y,w=y\partial_{y},\partial_{y}=d/dy\) to (1.8):
\[T|_{x=1/y}=T_{0}(-w)-T_{1}(-w)yw+T_{2}(-w)y^{2}(w+1)w-T_{3}(-w)y^{3}(w+2)(w+1)w.\]
Multiply \(y^{s}\) from the right, and \(y^{-s}\) from the left:
\[\begin{array}{l}T_{0}(-(w+s))-T_{1}(-(w+s))y(w+s)+T_{2}(-(w+s))y^{2}(w+1+s)(w +s)\\ -T_{3}(-(w+s))y^{3}(w+2+s)(w+1+s)(w+s);\end{array}\]
Multiply \(y^{-3}\) from the left:
\[\begin{array}{l}T_{0}(-(w+s+3))y^{-3}-T_{1}(-(w+s+3))y^{-2}(w+s)\\ \qquad+T_{2}(-(w+s+3))y^{-1}\times(w+1+s)(w+s)\\ \qquad-T_{3}(-(w+s+3))(w+2+s)(w+1+s)(w+s).\end{array} \tag{1.14}\]
The first term is
\[\begin{array}{l}(-(w+s+3)+s)(-(w+s+3)+s+1)(-(w+s+3)+s+2)\\ \qquad\times(-(w+s+3)+e_{7})(-(w+s+3)+e_{8})(-(w+s+3)+e_{9})y^{-3}\\ =(w+3)(w+2)(w+1)(w+s+2-e_{7})(w+s+2-e_{8})(w+s+2-e_{9})y^{-3}\\ =(w+s+2-e_{7})(w+s+2-e_{8})(w+s+2-e_{9})\partial_{y}^{3},\end{array}\]
(by \(\partial_{y}^{3}=(w+1)(w+2)(w+3)y^{-3}\)) the last term is
\[\begin{array}{l}(-(w+s+3)+3-e_{1})(-(w+s+3)+3-e_{2})(-(w+s+3)+3-e_{3})\\ \qquad\times(w+s)(w+s+1)(w+s+2)\\ =(w+s)(w+s+1)(w+s+2)(w+e_{1}+s)(w+e_{2}+s)(w+e_{3}+s),\end{array}\]
and the second term is (a polynomial of \(w\))\(\,y^{-2}\) and the third term is (a polynomial of \(w\))\(\,y^{-1}\); these must be polynomials of \((w,\partial_{y})\). Since
\[\partial_{y}^{2}=(w+1)(w+2)y^{-2}\quad\mbox{and}\quad\partial_{y}=(w+1)y^{-1},\]
\((w+1)(w+2)\) divides \(T_{1}(-(w+s+3))\), and \((w+1)\) divides \(T_{2}(-(w+s+3))\), that is,
\[(\theta+2+s)(\theta+1+s)\,|\,T_{1}(\theta),\quad\mbox{and}\quad(\theta+2+s)\,| \,T_{2}(\theta).\]
Now we are ready. We put \(T_{1}(\theta)\) and \(T_{2}(\theta)\) as in (1.10) and (1.11), and transform it to \((x,\partial)\)-form: \(T=p_{6}\partial^{6}+p_{5}\partial^{5}+\cdots\). We have
\[p_{6}=x^{3}(x^{3}+T_{13}x^{2}+T_{23}x-1),\quad p_{5}=x^{2}((e_{7}+e_{8}+e_{9}+ 3s+18)+\cdots),\dots\]
All the coefficients \(p_{ij}\) are expressed in terms of
\[e_{1},\dots,e_{9},s,\quad T_{13},T_{12},T_{11},T_{10}\ (=p_{10}),\ T_{23},T_{22},T_{21},T_{20}\ (=p_{20}),\]
where \(s=(6-e_{1}-\dots-e_{9})/3\).
* As we saw already, \(T_{13}=-3,\ T_{23}=3\).
* \(x^{2}(x-1)^{2}\,|\,p_{5}\) leads to \[\begin{array}{rl}T_{12}&=e_{1}+e_{2}+e_{3}-2e_{7}-2e_{8}-2e_{9}-9=s_{11}-2s_ {13}-9,\\ T_{22}&=-2e_{1}-2e_{2}-2e_{3}+e_{7}+e_{8}+e_{9}+18=-2s_{11}+s_{13}+18.\end{array}\]
* \(x(x-1)\,|\,p_{4}\) leads to \[\begin{array}{rl}T_{11}+T_{21}=s_{21}-s_{23}-6s_{11}+27.\end{array}\]
* The requirement that the local exponents at \(x=1\) are \(\{e_{4},e_{5},e_{6}\}\) is equivalent to the system \[\begin{array}{rl}T_{11}+3s^{2}-(-2s_{11}-2s_{13}+12)s-5s_{11}+s_{13}+s_{21}+ 2s_{23}+20=0,\\ T_{10}+T_{20}+s^{3}+(s_{11}+s_{13}-6)s^{2}-(-T_{11}+5s_{11}-s_{13}\\ -s_{21}-2s_{23}-20)s+s_{32}+s_{33}+9s_{11}-3s_{21}+s_{31}-27=0.\end{array}\]
Thus \(T_{13},\ T_{12},\ T_{11},\ T_{23},\ T_{22},\ T_{21},\) and \(T_{10}+T_{20}\) are expressed by the local exponents.
### Table of equations \(H_{j}\ (j=6,5,4,3)\) and \(E_{2}\)
We _always assume_ that local solutions corresponding to local exponents with integral difference, such as \(0,\,1,\,2;\,s,\,s+1,\,s+2\), have no logarithmic terms, and that the other exponents, such as \(e_{1},\,e_{2},\,\dots\), are generic. \(R_{n}\) denotes the Riemann scheme of \(H_{n}\).
We tabulate the equations \(H_{j}\ (j=6,5,4,3)\): they are related to \(H_{6}\) via addition-middle-convolutions and restrictions (see §3.1, 3.2 and 4.7).
* \(H_{6}=H_{6}(e,T_{10}),\qquad e=(e_{1},\dots,e_{9})\) \[=x^{3}(x-1)^{3}\partial^{6}+x^{2}(x-1)^{2}P_{1}\partial^{5}+x(x-1)P_{2} \partial^{4}+P_{3}\partial^{3}+P_{2}\partial^{2}+P_{1}\partial+P_{0},\] \[=T_{0}+T_{1}\partial+T_{2}\partial^{2}+T_{3}\partial^{3},\quad \theta=x\partial,\] \[R_{6}:\left(\begin{array}{cccc}x=0:&0&1&2&e_{1}&e_{2}&e_{3}\\ x=1:&0&1&2&e_{4}&e_{5}&e_{6}\\ x=\infty:&s&s+1&s+2&e_{7}&e_{8}&e_{9}\end{array}\right),\quad s=(6-e_{1}- \dots-e_{9})/3,\]
where \(P_{j}\) is used symbolically for a polynomial of degree \(j\) in \(x\), and
\[\begin{array}{rl}T_{0}&=&(\theta+s+2)(\theta+s+1)(\theta+s)B_{0},\quad B_{0} =(\theta+e_{7})(\theta+e_{8})(\theta+e_{9}),\\ T_{1}&=&(\theta+s+2)(\theta+s+1)B_{1},\quad B_{1}=T_{13}\theta^{3}+T_{12} \theta^{2}+T_{11}\theta+T_{10},\\ T_{2}&=&(\theta+s+2)B_{2},\quad B_{2}=T_{23}\theta^{3}+T_{22}\theta^{2}+T_{21} \theta+T_{20},\\ T_{3}&=&-(\theta+3-e_{1})(\theta+3-e_{2})(\theta+3-e_{3}),\end{array}\]
where \(T_{13},T_{12},T_{11},T_{23},T_{22},T_{21}\) and \(T_{20}-T_{10}\) are polynomials in \(e_{1},\,\dots,\,e_{9}\); they are given in Proposition 1.2. We choose \(T_{10}\) as the accessory parameter.
* \(H_{5}=H_{5}(e_{1},\ldots,e_{8},B_{510})\) \[=x^{3}(x-1)^{3}\partial^{5}+x^{2}(x-1)^{2}P_{1}\partial^{4}+x(x-1)P_{2} \partial^{3}+P_{3}\partial^{2}+P_{2}\partial+P_{1}\] \[=x\overline{T}_{0}+\overline{T}_{1}+\overline{T}_{2}\partial+ \overline{T}_{3}\partial^{2},\] where \(P_{j}\) is used symbolically for a polynomial of degree \(j\) in \(x\), \[R_{5}:\left(\begin{array}{ccccc}0&1&e_{1}-1&e_{2}-1&e_{3}-1\\ 0&1&e_{4}-1&e_{5}-1&e_{6}-1\\ 1+s&2+s&3+s&e_{7}+1&e_{8}+1\end{array}\right),\qquad s=(6-e_{1}-\cdots-e_{8})/3,\] \[\overline{T}_{0} = (\theta+s+1)(\theta+s+2)(\theta+s+3)B_{50},\quad B_{50}=(\theta+e_{7}+1)(\theta+e_{8}+1),\] \[\overline{T}_{1} = (\theta+s+1)(\theta+s+2)B_{51},\quad B_{51}:=B_{1}(e_{9}=0),\] \[\overline{T}_{2} = (\theta+s+2)B_{52},\quad B_{52}:=B_{2}(e_{9}=0),\] \[\overline{T}_{3} = -(\theta+3-e_{1})(\theta+3-e_{2})(\theta+3-e_{3}).\] This is obtained from \(H_{6}\) by putting \(e_{9}=0\) and dividing by \(\partial\) from the right. The accessory parameter is the constant term \(B_{510}\) of the polynomial \(B_{51}\) in \(\theta\).
* \(H_{4}=H_{4}(c_{1},\ldots,c_{7},\mathcal{T}_{10})\) \[=x^{2}(x-1)^{2}\partial^{4}+x(x-1)P_{1}\partial^{3}+P_{2}\partial^{2}+P_{1} \partial+P_{0},\] \[=\mathcal{T}_{0}+\mathcal{T}_{1}\partial+\mathcal{T}_{2}\partial^{2},\] where \(P_{j}\) is used symbolically for a polynomial of degree \(j\) in \(x\), \[R_{4}:\left(\begin{array}{ccccc}x=0:&0&1&c_{1}&c_{2}\\ x=1:&0&1&c_{3}&c_{4}\\ x=\infty:&c_{8}&c_{5}&c_{6}&c_{7}\end{array}\right),\quad c_{1}+\cdots+c_{8}=4,\] \[\mathcal{T}_{0} = (\theta+c_{5})(\theta+c_{6})(\theta+c_{7})(\theta+c_{8}),\] \[\mathcal{T}_{1} = -2\theta^{3}+\mathcal{T}_{12}\theta^{2}+\mathcal{T}_{11}\theta+ \mathcal{T}_{10},\] \[\mathcal{T}_{12} = c_{1}+c_{2}-c_{5}-c_{6}-c_{7}-c_{8}-5,\] \[\mathcal{T}_{11} = 3(c_{1}+c_{2})-c_{1}c_{2}+c_{3}c_{4}-c_{5}c_{6}-c_{5}c_{7}-c_{5}c_ {8}-c_{6}c_{7}-c_{6}c_{8}-c_{7}c_{8}-8,\] \[\mathcal{T}_{2} = (\theta-c_{1}+2)(\theta-c_{2}+2),\] where \(\mathcal{T}_{10}\) is the accessory parameter.
* \(H_{3}=H_{3}(b_{1},\ldots,b_{6},a_{00})\) \[=x^{2}(x-1)^{2}\partial^{3}+x(x-1)P_{1}\partial^{2}+P_{2}\partial+P_{1}\] \[=xS_{n}+S_{0}+S_{1}\partial,\] where \(P_{j}\) is used symbolically for a polynomial of degree \(j\) in \(x\), \[R_{3}:\left(\begin{array}{ccccc}x=0:&0&b_{1}&b_{2}\\ x=1:&0&b_{3}&b_{4}\\ x=\infty:&b_{7}&b_{5}&b_{6}\end{array}\right),\quad b_{1}+\cdots+b_{7}=3,\] \[S_{n} = (\theta+b_{5})(\theta+b_{6})(\theta+b_{7}),\] \[S_{0} = -2\theta^{3}+(2b_{1}+2b_{2}+b_{3}+b_{4}-3)\theta^{2}\] \[\qquad+(-b_{1}b_{2}+(b_{3}-1)(b_{4}-1)-b_{5}b_{6}-(b_{5}+b_{6})b_{7 })\theta+a_{00},\] \[S_{1} = (\theta-b_{1}+1)(\theta-b_{2}+1),\] where \(a_{00}\) is the accessory parameter.
* \(E_{2}=E_{2}(a_{1},a_{2},a_{3})=E(a,b,c)\) (the Gauss hypergeometric equation) \[\begin{array}{l}=(\theta+a)(\theta+b)-(\theta+c)\partial\\ =x(x-1)\partial^{2}+((a+b+1)x-c)\partial+ab\end{array}\]
\[R_{2}=\left(\begin{array}{llll}x=0:&0&a_{1}\\ x=1:&0&a_{2}\\ x=\infty:&a_{3}&a_{4}\end{array}\right)=\left(\begin{array}{llll}x=0:&0&1-c \\ x=1:&0&c-a-b\\ x=\infty:&a&b\end{array}\right)=R_{abc},\]
where \(a_{1}+\dots+a_{4}=1.\) This equation is rigid.
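As a quick consistency check of the two expressions (an illustrative computation, ours): using \(\theta=x\partial\), one has \(\theta^{2}=x^{2}\partial^{2}+x\partial\) and \(\theta\partial=x\partial^{2}\), hence

\[(\theta+a)(\theta+b)-(\theta+c)\partial=x^{2}\partial^{2}+(a+b+1)x\partial+ab-(x\partial^{2}+c\partial)=x(x-1)\partial^{2}+((a+b+1)x-c)\partial+ab.\]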
Summing up,
\[\begin{array}{lccccc}\mbox{name of the equation}&H_{6}&H_{5}&H_{4}&H_{3}&E_{2}\\ \mbox{order of the equation}&6&5&4&3&2\\ \mbox{number of the local exponents}&9&8&7&6&3\\ \mbox{number of accessory parameters}&1&1&1&1&0\end{array}\]
### Equations \(G_{j},E_{j}\) (\(j=6,5,4,3\))
Each of the equations \(H_{j}\) (\(j=6,5,4,3\)) has one accessory parameter. The equations \(G_{j},E_{j}\) are equations \(H_{j}\) with a specified cubic polynomials of the local exponents \(e=(e_{1},e_{2},\dots)\) as the accessory parameter.
#### 1.4.1 \(G_{6}(e,a)\)
The accessory parameter of \(H_{6}\) is denoted by \(T_{10}\). The equation \(G_{6}\) is \(H_{6}\) with a specific cubic polynomial \(T_{10}(e)\) of \(e\) as \(T_{10}\). This polynomial is determined roughly as follows: If the equation \(G_{6}\) admits shift operators for the block shifts
\[sh_{j}:(e_{j},e_{j+1},e_{j+2},s)\to(e_{j}+1,e_{j+1}+1,e_{j+2}+1,s-1)\quad(j=1,4,7),\]
then \(T_{10}(e)\) must be
\[T_{10}(e)=S_{10}+R,\quad R=a_{0}+a_{1}t_{21}+a_{2}t_{22}+a_{3}t_{23}+a_{4}t_{3 1}+a_{5}t_{32}+a_{6}t_{33},\]
where \(S_{10}\) and \(t_{ij}\) are cubic polynomials in \(e\) defined in Theorem 7.1 and Corollary 7.2, and \(a_{0},\dots,a_{6}\) are free constants. We denote the equation with the above \(T_{10}\) by \(G_{6}(e,a)\).
#### 1.4.2 \(G_{j}(e,a)\) (\(j=3,4,5\))
The equation \(H_{3}\) is obtained from \(H_{6}\) by middle convolution (§3.2.1). The equations \(H_{4}\) and \(H_{5}\) are obtained from \(H_{3}\) by addition and middle convolution (§3.1). We follow this procedure starting from \(G_{6}(e,a)\) and get \(G_{3}(e,a),G_{4}(e,a)\) and \(G_{5}(e,a)\).
#### 1.4.3 \(E_{j}(e)\) (\(j=6,5,4,3\))
As the most symmetric equation, \(E_{6}\) is defined as \(G_{6}(e,0)\). Equations \(E_{3}(e),E_{4}(e)\) and \(E_{5}(e)\) are \(G_{3}(e,0),G_{4}(e,0)\) and \(G_{5}(e,0)\), respectively.
## 2 Generalities
* 2.1 Symmetry
* 2.1.1 Shift symmetry
* 2.1.2 Differentiation, adjoint and coordinate change
* 2.1.3 Symmetries of \(H_{j},G_{j},E_{j}\)
* 2.1.4 Examples
* 2.2 \((\theta,\partial)\)-form and \((x,\theta,\partial)\)-form
* 2.2.1 Local exponents at \(0\) and \(\infty\)
* 2.3 Spectral type and the number of accessory parameters
* 2.4 Adjoint equations
* 2.4.1 Adjoints of the operators
* 2.4.2 Self-adjoint equations
* 2.4.3 Adjoint equation in projective differential geometry
In this section, we prepare tools that we need to study our equations in the following sections and the following papers.
### Symmetry
In this subsection, \(H(e,ap)\) denotes a differential equation with local exponents \(e\) and accessory parameters \(ap\), \(G(e,a)\) a differential equation with local exponents \(e\) and accessory parameters \(ap\) assigned as functions of \(e\) with a set \(a\) of parameters, and \(E(e)\) a differential equation with local exponents \(e\) where accessory parameters are assigned as functions of \(e\). Examples are
\[H_{j},\quad G_{j},\quad E_{j}\quad(j=3,4,5,6).\]
#### 2.1.1 Shift symmetry
For a shift \(sh(e)\) of local exponents \(e\) of a differential equation, if a differential operator \(P\) sends
* solutions of \(H(e,ap)\) bijectively to those of \(H(sh(e),ap^{\prime})\), for some \(ap^{\prime}\),
* solutions of \(G(e,a)\) bijectively to those of \(G(sh(e),a)\),
* solutions of \(E(e)\) bijectively to those of \(E(sh(e))\),
the operator \(P\) is called a _shift operator_ for the shift \(sh(e)\). The equation with such a property is said to be symmetric with respect to the shift \(sh(e)\).
#### 2.1.2 Differentiation, adjoint and coordinate change
If derivatives of solutions satisfy the same equation, with some change of
* the local exponents \(e\) and the accessory parameters \(ap\), for \(H(e,ap)\),
* the local exponents \(e\) and the parameters \(a\), for \(G(e,a)\),
* the local exponents \(e\), for \(E(e)\),
the equation is said to enjoy differentiation symmetry. This is a kind of shift symmetry.
If the adjoint equation of an equation remains the same, with some change of the exponents and the parameters as itemized above, the equation is said to enjoy adjoint symmetry.
If an equation after a coordinate change of \(x\), remains the same, with some change of the exponents and the parameters as itemized above, the equation is said to be symmetric relative to this transformation.
#### 2.1.3 Symmetries of \(H_{j},G_{j},E_{j}\)
We tabulate the symmetries that \(K_{j}=\{H_{j},G_{j},E_{j}\}\) enjoy (Y=yes, N=no):
\begin{tabular}{c c c c c c} Symmetry & \(K_{6}\) & \(K_{5}\) & \(K_{4}\) & \(K_{3}\) & \(E_{2}\) \\ Shift operators & Y & Y & Y & N & Y \\ Differentiation & Y & N & Y & N & Y \\ Adjoint & Y & Y & Y & Y & Y \\ \(x\to 1/x\) & Y & N & N & Y & Y \\ \(x\to 1-x\) & Y & Y & Y & Y & Y \\ \end{tabular}
#### 2.1.4 Examples
* Adjoint of \(H_{3}(e,a_{00})\) is \(H_{3}(-e_{1},\ldots,-e_{4},2-e_{5},2-e_{6},a^{\prime}_{00})\), where \[\begin{array}{ll}a^{\prime}_{00}&=-e_{1}e_{2}+(e_{1}+e_{2}+e_{3}+e_{4})(e_{5} +e_{6}-2)\\ &+(e_{5}-1)^{2}+(e_{6}-1)^{2}+(e_{5}-1)(e_{6}-1)-1-a_{00}\end{array}\]
* Adjoint of \(H_{6}(e,T_{10})\) is \(H_{6}(2-e_{1},\ldots,2-e_{6},1-e_{7},1-e_{8},1-e_{9},T^{\prime}_{10})\), where \[T^{\prime}_{10}=6s^{2}+(4s_{12}-18)s-6s_{12}-2s_{21}+2s_{22}-4s_{23}+8-T_{10}.\]
* Coordinate change \(x\to 1-x\) of \(H_{6}\): \[H_{6}(\mathbf{e}_{1},\mathbf{e}_{4},\mathbf{e}_{7},T_{10})|_{x\to 1-x}=H_{6}(\mathbf{e}_{4},\mathbf{e}_{ 1},\mathbf{e}_{7},T^{\prime}_{10}),\] where \[T^{\prime}_{10}=3s^{2}+(s_{11}+s_{12}-s_{23}+2)s+3s_{11}+3s_{12}-3s_{23}-3s_{3 3}-21-T_{10}.\]
* Coordinate change \(x\to 1/x\) of \(H_{6}\): \[x^{-s-3}\circ H_{6}(\mathbf{e}_{1},\mathbf{e}_{4},\mathbf{e}_{7},T_{10})|_{x\to 1/x} \circ x^{s}=H_{6}(\mathbf{e}_{7}-s\mathbf{1},\mathbf{e}_{4},\mathbf{e}_{1}+s\mathbf{1},T^{\prime}_{ 10}),\] where \[\begin{array}{ll}T^{\prime}_{10}&=4s^{3}+(3s_{11}+9)s^{2}+(6s_{11}-s_{12}+2s_ {21}+s_{23}+8)s+s_{33}+6s_{12}+3s_{21}\\ &-3s_{22}+3s_{23}+s_{31}+s_{32}-3+T_{10}.\end{array}\]
Here \(H_{6}|_{x\to 1-x}\) and \(H_{6}|_{x\to 1/x}\) are \(H_{6}\) after the coordinate changes \(x\to 1-x\) and \(x\to 1/x\), respectively.
### \((\theta,\partial)\)-form and \((x,\theta,\partial)\)-form
Let \(P=a_{n}(x)\partial^{n}+\cdots\) be a differential operator of order \(n\) with polynomial coefficients, given in \((x,\partial)\)-form. Rewrite each term as
\[x^{i}\partial^{j}=x^{i-j}(x^{j}\partial^{j}),\quad i\geq j,\qquad x^{i}\partial ^{j}=(x^{i}\partial^{i})\partial^{j-i},\quad i\leq j,\]
and substitute
\[x^{i}\partial^{i}=\theta(\theta-1)\cdots(\theta-i+1),\quad i\geq 1,\quad \theta=x\partial.\]
Then we have
**Proposition 2.1**.: _Any differential operator \(P=a_{n}(x)\partial^{n}\)\(+\cdots\) with polynomial coefficients of order \(n\) can be written uniquely as_
\[P=x^{q}P_{-q}(\theta)+\cdots+xP_{-1}(\theta)+P_{0}(\theta)+P_{1}(\theta) \partial+\cdots+P_{p}(\theta)\partial^{p},\ \ p\leq n,\ q\geq 0,\]
_where \(P_{*}\) is a polynomial in \(\theta\) of degree as follows:_
\[\deg(P_{-q})\leq n,\ldots,\deg(P_{0})\leq n,\quad\deg(P_{1})\leq n-1,\ldots, \deg(P_{p})\leq n-p.\]
_This expression is called the \((x,\theta,\partial)\)-form of \(P\)._
When \(q=0\), the equation has a \((\theta,\partial)\)-form.
\[\begin{array}{ccccc}\mbox{equation}&H_{6}&H_{5}&H_{4}&H_{3}&E_{2}\\ p&3&2&2&1&1\\ q&0&1&0&1&0\end{array}\]
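The rewriting in Proposition 2.1 is mechanical. The following SymPy sketch (ours; the names `theta_form` and `falling` are illustrative) converts an operator given by its coefficients \(p_{j}(x)\) into the dictionary \(k\mapsto P_{k}(\theta)\), and recovers the \((\theta,\partial)\)-form of the Gauss operator \(E_{2}\) of §1.3:

```python
import sympy as sp

x, theta = sp.symbols('x theta')

def falling(t, i):
    """x^i d^i = t(t-1)...(t-i+1) with t = theta."""
    return sp.prod([t - m for m in range(i)]) if i > 0 else sp.Integer(1)

def theta_form(coeffs):
    """coeffs[j] = p_j(x); returns {k: P_k(theta)} with
    P = x^q P_{-q} + ... + P_0 + P_1 d + ... + P_p d^p   (key k = -q..p)."""
    out = {}
    for j, pj in enumerate(coeffs):
        cs = sp.Poly(pj, x).all_coeffs()[::-1]    # ascending powers of x
        for i, c in enumerate(cs):
            if c == 0:
                continue
            if i >= j:   # x^i d^j = x^{i-j} (x^j d^j)
                k, poly = -(i - j), c * falling(theta, j)
            else:        # x^i d^j = (x^i d^i) d^{j-i}
                k, poly = j - i, c * falling(theta, i)
            out[k] = sp.expand(out.get(k, 0) + poly)
    return out

a, b, c = sp.symbols('a b c')
E2 = theta_form([a*b, (a + b + 1)*x - c, x**2 - x])
# E2[0] == theta**2 + (a+b)*theta + a*b, i.e. (theta+a)(theta+b) expanded
# E2[1] == -theta - c, i.e. -(theta+c)
```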
#### 2.2.1 Local exponents at 0 and \(\infty\)
Given an operator \(P=x^{q}P_{-q}+\cdots+P_{p}\partial^{p}\) of \((x,\theta,\partial)\)-form. Assume
\[p,\ q\geq 0,\qquad P_{-q},\ P_{p}\neq 0.\]
Applying \(P\) to a local solution around \(x=0\): \(u=x^{\rho}(1+\cdots)\), we see only the last term is effective to compute local exponents:
\[P_{p}(\theta)\,\partial^{p}\,u=\rho(\rho-1)\cdots(\rho-p+1)P_{p}(\rho-p)x^{ \rho-p}(1+\cdots).\]
**Proposition 2.2**.: _The local exponents at \(x=0\) are \(0\), \(1\),..., \(p-1\) and the roots of \(P_{p}(\rho-p)\)._
At \(x=\infty\), perform the change \(x=1/y,w=y\partial_{y}\), and use the formulae
\[\partial=-yw,\quad\partial^{2}=y^{2}w(w+1),\quad\partial^{3}=-y^{3}w(w+1)(w+2).\ldots\]
Then \(P\) changes into
\[y^{-q}P_{-q}(-w)+\cdots+P_{0}(-w)-P_{1}(-w)yw+P_{2}(-w)y^{2}w(w+1)+\cdots.\]
Applying this to a local solution around \(y=0\): \(v=y^{\rho}(1+\cdots)\), we see only the first term is effective:
\[y^{-q}P_{-q}(-w)\ v=y^{-q}P_{-q}(-\rho)y^{\rho}(1+\cdots).\]
**Proposition 2.3**.: _The local exponents at \(x=\infty\) are the roots of \(P_{-q}(-\rho)\)._
This means that the first and the last terms of the expression \(P=x^{q}P_{-q}+\cdots+P_{p}\partial^{p}\) are determined, up to multiplicative constants, by the local exponents at \(x=0\) and \(\infty\), respectively.
For example, for \(H_{6}\), the first term is
\[(\theta+s+2)(\theta+s+1)(\theta+s)(\theta+e_{7})(\theta+e_{8})(\theta+e_{9}),\]
and the last term is
\[-(\theta+3-e_{1})(\theta+3-e_{2})(\theta+3-e_{3}).\]
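Propositions 2.2 and 2.3 can be verified mechanically; a small SymPy sketch (ours), applied to the \((\theta,\partial)\)-form of \(E_{2}\), reads off the local exponents:

```python
import sympy as sp

rho, theta, a, b, c = sp.symbols('rho theta a b c')
# (x, theta, d)-form of the Gauss operator E2 (see the sketch in Sec. 2.2):
P = {0: (theta + a)*(theta + b), 1: -(theta + c)}
p, q = max(P), -min(P)
exps_at_0 = list(range(p)) + sp.solve(P[p].subs(theta, rho - p), rho)
exps_at_inf = sp.solve(P[-q].subs(theta, -rho), rho)
print(exps_at_0)     # [0, 1 - c]
print(exps_at_inf)   # [a, b]  (order may vary)
```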
### Spectral type and the number of accessory parameters
In this section, the spectral type of a singular point, which characterizes local behavior of solutions at the singular point, is introduced. The set of spectral types of a Fuchsian differential equation determines the number of accessory parameters.
**Definition 2.4**.: Consider a Fuchsian differential equation \(P\) of order \(n\). Suppose at a singular point, the local exponents are given as \(\{s,\ s+1,\ \ldots,\ s+r,\ e_{1},\ \ldots,\ e_{n-r-1}\}\), where \(s,e_{1},\ldots,e_{n-r-1}\) are _generic_ (no algebraic relation among them), and the local solutions do not have logarithmic terms (_i.e._, local monodromy is semi-simple). In this case, we say the singular point has the _spectral type_\((r+1)1\ldots 1\). For the spectral type in a more general situation, see [8, 5].
For example, the equations \(H_{6}\) and the Gauss equation \(E_{2}\) have spectral types \(3111\) and \(11\) at the three singular points, respectively. They are written as
\[(3111,\ 3111,\ 3111)\quad\text{and}\quad(11,\ 11,\ 11),\]
respectively.
**Proposition 2.5**.: _Let \(P\) be a differential operator which has regular singular point at \(x=0\):_
\[P=x^{n}\partial^{n}+x^{n-1}p_{n-1}\partial^{n-1}+\cdots+xp_{1}\partial+p_{0},\]
_where \(p_{j}\) are holomorphic at \(x=0\). If the local exponents at \(x=0\) are \(\{0,1,\ldots,r,\)\(e_{1},\ldots,e_{n-r-1}\}\)\((r=0,1,\ldots,n-1)\) and \(\{e_{1},\ldots,e_{n-r-1}\}\) are generic, then_
\[p_{j}(0)=0,\quad j=0,\ldots,r.\]
_Moreover, if the local solutions do not have logarithmic terms, i.e., if the spectral type at \(x=0\) is \((r+1)1\ldots 1\), then 6_
Footnote 6: \(p_{r}(0)=0\) implies \(x|p_{r}\)
\[x^{2}|p_{r-1},\ \ldots,\ x^{r}|p_{1},\ \ x^{r+1}|p_{0}.\]
Proof.: For notational simplicity we let \(n=6\) and \(r=2\). Set
\[p_{j}=p_{j0}+p_{j1}x+p_{j2}x^{2}+\cdots,\quad u=u_{0}+u_{1}x+u_{2}x^{2}+\cdots\]
Then
\[Pu=(p_{00}+p_{01}x+p_{02}x^{2}+p_{03}x^{3}+\cdots)(u_{0}+u_{1}x+u_{2 }x^{2}+u_{3}x^{3}+\cdots)\] \[+x(p_{10}+p_{11}x+p_{12}x^{2}+\cdots)(u_{1}+2u_{2}x+3u_{3}x^{2}+\cdots)\] \[+x^{2}(p_{20}+p_{21}x+\cdots)(2u_{2}+3!u_{3}x+\cdots)\] \[+x^{3}(p_{30}+\cdots)(3!u_{3}+\cdots)+\cdots,\]
where
\[p_{00}=p_{10}=p_{20}=0.\]
The solutions have no logarithmic term if and only if for arbitrary \(u_{0},u_{1}\) and \(u_{2}\), the coefficients \(u_{3},u_{4},\dots\) are uniquely determined by \(Pu=0\). The coefficient of \(x\) is \(p_{01}u_{0}\), that of \(x^{2}\) is \(p_{01}u_{1}+p_{02}u_{0}+p_{11}u_{1}\). Thus we have
\[p_{01}=p_{02}=p_{11}=0.\]
Since the genericity of the local exponents asserts \(p_{30}\neq 0\), the vanishing of the coefficient of \(x^{3}\) determines \(u_{3}\) as a linear form of \(\{u_{0},u_{1},u_{2}\}\), and so on.
**Corollary 2.6**.: _Let \(P\) be as above. Set_
\[p_{r}=xq_{r},\ p_{r-1}=x^{2}q_{r-1},\ \dots,\ p_{1}=x^{r}q_{1},\ p_{0}=x^{r+1}q_{ 0},\]
_where \(q_{0},\dots,q_{r}\) are holomorphic at \(x=0\). Then \(P\) has the following expression:_
\[x^{-r-1}P=x^{n-r-1}\partial^{n}+x^{n-r-2}p_{n-1}\partial^{n-1}+\cdots+p_{r+1} \partial^{r+1}+q_{r}\partial^{r}+\cdots+q_{1}\partial+q_{0}.\]
_In particular when \(n=6\) and \(r=2\), (i.e., spectral type is \(3111\) )_
\[x^{-3}P=x^{3}\partial^{6}+x^{2}p_{5}\partial^{5}+xp_{4}\partial^{4}+q_{3} \partial^{3}+q_{2}\partial^{2}+q_{1}\partial+q_{0}.\]
The _number of accessory parameters_ is defined as the number of coefficients \(p_{ij}\) of the equation \(P=\sum_{j}\sum_{i}p_{ij}x^{i}\partial^{j}\) which are not determined by the local exponents.
**Proposition 2.7**.: (cf. [8, 5]) _The number of accessory parameters of a Fuchsian equation of order \(n\) with \(m\) singular points is given by_
\[\frac{1}{2}\left\{(m-2)n^{2}-\sum_{\rm singular\ points}({\rm multiplicity\ of\ local\ exponents\ }mod\ 1)^{2}+2\right\}.\]
For \(H_{j}\), \(m=3\). The equation \(H_{6}\) has Riemann scheme \(R_{6}\) (Introduction), its spectral type is \((3111,3111,3111)\); since \(\{6^{2}-3(3^{2}+3\cdot 1^{2})+2\}/2=1\), it has one accessory parameter. The others are computed as
\[\begin{array}{llll}\mbox{equation \ spectral type}\\ H_{6}&(3111,3111,3111):&\{6^{2}-3(3^{2}+3\cdot 1^{2})+2\}/2=1,\\ H_{5}&(2111,2111,311):&\{5^{2}-2(2^{2}+3\cdot 1^{2})-(3^{2}+2\cdot 1^{2})+2\}/2=1,\\ H_{4}&(211,211,1111):&\{4^{2}-2(2^{2}+2\cdot 1^{2})-(4\cdot 1^{2})+2\}/2=1,\\ H_{3}&(111,111,111)&:&\{3^{2}-3(3\cdot 1^{2})+2\}/2=1,\\ E_{2}&(11,11,11)&:&\{2^{2}-3(1^{2}+1^{2})+2\}/2=0.\end{array}\]
The Gauss equation \(E_{2}\) has no accessory parameter. The others have one.
### Adjoint equations
Adjoint equation of a linear differential equation should be discussed under the frame work of projective differential geometry, as we sketch below. In this article, however, we make the following practical definition for _operators_.
**Definition 2.8**.: The adjoint \(P^{*}\) of \(P=\sum p_{j}(x)\partial^{j}\) is defined as
\[P^{*}=\sum(-)^{j}\partial^{j}\circ p_{j}(x).\]
When we are working on differential operators and their adjoints, we _always assume_ that the coefficients are polynomials in \(x\) free of common factor. Otherwise we can not speak of adjoint symmetry:
_Remark 2.9_.: As we see in SS5.3, the adjoint of the Gauss operator \(E=E(a,b,c)\) is again the Gauss operator \(E^{*}=E(1-a,1-b,2-c)\). However, if we apply the above formula for
\[P=\frac{1}{x(1-x)}\;E=\partial^{2}+\frac{(a+b+1)x-c}{x(x-1)}\;\partial+\frac{ ab}{x(x-1)},\]
then the adjoint \(P^{*}\) is an operator with the Riemann scheme
\[\left(\begin{array}{lcr}x=0:&1&c\\ x=1:&1&a+b-c+1\\ x=\infty:&-a-1&-b-1\end{array}\right),\]
which is not Gauss, but \(P^{*}\circ x(x-1)=E^{*}\).
#### 2.4.1 Adjoints of the operators
The adjoint operator of \(H_{j}\) is the same operator with a simple change of local exponents. Once the operator is expressed in the \((x,\theta,\partial)\)-form, this is easily checked by using the following formulae:
\[(PQ)^{*}=Q^{*}P^{*},\quad\partial^{*}=-\partial,\quad\theta^{*}=- \partial\cdot x=-(\theta+1),\] \[(\theta^{i}(\partial)^{j})^{*}=(-\partial)^{j}(-\theta-1)^{i}=(- \theta-1-j)^{i}(-\partial)^{j},\quad\partial\theta=(\theta+1)\partial.\]
For example, the adjoint of \(H_{6}\) is computed as
\[\begin{array}{lcl}T_{0}^{*}&=(-\theta+s+1)(-\theta+s)(-\theta+s-1)(-\theta- 1-e_{7})\cdots(-\theta-1-e_{9}),\\ (T_{1}\partial)^{*}&=\partial^{*}(-\theta+1+s)(-\theta+s)B_{1}(-\theta-1)\\ &=(-\theta+s)(-\theta+s-1)B_{1}(-\theta-2)\cdot(-\partial),\\ (T_{2}\partial^{2})^{*}&=(-\theta+s-1)B_{2}(-\theta-3)\cdot(-\partial)^{2},\\ (T_{3}\partial^{3})^{*}&=(\theta+1+e_{1})(\theta+1+e_{2})(\theta+1+e_{3})(- \partial)^{3}.\end{array}\]
The accessory parameter \(T_{10}\) changes as in SS2.1.4.
Change of the Riemann schemes will jump to the eyes:
* \(H_{6}\): \[\left(\begin{array}{cccccc}x=0:&0&1&2&e_{1}&e_{2}&e_{3}\\ x=1:&0&1&2&e_{4}&e_{5}&e_{6}\\ x=\infty:&s&s+1&s+2&e_{7}&e_{8}&e_{9}\end{array}\right)\] \[\rightarrow\ \left(\begin{array}{cccccc}0&1&2&2-e_{1}&2-e_{2}&2-e_{3}\\ 0&1&2&2-e_{4}&2-e_{5}&2-e_{6}\\ -1-s&-s&1-s&1-e_{7}&1-e_{8}&1-e_{9}\end{array}\right),\]
* \(H_{5}\): \[\left(\begin{array}{ccccc}x=0:&0&1&e_{1}-1&e_{2}-1&e_{3}-1\\ x=1:&0&1&e_{4}-1&e_{5}-1&e_{6}-1\\ x=\infty:&1+s&2+s&3+s&e_{7}+1&e_{8}+1\end{array}\right)\] \[\rightarrow\ \left(\begin{array}{ccccc}0&1&2-e_{1}&2-e_{2}&2-e_{3}\\ 0&1&2-e_{4}&2-e_{5}&2-e_{6}\\ -s-1&-s&-s+1&1-e_{7}&1-e_{8}\end{array}\right),\]
* \(H_{4}\): \[\left(\begin{array}{ccccc}x=0:&0&1&e_{1}&e_{2}\\ x=1:&0&1&e_{3}&e_{4}\\ x=\infty:&e_{5}&e_{6}&e_{7}&e_{8}\end{array}\right)\rightarrow\left(\begin{array} []{ccccc}0&1&1-e_{1}&1-e_{2}\\ 0&1&1-e_{3}&1-e_{4}\\ 1-e_{5}&1-e_{6}&1-e_{7}&1-e_{8}\end{array}\right),\]
* \(H_{3}\): \[\left(\begin{array}{ccccc}x=0:&0&e_{1}&e_{2}\\ x=1:&0&e_{3}&e_{4}\\ x=\infty:&e_{5}&e_{6}&e_{7}\end{array}\right)\rightarrow\left(\begin{array} []{ccccc}0&-e_{1}&-e_{2}\\ 0&-e_{3}&-e_{4}\\ 2-e_{5}&2-e_{6}&2-e_{7}\end{array}\right),\]
* \(E_{2}\): \[\left(\begin{array}{ccccc}x=0:&0&e_{1}\\ x=1:&0&e_{2}\\ x=\infty:&e_{3}&e_{4}\end{array}\right)\rightarrow\left(\begin{array}{ccccc} 0&-e_{1}\\ 0&-e_{2}\\ 1-e_{3}&1-e_{4}\end{array}\right).\]
_Remark 2.10_.: (See the end of the next subsection.) Let \(adj(e_{j})\) be the local exponent \(e_{j}\) of the adjoint equation. Then we have \(adj(e_{j})=n_{j}-e_{j}\) for some integer \(n_{j}\).
#### 2.4.2 Self-adjoint equations
For the equation \(H_{j}\), the self-adjoint one is a special \(E_{j}\), which will be denoted by \(saE_{j}\), and its Riemann scheme by \(saR_{j}\); similar for \(E_{2}\).
* Self-adjoint \(E_{2}\) \[saR_{2}:\left(\begin{array}{ccccc}x=0:&0&0\\ x=1:&0&0\\ x=\infty:&1/2&1/2\end{array}\right).\] \[saE_{2}:x(x-1)\partial^{2}+(2x-1)\partial+1/4\] is irreducible. It is the hypergeometric equation \(E(1/2,1/2,1)\).
* Self-adjoint \(H_{3}\) \[saR_{3}:\left(\begin{array}{ccccc}x=0:&0&0&0\\ x=1:&0&0&0\\ x=\infty:&1&1&1\end{array}\right)\] \(H_{3}\) is self-adjoint if and only if local exponents are as \(saR_{3}\) and the accessory parameter \(a_{00}=-1/2\). \[saE_{3}=x^{2}(x-1)^{2}\partial^{3}+3x(x-1)(2x-1)\partial^{2}+(7x^{2}-7x+1) \partial+x-1/2\] is irreducible. It is the symmetric product of \(saE_{2}\), satisfied by the square of the hypergeometric function \(F(1/2,1/2,1;x)^{2}\).
* Self-adjoint \(H_{4}\) \[saR_{4}:\left(\begin{array}{ccccc}x=0:&0&1&1/2&1/2\\ x=1:&0&1&1/2&1/2\\ x=\infty:&1/2&1/2&1/2&1/2\end{array}\right)\] Let \(saE_{4}\) be the self-adjoint \(E_{4}\). \[saE_{4}=x^{2}(x-1)^{2}\partial^{4}+4x(x-1)(2x-1)\partial^{3}+(29/2x^{2}-29/2 x+9/4)\partial^{2}+(5x-5/2)\partial+1/16\] is irreducible. \(F(1/2,1/2,1;x)^{3}\) does not solve this equation.
* Self-adjoint \(H_{5}\) \[saR_{5}:\left(\begin{array}{ccccc}x=0:&0&1&1/2&1/2&1/2\\ x=1:&0&1&1/2&1/2&1/2\\ x=\infty:&0&1&2&1&1\end{array}\right)\] \(saE_{5}\) is Reducible of type [1,3,1]: \[saE_{5}=x^{3}(x-1)^{3}\partial^{5}+(15x^{2}(2x-1)(x-1)^{2})/2 \partial^{4}\] \[+x(x-1)(256x^{2}-256x+49)\partial^{3}/4\] \[+3(2x-1)(112x^{2}-112x+9)\partial^{2}/8+(-24x+17/4+24x^{2})\partial\] \[=(x^{3}(x-1)^{3}\partial+3x^{2}(x-1)^{2}(2x-1))\circ X\circ\partial,\] where \[X:=\partial^{3}+9(2x-1)\partial^{2}/(2x(x-1))+(76x^{2}-76x+13) \partial/(4x^{2}(x-1)^{2})\] \[+(64x^{3}-96x^{2}+34x-1)/(8x^{3}(x-1)^{3}).\] Riemann scheme of X: \[\left(\begin{array}{ccccc}-1/2&-1/2&-1/2\\ -1/2&-1/2&-1/2\\ 2&2&2\end{array}\right),\] \(X\) is essentially \(saE_{3}\): \[X=A^{-1}\cdot x(x-1)\cdot saE_{3}\circ A,\quad A:=x^{1/2}(x-1)^{-1/2}.\]
* Self-adjoint \(H_{6}\) \[saR_{6}:\left(\begin{array}{ccccc}x=0:&0&1&2&1&1&1\\ x=1:&0&1&2&1&1&1\\ x=\infty:&-1/2&1/2&3/2&1/2&1/2&1/2\end{array}\right)\] \(H_{6}\) is self-adjoint if and only if local exponents are as \(saR_{6}\) and \[T_{10}=-17/4.\] \[saE_{6}=x^{3}(x-1)^{3}\partial^{6}+9x^{2}(x-1)^{2}(2x-1)\partial^{5}\] \[+((391x^{2}-391x+76)(x-1)x)\partial^{4}/4+(2x-1)(91x^{2}-91x+8) \partial^{3}\] \[+(1539/16x^{2}-1539/16x+18)\partial^{2}+((51x)/8-51/16)\partial-3 /64\] is irreducible. \(F(1/2,1/2,1;x)^{5}\) does not solve this equation.
#### 2.4.3 Adjoint equation in projective differential geometry
In general, two linear homogeneous ordinary differential equations are said to be projectively equivalent if one changes into the other by multiplying a function to the equation, multiplying a function to the unknown, and by changing the independent variable. We give a short discussion on the notion of adjoints defined projectively invariant way as follows (cf. [9]). For notational simplicity, we consider a third-order equation
\[E:\quad u^{\prime\prime\prime}+p_{1}u^{\prime\prime}+p_{2}u^{\prime}+p_{3}u=0,\]
and its Schwarz map: \(x\mapsto u(x)=(u^{1}(x),u^{2}(x),u^{3}(x))\), where \(u^{i}\) are independent solutions. It is seen as a curve in the 3-space or the projective plane relative to the homogeneous coordinates. Define its dual curve by the map: \(x\mapsto\xi(x)=u(x)\wedge u(x)^{\prime}\in\wedge^{2}V\), that is, \(\xi(x)=(\xi_{1}(x),\xi_{2}(x),\xi_{3}(x))\), where
\[\xi_{1}=\left|\begin{array}{cc}u^{2}&u^{3}\\ (u^{2})^{\prime}&(u^{3})^{\prime}\end{array}\right|,\quad\xi_{2}=\left| \begin{array}{cc}u^{3}&u^{1}\\ (u^{3})^{\prime}&(u^{1})^{\prime}\end{array}\right|,\quad\xi_{3}=\left| \begin{array}{cc}u^{1}&u^{2}\\ (u^{1})^{\prime}&(u^{2})^{\prime}\end{array}\right|.\]
By computation, we see \(\xi_{1},\xi_{2},\xi_{3}\) satisfy
\[\xi^{\prime\prime\prime}+2p_{1}\xi^{\prime\prime}+(p_{1}^{\prime}+p_{1}^{2}+p _{2})\xi^{\prime}+(p_{2}^{\prime}+p_{1}p_{2}-p_{3})\xi=0,\]
while the adjoint equation \(E^{*}\) of \(E\) is given as
\[E^{*}:\quad v^{\prime\prime\prime}-(p_{1}v)^{\prime\prime}+(p_{2}v)^{\prime}- p_{3}v=0.\]
These two equations look different, but both are equivalent projectively (change \(\xi\) to \(\lambda^{-2}\xi\) and \(v\) to \(\lambda v\) where \(\lambda=\exp(\int\frac{1}{3}p_{1}\,dx)\)) to the equation
\[adjE:\quad w^{\prime\prime\prime}+P_{2}w^{\prime}+(P_{2}^{\prime}-P_{3})w=0,\]
where
\[P_{2}=p_{2}-p_{1}^{\prime}-\frac{1}{3}p_{1}^{2},\quad P_{3}=p_{3}-\frac{1}{3} p_{1}^{\prime\prime}+\frac{2}{27}p_{1}^{3}-\frac{1}{3}p_{1}p_{2}.\]
Namely, the equation \(E^{*}\) is equivalent to the equation satisfied by \(\xi\); this equation of \(\xi\) is sometimes called the Wronskian equation. By the way, the equation \(E\) itself is known to be equivalent projectively (change of coordinate) to
\[u^{\prime\prime\prime}+P_{2}u^{\prime}+P_{3}u=0.\]
Though \(P_{2}\) and \(P_{3}\) are not projectively invariant, the cubic form
\[Rdx^{3},\quad\mbox{where}\quad R=P_{3}-\frac{1}{2}P_{2}^{\prime}\]
is invariant (the Laguerre-Forsyth invariant). Writing this invariant \(R^{*}\) for \(adjE\), we see that
\[R^{*}=-R.\]
This identity of invariants shows a relation of differential equation and its adjoint equation. In general for an equation of order \(n\), invariants \(R_{3},\ldots,R_{n}\) are defined, and they are related to the invariants \(R_{3}^{*},\ldots,R_{n}^{*}\) of the adjoint equation as \(R_{j}^{*}=(-)^{j}R_{j}\) (cf. [9]).
Now we apply the above general theory to the Fuchsian differential equation \(E\). The local exponents of the adjoint equation are given as follows. Let \(e_{1}\), \(e_{2}\), \(e_{3}\) be local exponents of \(E\) at \(x=0\): assume that \(u^{i}\) are chosen as
\[u^{1}=x^{e_{1}}f_{1},\quad u^{2}=x^{e_{2}}f_{2},\quad u^{3}=x^{e_{3}}f_{3},\]
where \(f_{i}\) are holomorphic at \(x=0\) (and \(f_{i}(0)=1\) for simplicity). Within the projective consideration, the differences \(e_{2}-e_{1}\) and \(e_{3}-e_{1}\) make sense. It is easy to see that
\[u\wedge u^{\prime}=(x^{e_{2}+e_{3}-1}g_{1},x^{e_{1}+e_{3}-1}g_{2},x^{e_{1}+e_{ 2}-1}g_{3}),\]
where \(g_{1}=(e_{3}-e_{2})f_{2}f_{3}+xh_{1}\), \(h_{1}\) being holomorphic at \(x=0\), and so on. On the other hand, we have
\[p_{1}=(3-e_{1}-e_{2}-e_{3})/x+h_{1},\quad\lambda^{3}=x^{3-e_{1}-e_{2}-e_{3}} \cdot h_{2},\]
where \(h_{1}\) and \(h_{2}\) are holomorphic at \(x=0\). These explain why \(\{p-e_{1},\,p-e_{2},\,p-e_{3}\}\) (\(p\in\mathbb{Z}\)) (cf. Remark 2.10) appears as a set of local exponents of the adjoint equation \(E^{*}\) at \(x=0\).
## 3 Addition and middle convolution
In this section, addition and middle convolution are introduced. For a given differential equation, these operations give new equations.
**Definition 3.1**.: For a function \(u(x)\), _Riemann-Liouville transformation_ of \(u\) with parameter \(\mu\) is defined as the function in \(x\):
\[I_{\gamma}^{\mu}(u)(x)=\frac{1}{\Gamma(\mu)}\int_{\gamma}u(t)(x-t)^{\mu-1}\,dt,\]
where \(\gamma\) is a cycle.7
Footnote 7: \(\gamma\) is topologically closed and the values of the integrand at the starting point and the ending point agree.
**Definition 3.2**.: For a linear differential operator \(P\) in \(x\) and a function \(f\) in \(x\), the _addition_ by \(f\) is defined as
\[\operatorname{Ad}(f)P:=f\circ P\circ f^{-1}.\]
**Definition 3.3**.: If \(u\) is a solution of a linear differential equation \(P\), the function \(I_{\gamma}^{\mu}(u)\) becomes a solution of the differential equation \(mc_{\mu}(P)\), called the _middle convolution of \(P\) with parameter \(\mu\)_.
The equation \(mc_{\mu}(P)\) is obtained as follows ([8, 5]): Multiply \(P\) by \(\partial^{k}\) with sufficiently large positive integer \(k\) from the left so that \(\partial^{k}P\) can be written as a linear combination of \(\theta^{i}\circ\partial^{j}\), where \(\theta=x\partial\). Then replace \(\theta\) by \(\theta-\mu\), and divide the result by \(\partial\) from the left as many times as possible. (The result is independent of \(k\).) The middle convolution has the additive property
\[mc_{0}=\mbox{id.},\quad mc_{\mu}\circ mc_{\mu^{\prime}}=mc_{\mu+\mu^{\prime}},\]
and so \(mc_{\mu}\) is invertible:
\[(mc_{\mu})^{-1}=mc_{-\mu}.\]
For an operator \(P\) with singular points \(0\), \(1\), \(\infty\), set
\[\begin{array}{ll}d\ =&(\mbox{mult of $0$ in the exponents at $x=0$})\\ &+(\mbox{mult of $0$ in the exponents at $x=1$})\\ &+(\mbox{mult of $\mu$ in the exponents at $x=\infty$})-\mbox{order}(P),\end{array}\]
where multiplicity (abbreviated as mult) is counted mod \(1\). Here and in the following, order\((P)\) denotes the order of the operator \(P\). Then we have
\[\mbox{order}(mc_{\mu}(P))=\mbox{order}(P)-d.\]
It is known that middle convolutions do not change the number of accessory parameters.
#### 3.0.1 A simplest example
If \(P=E_{2}\), then \(d=1+1+(0\mbox{ or }1)-2=0\mbox{ or }1\). Thus any middle convolution of \(E_{2}\) is again a Gauss operator or a 1st order operator. But if we perform an addition first to change the local exponent \(0\) of \(x=0\) or/and \(x=1\) non-zero, then \(d=-2,-1\mbox{ or }0\). So order\((mc_{\mu}(E_{2}))\) can be \(4\) or \(3\) or \(2\). In the following we see how the Gauss equation \(E_{2}\) is transformed to the generalized hypergeometric equation \({}_{3}E_{2}\):
* \(E_{2}\longrightarrow{}_{3}E_{2}\): For a solution \(u\) of the Gauss equation \(E_{2}(e)\), perform a multiplication (called an addition) \(u(x)\to x^{\nu}u(x)\) and then make a middle convolution with parameter \(\mu\). The Riemann scheme changes as \[\left(\begin{array}{ll}x=0:&0&e_{1}\\ x=1:&0&e_{2}\\ x=\infty:&e_{3}&e_{4}\end{array}\right) \underset{x^{\nu}}{\rightarrow} \left(\begin{array}{ll}\nu&e_{1}+\nu\\ 0&e_{2}\\ e_{3}-\nu&e_{4}-\nu\end{array}\right)\] \[\underset{\mu}{\rightarrow} \left(\begin{array}{ll}0&\nu+\mu&e_{1}+\nu+\mu\\ 0&1&e_{2}+\mu\\ 1-\mu&e_{3}-\nu-\mu&e_{4}-\nu-\mu\end{array}\right),\] where \(e_{1}+\cdots+e_{4}=1\). Last one is the Riemann scheme of a generalized hypergeometric equation \({}_{3}E_{2}\).
* \(E_{2}\longleftarrow{}_{3}E_{2}\): For the operator \[{}_{3}E_{2}=(\theta+a_{0})(\theta+a_{1})(\theta+a_{2})-(\theta+b_{1})(\theta+ b_{2})\partial,\] replace \(\theta\) by \(\theta-a_{2}+1\), and we get \[\begin{array}{l}(\theta+a_{0}-a_{2}+1)(\theta+a_{1}-a_{2}+1)(\theta+1)-( \theta+b_{1}-a_{2}+1)(\theta+b_{2}-a_{2}+1)\partial\\ \quad=\partial\ [x(\theta+a_{0}-a_{2}+1)(\theta+a_{1}-a_{2}+1)-(\theta+b_{1}-a_{2})( \theta+b_{2}-a_{2})].\end{array}\] Dividing by \(\partial\) from the left we have a second-order equation. Multiplying a certain power of \(x\) and that of \(x-1\), we get a Gauss equation \(E_{2}\).
### From \(H_{3}\) to \(H_{6},h_{5}\) and \(H_{4}\)
In this section and SS3.2, statements for \(H_{j}\) are valid also for \(G_{j}\) and \(E_{j}\).
#### 3.1.1 From \(H_{3}\) to \(H_{6}\)
We repeat the statement in the Introduction. Perform an addition to \(H_{3}=x^{2}(x-1)^{2}\partial^{3}+\cdots\):
\[L:=x(x-1)\mathrm{Ad}(x^{g_{0}}(x-1)^{g_{1}})H_{3}=x^{3}(x-1)^{3}\partial^{3}+\cdots.\]
Then the Riemann scheme changes as
\[R_{3}:\begin{pmatrix}x=0:&0&b_{1}&b_{2}\\ x=1:&0&b_{3}&b_{4}\\ x=\infty:&b_{7}&b_{5}&b_{6}\end{pmatrix}\to R(L):\begin{pmatrix}g_{0}&b_{1}+g_ {0}&b_{2}+g_{0}\\ g_{1}&b_{3}+g_{1}&b_{4}+g_{1}\\ b_{7}-g_{0}-g_{1}&b_{5}-g_{0}-g_{1}&b_{6}-g_{0}-g_{1}\end{pmatrix}.\]
Note \(b_{1}+\cdots+b_{7}=3.\) Since \(\partial^{3}\circ L\) has a \((\theta,\partial)\)-form, we perform a middle convolution (replace \(\theta\) by \(\theta-u\)), and we get
\[\begin{pmatrix}x=0:&0&1&2&g_{0}+u&b_{1}+g_{0}+u&b_{2}+g_{0}+u\\ x=1:&0&1&2&g_{1}+u&b_{3}+g_{1}+u&b_{4}+g_{1}+u\\ x=\infty:&-u+1&-u+2&-u+3&b_{5}-g_{0}-g_{1}-u&b_{6}-g_{0}-g_{1}-u&b_{7}-g_{0}-g_ {1}-u\end{pmatrix}.\]
Finally we change the names of the exponents as
\[\begin{pmatrix}x=0:&0&1&2&e_{1}&e_{2}&e_{3}\\ x=1:&0&1&2&e_{4}&e_{5}&e_{6}\\ x=\infty:&s&s+1&s+2&e_{7}&e_{8}&e_{9}\end{pmatrix}\]
and regard \(e_{1},\ldots,e_{9}\) are free and \(s\) is determined by the Fuchs relation. Then we find that this is equal to \(H_{6}(e)\).
#### 3.1.2 From \(H_{3}\) to \(H_{5}\)
Perform an addition:
\[(x-1)\mathrm{Ad}((x-1)^{g_{1}})H_{3}=x^{2}(x-1)^{3}\partial^{3}+\cdots,\]
and multiply \(\partial^{2}\) from the left. This admits a \((\theta,\partial)\)-form. Replace \(\theta\) by \(\theta-u\). The resulting equation has the Riemann scheme
\[\begin{pmatrix}0&1&2&b_{2}+u&b_{1}+u\\ 0&1&g_{1}+u&g_{1}+b_{4}+u&b_{3}+g_{1}+u\\ -u+1&2-u&b_{6}-g_{1}-u&b_{5}-g_{1}-u&-b_{1}-b_{2}-b_{3}-b_{4}-b_{5}-b_{6}-g_{1 }-u+3\end{pmatrix}\]
Exchange the singularities \(x=0\) and \(x=\infty\), perform an addition to make the local exponents at \(x=0\) as \(\{0,1,*,*,*\}\), and rename the local exponents to find the result is \(H_{5}\).
#### 3.1.3 From \(H_{3}\) to \(H_{4}\)
Without performing an addition to \(H_{3}\), multiply \(\partial\) from the left and get a \((\theta,\partial)\)-form. Replace \(\theta\) by \(\theta-u\), and do the same as above to get \(H_{4}\).
### From \(H_{6}\), \(H_{5}\) and \(H_{4}\) to \(H_{3}\)
#### 3.2.1 From \(H_{6}\) to \(H_{3}\)
Recall the \((\theta,\partial)\)-form of \(H_{6}\) given in Proposition 1.2. This expression suggests that, thanks to the formulae
\[(\theta+3)(\theta+2)(\theta+1)=\partial^{3}x^{3},\ (\theta+3)(\theta+2)\partial =\partial^{3}x^{2},\ (\theta+3)\partial^{2}=\partial^{3}x,\ \theta\partial=\partial(\theta-1),\]
we can modify the expression by replacing \(\theta\) by \(\theta-t\) (middle convolution with parameter \(t\)), where
\[t:=s-1,\quad s=2-\sum_{i=1}^{9}e_{i},\]
so that \(H_{6}(\theta=\theta-t)\) is divisible by \(\partial^{3}\) from the left and, if we write the quotient by \(mcH=x^{3}(x-1)^{3}\partial^{3}+\cdots\), then its Riemann scheme is
\[R(mcH):\ \left(\begin{array}{ccc}e_{1}+t&e_{2}+t&e_{3}+t\\ e_{4}+t&e_{5}+t&e_{6}+t\\ e_{7}-t&e_{8}-t&e_{9}-t\end{array}\right).\]
We next transform it into \(x^{-(t+e_{1})-1}(x-1)^{-(t+e_{4})-1}mcH\circ x^{t+1_{3}}(x-1)^{t+e_{4}}\). Then the equation can be expressed as \(x^{2}(x-1)^{2}\partial^{3}+\cdots\), and the Riemann scheme changes into
\[\left(\begin{array}{ccc}0&e_{2}-e_{1}&e_{3}-e_{1}\\ 0&e_{5}-e_{4}&e_{6}-e_{4}\\ e_{7}+e_{1}+e_{4}+t&e_{8}+e_{1}+e_{4}+t&e_{9}+e_{1}+e_{4}+t\end{array}\right).\]
Introduce parameters \(\epsilon_{1},...,\epsilon_{7}\) by
\[\begin{array}{l}e_{2}-e_{1}=\epsilon_{1},\ e_{3}-e_{1}=\epsilon_{2},\ e_{5}-e_{4}= \epsilon_{3},\ e_{6}-e_{6}=\epsilon_{4},\\ e_{1}+e_{4}+e_{7}+t=\epsilon_{5},\ e_{1}+e_{4}+e_{8}+t=\epsilon_{6},\ e_{1}+e_{4}+e_{9}+t= \epsilon_{7},\end{array}\]
\(\epsilon_{1}+\cdots+\epsilon_{7}=3\). The equation is \(H_{3}(\epsilon)\), that is, \(E_{3}(e)\) replaced \(e\) by \(\epsilon\).
_Remark 3.4_.: (From \(H_{6}\) to \(H_{5}\)) On the other hand, replace \(\theta\) by \(\theta-e_{9}+1\) in \(H_{6}\) and divide by \(\partial\) from the left to get \(mcH_{5}\). Its Riemann scheme is
\[\begin{pmatrix}0&1&e_{1}+e_{9}-1&e_{2}+e_{9}-1&e_{3}+e_{9}-1\\ 0&1&e_{4}+e_{9}-1&e_{5}+e_{9}-1&e_{6}+e_{9}-1\\ s+1-e_{9}&s+2-e_{9}&s+3-e_{9}&e_{7}-e_{9}+1&e_{8}-e_{9}+1\end{pmatrix}.\]
Put \(e_{i}+e_{9}=\epsilon_{i},\ i=1,\ldots,6\) and \(e_{j}-e_{9}=\epsilon_{j},\ j=7,8\) in \(mcH_{5}\). Then it is equal to \(H_{5}(\epsilon)\).
#### 3.2.2 From \(H_{5}\) to \(H_{3}\)
Recall the \((x,\theta,\partial)\)-form of \(H_{5}=H_{6}(e_{9}=0)/\partial=xT_{0}^{\prime}+T_{1}^{\prime}+\cdots=x^{3}(x- 1)^{3}\partial^{5}+\cdots\): Perform a middle convolution: multiply \(\partial\) to \(H_{5}\) from the left and get a \((\theta,\partial)\)-form, then replace \(\theta\) this time by \(\theta-s\) (\(s=2-\sum_{i=1}^{8}e_{i}\)), and divide it from the left by \(\partial^{3}\), and multiply powers of \(x\) and \(x-1\) to make one of the local exponents at \(0\) and \(1\) to be \(0\). Then we get \(H_{3}\). The procedure is quite analogous to that of getting \(H_{3}\) from \(H_{6}\) shown above.
#### 3.2.3 From \(H_{4}\) to \(H_{3}\)
Recall the \((\theta,\partial)\)-form of \(H_{4}=\mathcal{T}_{0}+\mathcal{T}_{1}\partial+\mathcal{T}_{2}\partial^{2}=x^{2} (x-1)^{2}\partial^{4}+\cdots\). Perform a middle convolution: Replace \(\theta\) by \(\theta-c_{8}\), and divide it from the left by \(\partial\), and multiply powers of \(x\) and \(x-1\) to make one of the local exponents at \(0\) and \(1\) to be \(0\). Then we get \(H_{3}\).
## 4 Shift operators, shift relations and S-values
\begin{tabular}{r l}
**4.1** & **The ring of differential operators, left ideals and reducibility** \\
**4.2** & **Shift operators and shift relations** \\
**4.3** & **S-values** \\
**4.4** & **When** \(ap\) **is a function of** \(e\) \\ & 4.4.1 & Uniqueness of shift operators \\ & 4.4.2 & Composition of shift operators \\ & 4.4.3 & Remote S-values \\ & 4.4.4 & Relation between \(P\) and \(Q\) \\
**4.5** & **Reducibility type and shift operators** \\
**4.6** & **Reducibility type and shift operator when** \(\operatorname{ord}(P)=1\) \\
**4.7** & **From** \(H_{6}\) **to** \(H_{5}\) **and** \(H_{3}\) **by factorization** \\ & 4.7.1 & From \(H_{6}\) to \(H_{5}\) by factorization \\ & 4.7.2 & From \(H_{6}\) to \(H_{3}\) by factorization \\
**4.8** & **Polynomial solutions** \\ \end{tabular}
### The ring of differential operators, left ideals and reducibility
Let \(D=\mathbb{C}(x)[\partial]\) be the ring of ordinary differential operators with coefficients in rational functions of \(x\). We call the degree of the differential operator \(P\) relative to \(\partial\) the _order_ of \(P\) and denote it as \(\operatorname{order}(P)\).
* Every left ideal of \(D\) is principal, because \(D\) admits Euclidean algorithm.
* An operator \(E\in D\) is said to be _reducible_ if it can be written as the product of two operators of positive order. When \(E\) is Fuchsian, it is reducible if and only if its solution space has a monodromy invariant proper non-trivial subspace. \(E\) is said to be _irreducible_ if it is not reducible.
* If \(E\) is irreducible, the left ideal \(DE\) generated by \(E\) is maximal, because, if not, there is a left ideal \(L\) such that \(D\supsetneq L\supsetneq E\), since \(L\) is generated by an element \(F\in D\), \(E\) is divisible by \(F\).
**Lemma 4.1**.: _Consider two operators \(P,E\in D\) such that \(0<\operatorname{order}(P)<\operatorname{order}(E).\) If \(E\) is irreducible, then \(P\) has its (left) inverse in \(D\) modulo \(E\)._
Proof.: Since \(DE\) is maximal and \(P\not\in DE\), we have \(D=DP+DE\), that is, there exist \(R,Q\in D\) satisfying \(1=QP+RE\).
**Definition 4.2**.: A singular point of an equation is said to be _apparent_ if every solution at this point is holomorphic.
**Proposition 4.3**.: \(H_{j}\) (\(j=2,\ldots,6\)) _are irreducible if the local exponents are generic._
Proof.: Suppose a differential operator \(E\) is reducible and is written as \(F_{1}\circ F_{2}\), where \(\operatorname{order}(F_{1})\neq 0\) and \(\operatorname{order}(F_{2})\neq 0\). At each of the singular points,the set of local exponents of \(F_{2}\) is a subset of that of \(E\). The singular points of \(F_{2}\) other than the singular points of \(E\) are apparent, so the local exponents at such points are non-negative integers. The Fuchs relation (1.1) for \(F_{2}\) says that the sum of all the local exponents is an integer. When \(E=H_{j}\), sum of a proper subset of the local exponents \(e_{1},e_{2},\dots\) can not be an integer when the local exponents are generic.
**Definition 4.4**.: For a given \(E\in D\) with the set of singular points \(S\), choose any point \(x_{0}\in\mathbb{C}-S\). Let \(\operatorname{Sol}(E)(x_{0})\) be the solution space of \(E\) at \(x_{0}\). For a loop \(\rho\in\pi_{1}(\mathbb{C}-S,x_{0})\) with base point \(x_{0}\), we can analytically continue a solution at \(x_{0}\) to get another solution at \(x_{0}\). In this sense, \(\operatorname{Sol}(E)(x_{0})\) is a \(\pi_{1}(\mathbb{C}-S,x_{0})\)-module. Since \(x_{0}\) does not matter in the following arguments, from now on we drop \(x_{0}\), and call this space simply _the solution space_ and write as \(\operatorname{Sol}(E)\), which is a \(\pi_{1}(\mathbb{C}-S)\)-module.
**Lemma 4.5**.: \(E\in D\) _is reducible if and only if the solution space \(\operatorname{Sol}(E)\) has a non-zero proper \(\pi_{1}(\mathbb{C}-S)\)-submodule, which is often called a monodromy invariant subspace._
Proof.: If \(E\) factors as \(F_{1}\circ F_{2}\) (\(F_{1},F_{2}\in D\)), then \(\operatorname{Sol}(F_{2})\) gives a \(\pi_{1}(\mathbb{C}-S)\)-submodule of \(\operatorname{Sol}(E)\).
### Shift operators and shift relations
In this and the next subsection, we study shift operators for differential equations with an accessory parameter \(ap\). When \(ap\) is specified as a function of the local exponents, or the differential equation is rigid, just forget \(ap\).
**Definition 4.6**.: In general, let \(H(e,ap)\) be an operator of order \(n\) with the local exponents \(e=(e_{1},\dots)\) and a parameter \(ap\), and \(\operatorname{Sol}(H(e,ap))\) its solution space. For a shift
\[sh_{+}:e\to e_{+},\quad(e_{+})_{i}=e_{i}+n_{i},\quad n_{i}\in\mathbb{Z},\]
a non-zero operator \(P\in D\) of order lower than \(n\) sending
\[\operatorname{Sol}(H(e,ap))\quad\text{ to }\quad\operatorname{Sol}(H(e_{+},ap_{+})),\]
for some \(ap_{+}\), is called a _shift operator_ for the shift \(sh_{+}\) and is denoted by \(P_{+}\). A shift operator for the shift \(sh_{-}:e\to e_{-},\ (e_{-})_{i}=e_{i}-n_{i}\) is denoted by \(P_{-}\).
Here we make an important assumption:
**Assumption:**\(ap_{+}=ap-\alpha(e)\), where \(\alpha\) is a polynomial in \(e\).
Without this, we can not go further; we can not define S-values, which play an important role in studying reducibility of the equations. For every shift operator, we can assume that the coefficients are polynomials of \((e,ap)\) free of common factors.
_Remark 4.7_.: When a differential _equation_ in question is \(Hu=0\), by multiplying a non-zero polynomial to the _operator_\(H\), we can assume that the coefficients of \(H\) has no poles. However, shift operators may have poles as functions of \(x\).
Since \(P_{\pm}\in D\), we have
**Lemma 4.8**.: _The shift operators are \(\pi_{1}(\mathbb{C}-S)\)-morphism, i.e., they commute with the \(\pi_{1}(\mathbb{C}-S)\)-action._
Suppose a shift operator \(P_{+}\in D\) for a shift \(sh_{+}\) exists. Since \(H(e_{+},ap_{+})\circ P_{+}\) is divisible from right by \(H(e,ap)\), there is an operator \(Q_{+}\in D\) satisfying the _shift relation_:
\[(EPQE):\quad H(e_{+},ap_{+})\circ P_{+}=Q_{+}\circ H(e,ap)).\]
Conversely, if there is a pair of non-zero operators \((P_{+},Q_{+})\in D^{2}\) of order smaller than \(n\) satisfying this relation, then \(P_{+}\) is a shift operator for the shift \(sh_{+}\). We often call also the pair \((P_{+},Q_{+})\) the shift operator for \(sh_{+}\). Lemma 4.1 implies
**Proposition 4.9**.: _If \(H(e,ap)\) is irreducible and \(P_{+}\) exists then the inverse operator \(P_{-}\) exists. More precisely,_
\[\begin{array}{ll}P_{+}(e):&\mathrm{Sol}(H_{6}(e,ap))\to\mathrm{Sol}(H_{6}( e_{+},ap_{+})),\quad\ ap_{+}=ap-\alpha(e),\\ P_{-}(e):&\mathrm{Sol}(H_{6}(e,ap))\to\mathrm{Sol}(H_{6}(e_{-},ap_{-})),\quad \ ap_{-}=ap+\alpha(e-n),\end{array}\]
_where \(e_{\pm}=e\pm n\). Same for \(P_{-}\) and \(P_{+}\)._
### S-values
Consider compositions of the two shift operators in the previous subsection:
\[P_{+}(e_{-},ap_{-})\circ P_{-}(e,ap):\mathrm{Sol}(H(e,ap)\to\mathrm{Sol}(H(e_ {-},ap_{-})\to\mathrm{Sol}(H(e,ap)),\]
and
\[P_{-}(e_{+},ap_{+})\circ P_{+}(e,ap):\mathrm{Sol}(H(e,ap)\to\mathrm{Sol}(H(e_ {+},ap_{+})\to\mathrm{Sol}(H(e,ap)),\]
and assume that these maps are constants (times the identity) independent of \(ap\).
**Definition 4.10**.: These constants will be called the _S-values_ for \(sh_{\mp}\), and are denoted as
\[Sv_{sh_{-}}=P_{+}(e_{-},ap_{-})\circ P_{-}(e,ap)\mod H(e,ap)\]
and
\[Sv_{sh_{+}}=P_{-}(e_{+},ap_{+})\circ P_{+}(e,ap)\mod H(e,ap).\]
**Proposition 4.11**.: _The two S-values are related as_
\[Sv_{sh_{-}}(e)=Sv_{sh_{+}}(e_{-}).\]
Proof.: Consider the product of three operators:
\[P_{+}(e_{-},ap_{-})\circ P_{-}(e,ap)\circ P_{+}(e_{-},ap_{-}):\] \[\mathrm{Sol}(H(e_{-},ap_{-}))\to\mathrm{Sol}(H(e,ap))\to\mathrm{ Sol}(H(e_{-},ap_{-}))\to\mathrm{Sol}(H(e,ap)).\]
The product of the left two is a constant \(Sv_{sh_{-}}(e)\), and that of the right two is a constant \(Sv_{sh_{+}}(e_{-})\).
**Proposition 4.12**.: _If for some \(e=\epsilon\), \(Sv_{sh_{+}}(\epsilon)=0\)\((\)resp. \(Sv_{sh_{-}}(\epsilon)=0)\), then \(H(\epsilon,ap)\) and \(H(\epsilon_{+},ap_{+})\)\((\)resp. \(H(\epsilon_{-},ap_{-}))\) are reducible. If \(Sv_{sh_{+}}(\epsilon)\neq 0\)\((\)resp. \(Sv_{sh_{-}}(\epsilon)\neq 0)\), then \(P_{sh_{+}}\)\((\)resp. \(P_{sh_{-}})\) gives an isomorphism: \(\mathrm{Sol}(H(\epsilon,ap))\to\mathrm{Sol}(H(\epsilon_{+},ap_{+}))\)\((\)resp. \(\mathrm{Sol}(H(\epsilon,ap))\to\mathrm{Sol}(H(\epsilon_{-},ap_{-})))\) as \(\pi_{1}(\mathbb{C}-\{0,1\})\)-modules._
Proof.: Shift operators are, by definition, non-zero; this leads to the first statement. Lemma 4.8 implies the second statement.
### When \(ap\) is a function of \(e\)
For a given differential equation \(H(e,ap)\), suppose the accessory parameters \(ap\) are functions \(ap(e)\) of the local exponents \(e\); put \(G(e)=H(e,ap(e))\). We can now discuss shift operators without worrying about the change of accessory parameters.
#### 4.4.1 Uniqueness of shift operators
**Proposition 4.13**.: _If \(G(e)\) is irreducible and if a shift operator \(P\) exists for a shift \(sh:e\to e^{\prime}\), then it is unique up to multiplicative constant._
Proof.: Suppose there are two shift operators \(P_{1}\) and \(P_{2}\) sending \(\operatorname{Sol}(G(e))\) to \(\operatorname{Sol}(G(e^{\prime})\). Let \(R_{2}\) be the inverse operator of \(P_{2}\) modulo \(G(a)\). The operator \(R_{2}P_{1}:\operatorname{Sol}(G(e))\to\operatorname{Sol}(G(e))\) is equivalent modulo \(G(e)\) to an operator of order lower than the order of \(G(e)\). Since it is not zero, by Schur's lemma, it is an isomorphism. Choosing an eigenvector \(f\) with eigenvalue \(c\), we have \(R_{2}P_{1}f=cf\), that is, \((R_{2}P_{1}-c)f=0\). This implies \(R_{2}P_{1}=c\), that is, \(P_{1}=cP_{2}\).
#### 4.4.2 Composition of shift operators
**Lemma 4.14**.: _Let \(G\) be a differential operator with local exponents \(e\). For given shift operators and shift relations for two shifts \(e_{1}\to e_{2}\) and \(e_{2}\to e_{3}\) as_
\[G(e_{2})\circ P(e_{1}\to e_{2}) =Q(e_{1}\to e_{2})\circ G(e_{1}),\] \[G(e_{3})\circ P(e_{2}\to e_{3}) =Q(e_{2}\to e_{3})\circ G(e_{2}),\]
_define the composed operators_
\[P(e_{1}\to e_{3}) :=P(e_{2}\to e_{3})\circ P(e_{1}\to a_{2}),\] \[Q(e_{1}\to e_{3}) :=Q(e_{2}\to e_{3})\circ Q(e_{1}\to e_{2}).\]
_Then they satisfy_
\[G(e_{3})P(e_{1}\to e_{3})=Q(e_{1}\to e_{3})G(e_{1}),\]
_for the composed shift \(e_{1}\to e_{3}\)._
In view of this lemma, we may consider the composition of the maps \(P(e_{1}\to e_{2}):\operatorname{Sol}(G(e_{1}))\longrightarrow\operatorname{ Sol}(G(e_{2}))\) and \(P(e_{2}\to e_{3}):\operatorname{Sol}(G(e_{2}))\longrightarrow\operatorname{ Sol}(G(e_{3}))\) modulo \(G(e_{1})\), denoted by \(P\), on the space \(\operatorname{Sol}(G(e_{1}))\). We solve the equation \(G(e_{3})P=QG(e_{1})\) to get the corresponding operator \(Q\).
#### 4.4.3 Remote S-values
We consider generally a differential operator \(G(e)\) with local exponents \(e\) and let \(P_{+}(e)\) and \(P_{-}(e)\) be shift operators for the shits \(sh_{\pm}:e\to e_{\pm}\):
\[P_{+}(e):\operatorname{Sol}(G(e))\to\operatorname{Sol}(G(e_{+})),\quad P_{-}(e ):\operatorname{Sol}(G(e))\to\operatorname{Sol}(G(e_{-}))\]
satisfying the shift relations
\[G(e_{-})\circ P_{-}(e)=Q_{-}\circ G(e),\quad G(e_{+})\circ P_{+}(e)=Q_{+} \circ G(e),\]
for some \(Q_{-}\) and \(Q_{+}\). We have seen that we get constant \(S(e,-1):=S_{sh_{-}}\) independent of \(x\) such that
\[P_{+}(e_{-})\circ P_{-}(e)=S(e,-1)+R\circ G(e)\]
for some operator \(R\). Composing these kind of identities, we get a constant \(S(e,-2)\), called a _remote S-value_:
\[P_{+}(e_{-})\circ P_{+}(e_{-2})\circ P_{-}(e_{-})\circ P_{-}(e)=S(e,-2)+R\circ G(e)\]
for some \(R\), where \(e_{-2}:=(sh_{-})^{2}(e)\). Comparing this identity with the identity
\[P_{+}(e_{-2})\circ P_{-}(e_{-})=S(e_{-},-1)+R\circ G(e_{-})\]
for some \(R\), multiplied by \(P_{+}(e_{-})\) on the left and \(P_{-}(e)\) on the right, we get
\[S(e,-2)=S(e_{-},-1)S(e,-1).\]
Continuing this process, we have
**Proposition 4.15**.: _In general, define the remote S-value \(S(e,-k)\) by_
\[P_{+}(e_{-})\cdots P_{+}(e_{-(k+1)})P_{-}(e_{-k})\cdots P_{-}(e)=S(e,-k)+R \circ G(e)\]
_for some \(R\), where \(e_{-k}:=(sh_{-})^{k}(e)\). Then, it is the product of S-values:_
\[S(e,-k)=S(e_{-k+1},-1)\cdots S(e_{-},-1)S(e,-1),\quad k=2,\,3,\,\ldots.\]
_Similarly, define the remote S-value \(S(e,k)\) by_
\[P_{-}(e_{+})\cdots P_{-}(e_{k})P_{+}(e_{k-1})\cdots P_{+}(e)=S(e,k)+R\circ G(e)\]
_for some \(R\), where \(e_{k}:=(sh_{+})^{k}(e)\). Then, it is the product of S-values:_
\[S(e,k)=S(e_{k-1},1)\cdots S(e_{+},1)S(e,1),\quad k=2,\,3,\,\ldots.\]
#### 4.4.4 Relation between \(P\) and \(Q\)
Assume an operator \(E=E(e)\) has adjoint symmetry: \(E(e)^{*}=E(adj(e))\) for a linear transformation \(adj\) on the space of local exponents, assume also \(E\) admits a shift relation
\[E(\sigma(e))\circ P=Q\circ E(e)\]
for a shift \(\sigma\). Taking adjoint, we have
\[E(e)^{*}\circ Q^{*}=P^{*}\circ E(\sigma(e)^{*}),\quad\mbox{that is,}\quad E( adj(e))\circ Q^{*}=P^{*}\circ E(adj\circ\sigma(e)).\]
Since \(adj(e)=\sigma\circ adj\circ\sigma(e)\), (recall Remark 2.10: \(adj(e_{j})=\mbox{constant}-e_{j}\)) we have
\[Q^{*}=(-)^{\nu}P(adj\circ\sigma(e)),\quad\nu=\mbox{order}(P)\]
and so we have
**Proposition 4.16**.: _If an operator \(E(e)\) with the adjoint symmetry \(E(e)^{*}=E(adj(e))\) admits a shift relation \(E(\sigma(e))\circ P=Q\circ E(e)\), then_
\[Q=(-)^{\nu}P(adj\circ\sigma(e))^{*},\quad\nu=\mbox{order}(P).\]
### Reducibility type and shift operators
We discuss factorization of Fuchsian operators in \(D=\mathbb{C}(x)[\partial]\).
**Definition 4.17**.: When \(H\in D\) is reducible and factorizes as
\[H=F_{1}\circ\cdots\circ F_{r},\quad F_{j}\in D,\quad 0<\mathrm{order}(F_{j})=n_{j},\ (j=1,\ldots,r),\]
we say \(H\) is _reducible of type \([n_{1},\ldots,n_{r}]\)_; we sometimes call \([n_{1},\ldots,n_{r}]\) the _type of factors_. We often forget commas, for example, we write [23] in place of [2, 3]. When only a set of factors matters, we say \(H\) is _reducible of type \(\{n_{1},\ldots,n_{r}\}\)_.
By repeated use of Lemma 4.5, we have
**Proposition 4.18**.: \(H\) _admits a factorization \(F_{1}\circ\cdots\circ F_{r}\) of type \([n_{1},\ldots,n_{r}]\) if and only if \(\mathrm{Sol}(H)\) has monodromy invariant subspaces_
\[\mathrm{Sol}(H)=S_{1}\supset S_{2}\supset\cdots\supset S_{r},\]
_with_
\[\dim S_{1}/S_{2}=n_{1},\ \dim S_{2}/S_{3}=n_{2},\ldots,\ \dim S_{r}=n_{r}.\]
Note that even if the equation \(H\) has singularity only at \(S=\{0,1,\infty\}\), the factors may have singularities out of \(S\).
**Proposition 4.19**.: _If \(H\) has singularity only at \(S\), then the singular points of \(F_{1}\) and \(F_{r}\) out of \(S\) are apparent._
Proof.: For the factor \(F_{r}\), the claim is obvious. The claim for \(F_{1}\) follows by taking adjoint.
_Remark 4.20_.: The way of factorization is far from unique: in fact, an operator can have different types of factorization such as the shift relation \(H^{\prime}\circ P=Q\circ H\) and the factorizations
\[A\circ B=(A\circ f)\circ(f^{-1}\circ B),\ f\in\mathbb{C}(x),\ f \neq 0,\] \[\partial^{2}=\left(\partial+\frac{1}{x-c}\right)\circ\left( \partial-\frac{1}{x-c}\right),\ c\in\mathbb{C}.\]
Therefore, when we discuss the singularity of the factors of a decomposition, we usually choose the factors so that they have least number of singular points.
Proposition 4.12 and Proposition 4.18 lead to
**Proposition 4.21**.: _Suppose \(H(e)\) and \(H(e_{\pm})\) are connected by shift relations. If \(Sv_{+}(\epsilon)\neq 0\) (resp. \(Sv_{-}(\epsilon)\neq 0\)) for some \(e=\epsilon\), then \(H(\epsilon)\) and \(H(\epsilon_{+})\) (resp. \(H(\epsilon_{-})\) admit the factorization of the same type._
**Theorem 4.22**.: _Assume \(H\) and \(H^{\prime}\) are connected by the shift relation \(H^{\prime}P=QH\). If \(H^{\prime}\) is reducible, so is \(H\). If \(H^{\prime}\) is reducible, so is \(H\)._
Proof.: Assume \(H\) is reducible:
\[H=F_{1}\circ F_{2},\quad n_{j}=\text{ord }(F_{j}),\quad j=1,\,2,\]
and \(F_{2}\) is irreducible. Then, considering the dimension of \(P(\text{Sol}(F_{2}))\), we have three cases:
\((1)\quad\dim P(\text{Sol}(F_{2}))=n_{2},\)
\((2)\quad\ 0<\dim P(\text{Sol}(F_{2}))<n_{2},\)
\((3)\quad P(\text{Sol}(F_{2}))=0.\)
In the first case, \(H^{\prime}\) has an \(n_{2}\)-dimensional solution space \(P(\text{Sol}(F_{2}))\), and, therefore, it is divisible by an irreducible operator of order \(n_{2}\). Thus \(H^{\prime}\) is reducible.
The second case does not occur because the kernel of \(P\) is a nontrivial invariant subspace of \(\text{Sol}(F_{2})\) and this contradicts to the irreducibility of \(F_{2}\).
Assume the third case; we write \(P\) as \(P=P_{1}\circ F_{2}\) and divide both sides of \(H^{\prime}P=QH\) by \(F_{2}\). Then, we have
\[H^{\prime}\circ P_{1}=Q\circ F_{1}.\]
Since \(\text{order}(P)<n=n_{1}+n_{2}\), we see that \(\text{order}(P_{1})<n_{1}\) and that \(P_{1}(\text{Sol}(F_{1}))\neq 0\). Thus \(\text{Sol}(H^{\prime})\) admits a non-trivial invariant subspace, which implies that \(H^{\prime}\) is reducible.
The latter statement is obtained by taking adjoint.
### Reducibility type and shift operator when \(\text{ord}(P)=1\)
Consider a situation that an equation \(H\) and a shifted equation \(H^{\prime}\) is connected by a shift operator \((P,Q)\):
\[H^{\prime}P=QH,\]
equivalent to say that \(P\) is a monodromy-preserving linear map sending the solution space \(\text{Sol}(H)\) of \(H\) to the solution space \(\text{Sol}(H^{\prime})\) of \(H^{\prime}\).
Assume \(H\) is reducible
\[H=F_{1}\circ\cdots\circ F_{t},\]
equivalent to say that \(\text{Sol}(H)\) admits a filtration of monodromy invariant subspaces
\[\text{Sol}(H)=S_{1}\supset\cdots\supset S_{t}=\text{Sol}(F_{t}).\]
**Proposition 4.23**.: _Suppose \(\text{order}(P)=1\) and \(H=F_{1}\circ\cdots\circ F_{t}\). If \(P\) is constant times \(F_{t}\), then_
\[H^{\prime}=Q\circ F_{1}\circ\cdots\circ F_{t-1},\]
_otherwise, \(P\) keeps the filtration:_
\[\text{Sol}(H^{\prime})=P(S_{1})\supset\cdots\supset P(S_{t}),\]
_equivalent to say that \(H^{\prime}\) admit a decomposition as_
\[H^{\prime}=F_{1}^{\prime}\circ\cdots\circ F_{t}^{\prime},\quad\text{ord}(F_{i }^{\prime})=\text{ord}(F_{i}).\]
On the other hand, assume \(H^{\prime}\) is reducible: \(H^{\prime}=F_{1}^{\prime}\circ\cdots\circ F_{t}^{\prime}.\) Turn to adjoint situation:
\[H^{*}Q^{*}=P^{*}H^{\prime*},\quad(H^{\prime})^{*}=(F_{t}^{\prime})^{*}\circ \cdots\circ(F_{t}^{\prime})^{*},\]
\(\text{Sol}((H^{\prime})^{*})\) admits a filtration as \(T_{t}\supset\cdots\supset T_{1}=\text{Sol}((F_{1}^{\prime})^{*})\). Apply Proposition 4.23. If \(Q^{*}=(F_{1}^{\prime})^{*}\), then
\[H^{*}=P^{*}\circ(F_{t}^{\prime})^{*}\circ\cdots\circ(F_{2}^{\prime})^{*}, \quad\text{that is}\quad H=F_{2}^{\prime}\circ\cdots\circ F_{t}^{\prime}\circ P,\]
otherwise \(Q^{*}\) keeps the filtration:
\[\mathrm{Sol}(H^{*})=Q^{*}\mathrm{Sol}((H^{\prime})^{*})=Q^{*}T_{t}\supset\cdots \supset Q^{*}T_{1},\]
that is, \(H^{*}\) admits a decomposition as
\[H^{*}=H^{*}_{t}\circ\cdots\circ H^{*}_{1},\quad\mathrm{ord}(H^{*}_{i})=\mathrm{ ord}(F^{\prime}_{i}).\]
**Proposition 4.24**.: _Suppose \(\mathrm{order}(Q)=1\) and \(H^{\prime}=F^{\prime}_{1}\circ\cdots\circ F^{\prime}_{t}.\) If \(Q\) is constant times \(F^{\prime}_{1}\), then_
\[H=F^{\prime}_{2}\circ\cdots\circ F^{\prime}_{1}\circ P,\]
_otherwise, \(H\) admits a decomposition as_
\[H=F_{1}\circ\cdots\circ F_{t},\quad\mathrm{ord}(F_{i})=\mathrm{ord}(F^{\prime }_{i}).\]
### From \(H_{6}\) to \(H_{5}\) and \(H_{3}\) by factorization
Recall that middle convolutions send \(H_{6}\) to \(H_{5}\) (Remark 3.4), and \(H_{6}\) to \(H_{3}\) (SS3.2.1). In this section we show that \(H_{5}\) and \(H_{3}\) can be also obtained from \(H_{6}\) by factorizations.
#### 4.7.1 From \(H_{6}\) to \(H_{5}\) by factorization
Recall the \((\theta,\partial)\)-form of \(H_{6}:=H_{6}(e,a)=T_{0}+T_{1}\partial+T_{2}\partial^{2}+T_{3}\partial^{3}\). Since
\[T_{0}=(\theta+s+2)(\theta+s+1)(\theta+s)B_{0},\quad B_{0}=(\theta+e_{7})( \theta+e_{8})(\theta+e_{9}),\]
if \(e_{9}=0\), \(H_{6}\) is divisible by \(\partial\) from the right. We get, as in SS1.2,
\[H_{5}=H_{5}(e_{1},\ldots,e_{8})=H_{6}(e_{1},\ldots,e_{8},e_{9}=0,)/\partial.\]
#### 4.7.2 From \(H_{6}\) to \(H_{3}\) by factorization
When \(s=1\), the coefficients of \(H_{6}\) change as
\[T_{0} = (\theta+3)(\theta+2)(\theta+1)B_{0}\] \[= \partial^{3}x^{3}B_{0},\] \[T_{1}\partial = (\theta+3)(\theta+2)B_{1}(\theta,s=1)\partial=\partial(\theta+2)( \theta+1)B_{1}(\theta-1,s=1)\] \[= \partial^{3}x^{2}B_{1}(\theta-1,s=1),\] \[T_{2}\partial^{2} = (\theta+3)B_{2}(\theta,s=1)\partial^{2}=\partial^{2}(\theta+1)B_{ 2}(\theta-2,s=1)\] \[= \partial^{3}xB_{2}(\theta-2,s=1),\] \[T_{3}\partial^{3} = \partial^{3}T_{3}(\theta-3,s=1).\]
We have the factorization \(H_{6}=\partial^{3}\circ V\), where \(V\) is a differential operator of order \(3\):
\[V=x^{3}B_{0}+x^{2}B_{1}(\theta-1)+xB_{2}(\theta-1)+B_{3}(\theta-1),\quad e_{9} =3-e_{1}-\cdots-e_{8}.\]
In order to get a relation of \(V\) with equation \(H_{3}\), we multiply \(x^{e_{1}}(x-1)^{e_{4}}\) from the right to \(V\), and rename the local exponents as follows. By following these transformations by the move of the Riemann scheme \(R_{V}\) of \(V\) as
\[R_{V}=\left(\begin{array}{ccc}e_{1}&e_{2}&e_{3}\\ e_{4}&e_{5}&e_{6}\\ *&e_{7}&e_{8}\end{array}\right)\rightarrow\left(\begin{array}{ccc}0&e_{2}-e_ {1}&e_{3}-e_{1}\\ 0&e_{5}-e_{4}&e_{6}-e_{4}\\ *&e_{7}+e_{1}+e_{4}&e_{8}+e_{1}+e_{4}\end{array}\right)=\left(\begin{array}[] {ccc}0&b_{1}&b_{2}\\ 0&b_{3}&b_{4}\\ b_{7}&b_{5}&b_{6}\end{array}\right)=R_{3},\]
we see that the transformed equation is \(H_{3}\).
### Polynomial solutions
The equation \(H_{6}\) can have polynomial solutions (SS6.2.3), more generally, we have
**Proposition 4.25**.: _Let \(H\) be an equation admitting a \((\theta,\partial)\)-form. If \(H\) can be written as_
\[H=\text{\rm(a polynomial in $\theta$)}(\theta-m)+\text{\rm(a polynomial in $\theta$ and $\partial$) }\partial\]
_for a non-negative integer \(m\), then \(H\) is divisible from the right by \(\partial-f^{\prime}/f\), where \(f\) is a polynomial of \(x\) of degree \(\leq m\)._
Proof.: \(H\) maps the set of polynomials of \(x\) of degree \(\leq m\) to that of degree \(\leq m-1\), so there is such \(f\) killed by \(H\).
A well-known example: the Gauss hypergeometric operator \((\theta+a)(\theta+b)-(\theta+c)\partial\) admits a polynomial solution when \(a\) is a non-positive integer (see SS5.8).
_Remark 4.26_.: The zeros of the polynomial solution \(f\) other than \(\{0,1\}\) are apparent singular points; a special case of Proposition 4.19.
The Gauss hypergeometric equation \(E_{2}\)
\begin{tabular}{r l}
**5.1** & **Exponents at \(x=0\) and \(x=1\)** \\
**5.2** & **Transformation \(x\to 1/x\) and the local exponents at \(x=\infty\)** \\
**5.3** & **Adjoint operator of \(E_{2}\)** \\
**5.4** & **Differentiation** \\
**5.5** & **Shift operators of \(E_{2}\)** \\ & 5.5.1** & Relation between \(P\) and \(Q\) \\
**5.6** & **S-values and reducibility conditions of \(E_{2}\)** \\
**5.7** & **Reducibility conditions and the Euler integral representation** \\
**5.8** & **Reducible cases of \(E_{2}\)** \\ \hline \end{tabular}
In order to make clear the story of this and the following papers, we review some known facts about the Gauss hypergeometric equation. We start with the hypergeometric operator in \((x,\partial)\)-form
\[E_{2}=E(a,b,c):=x(1-x)\partial^{2}+(c-(a+b+1)x)\partial-ab,\quad\partial=d/dx.\]
It has singularities at \(\{0,1,\infty\}\), and is symmetric under the exchange \(a\leftrightarrow b\). Its \((\theta,\partial)\)-form is given as
\[E(a,b,c)=E_{0}+E_{1}\partial,\quad E_{0}(\theta,a,b)=(\theta+a)(\theta+b),\ E_{1}(\theta,c)=-(\theta+c).\]
Historically, the hypergeometric series
\[F(a,b,c;x)=\sum\frac{(a)_{n}(b)_{n}}{(c)_{n}(1)_{n}}x^{n}\]
studied before the hypergeometric equation was found. However our main objects \(H_{6},G_{6},\ldots\) have no simple expression of local solutions, so we started with the differential equation.
### Exponents at \(x=0\) and \(x=1\)
To see the local exponents at \(x=0\), we use the \((\theta,\partial)\)-form. Apply \(E(a,b,c)\) to \(u=x^{\rho}(1+\cdots)\). Since \(E_{0}\) keeps the local exponents \(\rho\), we neglect it, and see the effect of \(E_{1}\):
\[E_{1}\partial u=-(\theta+c)\rho x^{\rho-1}(1+\cdots)=(\rho-1+c)\rho x^{\rho-1 }+O(x^{\rho}).\]
The local exponents at \(x=0\) are determined by the last term \(E_{1}\), and are given as \(\rho=0\), \(1-c\). (Special case of Proposition 2.2)
Apply the transformation \(x\to 1-x\) in the \((x,\partial)\)-form of \(E(a,b,c)\). We find the resulting equation coincides with \(E(a,b,a+b-c+1)\). Thus the local exponents at \(x=1\) are \(\{0,c-a-b\}\).
### Transformation \(x\to 1/x\) and the local exponents at \(x=\infty\)
Put \(x=1/y,w=y\partial_{y}(=-\theta),\partial_{y}=d/dy\) in the \((\theta,\partial)\)-form:
\[E_{y}=(-w+a)(-w+b)-(-w+c)(-y)w. \tag{5.1}\]
Apply this to \(u=y^{\rho}(1+\cdots)\). Since the second term increases the local exponent \(\rho\), we neglect it, and see the effect of the first term:
\[(-w+a)(-w+b)y^{\rho}(1+\cdots)=(-\rho+a)(-\rho+b)y^{\rho}(1+\cdots).\]
The local exponents at \(x=\infty\) are determined by the first term \(E_{0}\), and are given as \(\rho=a\), \(b\). (Special case of Proposition 2.3)
Let us see that \(E_{y}\) can be transformed to a Gauss operator. Compose \(y^{a}\) (\(a\): one of the local exponents at infinity) from the right
\[E_{y}y^{a} = y^{a}\left[\ \{a-(w+a)\}\{b-(w+a)\}-\{c-(w+a)\}(-y)(w+a)\ \right]\] \[= y^{a}\left[\ (-w)(-w+b-a)-(-w+c-a)(-y)(w+a)\ \right].\]
By multiplying \(-y^{-a-1}\) to the expression of the last line, we see that
\[-\{ (-w-1)(-w+b-a-1)y^{-1}-(-w+c-a-1)(-)(w+a)\ \}\] \[= (w+a)(w-c+a+1)-(w-b+a+1)(w+1)y^{-1}.\]
In the last line, we exchanged the first and the second terms. Since \(\partial_{y}=(w+1)y^{-1}\), the last operator is equal to
\[E(a,1-c+a,1+a-b)=(w+a)(w-c+a+1)-(w-b+a+1)\partial_{y}.\]
The transformations above from \(E(a,b,c)\) to \(E(a,1-c+a,1+c-b)\) can be visualized by the Riemann schemes as
\[R_{2}(a,b,c):=\left(\begin{array}{ccc}x=0:&0&1-c\\ x=1:&0&c-a-b\\ x=\infty:&a&b\end{array}\right)\rightarrow\left(\begin{array}{ccc}a&b\\ 0&c-a-b\\ 0&1-c\end{array}\right)\rightarrow\left(\begin{array}{ccc}0&b-a\\ 0&c-a-b\\ a&1-c+a\end{array}\right),\]
which is the transformation \(R_{2}(a,b,c)\to R_{2}(a,1-c+a,1+a-b)\). Summing up, we have
\[x^{-a-1}E(a,b,c)|_{x\to 1/x}\circ x^{a}=-E(a,1-c+a,1+a-b),\]
where \(E(a,b,c)|_{x\to 1/x}\) denotes \(E_{y}\) in (5.1) with the change \(y\to x,w\rightarrow\theta\).
### Adjoint operator of \(E_{2}\)
The adjoint of \(E(a,b,c)=E_{0}(\theta,a,b)+E_{1}(\theta,c)\partial\) is computed as
\[E_{0}(\theta,a,b)^{*} = (-\theta-1+b)(-\theta-1+a)=(\theta+1-a)(\theta+1-b)\] \[= E_{0}(\theta,1-a,1-b),\] \[(E_{1}(\theta,c)\partial)^{*} = -\partial E_{1}^{*}=-\partial(-1)(-1-\theta+c)=-(\theta+2-c)\partial\] \[= E_{1}(\theta,2-c)\partial,\]
and we have
\[E(a,b,c)^{*}=E(1-a,1-b,2-c).\]
### Differentiation
The differentiation of any solution \(u\) of the Gauss equation \(E(a,b,c)\) is again a solution of another Gauss equation \(E(a+1,b+1,c+1)\). This is seen by differentiating the hypergeometric series or by composing \(\partial\) and the equation \(E\) to see that \(u^{\prime}\) satisfies the equation with parameter \((a+1,b+1,c+1)\): Since \(\partial\circ\theta=(\theta+1)\circ\partial\),
\[\partial\circ E(a,b,c) = \partial\circ(E_{0}(\theta,a,b)+E_{1}(\theta,c)\partial)=(E_{0}( \theta+1,a,b)+E_{1}(\theta+1,c)\partial)\circ\partial\] \[= (E_{0}(\theta,a+1,b+1)+E_{1}(\theta,c+1)\partial)\circ\partial\] \[= E(a+1,b+1,c+1)\circ\partial.\]
In terms of the Riemann scheme, this is expressed as
\[R_{2}(a,b,c)=\left(\begin{array}{cc}0&1-c\\ 0&c-a-b\\ a&b\end{array}\right)\underset{\partial}{\rightarrow}\left(\begin{array}{cc}0&1-c -1\\ 0&c-a-b-1\\ a+1&b+1\end{array}\right)=R_{2}(a+1,b+1,c+1).\]
The inverse of \(\partial\) is obtained as follows: Write the Gauss equation as
\[E(a,b,c)=E^{\prime}\circ\partial-ab,\qquad E^{\prime}=E^{\prime}(a,b,c)=x(1-x )\partial+c-(a+b+1)x,\]
The derivation of the Gauss series \(F(a,b,c;x)\) is \(\frac{ab}{c}F(a+1,b+1,c+1;x)\); hence, we have
\[\frac{1}{c}E^{\prime}(a,b,c)F(a+1,b+1,c+1;x)=F(a,b,c;x),\]
which means that the operator \(\partial\) is read as a transformation of the parameters \((a,b,c)\rightarrow(a+1,b+1,c+1)\) and \(E^{\prime}\) the reverse transformation \((a+1,b+1,c+1)\rightarrow(a,b,c)\).
### Shift operators of \(E_{2}\)
The shift operator \(P_{a+}\) for the parameter-ascending shift \(a\to a+1\) is obtained by the following procedure (we write \(R_{abc}\) for \(R_{2}(a,b,c)\)):
\[R_{abc}\underset{x^{a}}{\rightarrow}\left(\begin{array}{cc}a&a+1-c\\ 0&c-a-b\\ 0&b-a\end{array}\right)\underset{\partial}{\rightarrow}\left(\begin{array}{ cc}a-1&a-c\\ 0&c-a-b-1\\ 2&b-a+1\end{array}\right)\underset{x^{1-a}}{\rightarrow}\left(\begin{array}[] {cc}0&1-c\\ 0&c-a-b-1\\ a+1&b\end{array}\right).\]
Thus, we have the operator
\[P_{a+}=x^{1-a}\circ\partial\circ x^{a}=x^{1-a}\circ(ax^{a-1}+x^{a}\circ \partial)=x\partial+a.\]
The descending operator \(P_{-a}\) for \(a\to a-1\) is obtained by
\[R_{abc} \underset{X}{\rightarrow}\left(\begin{array}{cc}c-a&1-a\\ a+b-c&0\\ a-b&0\end{array}\right)\underset{\partial}{\rightarrow}\left(\begin{array}{ cc}c-a-1&-a\\ a+b-c-1&0\\ a-b+1&2\end{array}\right)\] \[\underset{X^{-1}x(x-1)}{\rightarrow}\left(\begin{array}{cc}0&1- c\\ 0&c-a-b+1\\ a-1&b\end{array}\right),\]
where \(X=x^{c-a}(x-1)^{a+b-c}\). Hence, we get the operator \(-P_{a-}\), where \(P_{a-}=x(1-x)\partial+c-a-bx\), which is a little more complicated than that for \(a\to a+1\). When \(c\to c-1\), we see that
\[R_{abc} \underset{x^{c-1}}{\rightarrow}\left(\begin{array}{cc}c-1&0\\ 0&c-a-b\\ a-c+1&b-c+1\end{array}\right)\underset{\partial}{\rightarrow}\left(\begin{array} []{cc}c-2&0\\ 0&c-a-b-1\\ a-c+2&b-c+2\end{array}\right)\] \[\underset{x^{2-c}}{\rightarrow}\left(\begin{array}{cc}0&2-c\\ 0&c-a-b-1\\ a&b\end{array}\right)\]
and we get a descending operator \(P_{c-}=x\partial+c-1\). For the ascending case \(c\to c+1\), we see that
\[R_{abc} \underset{(x-1)^{a+b-c}}{\rightarrow}\left(\begin{array}{cc}0&1-c \\ a+b-c&0\\ c-b&c-a\end{array}\right)\underset{\partial}{\rightarrow}\left(\begin{array}{ cc}0&-c\\ a+b-c-1&0\\ c-b+1&c-a+1\end{array}\right)\] \[\underset{(x-1)^{1+c-a-b}}{\rightarrow}\left(\begin{array}{cc}0& -c\\ 0&1+c-a-b\\ a&b\end{array}\right);\]
thus we get an ascending operator \(P_{c+}=(x-1)\partial+a+b-c\).
By changing the notation of parameters from \((a,b,c)\) to \((e_{1},e_{2},e_{3},s=1-e_{1}-e_{2}-e_{3})\), we repeat the process above as follows:
\[R_{2}=\left(\begin{array}{ccc}x=0:&0&e_{1}\\ x=1:&0&e_{2}\\ x=\infty:&s&e_{3}\end{array}\right) \underset{x^{*}}{\rightarrow}\left(\begin{array}{cc}s&e_{1}+s \\ 0&e_{2}\\ 0&e_{3}-s\end{array}\right)\underset{\partial}{\rightarrow}\left(\begin{array} []{cc}s-1&e_{1}+s-1\\ 0&e_{2}-1\\ 2&e_{3}-s+1\end{array}\right)\] \[\underset{x^{1-s}}{\rightarrow}\left(\begin{array}{cc}0&e_{1} \\ 0&e_{2}-1\\ s+1&e_{3}\end{array}\right)\]
and, therefore, we get the shift operator \(P_{2-}:=x\partial+s\) for the shift \(e_{2}\to e_{2}-1\). Since
\[R_{2}\underset{X}{\rightarrow}\left(\begin{array}{cc}e_{2}+e_{3}&e_{123}\\ -e_{2}&0\\ s-e_{3}&0\end{array}\right)\underset{\partial}{\rightarrow}\left(\begin{array} []{cc}e_{2}+e_{3}-1&e_{123}-1\\ -e_{2}-1&0\\ s-e_{3}+1&2\end{array}\right)\underset{X^{-1}x(x-1)}{\rightarrow}\left( \begin{array}{cc}0&e_{1}\\ 0&e_{2}+1\\ s-1&e_{3}\end{array}\right),\]
where \(e_{123}=e_{1}+e_{2}+e_{3},X=x^{e_{2}+e_{3}}(x-1)^{-e_{2}}\), we have \(-P_{2+}\), where \(P_{2+}:=x(1-x)\partial+e_{2}+e_{3}-e_{3}x\) is the shift operator for the shift \(e_{2}\to e_{2}+1\). Since
\[R_{2}\underset{x^{-e_{1}}}{\rightarrow}\left(\begin{array}{cc}-e_{1}&0\\ 0&e_{2}\\ s+e_{1}&e_{3}+e_{1}\end{array}\right)\underset{\partial}{\rightarrow}\left( \begin{array}{cc}-e_{1}-1&0\\ 0&e_{2}-1\\ s+e_{1}+1&e_{3}+e_{1}+1\end{array}\right)\underset{x^{e_{1}}+}{\rightarrow} \left(\begin{array}{cc}0&e_{1}+1\\ 0&e_{2}-1\\ s&e_{3}\end{array}\right),\]
we have the shift operator \(P_{1+2-}:=x\partial-e_{1}\) for the shift \((e_{1},e_{2})\rightarrow(e_{1}+1,e_{2}-1)\). Since
\[R_{2}\underset{(x-1)^{-e_{2}}}{\rightarrow}\left(\begin{array}{cc}0&e_{1} \\ -e_{2}&0\\ s+e_{2}&e_{3}+e_{2}\end{array}\right)\underset{\partial}{\rightarrow}\left( \begin{array}{cc}0&e_{1}-1\\ -e_{2}-1&0\\ s+e_{2}+1&e_{3}+e_{2}+1\end{array}\right)\underset{(x-1)^{e_{2}+1}}{\rightarrow} \left(\begin{array}{cc}0&e_{1}-1\\ 0&e_{2}+1\\ s&e_{3}\end{array}\right),\]
we have the shift operator \(P_{1-2+}:=(x-1)\partial-e_{2}\) for the shift \((e_{1},e_{2})\rightarrow(e_{1}-1,e_{2}+1)\).
The shift operators relative to \(\{a,b,c\}\) and \(\{e_{1},e_{2},e_{3}\}\) are related as
\[P_{2-}=P_{a+},\quad P_{2+}=P_{a-},\quad P_{1+2-}=P_{c-},\quad P_{1-2+}=P_{c+}.\]
_Remark 5.1_.: The general shift operators for
\[\mathrm{Sol}(E(a,b,c))\rightarrow\mathrm{Sol}(E(a+p,b+q,c+r)),\quad p,q,r\in \mathbb{Z}\]
are given in [1, 2]. We thank H. Ando for his Maple program computing them.
#### 5.5.1 Relation between \(P\) and \(Q\)
Let us see Proposition 4.16 for \(E_{2}\). By taking adjoint of the shift relation, for example,
\[E(a+1,b,c)\circ P_{a+}=Q_{a+}\circ E(a,b,c),\quad P_{a+}=x\partial+a,\]
we have
\[E(1-a,1-b,2-c)Q_{a+}^{*}=P_{a+}^{*}E(-a,1-b,2-c),\]
since the adjoint of \(E(a,b,c)\) is \(E(1-a,1-b,2-c)\). Hence we have
\[Q_{a+}^{*}=-P_{a+}(-a,1-b,2-c)=-(x\partial-a)\quad\text{so}\quad Q_{a+}=x \partial+1+a.\]
In this way \(Q_{a+}\) can be computed from \(P_{a+}\). List of pairs of shift operators \((P,Q)\):
\[P_{a+} = x\partial+a, Q_{a+} = x\partial+a+1,\] \[P_{a-} = x(x-1)\partial+a+bx-c, Q_{a-} = x(x-1)\partial+a+bx-c+x-1,\] \[P_{c+} = (x-1)\partial+a+b-c, Q_{c+} = P_{c+},\] \[P_{c-} = x\partial+c-1, Q_{c-} = P_{c-}.\]
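These shift relations are easy to confirm symbolically. The following minimal sympy sketch (not part of the original text; operator composition is expressed by applying the operators to a generic function \(y(x)\)) checks that each pair \((P,Q)\) in the list satisfies \(E(\text{shifted})\circ P=Q\circ E(a,b,c)\):

```python
# Check E(shifted) o P = Q o E(a,b,c) for the four (P,Q) pairs listed above,
# with the Gauss operator E(a,b,c) = x(1-x)d^2 + (c-(a+b+1)x)d - ab.
from sympy import symbols, Function, simplify

x, a, b, c = symbols('x a b c')
y = Function('y')(x)

def E(a_, b_, c_, u):
    return x*(1 - x)*u.diff(x, 2) + (c_ - (a_ + b_ + 1)*x)*u.diff(x) - a_*b_*u

checks = [
    # (shifted parameters, P applied to u, Q applied to u)
    ((a + 1, b, c), lambda u: x*u.diff(x) + a*u,
                    lambda u: x*u.diff(x) + (a + 1)*u),                        # a -> a+1
    ((a - 1, b, c), lambda u: x*(x - 1)*u.diff(x) + (a + b*x - c)*u,
                    lambda u: x*(x - 1)*u.diff(x) + (a + b*x - c + x - 1)*u),  # a -> a-1
    ((a, b, c + 1), lambda u: (x - 1)*u.diff(x) + (a + b - c)*u,
                    lambda u: (x - 1)*u.diff(x) + (a + b - c)*u),              # c -> c+1
    ((a, b, c - 1), lambda u: x*u.diff(x) + (c - 1)*u,
                    lambda u: x*u.diff(x) + (c - 1)*u),                        # c -> c-1
]
for (a2, b2, c2), P, Q in checks:
    print(simplify(E(a2, b2, c2, P(y)) - Q(E(a, b, c, y))))  # prints 0 each time
```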
### S-values and reducibility conditions of \(E_{2}\)
Since \(P_{a+}=x\partial+a\), \(P_{a-}=x(1-x)\partial+c-a-bx\), and \(E=x(1-x)\partial^{2}+\cdots\), the S-value \(Sv_{a-}\) for the shift \(a\to a-1\to a\) is computed as
\[P_{a+}(a-1)\circ P_{a-}(a)-xE(a)=-(a-1)(a-c).\]
Similarly, we get
\[Sv_{b-}(b)=-(b-1)(b-c),\quad Sv_{c-}(c)=-(b-c+1)(a-c+1).\]
Thus \(E(a,b,c)\) is reducible if one of
\[a-1,\ a-c,\ b-1,\ b-c,\ b-c+1,\ a-c+1\]
vanishes, and we get by Theorem 4.22 the well-known condition of reducibility
\[a,\ b,\ c-a,\ c-b\in\ \mathbb{Z}.\]
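The S-value computation above can also be replayed mechanically; a small sympy sketch (ours, not the paper's) verifying \(P_{a+}(a-1)\circ P_{a-}(a)-xE(a)=-(a-1)(a-c)\):

```python
# Check that P_{a+}(a-1) o P_{a-}(a) - x E(a) equals the scalar -(a-1)(a-c),
# applying the operators to a generic function y(x).
from sympy import symbols, Function, simplify

x, a, b, c = symbols('x a b c')
y = Function('y')(x)

E   = lambda u: x*(1 - x)*u.diff(x, 2) + (c - (a + b + 1)*x)*u.diff(x) - a*b*u
Pam = lambda u: x*(1 - x)*u.diff(x) + (c - a - b*x)*u   # P_{a-} evaluated at a
Pap = lambda u: x*u.diff(x) + (a - 1)*u                 # P_{a+} evaluated at a-1

residual = simplify(Pap(Pam(y)) - x*E(y) + (a - 1)*(a - c)*y)
print(residual)  # 0
```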
### Reducibility conditions and the Euler integral representation
The identity
\[E(a,b,c)\varphi=-b\frac{\partial}{\partial s}\left(\frac{s(1-s)}{x-s}\varphi \right),\quad\varphi=s^{b-c}(1-s)^{c-a-1}(x-s)^{-b}\]
implies that the function defined by the integral
\[F_{\gamma}(x)=\int_{\gamma}\varphi\,ds\]
along a closed path \(\gamma\)8 gives a solution to \(E(a,b,c)\). The integrand has exponents
Footnote 8: \(\gamma\) is topologically closed and the values of \(\varphi\) at the starting point and the ending point agree.
\[b-c,\quad c-a-1,\quad-b,\quad a\]
at \(0\), \(1\), \(x\), \(\infty\), respectively. If one of the exponents is a negative integer, then we can choose as \(\gamma\) a small loop around this point, and \(F_{\gamma}(x)\neq 0\) generates an invariant subspace of the solution space, which means the equation is reducible.
### Reducible cases of \(E_{2}\)
When \(E(a,b,c)\) is reducible, we see its factorization, which gives examples of the discussion in §4.5 and §4.6. Recall the first four solutions among Kummer's 24 solutions (cf. [4]):
I \[: F(a,b,c;x),\] II \[: (1-x)^{c-a-b}F(c-a,c-b,c;x),\] III \[: x^{1-c}F(a-c+1,b-c+1,2-c;x),\] IV \[: x^{1-c}(1-x)^{c-a-b}F(1-a,1-b,2-c;x).\]
Note that the parameters of hypergeometric series in I and IV as well as II and III are related; recall the adjoint relation:
\[E^{*}(a,b,c)=E(1-a,1-b,2-c),\quad E^{*}(c-a,c-b,c)=E(a-c+1,b-c+1,2-c).\]
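This adjoint relation can be checked directly with the formal adjoint \(L^{*}f=\sum_{k}(-1)^{k}\partial^{k}(a_{k}f)\); a minimal sympy sketch (ours):

```python
# Verify E*(a,b,c) = E(1-a,1-b,2-c) via the formal adjoint.
from sympy import symbols, Function, simplify

x, a, b, c = symbols('x a b c')
f = Function('f')(x)

def E(a_, b_, c_, u):
    return x*(1 - x)*u.diff(x, 2) + (c_ - (a_ + b_ + 1)*x)*u.diff(x) - a_*b_*u

# formal adjoint of E(a,b,c) applied to f
adjoint = (x*(1 - x)*f).diff(x, 2) - ((c - (a + b + 1)*x)*f).diff(x) - a*b*f
print(simplify(adjoint - E(1 - a, 1 - b, 2 - c, f)))  # 0
```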
When the operator \(E(a,b,c)\) is reducible (\(a\), \(b\), \(c-a\), or \(c-b\in\mathbb{Z}\)), \(E\) factorizes into \(F_{1}\circ F_{2}\),
\[F_{2}=\partial-\frac{G^{\prime}}{G},\quad G=x^{\mu}(x-1)^{\nu}g,\]
where
\[(\mu,\nu)=(0,0),\ (0,c-a-b),\ (1-c,0),\ (1-c,c-a-b),\]
according to the types I,..., IV of \(G\), respectively, and \(g\) is a hypergeometric polynomial:
\[\begin{array}{lll}\mbox{condition}&\mbox{type of }G&\mbox{degree of the polynomial }g\\ a=\cdots,-2,-1&\mbox{I}&-a\\ a=0&\mbox{I}&0\\ a=1&\mbox{IV}&0\\ a=2,3,\cdots&\mbox{IV}&a-1\\ c-a=\cdots,-2,-1&\mbox{II}&-(c-a)\\ c-a=0&\mbox{II}&0\\ c-a=1&\mbox{III}&0\\ c-a=2,3,\cdots&\mbox{III}&c-a-1\end{array}\]
The zeros of \(g\) are the apparent singular points of \(F_{2}\), and so of \(F_{1}\). Therefore, the apparent singularities are the zeros of the hypergeometric series (cf. Proposition 4.25).
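For instance, for the row \(a=-2\) of the table (type I), the hypergeometric series terminates and \(g=F(-2,b,c;x)\) is a quadratic polynomial annihilated by \(E(-2,b,c)\); a quick sympy check (ours, using rising factorials for the terminating series):

```python
# g = F(-2, b, c; x) is a degree-2 polynomial solution of E(-2,b,c).
from sympy import symbols, rf, factorial, simplify

x, b, c = symbols('x b c')
a = -2
# terminating hypergeometric series: rf(-2, n) vanishes for n >= 3
g = sum(rf(a, n)*rf(b, n)/(rf(c, n)*factorial(n))*x**n for n in range(3))
res = x*(1 - x)*g.diff(x, 2) + (c - (a + b + 1)*x)*g.diff(x) - a*b*g
print(simplify(res))          # 0
print(g.as_poly(x).degree())  # 2 = -a
```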
## 6 Shift operators of \(H_{6}\)
\begin{tabular}{r l} \hline
**6.1** & **Inverse shift operators and S-values of \(H_{6}\)** \\
**6.2** & **Reducible cases of \(H_{6}\)** \\ \hline \end{tabular}

**Theorem 6.1**.: _The equation \(H_{6}(e,u)\) admits shift operators for the shifts \(sh_{1}:\boldsymbol{e}_{1}\to\boldsymbol{e}_{1}-\boldsymbol{1}\), \(sh_{2}:\boldsymbol{e}_{4}\to\boldsymbol{e}_{4}-\boldsymbol{1}\) and \(sh_{3}:(\boldsymbol{e}_{1},\boldsymbol{e}_{4},\boldsymbol{e}_{7})\to(\boldsymbol{e}_{1}-\boldsymbol{1},\boldsymbol{e}_{4}-\boldsymbol{1},\boldsymbol{e}_{7}+\boldsymbol{1})\); in particular,_
\[P_{-00}=(x-1)\partial+s,\quad P_{0-0}=x\partial+s,\quad P_{--+}=\partial,\]
_where for \(sh_{1}\) the second member is \(Q_{-00}=(x-1)\partial+s+3\) and the accessory parameter shifts as \(u\to u-(s_{13}+s_{23}+1)\)._
Proof.: The first one is obtained as follows: Put
\[P=(x-1)\partial+s,\quad Q=(x-1)\partial+q\]
and solve the equation
\[H_{6}(sh_{1}(e),u-\alpha)\circ P=Q\circ H_{6}(e,u)\]
with respect to the set of unknowns \(\{\alpha,q\}\). Solution is
\[\alpha=s_{13}+s_{23}+1,\quad q=s+3.\]
The second and the third ones are obtained similarly.
### Inverse shift operators and S-values of \(H_{6}\)
We have determined the shift operators of the equation \(H_{6}\) for the shifts \(sh_{1}\), \(sh_{2}\) and \(sh_{3}\) and denoted them as \((P_{-00},Q_{-00}),\ldots,(P_{--+},Q_{--+})\). Generally, we introduce notation as follows.
**Definition 6.2**.: If \((P,Q,\alpha)\) solves the equation
\[H_{6}(\mathbf{e}_{1}+\epsilon_{1}\mathbf{1},\mathbf{e}_{4}+\epsilon_{4}\mathbf{1},\mathbf{e}_{7}+ \epsilon_{7}\mathbf{1},u-\alpha)\circ P=Q\circ H_{6}(e,u),\quad\epsilon_{1}, \epsilon_{4},\epsilon_{7}=-1,0,1,\]
then the operators \(P\) and \(Q\) are denoted as \(P_{\delta_{1}\delta_{4}\delta_{7}}\) and \(Q_{\delta_{1}\delta_{4}\delta_{7}}\), where \(\delta_{i}=-,0,+\) according as \(\epsilon_{i}=-1,0,1\). For example, for the shift \(e\to(\mathbf{e}_{1}+\mathbf{1},\mathbf{e}_{4}+\mathbf{1},\mathbf{e}_{7}-\mathbf{1})\), the shift operators are \(P_{++-}\) and \(Q_{++-}\).
#### 6.1.1 \(P_{++-}\) and the S-value \(Sv_{--+}=P_{++-}\circ P_{--+}\) for \(H_{6}\)
While the operator \(P_{--+}\) defines a map from \(\mathrm{Sol}(H_{6}(e,u))\) to \(\mathrm{Sol}(H_{6}(\boldsymbol{e}_{1}-\boldsymbol{1},\boldsymbol{e}_{4}-\boldsymbol{1},\boldsymbol{e}_{7}+\boldsymbol{1},u-\alpha))\), its inverse map is given by the operator \(P_{++-}\) evaluated at \((\boldsymbol{e}_{1}-\boldsymbol{1},\boldsymbol{e}_{4}-\boldsymbol{1},\boldsymbol{e}_{7}+\boldsymbol{1},u-\alpha)\), and the composition gives the S-value; refer to §4.3. For simplicity, in the following we call the operator \(P_{++-}\) itself the inverse of \(P_{--+}\). In view of this property, we see that
\[P_{++-}(\mathbf{e}_{1}-1,\mathbf{e}_{4}-1,\mathbf{e}_{7}+1)=(H_{6}-p_{0})/\partial=x^{3}( x-1)^{3}\partial^{5}+\cdots,\]
where \(p_{0}\) is the constant term of the \((x,\partial)\)-form of \(H_{6}=x^{3}(x-1)^{3}\partial^{6}+p_{5}\partial^{5}+\cdots+p_{1}\partial+p_{0}\) and that the S-value in this case, which we denote as \(Sv_{--+}\), is
\[Sv_{--+} = P_{++-}(\mathbf{e}_{1}-\mathbf{1},\mathbf{e}_{4}-\mathbf{1},\mathbf{e}_{7}+\mathbf{1}) \circ P_{--+}\] \[= H_{6}-p_{0}\equiv-p_{0}=-s(s+1)(s+2)e_{7}e_{8}e_{9}\mod H_{6}.\]
#### 6.1.2 \(P_{0+0}\) and the S-value \(Sv_{0+0}=P_{0-0}\circ P_{0+0}\) for \(H_{6}\)
The inverse of \(P_{0-0}\), denoted \(P_{0+0}\), is computed from the requirement that \(P_{0-0}(\boldsymbol{e}_{4}+\boldsymbol{1})\circ P_{0+0}-U\circ H_{6}(e)\) be a constant (the S-value \(Sv_{0+0}\)) for some differential operator \(U\); in this case, since \(P_{0-0}=x\partial+s\) and \(H_{6}=x^{3}(x-1)^{3}\partial^{6}+\cdots\), we set

\[P_{0+0}=x^{5}(x-1)^{3}\partial^{5}+\cdots,\]
and \(U=x^{3}\). We solve
\[P_{0-0}(\mathbf{e}_{4}=\mathbf{e}_{4}+\mathbf{1})\circ P=x^{3}H_{6}+\mathrm{constant},\]
to find \(P=P_{0+0}\) and \(\text{constant}=Sv_{0+0}\). The \((\theta,\partial)\)-form of \(H_{6}\):
\[H_{6}=T_{0}+T_{1}\partial+T_{2}\partial^{2}+T_{3}\partial^{3},\]
implies that \(x^{3}H_{6}\) has the \((x,\theta)\)-form:
\[x^{3}H_{6}=x^{3}T_{0}+x^{2}\theta T_{1}(\theta-1)+x\theta(\theta-1)T_{2}( \theta-2)+\theta(\theta-1)(\theta-2)T_{3}(\theta-3).\]
Note that this expression has no constant (independent of \(x,\theta,\partial\)) term.
Since \(P_{0-0}(\boldsymbol{e}_{4}=\boldsymbol{e}_{4}+\boldsymbol{1})=\theta+s-1\), and the composite \((\theta+s-1)P\) differs from \(x^{3}H_{6}\) only by an additive constant, \(P\) has the \((x,\theta)\)-form
\[P=x^{3}P_{-3}+x^{2}P_{-2}+xP_{-1}+P_{0}.\]
Thus
\[(\theta+s-1)P=x^{3}(\theta+2+s)P_{-3}+x^{2}(\theta+1+s)P_{-2}+x(\theta+s)P_{- 1}+(\theta+s-1)P_{0}.\]
Note that the constant term of this expression is the S-value
\[Sv_{0+0}=P_{0-0}\circ P_{0+0}=(s-1)P_{0}(\theta=0).\]
Since the \((x,\theta)\)-form is unique, we have
\[\begin{array}{ll}T_{0}&=(\theta+2+s)P_{-3},\\ \theta T_{1}(\theta-1)&=(\theta+1+s)P_{-2},\\ \theta(\theta-1)T_{2}(\theta-2)&=(\theta+s)P_{-1},\\ \theta(\theta-1)(\theta-2)T_{3}(\theta-3)&=(\theta+s-1)P_{0}-(s-1)P_{0}(0). \end{array}\]
Since \(T_{3}=-(\theta+3-e_{1})(\theta+3-e_{2})(\theta+3-e_{3})\),
\[-\theta(\theta-1)(\theta-2)(\theta-e_{1})(\theta-e_{2})(\theta-e_{3})=(\theta +s-1)P_{0}-(s-1)P_{0}(0),\]
and putting \(\theta=1-s\), we have the S-value \(Sv_{0+0}=P_{0-0}\circ P_{0+0}\):
\[(s-1)P_{0}(0)=(1-s)(-s)(-1-s)(1-s-e_{1})(1-s-e_{2})(1-s-e_{3})\]
and \(P=P_{0+0}=x^{3}P_{-3}+x^{2}P_{-2}+xP_{-1}+P_{0}\), where
\[\begin{array}{ll}P_{-3}&=(\theta+s+1)(\theta+s)B_{0}(\theta),\\ P_{-2}&=\theta(\theta+s+1)B_{1}(\theta-1),\\ P_{-1}&=\theta(\theta-1)B_{2}(\theta-2),\\ P_{0}&=-\frac{\theta(\theta-1)(\theta-2)(\theta-e_{1})(\theta-e_{2})(\theta-e_{3})-Sv_{0+0}}{\theta+s-1}.\end{array}\]
Thus we got
\[P_{0-0}(\boldsymbol{e}_{4}=\boldsymbol{e}_{4}+\boldsymbol{1})\circ P_{0+0}=x^ {3}H_{6}+Sv_{0+0}, \tag{6.1}\]
where \(P_{0-0}(\boldsymbol{e}_{4}=\boldsymbol{e}_{4}+\boldsymbol{1})=\theta+s-1\).
#### 6.1.3 \(P_{+00}\) and the S-value \(Sv_{+00}=P_{-00}\circ P_{+00}\) for \(H_{6}\)
Perform the coordinate change \(x\to 1-x\) to (6.1):
* \(P_{0-0}(\boldsymbol{e}_{4}=\boldsymbol{e}_{4}+\boldsymbol{1})=x\partial+s-1\) changes into \[(x-1)\partial+s-1=P_{-00}(\boldsymbol{e}_{1}=\boldsymbol{e}_{1}+\boldsymbol{1}).\]
* \(x^{3}H_{6}(\boldsymbol{e}_{1},\boldsymbol{e}_{4},\boldsymbol{e}_{7},T_{10})\) changes into (§2.1.4) \[-(x-1)^{3}H_{6}(\boldsymbol{e}_{4},\boldsymbol{e}_{1},\boldsymbol{e}_{7},-T_{10}+\alpha(e)),\] where \[\alpha(e)=3s^{2}+(s_{11}+s_{12}-s_{23}+2)s+3s_{11}+3s_{12}-3s_{23}-3s_{33}-21.\]
Perform next the parameter change \(\boldsymbol{e}_{1}\leftrightarrow\boldsymbol{e}_{4}\) and the accessory parameter change \(T_{10}\to-T_{10}+\alpha(e)\), to get
\[P_{-00}(\boldsymbol{e}_{1}=\boldsymbol{e}_{1}+\boldsymbol{1})\circ P_{+00}=-( x-1)^{3}H_{6}+Sv_{+00},\]
where \(P_{+00}\) is \(P_{0+0}\) with the substitution
\[x\to 1-x,\quad\theta\to(x-1)\partial,\quad\boldsymbol{e}_{1}\to\boldsymbol{e}_ {4},\quad\boldsymbol{e}_{4}\to\boldsymbol{e}_{1},\quad T_{10}\to-T_{10}+\alpha (e),\]
and
\[Sv_{+00}=(1-s)(-s)(-1-s)(1-s-e_{4})(1-s-e_{5})(1-s-e_{6}).\]
#### 6.1.4 S-values and reducibility conditions
We list the S-values for the three simple shifts above:
**Proposition 6.3**.: _The three S-values of the simple shift operators above:_
\[Sv_{--+} = P_{++-}(\boldsymbol{e}_{1}-\boldsymbol{1},\boldsymbol{e}_{4}-1, \boldsymbol{e}_{7}+1)\circ P_{--+}=-s(s+1)(s+2)e_{7}e_{8}e_{9},\] \[Sv_{-00} = P_{+00}(\boldsymbol{e}_{1}-\boldsymbol{1})\circ P_{-00}=-s(s+1) (s+2)(s+e_{4})(s+e_{5})(s+e_{6}),\] \[Sv_{0-0} = P_{0+0}(\boldsymbol{e}_{4}-\boldsymbol{1})\circ P_{0-0}=s(s+1)(s +2)(s+e_{1})(s+e_{2})(s+e_{3}),\]
Note that, when the order of composition of the two maps is reversed, the S-value changes following the rule described in Proposition 4.11.
Theorem 6.1, Propositions 4.23 and 4.24 lead to
**Theorem 6.4**.: _If one of_
\[s,\quad e_{i}+s\ (i=1,\ldots,6),\quad e_{7},\ e_{8},e_{9}\]
_is an integer, then the equation \(H_{6}\) is reducible._
### Reducible cases of \(H_{6}\)
**Definition 6.5**.: Two operators \(H\) and \(H^{\prime}\) with accessory parameters are said to be _essentially the same_ if \(H\) is transformed into \(H^{\prime}\) by
1. changing coordinate by permutation of \(\{x=0,1,\infty\}\),
2. multiplying a function from the left,
3. multiplying a factor \(x^{*}(x-1)^{**}\) from the right,
4. renaming the local exponents,
5. and by changing the accessory parameters.
Let \(G\) be an equation such that its accessory parameters are assigned as functions of local exponents. Two operators \(G\) and \(G^{\prime}\) are said to be _essentially the same_ if \(G\) is transformed into \(G^{\prime}\) by the changes \(1,\ldots,4\) above, and \(5\): the accessory parameters, functions of \(e\), change according to the renaming of \(e\).
All the statements in this section about \(H_{6},H_{5}\) and \(H_{3}\) are valid word for word for \(G_{6},G_{5}\) and \(G_{3}\), which will be defined in the next section.
#### 6.2.1 Factorization when \(e_{9}=0,1\) and when \(s=-2,-1,0,1\)
We examine the cases where \(e_{9}=0,1\) and the cases \(s=-2,-1,0,1\). Recall the \((\theta,\partial)\)-form of \(H_{6}\): \(T_{0}+T_{1}\partial+T_{2}\partial^{2}+T_{3}\partial^{3}\) in Proposition 1.2,
\[\begin{array}{llll}x\partial&=\theta,&\partial x&=\theta+1,\\ x^{2}\partial^{2}&=\theta(\theta-1),&\partial^{2}x^{2}&=(\theta+1)(\theta+2), \\ x^{3}\partial^{3}&=\theta(\theta-1)(\theta-2),&\partial^{3}x^{3}&=(\theta+1)( \theta+2)(\theta+3),\end{array} \tag{6.2}\]
and
\[\theta\partial=\partial(\theta-1),\quad\theta\partial^{2}=\partial^{2}( \theta-2),\quad\theta\partial^{3}=\partial^{3}(\theta-3),\ldots\]
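These rules are used repeatedly in the factorizations below; each one can be confirmed by applying both sides to a generic function. For instance, for \(\partial^{2}x^{2}=(\theta+1)(\theta+2)\), a small sympy sketch (ours):

```python
# Check the theta-calculus rule d^2 x^2 = (theta+1)(theta+2), theta = x d/dx.
from sympy import symbols, Function, simplify

x = symbols('x')
u = Function('u')(x)
theta = lambda f: x*f.diff(x)

lhs = (x**2*u).diff(x, 2)                 # d^2 (x^2 u)
rhs = theta(theta(u)) + 3*theta(u) + 2*u  # (theta+1)(theta+2) u, expanded
print(simplify(lhs - rhs))                # 0
```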
* When \(e_{9}=0\): since \(T_{0}\) is divisible by \(\partial\) from the right, \(H_{6}\) factorizes as \[H_{6}(e_{9}=0)=H_{5}\circ\partial,\] where \(H_{5}=H_{6}(e_{9}=0)/\partial\), which we have explained in §4.7.1.
* When \(e_{9}=1\), Since \(\theta+e_{9}=\theta+1=\partial x\) and \(\theta\partial=\partial(\theta-1)\), \(T_{0}\) is divisible by \(\partial\) from the left. \[\begin{array}{llll}T_{0}(e_{9}=1)&=&\partial(\theta+s+1)(\theta+s)(\theta+s- 1)(\theta+e_{7}-1)(\theta+e_{8}-1),\\ T_{1}(e_{9}=1)\partial&=&\partial(\theta+s+1)(\theta+s)B_{1}(\theta-1),\\ T_{2}(e_{9}=1)\partial^{2}&=&\partial(\theta+s+1)B_{2}(\theta-1)\partial,\\ T_{3}(e_{9}=1)\partial^{3}&=&-\partial(\theta+2-e_{1})(\theta+2-e_{2})(\theta+ 2-e_{3})\partial^{2},\end{array}\] leads to \[H_{6}(e_{9}=1)=\partial\circ X_{5},\] where \(X_{5}\) is essentially equal to \(H_{5}\).
* When \(s=1\), \[\begin{array}{ll}T_{0}(s=1)&=(\theta+3)(\theta+2)(\theta+1)B_{0}(\theta,s=1)= \partial^{3}x^{3}B_{0}(\theta,s=1),\\ T_{1}(s=1)\partial&=(\theta+3)(\theta+2)B_{1}(\theta,s=1)\partial=\partial( \theta+2)(\theta+1)B_{1}(\theta-1,s=1)\\ &=\partial^{3}x^{2}B_{1}(\theta-1,s=1),\\ T_{2}(s=1)\partial^{2}&=(\theta+3)B_{2}(\theta,s=1)\partial^{2}=\partial^{2}( \theta+1)B_{2}(\theta-2,s=1)\\ &=\partial^{3}xB_{2}(\theta-2,s=1),\\ T_{3}(s=1)\partial^{3}&=\partial^{3}B_{3}(\theta-3,s=1)\end{array}\] leads to \[\begin{array}{ll}H_{6}(s=1)=\partial^{3}\circ H_{3},\end{array}\] as we have stated in SS4.7.2.
* When \(s=0\), \[\begin{array}{ll}T_{0}(s=0)&=(\theta+2)(\theta+1)\theta B_{0}(\theta,s=0)= \partial^{2}x^{2}B_{0}(\theta,s=0)x\partial,\\ T_{1}(s=0)\partial&=(\theta+2)(\theta+1)B_{1}(\theta,s=0)\partial=\partial^{2 }x^{2}B_{1}(\theta,s=0)\partial,\\ T_{2}(s=0)\partial^{2}&=(\theta+2)B_{2}(\theta,s=0)\partial^{2}=(\theta+2) \partial B_{2}(\theta-1,s=0)\partial\\ &=\partial(\theta+1)B_{2}(\theta-1,s=0)\partial=\partial^{2}xB_{2}(\theta-1,s =0)\partial,\\ T_{3}(s=0)\partial^{3}&=\partial^{2}B_{3}(\theta-2,s=0)\partial\end{array}\] leads to \[\begin{array}{ll}H_{6}(s=0)=\partial^{2}\circ X_{3}\circ\partial,\end{array}\] where \(X_{3}\) is essentially equal to \(H_{3}\).
* When \(s=-1\), \[\begin{array}{ll}T_{0}(s=-1)&=(\theta+1)\theta(\theta-1)B_{0}(\theta,s=-1)= \partial x\cdot x^{2}\partial^{2}B_{0}(\theta,s=-1)\\ &=\partial x^{3}B_{0}(\theta+2,s=-1)\partial^{2},\\ T_{1}(s=-1)\partial&=(\theta+1)\theta B_{1}(\theta,s=-1)\partial=\partial xx \partial B_{1}(\theta,s=-1)\partial\\ &=\partial x^{2}B_{1}(\theta+1,s=-1)\partial^{2},\\ T_{2}(s=-1)\partial^{2}&=(\theta+1)B_{2}(\theta,s=-1)\partial^{2}=\partial xB _{2}(\theta,s=-1)\partial^{2},\\ T_{3}(s=-1)\partial^{3}&=\partial B_{3}(\theta-1,s=-1)\partial^{2}\end{array}\] leads to \[\begin{array}{ll}H_{6}(s=-1)=\partial\circ X_{3}^{\prime}\circ\partial^{ 2},\end{array}\] where \(X_{3}^{\prime}\) is essentially equal to \(H_{3}\).
* When \(s=-2\), \[\begin{array}{ll}T_{0}(s=-2)&=\theta(\theta-1)(\theta-2)B_{0}(\theta,s=-2)=x^{3}\partial^{3}B_{0}(\theta,s=-2)\\ &=x^{3}B_{0}(\theta+3,s=-2)\partial^{3},\\ T_{1}(s=-2)\partial&=\theta(\theta-1)B_{1}(\theta,s=-2)\partial=x^{2}\partial^{2}B_{1}(\theta,s=-2)\partial\\ &=x^{2}B_{1}(\theta+2,s=-2)\partial^{3},\\ T_{2}(s=-2)\partial^{2}&=\theta B_{2}(\theta,s=-2)\partial^{2}=xB_{2}(\theta+1,s=-2)\partial^{3},\\ T_{3}(s=-2)\partial^{3}&=T_{3}(s=-2)\partial^{3}\end{array}\] leads to \[H_{6}(s=-2)=X_{3}^{\prime\prime}\circ\partial^{3},\] where \(X_{3}^{\prime\prime}\) is essentially equal to \(H_{3}\).
#### 6.2.2 Factorization when \(e_{9}\in\mathbb{Z}\), \(e_{1}+s\in\mathbb{Z}\) and \(s\in\mathbb{Z}\)
Proposition 4.21 leads to
**Proposition 6.6**.: _If \(e_{9}\in\mathbb{Z}\), then \(H_{6}\) factorizes as follows: when \(e_{9}\) is a non-positive integer, the type of factorization is [51] and, when it is a positive integer, [15] :_
\[\begin{array}{ccccccccc}e_{9}=&\cdots&-2&-1&0&1&2&3&\cdots\\ &&[51]&[51]&[51]A0&[15]A0&[15]&[15]\end{array}\]
_The notation \(A0\) means that the factors have no singularity other than \(\{0,1,\infty\}\)._
When \(e_{9}=-1\), the factors have one apparent singular point and when \(e_{9}=-2\), two apparent singular points (cf. Proposition 4.19).
By the change \(x\to 1/x\), the condition \(e_{9}\in\mathbb{Z}\) is converted to \(e_{1}+s\in\mathbb{Z}\):
**Proposition 6.7**.: _If \(e_{1}+s\in\mathbb{Z}\), \(H_{6}\) factorizes as follows:_
\[\begin{array}{ccccccccc}e_{1}+s=&\cdots&-2&-1&0&1&2&3&\cdots\\ &&[51]&[51]&[51]A0&[15]A0&[15]&[15]\end{array}\]
_When \(e_{1}+s=0,1\), the factor \([5]\) is essentially equal to \(H_{5}\)._
**Proposition 6.8**.: _If \(s\in\mathbb{Z}\), \(H_{6}\) is reducible of type \(\{3111\}\):_
\[\begin{array}{ccccccccc}s=&\cdots&-3&-2&-1&0&1&2&\cdots\\ &&[3111]&[3111]A0&[1311]A0&[1131]A0&[1113]A0&[1113]\end{array}\]
These exhaust all the reducible cases.
#### 6.2.3 Polynomial solutions
We apply Proposition 4.25 to
\[H_{6}=(\theta+s)(\theta+s+1)(\theta+s+2)(\theta+e_{7})(\theta+e_{8})(\theta+e _{9})+(T_{1}+T_{2}\partial+T_{3}\partial^{2})\partial.\]
**Proposition 6.9**.: _If one of \(e_{7}\), \(e_{8}\), \(e_{9}\) and \(s\) is a non-positive integer \(-m\), then \(H_{6}\) has a polynomial solution of degree \(\leq m\)._
Moreover, since the symmetry \(x\to 1/x\) takes \(\boldsymbol{e}_{7}\to\boldsymbol{e}_{1}+s\) (see §4.2.3), we have
**Proposition 6.10**.: _If \(e_{i}+s\)\((i=1,2,3)\) is 0 or a negative integer \(-m\), then \(H_{6}\) has a solution: a power of \(x\) times a polynomial of degree \(\leq m\)._
## 7 Equation \(G_{6}\)
\begin{tabular}{r l} \hline
**7.1** & **Definition of the equation \(G_{6}(e,a)\)** \\
**7.2** & **Proof of Theorem 7.3** \\
**7.3** & **Inverse shift operators and S-values of \(G_{6}\)** \\
**7.4** & **Adjoint and the coordinate changes \(x\to 1-x\) and \(x\to 1/x\)** \\ & 7.4.1 & Proof of Theorem 7.6 \\ \hline \end{tabular}
In this section, we define the equation \(G_{6}\) with Riemann scheme \(R_{6}\) by replacing the coefficient \(T_{10}\) of the equation \(H_{6}\) by a polynomial in the local exponents \(e\). The equation \(G_{6}\) admits shift operators for any block shifts of \(e\).
We prepare an algebraic lemma for later use.
**Lemma 7.1**.: _The ring of symmetric polynomials in \(x_{1},..,x_{n}\) invariant under the shift \(sh:(x_{1},...,x_{n})\rightarrow(x_{1}+1,...,x_{n}+1)\) is generated by \(1\) and the fundamental symmetric polynomials \(t_{i}\) of degree \(i\)\((i=2,\ldots,n)\) in_
\[y_{k}:=x_{k}-y_{0}\ (k=1,2,...,n),\]
_where \(y_{0}:=(x_{1}+x_{2}+...+x_{n})/n.\ \{1,t_{2},\ldots,t_{n}\}\) are algebraically independent._
In fact, \(y_{1},\ldots,y_{n}\) are stable under the shift \(sh\), and \(y_{0}\) changes to \(y_{0}+1\). On the other hand, permutations of \(x_{1},\ldots,x_{n}\) correspond to those of \(y_{1},\ldots,y_{n}\); \(y_{0}\) does not change.
We apply this lemma to the ring of polynomials in the variables \(x_{1}=e_{1},x_{2}=e_{2},x_{3}=e_{3}\) when \(n=3\):
**Corollary 7.2**.: _The ring of polynomials invariant under the shift \((e_{1},e_{2},e_{3})\rightarrow(e_{1}+1,e_{2}+1,e_{3}+1)\) is generated by 1, \(t_{2}\) and \(t_{3}\), where_
\[\begin{array}{ll}t_{2}&=(e_{1}-e_{0})(e_{2}-e_{0})+(e_{2}-e_{0})(e_{3}-e_{0} )+(e_{3}-e_{0})(e_{1}-e_{0}),\\ &=(-e_{1}^{2}+e_{1}e_{2}+e_{1}e_{3}-e_{2}^{2}+e_{2}e_{3}-e_{3}^{2})/3\\ &=s_{2}-s_{1}^{2}/3,\\ t_{3}&=(e_{1}-e_{0})(e_{2}-e_{0})(e_{3}-e_{0})\\ &=(2e_{1}-e_{2}-e_{3})(2e_{2}-e_{1}-e_{3})(2e_{3}-e_{1}-e_{2})/27\\ &=2s_{1}^{3}/27-s_{1}s_{2}/3+s_{3},\\ e_{0}&=(e_{1}+e_{2}+e_{3})/3,\\ s_{1}&=e_{1}+e_{2}+e_{3},\quad s_{2}=e_{1}e_{2}+e_{1}e_{3}+e_{2}e_{3},\quad s _{3}=e_{1}e_{2}e_{3}.\end{array}\]
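Corollary 7.2 is also easy to confirm computationally; a sympy sketch (ours) checking the invariance of \(t_{2}\) and \(t_{3}\) under the simultaneous shift:

```python
# t2 and t3 are invariant under (e1,e2,e3) -> (e1+1,e2+1,e3+1).
from sympy import symbols, simplify, Rational

e1, e2, e3 = symbols('e1 e2 e3')
s1, s2, s3 = e1 + e2 + e3, e1*e2 + e1*e3 + e2*e3, e1*e2*e3
t2 = s2 - s1**2/3
t3 = Rational(2, 27)*s1**3 - s1*s2/3 + s3

shift = {e1: e1 + 1, e2: e2 + 1, e3: e3 + 1}
print(simplify(t2.subs(shift, simultaneous=True) - t2))  # 0
print(simplify(t3.subs(shift, simultaneous=True) - t3))  # 0
```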
### Definition of the equation \(G_{6}(e,a)\)
For an equation \(G(e)\) with local exponents \(e\), we denote by \(G(\boldsymbol{e}_{1}\rightarrow\boldsymbol{e}_{1}-\boldsymbol{1})\) the equation with exponents \(\boldsymbol{e}_{1}\) shifted to \(\boldsymbol{e}_{1}-\boldsymbol{1}\) and so on. Now we can state the main theorem of this paper.
**Theorem 7.3**.: _Let \(G_{6}\) denote an equation \(H_{6}\) with the Riemann scheme \(R_{6}\) and with the accessory parameter \(T_{10}\) replaced by a polynomial in \(e_{1},\ldots,e_{9}\). We assume that it admits shift operators relative to the shifts of blocks \(\boldsymbol{e}_{i}\rightarrow\boldsymbol{e}_{i}\pm\boldsymbol{1}\ (i=1,4,7)\). Namely, for \(i=1\), assume that the equation_
\[G_{6}(\boldsymbol{e}_{1}\rightarrow\boldsymbol{e}_{1}+\boldsymbol{1})\circ P= Q\circ G_{6}\]
_admits a non-zero solution \((P,Q)\) and similarly for other cases. Then the term \(T_{10}\) is written as_
\[T_{10}=S_{10}+R,\]
_where_
\[\begin{array}{ll}S_{10}:=&(-5-s_{21}+s_{22}-5s_{23}+s_{31}-s_{32}-3s_{33})/2\\ &+(s_{11}-7s_{13}+s_{11}s_{13}+s_{11}s_{23}-s_{13}s_{21}+s_{13}s_{22})/3\\ &+(s_{11}^{2}-s_{12}^{2}+s_{13}^{2}-s_{11}s_{21}+s_{12}s_{22}+s_{13}s_{23})/6\\ &+(s_{11}^{2}-s_{12}^{2})s_{13}/9+(s_{11}^{3}-s_{12}^{3})/27,\end{array}\]
_and \(R\) is an element of the ring generated by 1 and_
\[t_{2i}:=s_{2i}-s_{1i}^{2}/3\quad\text{and}\quad t_{3i}:=2s_{1i}^{3}/27-s_{1i}s _{2i}/3+s_{3i},\quad i=1,2,3.\]
We could not decide whether there exist shift operators for other shifts, such as \(e_{2}\to e_{2}+3\).
**Corollary 7.4**.: _When \(T_{10}\) is a polynomial in \(e_{1},\dots,e_{9}\) of degree 3, then_
\[T_{10}=S_{10}+R,\quad R=a_{0}+a_{1}t_{21}+a_{2}t_{22}+a_{3}t_{23}+a_{4}t_{31}+ a_{5}t_{32}+a_{6}t_{33},\]
_where \(a_{0},\dots,a_{6}\) are free constants._
**Definition 7.5**.: The operator \(H_{6}\) with the cubic polynomial \(T_{10}\) as above in the corollary will be denoted as \(G_{6}(e,a)\).
### Proof of Theorem 7.3
Thanks to Theorem 6.1, we have only to solve the system for \(T_{10}(e)\):
\[T_{10}(sh_{1})-T_{10} =s_{13}+s_{23}+1,\] \[T_{10}(sh_{2})-T_{10} =0,\] \[T_{10}(sh_{3})-T_{10} =20-s_{11}^{2}/3-2s_{11}s_{13}/3+s_{12}^{2}/3-s_{13}^{2}/3\] \[-2s_{11}+7s_{13}+s_{21}-s_{22}+2s_{23}.\]
One can check that the polynomial \(S_{10}\) solves this system of three identities. The second identity, for example, says that \(T_{10}\) is a polynomial in \(t_{22}\) and \(t_{32}\) with coefficients independent of \(\{e_{4},e_{5},e_{6}\}\). Now, the difference \(R=T_{10}-S_{10}\) is a polynomial invariant under \(sh_{1}\), \(sh_{2}\) and \(sh_{3}\); therefore, we have the theorem in view of Corollary 7.2.
### Inverse shift operators and S-values of \(G_{6}\)
The shift operators
\[\begin{array}{ll}P_{+00}&=x^{3}(x-1)^{5}\partial^{5}+\cdots,\\ P_{0+0}&=x^{5}(x-1)^{3}\partial^{5}+\cdots,\\ P_{++-}&=x^{3}(x-1)^{3}\partial^{5}+\cdots\end{array}\]
for the equation \(G(e,a)\) depend linearly on the parameters \(a_{0},\dots,a_{6}\) as follows:9
Footnote 9: they are listed in G6PQ.txt in FDEdata, mentioned at the end of the Introduction.
\[P_{+00} =\overline{P}_{+00}+R(x-1)^{3}\big{(}x\partial^{2}+(s+1)\partial\big{)},\] \[P_{0+0} =\overline{P}_{0+0}+Rx^{3}\big{(}(x-1)\partial^{2}+(s+1)\partial\big{)},\] \[P_{++-} =(H_{6}-p_{0})/\partial=\overline{P}_{++-}+R\big{(}x(x-1)\partial^{2}+(s+1)(2x-1)\partial+s(s+1)\big{)},\]
where \(R=a_{0}+t_{21}a_{1}+\cdots+t_{33}a_{6}\), and \(\overline{P}_{+00}\), \(\overline{P}_{0+0}\) and \(\overline{P}_{++-}\) are operators excluding the terms with \(a_{0},\ldots,a_{6}\).
The S-values do not depend on the parameters \(a_{0},\dots,a_{6}\), and are exactly the same as those for \(H_{6}\) given in Proposition 6.3.
### Adjoint and the coordinate changes \(x\to 1-x\) and \(x\to 1/x\)
The operator \(G(e,a)\) is symmetric under adjoint and the coordinate changes interchanging \(\{0,1,\infty\}\):
**Theorem 7.6**.:
* _Adjoint symmetry: The adjoint of_ \(G_{6}(e,a)\) _is equal to_ \[G_{6}(\mathbf{2}-\boldsymbol{e}_{1},\mathbf{2}-\boldsymbol{e}_{4},\mathbf{1}- \boldsymbol{e}_{7},-a_{0},-a_{1},-a_{2},-a_{3},a_{4},a_{5},a_{6}).\]
* \((x\to 1-x)\)_-symmetry:_ \[G_{6}(\boldsymbol{e},a)|_{x\to 1-x}=G_{6}(\boldsymbol{e}_{4},\boldsymbol{e}_{1}, \boldsymbol{e}_{7},-a_{0},-a_{2},-a_{1},-a_{3},-a_{5},-a_{4},-a_{6}),\]
* \((x\to 1/x)\)_-symmetry:_ \[x^{r-3}G_{6}(\boldsymbol{e},a)|_{x\to 1/x}\circ x^{-r}=G_{6}(\boldsymbol{e}_{7}-s \mathbf{1},\boldsymbol{e}_{4},\boldsymbol{e}_{1}+s\mathbf{1},-a_{0},-a_{3},-a _{2},-a_{1},-a_{6},-a_{5},-a_{4}),\] _where_ \(G_{6}|_{x\to 1-x}\) _and_ \(G_{6}|_{x\to 1/x}\) _are_ \(G_{6}\) _after the coordinate changes_ \(x\to 1-x\) _and_ \(x\to 1/x\)_, respectively._
#### 7.4.1 Proof of Theorem 7.6
When \(T_{10}=S_{10}\), that is, \(a_{0}=\cdots=a_{6}=0\), then a straightforward computation (use \((\theta,dx)\)-form for the adjoint and the coordinate change \(x\to 1/x\), and \((x,\partial)\)-form for \(x\to 1-x\)) leads to the result. In general we have only to notice that for \(\boldsymbol{e}_{adj}=(\mathbf{2}-\boldsymbol{e}_{1},\mathbf{2}-\boldsymbol{e}_ {4},\mathbf{1}-\boldsymbol{e}_{7})\),
\[t_{2j}(\boldsymbol{e}_{adj})=t_{2j}(\boldsymbol{e}),\qquad t_{3j}(\boldsymbol{e}_{adj})=-t_{3j}(\boldsymbol{e}),\qquad j=1,2,3,\]
for \(\boldsymbol{e}_{ch01}=(\boldsymbol{e}_{4},\boldsymbol{e}_{1},\boldsymbol{e}_ {7})\),
\[t_{i1}(\boldsymbol{e}_{ch01})=t_{i2}(\boldsymbol{e}),\ t_{i2}(\boldsymbol{e}_{ ch01})=t_{i1}(\boldsymbol{e}),\ t_{i3}(\boldsymbol{e}_{ch01})=t_{i3}(\boldsymbol{e}),\ \ i=2,3,\]
for \(\boldsymbol{e}_{ch0\infty}=(\boldsymbol{e}_{7}-s\mathbf{1},\boldsymbol{e}_{4}, \boldsymbol{e}_{1}+s\mathbf{1})\),
\[t_{i1}(\boldsymbol{e}_{ch0\infty})=t_{i3}(\boldsymbol{e}),\ t_{i2}(\boldsymbol{e}_ {ch0\infty})=t_{i2}(\boldsymbol{e}),\ t_{i3}(\boldsymbol{e}_{ch0\infty})=t_{i1} (\boldsymbol{e}),\ \ i=2,3.\]
## 8 Equation \(E_{6}:=G_{6}(e,0)\)
\begin{tabular}{r l} \hline
**8.1** & **Interpolative expression of \(E_{6}\) using \(V\)** \\
**8.2** & **Explicit expression of the decomposition [1113] when \(s=2,3,\dots\)** \\ \hline \end{tabular}
**Definition 8.1**.: When \(a_{0}=\dots=a_{6}=0\), \(G_{6}(e,a)\) is called \(E_{6}(e)\).
The equation \(E_{6}(e)\) is symmetric in the sense that the following properties hold.
**Theorem 8.2**.:
* _Shift relations:_ \[E_{6}(\boldsymbol{e}_{1}\pm\boldsymbol{1},\boldsymbol{e}_{4},\boldsymbol{e}_ {7})\circ P_{\pm 00}=Q_{\pm 00}\circ E_{6}(\boldsymbol{e}),\qquad E_{6}( \boldsymbol{e}_{1},\boldsymbol{e}_{4}\pm\boldsymbol{1},\boldsymbol{e}_{7}) \circ P_{0\pm 0}=Q_{0\pm 0}\circ E_{6}(\boldsymbol{e}),\] \[E_{6}(\boldsymbol{e}_{1}\pm\boldsymbol{1},\boldsymbol{e}_{4}\pm \boldsymbol{1},\boldsymbol{e}_{7}\mp\boldsymbol{1})\circ P_{\pm\pm\mp}=Q_{\pm \pm\mp}\circ E_{6}(\boldsymbol{e}).\]
* _Differentiation symmetry:_ \[\partial E_{6}(\boldsymbol{e})=E_{6}(\boldsymbol{e}_{1}-\boldsymbol{1}, \boldsymbol{e}_{4}-\boldsymbol{1},\boldsymbol{e}_{7}+\boldsymbol{1})\partial,\]
* _Adjoint symmetry: The adjoint of_ \(E_{6}(e)\) _is equal to_ \[E_{6}(\boldsymbol{2}-\boldsymbol{e}_{1},\boldsymbol{2}-\boldsymbol{e}_{4}, \boldsymbol{1}-\boldsymbol{e}_{7}).\]
* \((x\to 1-x)\)_-symmetry:_ \[E_{6}(\boldsymbol{e})|_{x\to 1-x}=E_{6}(\boldsymbol{e}_{4},\boldsymbol{e}_{1}, \boldsymbol{e}_{7}),\]
* \((x\to 1/x)\)_-symmetry:_ \[x^{-s-3}E_{6}(\boldsymbol{e})|_{x\to 1/x}\circ x^{s}=E_{6}(\boldsymbol{e}_{7}-s\boldsymbol{1},\boldsymbol{e}_{4},\boldsymbol{e}_{1}+s\boldsymbol{1}),\] _where_ \(E_{6}|_{x\to 1-x}\) _and_ \(E_{6}|_{x\to 1/x}\) _are_ \(E_{6}\) _after the coordinate changes_ \(x\to 1-x\) _and_ \(x\to 1/x\)_, respectively._
Since we have adjoint symmetry as in the theorem, Proposition 4.4.4 is applicable to know the second members of shift operators \((P,Q)\).
### Interpolative expression of \(E_{6}\) using \(V\)
Let \(V:=\partial^{3}\backslash E_{6}(e_{9}=3-e_{1}-\dots-e_{8})\), that is, \(E_{6}(e_{9}=3-e_{1}-\dots-e_{8})=\partial^{3}\circ V\), as in §4.7.2. Put
\[V_{1}=V,\ V_{0}=V(e^{\prime}),\ V_{-1}=V_{0}(e^{\prime}),\ V_{-2}=V_{-1}(e^{ \prime}),\]
where \(e^{\prime}=(e_{1}-1,\dots,e_{6}-1,e_{7}+1,e_{8}+1)\), and
\[U:=\frac{(s-1)s(s+1)(s+2)}{6}\left\{\frac{\partial^{3}\circ V_{1}}{s-1}-3\frac {\partial^{2}\circ V_{0}\circ\partial}{s}+3\frac{\partial\circ V_{-1}\circ \partial^{2}}{s+1}-\frac{V_{-2}\circ\partial^{3}}{s+2}\right\},\]
where \(s=2-(e_{1}+\dots+e_{8}+e_{9})/3.\) Then, by a straightforward computation, we have an interpolative expression of \(E_{6}\) by use of \(V\):
**Proposition 8.3**.: \[E_{6}-U=-3(s-1)s(s+1)(s+2)\left\{\left(x^{2}-x+\frac{1}{3}\right)\partial^{2}+ \left(x-\frac{1}{2}\right)(e_{7}+e_{8}+1)\partial+e_{7}e_{8}\right\}.\]
This expression makes the decomposition of \(E_{6}\) described in Proposition 6.8 clear.
### Explicit expression of the decomposition [1113] when \(s=2,3,\dots\)
By Proposition 6.8, when \(s=1,2,3,\dots\), the equation \(H_{6}\) is reducible of type [1113]. In this section, for \(E_{6}\), we find explicit expressions of the factors of the decomposition [1113] when \(s=2\), \(3\), \(\dots\). Recall (§4.7.2) \(E_{6}(s=1)=\partial^{3}\circ V\), where
\[V=x^{3}B_{0}(\theta)+x^{2}B_{1}(\theta+1)+\cdots,\quad B_{0}(\theta)=(\theta+ e_{7})(\theta+e_{8})(\theta+e_{9}).\]
Assume \(e_{7}\), \(e_{8}\), \(e_{9}\notin\mathbb{Z}\), that is, \(B_{0}(\theta=k)\neq 0\) (\(k\in\mathbb{Z}\)). Recall the shift relation \(E_{6}(e^{\prime})\circ\partial=\partial\circ E_{6}(e)\) of the differentiation symmetry, in particular
\[E_{6}(s=n+1)\circ\partial=\partial\circ E_{6}(s=n),\]
and set
\[E^{(n)}=E_{6}(s=n+1),\quad n=0,1,\dots\]
They satisfy
\[E^{(0)}:=\partial^{3}\circ V,\quad E^{(n)}\circ\partial^{n}=\partial^{n}\circ E ^{(0)},\quad i.e.,\quad E^{(n)}:=(\partial^{n}\circ E^{(0)})/\partial^{n}.\]
**Lemma 8.4**.: \(E^{(n)}(1)\) _is a non-zero constant._
Proof.: The identity
\[E^{(n)}(1) = E^{(n)}\partial^{n}(\tfrac{1}{n!}x^{n})=\partial^{n}E^{(0)}( \tfrac{1}{n!}x^{n})=\partial^{n}\partial^{3}V(\tfrac{1}{n!}x^{n})\] \[= \partial^{n+3}(\tfrac{1}{n!}B_{0}(\theta=n)x^{n+3}+\cdots)=((n+3)!/n!)B_{0}(\theta=n),\]
asserts the claim.
**Lemma 8.5**.: _Let \(Q_{1}\), \(Q_{2}\) be non-zero differential operators with rational function coefficients. Put \(f:=Q_{2}(1)\) a non-zero rational function. Assume that \(Q_{1}Q_{2}(1)\) is a non-zero constant. Then there exist differential operators \(\tilde{Q}_{1},\tilde{Q}_{2}\) such that_
\[\tilde{Q}_{1}\circ\partial =\partial\circ Q_{1}\circ f,\] \[\tilde{Q}_{2}\circ\partial =\partial\circ\frac{1}{f}\circ Q_{2},\] \[\tilde{Q}_{1}\circ\tilde{Q}_{2}\circ\partial =\partial\circ Q_{1}\circ Q_{2}.\]
Proof.: Since \(\partial(Q_{1}(f))=\partial(Q_{1}Q_{2}(1))=0\) and \(\partial(\tfrac{1}{f}Q_{2}(1))=\partial(1)=0\), the right-hand sides of the first two formulae above are divisible from the right by \(\partial\). The last equation is obtained by combining the first two.
We start by putting
\[Q_{1}^{(0)}:=\partial^{3},\quad Q_{2}^{(0)}:=V=x^{3}(x-1)^{3}\partial^{3}+\cdots;\]
they satisfy \(E^{(0)}=Q_{1}^{(0)}\circ Q_{2}^{(0)}\). Apply Lemma 8.5 to
\[f=f^{(n)}:=Q_{2}^{(n)}(1),\quad Q_{1}=Q_{1}^{(n)},\quad Q_{2}=Q_{2}^{(n)}, \quad Q_{1}\circ Q_{2}=E^{(n)}\]
to define \(Q_{1}^{(n+1)}\) and \(Q_{2}^{(n+1)}\) inductively:
\[\begin{split} Q_{1}^{(n+1)}\circ\partial&=\partial \circ Q_{1}^{(n)}\circ f^{(n)},\\ Q_{2}^{(n+1)}\circ\partial&=\partial\circ\frac{1}{f^ {(n)}}\circ Q_{2}^{(n)},\\ Q_{1}^{(n+1)}\circ Q_{2}^{(n+1)}\circ\partial&= \partial\circ Q_{1}^{(n)}\circ Q_{2}^{(n)}.\end{split} \tag{8.1}\]
Note that \(Q_{1}^{(n)}\circ Q_{2}^{(n)}=E^{(n)}\), \(Q_{1}^{(n+1)}\circ Q_{2}^{(n+1)}=E^{(n+1)}\), and that \(f^{(n)}\) is a non-zero rational function by Lemma 8.4. Note also
\[Q_{1}^{(1)}=f^{(0)}\partial^{3}+\cdots,\quad\cdots,\quad Q_{1}^{(n)}=f^{(0)} \cdots f^{(n-1)}\partial^{3}+\cdots,\]
\[Q_{2}^{(1)}=\frac{x^{3}(x-1)^{3}}{f^{(0)}}\partial^{3}+\cdots,\quad\cdots, \quad Q_{2}^{(n)}=\frac{x^{3}(x-1)^{3}}{f^{(0)}\cdots f^{(n-1)}}\partial^{3}+\cdots.\]
We define the differential operator \(P^{(n)}\) of order \(n\) inductively by
\[P^{(n)}:=\partial\circ\frac{1}{f^{(n-1)}}P^{(n-1)}=\partial\circ\frac{1}{f^{( n-1)}}\circ\partial\circ\frac{1}{f^{(n-2)}}\circ\cdots\circ\partial\circ \frac{1}{f^{(1)}}\circ\partial\circ\frac{1}{f^{(0)}}.\]
Then by definition, we have the following lemma:
**Lemma 8.6**.:
1. \(Q_{1}^{(n)}\circ P^{(n)}=\partial^{n+3}\)_._
2. \(\mathrm{Ker}\ P^{(n)}\) _is a subspace of_ \(\langle 1,x,\ldots,x^{n+2}\rangle\) _of dimension_ \(n\)_._
3. _The solution space of_ \(Q_{1}^{(n)}\) _is a_ \(3\)_-dimensional subspace of_ \(\mathbb{C}(x)\)_._
Proof.: (1) We use \(P^{(n+1)}=\partial\circ\frac{1}{f^{(n)}}\circ P^{(n)}\), then
\[\begin{split} Q_{1}^{(n+1)}\circ P^{(n+1)}&=Q_{1}^ {(n+1)}\circ\partial\circ\frac{1}{f^{(n)}}\circ P^{(n)}\\ &=\partial\circ Q_{1}^{(n)}\circ f^{(n)}\circ\frac{1}{f^{(n)}} \circ P^{(n)}=\partial\circ Q_{1}^{(n)}\circ P^{(n)}.\end{split}\]
(2) \(\mathrm{Ker}\ \partial^{n+3}=\langle 1,x,\ldots,x^{n+2}\rangle\).
We prepare another lemma:
**Lemma 8.7**.: _Let \(Q\) be a differential operator over \(\mathbb{C}(x)\) of order three whose leading term is \(\partial^{3}\), such that the solution space is a \(3\)-dimensional vector space in \(\mathbb{C}(x)\)._
1. _For linearly independent solutions_ \(h_{1},h_{2},h_{3}\in\mathbb{C}(x)\)_, set_ \[\begin{split} L_{3}&:=\partial-f_{3},\quad f_{3}=h ^{\prime}_{3}/h_{3},\quad\mathrm{ put}\quad g_{2}:=L_{3}(h_{2}),\\ L_{2}&:=\partial-f_{2},\quad f_{2}=g^{\prime}_{2}/g_ {2},\quad\mathrm{ put}\quad g_{1}:=L_{2}\circ L_{3}(h_{1}),\\ L_{1}&:=\partial-f_{1},\quad f_{1}=g^{\prime}_{1}/g_ {1}.\end{split}\] _Then we have_ \[Q=L_{1}\circ L_{2}\circ L_{3}.\]
2. _Conversely, if_ \(Q\) _has an expression_ \(L_{1}\circ L_{2}\circ L_{3}\) _such as_ \[L_{i}=\partial-f_{i}(x),\quad f_{i}(x)\in\mathbb{C}(x)\quad(i=1,2,3),\] _then_ \[f_{3}=h^{\prime}_{3}/h_{3},\quad f_{2}=g^{\prime}_{2}/g_{2},\ g_{2}=L_{3}(h_{2}),\quad f_{1}=g^{\prime}_{1}/g_{1},\ g_{1}=L_{2}\circ L_{3}(h_{1})\] _for some solutions_ \(h_{j}\) _(_\(j=3,2,1\)_)._
Proof.:
1. It is easy to see that \(h_{3},h_{2}\) and \(h_{1}\) solve \(L_{1}\circ L_{2}\circ L_{3}\).
2. Set \[\begin{array}{ll}W_{3}&=\{u\in\mathbb{C}(x)\mid L_{1}L_{2}L_{3}u=0\},\\ W_{2}&:=\{u\in\mathbb{C}(x)\mid L_{2}L_{3}u=0\},\\ W_{1}&:=\{u\in\mathbb{C}(x)\mid L_{3}u=0\}.\end{array}\] Then \(W_{1}\subset W_{2}\subset W_{3}\) and \(\dim W_{i}=i\) for \(i=1,2,3\). We take \(h_{3},h_{2},h_{1}\) so that \[\langle h_{3}\rangle=W_{1},\quad\langle h_{2},h_{3}\rangle=W_{2},\quad\langle h _{1},h_{2},h_{3}\rangle=W_{3}.\]
Apply these lemmas to
\[Q=\frac{1}{f^{(0)}\cdots f^{(n-1)}}Q_{1}^{(n)},\]
and we have the conclusion.
**Proposition 8.8**.: _Define \(f^{(n)}\), \(Q_{1}^{(n)}\) and \(Q_{2}^{(n)}\) by (8.1). Then \(E_{6}(s=n+1)\)\((n=1,2,\dots)\) factors as \(Q_{1}^{(n)}\circ Q_{2}^{(n)}\). For a basis \(\{h_{1},h_{2},h_{3}\}\) of the solution space of \(Q_{1}^{(n)}\), define the first-order operators \(\{L_{1},L_{2},L_{3}\}\) as in Lemma 8.7. Then_
\[Q_{1}^{(n)}=f^{(0)}\cdots f^{(n-1)}L_{1}\circ L_{2}\circ L_{3}.\]
_Though these three operators \(L_{1},L_{2}\) and \(L_{3}\) are not uniquely determined, they are controlled by Lemma 8.7._
_Remark 8.9_.: The three operators \(L_{1},L_{2}\) and \(L_{3}\) have apparent singularities not only at the roots and the poles of \(f^{(0)}\cdots f^{(n-1)}\) but also at the points depending on the choice of the basis \(\{h_{1},h_{2},h_{3}\}\).
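The recipe of Lemma 8.7 is straightforward to run on a toy example. The following sympy sketch (ours, with illustrative rational solutions \(h_{1}=x^{3}+1\), \(h_{2}=x^{2}\), \(h_{3}=x\) that are not taken from the text) builds \(L_{1},L_{2},L_{3}\) and checks that \(L_{1}\circ L_{2}\circ L_{3}\) annihilates all three:

```python
# Toy instance of Lemma 8.7: factor a third-order operator with the given
# rational solutions into first-order factors L1 o L2 o L3.
from sympy import symbols, simplify

x = symbols('x')
h1, h2, h3 = x**3 + 1, x**2, x

f3 = simplify(h3.diff(x) / h3)          # f3 = h3'/h3
L3 = lambda u: u.diff(x) - f3*u
g2 = simplify(L3(h2))                   # g2 = L3(h2)
f2 = simplify(g2.diff(x) / g2)          # f2 = g2'/g2
L2 = lambda u: u.diff(x) - f2*u
g1 = simplify(L2(L3(h1)))               # g1 = L2 o L3 (h1)
f1 = simplify(g1.diff(x) / g1)          # f1 = g1'/g1
L1 = lambda u: u.diff(x) - f1*u

Q = lambda u: L1(L2(L3(u)))
print([simplify(Q(h)) for h in (h1, h2, h3)])  # [0, 0, 0]
```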
## 9 Shift operators of \(H_{5}\)
\begin{tabular}{r l} \hline
**9.1** & **Shift operators of \(H_{5}\), S-values and reducibility conditions** \\
**9.2** & **Reducible cases of \(H_{5}\)** \\
**9.3** & **Shift operators of \(H_{5}\)** \\ \hline \end{tabular}
We find shift operators and reducibility conditions for \(H_{5}\). Recall
\[H_{5}=H_{5}(e_{1},\dots,e_{8}):=H_{6}(e_{9}=0)/\partial=x\overline{T}_{0}+ \overline{T}_{1}+\overline{T}_{2}\partial+\overline{T}_{3}\partial^{2}\]
where
\[\begin{array}{llll}\overline{T}_{0}&=&(\theta-r+1)(\theta-r+2)(\theta-r+3)(\theta+e_{7}+1)(\theta+e_{8}+1),\\ \overline{T}_{1}&=&(\theta-r+1)(\theta-r+2)B_{51},\quad B_{51}:=B_{1}(e_{9}=0),\\ \overline{T}_{2}&=&(\theta-r+2)B_{52},\quad B_{52}:=B_{2}(e_{9}=0),\\ \overline{T}_{3}&=&-(\theta+3-e_{1})(\theta+3-e_{2})(\theta+3-e_{3}).\end{array}\]
Its Riemann scheme is
\[\left(\begin{array}{ccccc}0&1&e_{1}-1&e_{2}-1&e_{3}-1\\ 0&1&e_{4}-1&e_{5}-1&e_{6}-1\\ 1-r&2-r&3-r&e_{7}+1&e_{8}+1\end{array}\right),\qquad r=-s=(e_{1}+\cdots+e_{8} -6)/3.\]
This equation has \((x\to 1-x)\)-symmetry and adjoint symmetry, but has no \((x\to 1/x)\)-symmetry nor differentiation symmetry, as summarized in §2.1 and §2.2.
### Shift operators of \(H_{5}\), S-values and reducibility conditions
**Theorem 9.1**.: _Equation \(H_{5}\) has shift operators relative to the shifts of blocks \(\{e_{1},e_{2},e_{3}\}\) and \(\{e_{4},e_{5},e_{6}\}\). Explicit forms are tabulated in §9.3._
Notation: \(P_{\pm 0}\) denotes the shift operator of \(H_{5}\) for the shift \(\boldsymbol{e}_{1}\pm\boldsymbol{1}\), and \(P_{0\pm}\) for \(\boldsymbol{e}_{4}\pm\boldsymbol{1}\).
**Proposition 9.2**.: _The S-values for the shifts of blocks:_
\[\begin{array}{ll}Sv_{-0}&=P_{+0}(\boldsymbol{e}_{1}-1)\circ P_{-0}=(r-1)(r- 2)(e_{4}-r)(e_{5}-r)(e_{6}-r),\\ Sv_{0-}&=P_{0+}(\boldsymbol{e}_{4}-1)\circ P_{0-}=-(r-1)(r-2)(e_{1}-r)(e_{2}-r)(e _{3}-r).\end{array}\]
**Theorem 9.3**.: _If one of \(r,e_{1}-r,\ldots,e_{6}-r\) is an integer, then the equation \(H_{5}\) is reducible._
Proof of Theorem 9.1: Let \(sh\) be a shift of blocks \(\boldsymbol{e}_{i}\to\boldsymbol{e}_{i}\pm\mathbf{1}\)\((i=1,4)\), and \(H_{6sh}\) be \(H_{6}\) with shift \(sh\). We have the shift relation
\[H_{6sh}\circ P=Q\circ H_{6}.\]
Let us see what happens if we put \(e_{9}=0\) in this relation. We have
\[H_{6}(e_{9}=0)=H_{5}\circ\partial\quad\text{and}\quad H_{6sh}(e_{9}=0)=H_{5sh }\circ\partial,\]
hence
\[H_{5sh}\circ\partial\circ P=Q\circ H_{5}\circ\partial.\]
Define \(P_{1}\) by
\[\partial\circ P=P_{1}\circ\partial,\]
then we get
\[H_{5sh}\circ P_{1}=Q\circ H_{5}.\]
Divide \(P_{1}\) by \(H_{5}\) on the right:
\[P_{1}=A\circ H_{5}+P_{2},\quad\text{deg }(P_{2})<5=\text{deg }(H_{5}),\]
and we have the shift relation
\[H_{5sh}\circ P_{2}=(Q-H_{5sh}\circ A)\circ H_{5}.\qed\]
**Example 9.4**.: \(sh:\boldsymbol{e}_{1}\to\boldsymbol{e}_{1}+1,\ P=P_{+0}\)_._
In this case, \(H_{5sh}=H_{5}(\boldsymbol{e}_{1}+1)\) and we have \(\partial\circ P_{+00}(e_{9}=0)=P_{1}\circ\partial\) for some \(P_{1}\). Let
\[P_{1}=A\circ H_{5}+P_{2}\quad\text{and}\quad Q_{2}=Q_{+00}(e_{9}=0)-H_{5sh} \circ A.\]
Then, we have the shift relation \(H_{5sh}\circ P_{2}=Q_{2}\circ H_{5}\), where \(P_{2}=x^{3}(x-1)^{4}(r+1)\partial^{4}+\cdots\) and \(Q_{2}\) is similar. Hence, \(P_{2}=P_{+0}\) and \(Q_{2}=Q_{+0}\) are obtained as listed in §9.3.
**Example 9.5**.: \(sh:\boldsymbol{e}_{1}\to\boldsymbol{e}_{1}-1,\ P=P_{-0}\)_._
In this case, for \(H_{6}\),
\[P_{-00}=(x-1)\partial-r,\quad Q_{-00}=(x-1)\partial+3-r,\]
and \(H_{5sh}=H_{5}(\boldsymbol{e}_{1}-1)\). Defining \(P_{2}\) and \(Q_{2}\) as above, we have the shift relation \(H_{5sh}\circ P_{2}=Q_{2}\circ H_{5}\), where
\[P_{2}=P_{-0}:=(x-1)\partial+1-r,\quad Q_{2}=Q_{-0}:=(x-1)\partial+3-r.\]
For the shifts \(\boldsymbol{e}_{4}\to\boldsymbol{e}_{4}\pm 1\), we have similar results. Refer to §9.3.
_Remark 9.6_.: The shift relations of \(H_{6}\) whose shifts include \(e_{9}\) produce no new relations for \(H_{5}\).
### Reducible cases of \(H_{5}\)
When \(H_{5}\) is reducible as in Theorem 9.3, the equation \(H_{5}\) factorizes and \(H_{4}\) and \(H_{3}\) appear as factors:
1. When \(e_{1}-r=1\), _i.e.,_\(e_{1}=(e_{2}+\cdots+e_{8}-3)/2\), we find that \(H_{5}\) factors of type [1,4], and the factor [4] has Riemann scheme as \[\left(\begin{array}{cccc}x=0:&0&1&e_{2}-1&e_{3}-1\\ x=1:&0&e_{4}-1&e_{5}-1&e_{6}-1\\ x=\infty:&e_{7}+1&e_{8}+1&7/2-e_{28}/2&9/2-e_{28}/2\end{array}\right),\quad e_{2 8}=e_{2}+\cdots+e_{8}.\] After exchanging \(x=1\) and \(x=\infty\), we multiply \((x-1)^{7/2-e_{28}/2}\) from the right. Renaming the exponents as \[0,\ 1,\ \epsilon_{1},\ \epsilon_{2};\quad 0,\ 1,\ \epsilon_{3},\ \epsilon_{4}; \quad s,\ \epsilon_{5},\ \epsilon_{6},\ \epsilon_{7},\] we can check that this coincides with \(H_{4}(\epsilon)\), which is defined in Section 10, and has \(7\)\((=8-1)\) independent parameters.
2. When \(r=2\), \(H_{5}\) factors as \([3,1,1]\). The factor \([1,1]\) is just \(\partial^{2}\) and the Riemann scheme of \(x^{-e_{3}-2}(x-1)^{-e_{6}-2}\circ[3]\circ x^{e_{3}-3}(x-1)^{e_{6}-3}\) is \[\left(\begin{array}{cccc}x=0:&0&e_{1}-e_{3}&e_{2}-e_{3}\\ x=1:&0&e_{4}-e_{6}&e_{5}-e_{6}\\ x=\infty:&e_{3}+e_{6}-3&e_{3}+e_{6}+e_{7}-3&9-e_{1}-e_{2}-e_{4}-e_{5}-e_{7}\end{array} \right).\] Renaming these exponents as \[0,\ \epsilon_{1},\ \epsilon_{2};\quad 0,\ \epsilon_{3},\ \epsilon_{4};\quad s,\ \epsilon_{5},\ \epsilon_{6},\] we can check that this coincides with \(H_{3}(\epsilon)\), which already appeared as a factor of \(H_{6}\) (SS6.2.1), and is defined in Section 12. This has \(6\)\((=7-1)\) independent parameters.
Summing up, we have the following proposition.
**Proposition 9.7**.: \(1)\) _For \(i=1,\ldots,6\),_
\[\begin{array}{cccccc}e_{i}-r=&\cdots&-1&0&1&2&\cdots\\ &\cdots&[4,1]&[4,1]A0&[1,4]A0&[1,4]&\cdots\end{array}\]
_When \(e_{i}+s=0,1\), the factor \([4]\) is essentially \(H_{4}\)._
2. \[\begin{array}{cccccc}r=&\cdots&-1&0&1&2&3&\cdots\\ &\cdots&[1,1,3]&[1,1,3]A0&[1,3,1]A0&[3,1,1]A0&[3,1,1]&\cdots\end{array}\]
_When \(r=0,1,2\), the factor \([3]\) is essentially \(H_{3}\)._
### Shift operators of \(H_{5}\)
Important convention: _For a polynomial \(U\) of \(\theta\), we denote by \(U[k]\) the polynomial \(U(\theta=\theta+k)\); say, \(U[-2]\) for \(U(\theta=\theta-2)\). For a polynomial \(B\) depending on parameters, \(B_{s}\) denotes the polynomial \(B\) with shifted parameters in question._
\[[-0]\quad(\mathbf{e}_{1}-{\bf 1}=[e_{1}-1,e_{2}-1,e_{3}-1,r-1]) \tag{9.3.1}\]
\[P_{-0}=(x-1)\partial+1-r,\qquad Q_{-0}=(x-1)\partial+3-r.\]
\[[+0]\quad(\mathbf{e}_{1}+{\bf 1}=[e_{1}+1,e_{2}+1,e_{3}+1,r+1])\]
\[P_{+0} = x^{3}P_{nnn}+x^{2}P_{nn}+xP_{n}+P_{0}+P_{1}\partial,\] \[Q_{+0} = x^{3}Q_{nnn}+x^{2}Q_{nn}+xQ_{n}+Q_{0}+Q_{1}\partial,\]
\[\begin{array}{lll}P_{nnn}&=&(\theta-r+1)(\theta-r+2)(\theta+e_{7}+1)(\theta+e_{8}+1),\\ P_{nn}&=&-(\theta-2r+3)(\theta-r+1)(\theta+e_{7}+1)(\theta+e_{8}+1)+(\theta+1-r)B_{51},\\ P_{n}&=&r(r-1)(\theta+e_{7}+1)(\theta+e_{8}+1)-(\theta-2r+2)B_{51}+\theta B_{52}[-1],\\ P_{0}&=&-(\theta+r-1)(\theta+1-e_{1})(\theta+1-e_{2})(\theta+1-e_{3})-(\theta-r+1)B_{52}[-1],\\ P_{1}&=&(\theta+2-e_{1})(\theta+2-e_{2})(\theta+2-e_{3}),\\ Q_{nnn}&=&(\theta-r+3)(\theta-r+4)(\theta+e_{7}+3)(\theta+e_{8}+3),\\ Q_{nn}&=&-(\theta-2r+2)(\theta-r+3)(\theta+e_{7}+2)(\theta+e_{8}+2)+(\theta-r+3)B_{51s}[2],\\ Q_{n}&=&r(r-1)(\theta+e_{7}+1)(\theta+e_{8}+1)-(\theta-2r+2)B_{51s}[1]+(\theta+3)B_{52s}[1],\\ Q_{0}&=&-(\theta+r+1)(\theta+2-e_{1})(\theta+2-e_{2})(\theta+2-e_{3})-(\theta-r+1)B_{52s},\\ Q_{1}&=&(\theta+2-e_{1})(\theta+2-e_{2})(\theta+2-e_{3}),\end{array}\]

\[B_{51s}:=B_{51}(\boldsymbol{e}_{1}+\boldsymbol{1}),\quad B_{52s}:=B_{52}(\boldsymbol{e}_{1}+\boldsymbol{1}).\]
\[[0-]\quad(\mathbf{e}_{4}-{\bf 1}=[e_{4}-1,e_{5}-1,e_{6}-1,r-1]) \tag{9.3.2}\]
\[P_{0-}=x\partial+1-r,\qquad Q_{0-}=x\partial+3-r.\]
\[[0+]\quad(\mathbf{e}_{4}+{\bf 1}=[e_{4}+1,e_{5}+1,e_{6}+1,r+1])\]
\[\begin{array}{lll}P_{0+}&=&x^{3}P_{nnn}+x^{2}P_{nn}+xP_{n}+P_{0}+P_{1} \partial,\\ Q_{0+}&=&x^{3}Q_{nnn}+x^{2}Q_{nn}+xQ_{n}+Q_{0}+Q_{1}\partial,\end{array}\]
\[\begin{array}{lll}P_{nnn}&=&(\theta-r+1)(\theta-r+2)(\theta+e_{7}+1)(\theta+e_{8}+1),\\ P_{nn}&=&(\theta-r+1)B_{51},\\ P_{n}&=&\theta B_{52}[-1],\\ P_{0}&=&(\mbox{see below}),\\ Q_{nnn}&=&(\theta-r+3)(\theta-r+4)(\theta+e_{7}+3)(\theta+e_{8}+3),\\ Q_{nn}&=&(\theta-r+3)B_{51s}[2],\\ Q_{n}&=&(\theta+2)B_{52s}[1],\\ Q_{0}&=&P_{0}[2],\end{array}\]

\[B_{51s}:=B_{51}(\boldsymbol{e}_{4}+\boldsymbol{1}),\quad B_{52s}:=B_{52}(\boldsymbol{e}_{4}+\boldsymbol{1}).\]
\[P_{0} = -\theta^{4}-(r+2-e_{1}-e_{2}-e_{3})\theta^{3}-(r^{2}+(2-e_{1}-e_{2}- e_{3})r-e_{1}\] \[\qquad-e_{2}-e_{3}+e_{1}e_{2}+e_{1}e_{3}+e_{2}e_{3})\theta^{2}-(r ^{3}+(2-e_{1}-e_{2}-e_{3})r^{2}\] \[\qquad-(e_{1}+e_{2}+e_{3}-e_{1}e_{2}-e_{1}e_{3}-e_{2}e_{3})r-2+e_{1}\] \[\qquad+e_{2}+e_{3}-e_{1}e_{2}e_{3})\theta-(r-1)(r-e_{1}+1)(r+1-e_ {2})(r+1-e_{3}).\]
## 10 Shift operators of \(H_{4}\)
**10.1 A shift operator of \(H_{4}\)****10.2 Reducible cases of \(H_{4}\)****10.1 Reducible cases of \(H_{4}\)****10.2 Reducible cases of \(H_{4}\)****10.1 A shift operator of \(H_{4}\)****10.2 Reducible cases of \(H_{4}\)****10.3 Reducible cases of \(H_{4}\)****10.1 A shift operator of \(H_{4}\)****10.2 Reducible cases of \(H_{4}\)****10.3 Reducible cases of \(H_{4}\)****10.3 Reducible cases of \(H_{4}\)****10.
Proof.: When \(e_{7}=1\), \(H_{4}\) factors as \([\partial,F_{1}]\). The local exponents of \(F_{1}=x^{2}(x-1)^{2}\partial^{3}+\cdots\) are
\[[0,e_{1},e_{2}],\ [0,e_{3},e_{4}],\ [e_{5},e_{6},3-e_{1}-\cdots-e_{6}].\]
\(F_{1}\) coincides with \(H_{3}\) without modification.
When \(e_{7}=0\), \(H_{4}\) factors as \([F_{0},\partial]\). The local exponents of \(F_{0}=x^{2}(x-1)^{2}\partial^{3}+\cdots\) are
\[[0,e_{1}-1,e_{2}-1],\ [0,e_{3}-1,e_{4}-1],\ [e_{5}+1,e_{6}+1,5-e_{1}-\cdots-e_{6}],\]
and \(F_{0}=H_{3}(e_{1}-1,\ldots,e_{4}-1,e_{5}+1,e_{6}+1)\). |
2305.01521 | Unlocking the Power of Representations in Long-term Novelty-based
Exploration | We introduce Robust Exploration via Clustering-based Online Density
Estimation (RECODE), a non-parametric method for novelty-based exploration that
estimates visitation counts for clusters of states based on their similarity in
a chosen embedding space. By adapting classical clustering to the nonstationary
setting of Deep RL, RECODE can efficiently track state visitation counts over
thousands of episodes. We further propose a novel generalization of the inverse
dynamics loss, which leverages masked transformer architectures for multi-step
prediction; which in conjunction with RECODE achieves a new state-of-the-art in
a suite of challenging 3D-exploration tasks in DM-Hard-8. RECODE also sets new
state-of-the-art in hard exploration Atari games, and is the first agent to
reach the end screen in "Pitfall!". | Alaa Saade, Steven Kapturowski, Daniele Calandriello, Charles Blundell, Pablo Sprechmann, Leopoldo Sarra, Oliver Groth, Michal Valko, Bilal Piot | 2023-05-02T15:29:40Z | http://arxiv.org/abs/2305.01521v1 | # Unlocking the Power of Representations
###### Abstract
We introduce _Robust Exploration via Clustering-based Online Density Estimation_ (RECODE), a non-parametric method for novelty-based exploration that estimates visitation counts for clusters of states based on their similarity in a chosen embedding space. By adapting classical clustering to the nonstationary setting of Deep RL, RECODE can efficiently track state visitation counts over thousands of episodes. We further propose a novel generalization of the inverse dynamics loss, which leverages masked transformer architectures for multi-step prediction; which in conjunction with RECODE achieves a new state-of-the-art in a suite of challenging 3D-exploration tasks in DM-HARD-8. RECODE also sets new state-of-the-art in hard exploration Atari games, and is the first agent to reach the end screen in _Pitfall!_
Machine Learning, ICML
## 1 Introduction
Exploration mechanisms are a key component of reinforcement learning (RL, Sutton & Barto, 2018) agents, especially in sparse-reward tasks where long sequences of actions need to be executed before collecting a reward. The exploration problem has been studied theoretically (Kearns & Singh, 2002; Azar et al., 2017; Brafman & Tennenholtz, 2003; Auer et al., 2002; Agrawal & Goyal, 2012; Audibert et al., 2010; Jin et al., 2020) in the context of bandits (Lattimore & Szepesvari, 2020) and Markov Decision Processes (MDPs, Puterman, 1990; Jaksch et al., 2010). One simple yet theoretically-sound approach for efficient exploration in MDPs is to use a decreasing function of the visitation counts as an exploration bonus (Strehl & Littman, 2008; Azar et al., 2017). However, this approach becomes intractable for large or continuous state spaces, where the agent is unlikely to visit the exact same state multiple times, and some form of meaningful generalization over states is necessary. Several approximations and proxies for visitation counts and densities have been proposed to make this form of exploration applicable to complex environments. Two partially successful approaches in deep RL are: the parametric approach, which uses neural networks to estimate visitation densities directly, and the non-parametric approach, which leverages a memory of visited states to guide exploration.
Parametric methods either explicitly estimate the visitation counts using density models (Bellemare et al., 2016; Ostrovski et al., 2017) or use proxies for visitation such as the prediction error of a dynamics model (Pathak et al., 2017; Guo et al., 2022) or of predicting features of the current observation, e.g., features given by a fixed randomly initialized neural network as in RND (Burda et al., 2019). While this family of methods provides strong baselines for exploration in many settings (Burda et al., 2018), they are prone to common problems of deep learning in continual learning scenarios, especially slow adaptation and catastrophic forgetting. Parametric models trained via gradient descent are unsuitable for rapid adaptation (e.g., within a single episode) because they require updates to the state representation before the exploration bonus can catch up. Additionally, catastrophic forgetting makes parametric methods susceptible to the so-called 'detachment' problem in which the algorithm loses track of promising areas to explore Ostrovski et al. (2017). Non-parametric methods rely on a memory to store encountered states Savinov et al. (2018); Badia et al. (2020). This facilitates responsiveness to the most recent experience as well as preserving memories without interference. However, due to computational constraints, it is necessary to limit the memory size which, in turn, requires a selection or aggregation mechanism for states.

Figure 1: A key result of RECODE is that it allows us to leverage more powerful state representations for long-term novelty estimation; enabling us to achieve a new state-of-the-art in the challenging 3D task suite DM-HARD-8.
To obtain the best of both worlds, Never Give Up (NGU, Badia et al., 2020) combines a short-term novelty signal based on an episodic memory and a long-term novelty signal based on RND into a single intrinsic reward. However, the need to estimate two different novelty signals simultaneously adds complexity and requires careful tuning. Moreover, as pointed out by Pathak et al. (2017), the final efficacy of any exploration algorithm strongly depends on the chosen state representation. If the state encoding is susceptible to noise or uncontrollable features in the observations, it can lead to irrelevant novelty signals and prevent meaningful generalization over states. As NGU relies on RND for representation, it also inherits its encoding deficiencies in the presence of noisy observations which limits the applicability of the method in stochastic or complex environments.
In this paper, we tackle these issues by decomposing the exploration problem into two disentangled sub-problems. First, (i) **Representation Learning** with an embedding function that encodes a meaningful notion of state similarity while being robust to uncontrollable factors in the observations. Second, (ii) **Count Estimation** that is able to provide a long term visitation-based exploration bonus while retaining responsiveness to the most recent experience. Addressing (i), we extend the inverse dynamic model proposed by Pathak et al. (2017) by leveraging the power of masked sequence transformers Devlin et al. (2018) to build an encoder which can produce rich representations over longer trajectories while suppressing the encoding of uncontrollable features. We refer to our representation as CASM, for _Coupled Action-State Masking_. In order to deliver on (ii) we introduce a novel, non-parametric method called Robust Exploration via Clustering-based Online Density Estimation (RECODE). In particular, RECODE estimates soft visitation counts in the embedding space by adapting density estimation and clustering techniques to an online RL setting. Our approach tracks histories of interactions spanning thousands of episodes, significantly increasing memory capacity over prior art in non-parametric exploration methods which typically only store the most recent history like the current episode. In the presence of noise, we show that it strictly improves over state-of-the-art exploration bonuses such as NGU or RND. RECODE matches or exceeds state-of-the-art exploration results on Atari and is the first agent to reach the end-screen in _Pitfall!_. Beyond 2D, our method also performs well in 3D domains and in conjunction with CASM, sets new state-of-the-art results in the challenging DM-HARD-8 suite (Fig. 1).
## 2 Background
We consider a discrete-time interaction McCallum (1995); Hutter (2004); Hutter et al. (2009); Daswani et al. (2013) between an agent and its environment. At each time step \(t\in\mathbb{N}\) the agent receives an observation \(o_{t}\in\mathcal{O}\) that partially captures the underlying state \(s\in\mathcal{S}\) of the environment and generates an action \(a_{t}\in\mathcal{A}\). We consider policies \(\pi:\mathcal{O}\rightarrow\Delta_{\mathcal{A}}\) that map an observation to a probability distribution over actions. Finally, an extrinsic reward function \(r_{e}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) maps a state-action pair to a scalar feedback. This function can be combined with an intrinsic reward function \(r_{i}\) to encourage exploratory behavior which might not be induced by \(r_{e}\) alone.
The observations provided to the agent at each time step \(t\) are used to build a representation of the state via an embedding function \(f_{\theta}:\mathcal{O}\rightarrow\mathcal{E}\), associating \(o_{t}\) with a vector \(e_{t}=f_{\theta}(o_{t})\). Typically, the embedding space \(\mathcal{E}\) is the vector space \(\mathbb{R}^{D}\) where \(D\in\mathbb{N}^{*}\) is the embedding size. Common approaches to learn \(f_{\theta}\) include using an auto-encoding loss on the observation \(o_{t}\)Burda et al. (2018), an inverse dynamics loss Pathak et al. (2017), a multi-step prediction loss at the latent level Guo et al. (2020); Wu et al. (2022), or other similar representation learning methods. In particular, Pathak et al. (2017) and Badia et al. (2020) highlight the utility of the inverse-dynamics loss to filter out noisy or uncontrollable features, e.g., an on-screen death timer as in _Pitfall!_.
A popular and principled approach to exploration in discrete settings is to provide an intrinsic reward inversely proportional to the visitation count Strehl and Littman (2008); Azar et al. (2017). However, in large or continuous spaces the same state may be rarely encountered twice. Badia et al. (2020) remedy this issue by introducing a slot-based memory \(M\), which stores all past embeddings in the current episode, and replaces discrete counts with a sum of similarities between a queried embedding \(e_{t}=f_{\theta}(o_{t})\) and its k-nearest-neighbors \(\text{Neigh}_{k}(e_{t})\) under the kernel \(\mathcal{K}\):
\[r_{t}\propto\frac{1}{\sqrt{N(f_{\theta}(o_{t}))}}\approx\frac{1}{\sqrt{\sum_{ m\in\text{Neigh}_{k}(e_{t})}\mathcal{K}(e_{t},m)}}. \tag{1}\]
Since storing the full history of embeddings throughout training would require a prohibitive amount of space, this slot-based memory is typically relegated to short-term horizons only, and in NGU it is reset at the end of every episode. As a consequence, slot-based memory must be combined with a separate mechanism capable of estimating long-term novelty; resulting in additional method complexity and trade-offs. In the following, we present a simple and efficient slot-based memory which can effectively track novelty over thousands of episodes.
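For concreteness, a minimal Python sketch of the episodic bonus in equation 1 is given below. This is not the authors' implementation: the inverse-kernel shape and pseudo-count constant follow NGU conventions, and the distance normalization is simplified to a per-query mean over neighbors.

```python
import numpy as np

def episodic_bonus(e, memory, k=10, eps=1e-3, c=1e-3):
    """Slot-based episodic novelty bonus in the spirit of Eq. (1).

    e:      embedded observation, shape (D,)
    memory: past embeddings of the current episode, shape (N, D)
    Returns a reward inversely proportional to a soft visitation count.
    """
    if len(memory) == 0:
        return 1.0
    d2 = np.sum((memory - e) ** 2, axis=1)     # squared distances to memory
    nn = np.sort(d2)[: min(k, len(d2))]        # k nearest neighbors
    d2_mean = nn.mean() + 1e-8                 # distance scale (simplified)
    kernel = eps / (nn / d2_mean + eps)        # similarity kernel K
    return 1.0 / (np.sqrt(kernel.sum()) + c)   # Eq. (1) with a stabilizer
```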
## 3 RECODE
In this section we present our method, Robust Exploration via Clustering-based Online Density Estimation (RECODE), to compute intrinsic rewards for exploration. RECODE takes inspiration from the reward of NGU(Badia et al., 2020), but while NGU stores individual embedded observations in \(M\) and uses periodic resets to limit space complexity, RECODE controls its space complexity by aggregating similar observations in memory. This requires storing a separate counter associated with each element in the memory and new observations need not be directly added to the memory, but will typically be assigned to the nearest existing element whose counter is then incremented. Since the counters are never reset and the merged observations have a better coverage of the embedding space, RECODE's memory is much longer-term than a simple slot-based approach yielding state-of-the-art performance in many hard-exploration environments. It also simplifies the estimation of novelty to only one mechanism vs. two as in NGU. Moreover, the RECODE architecture is highly flexible, allowing it to be easily combined with a variety of RL agents and most importantly different representation learning methods. As we show in the experiments, methods that can better leverage priors from learned representations, such as RECODE, outperform those that need to estimate novelty directly on raw observations, like RND (and in turn NGU). We now present our detailed implementation and an overview of our techniques in Algorithm 1.
```
1:Input: Embedding \(e\), memory \(M=\{m_{i}\}_{i=1}^{|M|}\), atom visitation counts \(\{c_{i}\}_{i=1}^{|M|}\), number of neighbors \(k\), relative tolerance to decide if a candidate new atom is far \(\kappa\), squared distance estimate \(d_{\text{ema}}^{2}\), decay rate \(\tau\) of \(d_{\text{ema}}^{2}\), discount \(\gamma\), insertion probability \(\eta\), kernel function \(\mathcal{K}\), intrinsic reward constant \(n_{0}\)
2:Output: Updated memory \(M=\{m_{i}\}_{i=1}^{|M|}\), updated atom visitation counts \(\{c_{i}\}_{i=1}^{|M|}\), updated squared distance estimate \(d_{\text{ema}}^{2}\), intrinsic reward \(r\)
3:Compute \(N_{\mathcal{K}}(M,e)=\sum_{i=1}^{|M|}(1+c_{i})\,\mathcal{K}(m_{i},e)\)
4:Compute intrinsic reward \(r=\left(\sqrt{N_{\mathcal{K}}(M,e)}+n_{0}\right)^{-1}\)
5:Find nearest \(k\) atoms to the embedding \(e\): \(\text{Neigh}_{k}(e)=\{m_{j}\}_{j=1}^{k}\)
6:Update \(d_{\text{ema}}^{2}\) estimate: \(d_{\text{ema}}^{2}\leftarrow(1-\tau)\,d_{\text{ema}}^{2}+\frac{\tau}{k}\sum_{m\in\text{Neigh}_{k}(e)}\|m-e\|_{2}^{2}\)
7:Discount all atom counts: \(c_{i}\leftarrow\gamma\,c_{i}\quad\forall i\in\{1,\cdots,|M|\}\)
8:Find nearest atom: \(m_{\star}=\arg\min_{m\in M}\|m-e\|_{2}\)
9:Sample uniformly a real number in \([0,1]\): \(u\sim U[0,1]\)
10:if \(\|m_{\star}-e\|_{2}^{2}>\kappa\,d_{\text{ema}}^{2}\) and \(u<\eta\) then
11: Sample atom to remove \(m_{j}\) with probability \(P(j)\propto 1/c_{j}^{2}\)
12: Find atom \(m_{\dagger}\) nearest to \(m_{j}\): \(m_{\dagger}=\arg\min_{m\in M,m\neq m_{j}}\|m-m_{j}\|_{2}\)
13: Redistribute the count of the removed atom: \(c_{\dagger}\gets c_{\dagger}+c_{j}\)
14: Insert \(e\) at index \(j\) with count \(1\): \(m_{j}\gets e\), \(c_{j}\gets 1\)
15:else
16: Update nearest atom position: \(m_{\star}\leftarrow\frac{c_{\star}}{c_{\star}+1}m_{\star}+\frac{1}{c_{\star}+1}e\)
17: Update nearest atom count: \(c_{\star}\gets c_{\star}+1\)
18:endif
```
**Algorithm 1** RECODE
Approximating visitation counts.Our estimator is based on a finite slot-based container \(M=\{m_{j}\}_{j=1}^{|M|}\), where \(|M|\) is the memory size. We refer to \(m_{j}\in\mathcal{E}\) as atoms since they need not correspond to a single embedding as in Badia et al. (2020). We also store a separate count vector \(c\) such that \(c_{i}\) is an estimate of the visitation count of \(m_{i}\). In particular, \(c_{i}\) not only reflects the number of visits to \(m_{i}\) but also captures any previous visit sufficiently close to it.
Given a new embedding \(e\), we estimate its _soft-visitation count_ (Alg. 1:L3-4) as the weighted sum of all atoms close to \(e\) in the memory, according to a similarity kernel:
\[N_{\mathcal{K}}(M,e)=\sum\nolimits_{l}(1+c_{l})\mathcal{K}(m_{l},e;d_{\text{ ema}}). \tag{2}\]
In particular, we choose our kernel function as:
\[\mathcal{K}(m_{l},e)=\frac{1}{1+\frac{\|e-m_{l}\|_{2}^{2}}{\epsilon\,d_{\text{ema}}^{2}}}\,\mathds{1}\{\|e-m_{l}\|_{2}^{2}<d_{\text{ema}}^{2}\}\,, \tag{3}\]
where \(\epsilon\in\mathbb{R}_{+}\) is a fixed parameter. This definition is similar to Badia et al. (2020), but we replace their sum over \(e\)'s top-\(k\) neighbors with a sum over all atoms within a \(d_{\text{ema}}\) distance from \(e\). This choice prevents a counter-intuitive behaviour that can occur when using the \(k\)-NN approach with counts. In particular, it is desirable that the soft-visitation count of a given embedding should increase after adding it to the memory. However, adding atoms to the memory can change the \(k\)-NN list. If an atom displaced from this list has a large count, this might actually _reduce_ nearby soft-visitation count estimates instead of increasing them. Conversely, our approach is not affected by this issue.
Finally, we return the intrinsic reward \(r\) as in equation 1, but add a small constant \(n_{0}\) to the denominator for numerical stability and normalize \(r\) by a running estimate of its standard deviation as in Burda et al. (2019).
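For concreteness, the soft-visitation count of equation 2 with the kernel of equation 3 can be sketched in a few lines of Python; the constants and array layout below are illustrative assumptions rather than the exact agent code.

```python
import numpy as np

def recode_reward(e, atoms, counts, d2_ema, eps=1e-2, n0=1e-3):
    """Soft-visitation count (Eq. 2) with the truncated kernel (Eq. 3).

    e:      query embedding, shape (D,)
    atoms:  memory atoms, shape (|M|, D)
    counts: per-atom visitation counts, shape (|M|,)
    d2_ema: running estimate of the squared distance scale
    """
    d2 = np.sum((atoms - e) ** 2, axis=1)
    within = d2 < d2_ema                          # indicator in Eq. (3)
    kernel = np.where(within, 1.0 / (1.0 + d2 / (eps * d2_ema)), 0.0)
    soft_count = np.sum((1.0 + counts) * kernel)  # Eq. (2)
    return 1.0 / (np.sqrt(soft_count) + n0)       # before std-normalization
```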
Building the memory.To construct our memory we rely on the same aggregation principle we leveraged to estimate soft-visitation counts. In particular, we will draw a parallel between our atoms \(m_{i}\) and the centroids of a clustering of observations. We take inspiration from classical clustering and density estimation approaches such as \(k\)-means or DP-means (Kulis and Jordan, 2011), and adapt them to deal with the challenges posed by our large-scale RL setting: memory size is limited and cannot store all past data, observations arrive sequentially, their distribution is non-stationary, and even the representation used to embed them changes over time. We now describe how RECODE tackles these problems.
At every step we must update the memory \(M\) to reflect the impact of seeing \(e\) on the soft-visitation counts, while keeping the size \(|M|\) fixed. Intuitively, two possible ways come to mind: either replace an existing atom with the new embedding, or update the position and count of an existing atom to be closer to \(e\). Let \(m_{\star}\) be the closest atom to \(e\) in \(M\). We adopt the following rules (Alg. 1:L8-18) to integrate new
embeddings into the memory, which are closely related to the DP-means clustering algorithm (Kulis and Jordan, 2011):
* If \(e\) satisfies \(||m_{\star}-e||^{2}<\kappa d^{2}_{\text{ema}}\), where \(d_{\text{ema}}\) is an adaptive threshold and \(\kappa>0\) a fixed parameter, it is "assigned" to the cluster encoded by \(m_{\star}\) and we update \(m_{\star}\)'s position as the count-weighted convex combination of the existing atom and the new embedding: \[m_{\star}\longleftarrow\frac{c_{\star}}{c_{\star}+1}m_{\star}+\frac{1}{c_{\star}+1}e\] (4) Its count \(c_{\star}\) is also incremented by \(1\);
* If there is no close-by atom, we randomly decide whether to create a new one by flipping a coin with probability \(\eta\). If the coin-flip succeeds, we introduce the new embedding as a new atom, and we also remove an existing atom using a procedure described in the next paragraph. If the coin-flip fails, we instead update \(m_{\star}\) as in equation 4.
The random coin-flip is introduced to increase the stability of the clustering algorithm to noise. In particular, an embedding far away from the memory will be inserted only after it is seen on average \(1/\eta\) times, making one-off outliers less of a problem. At the same time, once a far away embedding is observed multiple times and becomes relevant for the soft-visitation counts, there is a high chance that it will be added to improve the coverage of the memory. But to keep memory size finite, an existing atom must be removed. We investigate three different strategies to select an atom \(m_{i}\) for removal based on its cluster count \(c_{i}\): (a) removing with probability \(\propto\frac{1}{c_{i}^{2}}\); (b) removing with probability \(\propto\frac{1}{c_{i}}\); (c) removing the atom with the smallest \(c_{i}\). An ablation study over removal strategies in App. C.2 (Figures 8 and 9), empirically shows that strategy (a) works best for the settings we consider, but also that results are generally quite robust to the specific choice.
Whenever an atom \(i\) is removed, its count \(c_{i}\) is redistributed to the count of its nearest neighbor in order to preserve the total count of the memory. The update rule of RECODE can also be interpreted, from a theoretical point of view, as an approximate inference scheme in a latent probabilistic clustering model. We provide a more detailed connection of our update rule with the DP-means algorithm in App. C.
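A minimal sketch of these update rules (Alg. 1:L7-18) follows; removal uses strategy (a), and the random-number plumbing and in-place array updates are illustrative assumptions.

```python
import numpy as np

def recode_update(e, atoms, counts, d2_ema, kappa=1.0, eta=0.1, gamma=0.999,
                  rng=np.random.default_rng()):
    """One RECODE memory update; assumes counts stay strictly positive."""
    counts *= gamma                                  # discount stale atoms
    d2 = np.sum((atoms - e) ** 2, axis=1)
    star = int(np.argmin(d2))                        # nearest atom m_*
    if d2[star] > kappa * d2_ema and rng.random() < eta:
        # Far from all atoms: insert e, removing an atom with prob. ~ 1/c^2.
        p = 1.0 / counts ** 2
        j = int(rng.choice(len(atoms), p=p / p.sum()))
        d2_j = np.sum((atoms - atoms[j]) ** 2, axis=1)
        d2_j[j] = np.inf
        dagger = int(np.argmin(d2_j))                # neighbor of removed atom
        counts[dagger] += counts[j]                  # redistribute its count
        atoms[j], counts[j] = e, 1.0
    else:
        # Assign e to the nearest cluster and move its centroid (Eq. 4).
        c = counts[star]
        atoms[star] = (c * atoms[star] + e) / (c + 1.0)
        counts[star] = c + 1.0
    return atoms, counts
```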
Dealing with non-stationary distributions.The distance scale between embedded observations can vary considerably between environments and throughout the course of training, as a result of non-stationarity in both the policy and embedding function \(f_{\theta}\). To deal with this issue, we include an adaptive bandwidth mechanism as in NGU(Badia et al., 2020). In particular, we update the kernel parameter \(d^{2}_{\text{ema}}\) whenever a new embedding \(e\) is received, based on the mean squared distance of the new embedding to the \(k\)-nearest existing atoms (Alg. 1:L5-6). To allow for more rapid adaptation of \(d_{\text{ema}}\), we replace the running average used in NGU with an exponential moving average with parameter \(\tau\).
We note, however, that this mechanism is insufficient to cope with non-stationarity in \(f_{\theta}\) over long timescales. The original NGU memory is not strongly impacted by this issue since it is reset after every episode, leaving little time for the representation to change significantly. However, in RECODE, these changing representations can end up corrupting the long-term memory if old clusters are not updated frequently. In particular, an atom might achieve a high count under a representation, but become unreachable (and thus useless) under a different representation while still being unlikely to be removed. To counteract this we add a decay constant \(\gamma\) which discounts the counts of all atoms in memory at each step as \(c_{i}\longleftarrow\gamma c_{i}\), with \(\gamma<1\) (Alg. 1:L7). This effectively decreases the counts of stale atoms over time and increases the likelihood of their removal during future insertions: clusters that do not get new observations 'assigned' to them for a long time are eventually replaced. At the same time, relevant clusters are kept alive much longer than in previous methods. Fig. 3 reports the histogram of cluster ages for clusters contained in the memory of an agent that has learned how to reach _Pitfall_'s end screen. The red line in Fig. 3 denotes the maximum possible number of steps in a single episode, which is enforced by _Pitfall_'s in-game death timer, and would represent the maximum memory horizon for methods that reset their memory every episode. As we can see, most of the clusters are much older than one episode, with the earliest memories reaching back thousands of episodes. We consider the effect of discounting in more detail in App. C.2 (Figures 10 to 12 and 14).
Importantly, we note that unlike NGU where each actor maintains its own copy of the memory, RECODE shares the memory across all actors in a distributed agent, which greatly increases the frequency of updates to each atom resulting in less representation drift between memory updates.
## 4 Representation Learning Methods
As discussed in Section 2, the choice of the embedding function \(f_{\theta}:\mathcal{O}\rightarrow\mathcal{E}\) can have a significant impact on the quality of exploration, with many different representation learning techniques being studied in this context (Burda et al., 2018; Guo et al., 2020, 2021, 2022; Erraqabi et al., 2021). In the following, we focus on action prediction embeddings, introducing first the standard \(1\)-step prediction formulation (Pathak et al., 2017; Badia et al., 2020). The embedding function \(f_{\theta}\) is parameterized as a feed-forward neural network taking \(o_{t}\), the observation at time \(t\), as input. We define a classifier \(g_{\phi}\) that, given the embeddings of
two consecutive observations \(f_{\theta}(o_{t}),f_{\theta}(o_{t+1})\), outputs an estimate \(p_{\theta,\phi}(a_{t}|o_{t},o_{t+1})=g_{\phi}\left(f_{\theta}(o_{t}),f_{\theta}( o_{t+1})\right)\) of the probability of taking an action given two consecutive observations \((o_{t},o_{t+1})\). Both \(f_{\theta}\) and \(g_{\phi}\) are then jointly trained by minimizing an expectation of the negative log likelihood:
\[\min_{\theta,\phi}\mathbb{E}\left[\mathcal{L}(\theta,\phi)\right]\,,\qquad\mathcal{L}(\theta,\phi)\ =\ -\ln\big{(}p_{\theta,\phi}(a_{t}|o_{t},o_{t+1})\big{)}\,, \tag{5}\]
where \(a_{t}\) is the true action taken between \(o_{t}\) and \(o_{t+1}\). These embeddings proved to be helpful in environments with many uncontrollable features in the observation (Badia et al., 2020), such as in Atari's _Pitfall_, where the observations contain many spurious sources of novelty even when the agent is standing still.
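A minimal PyTorch sketch of this \(1\)-step action-prediction objective is shown below; the network sizes and the two-layer architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InverseDynamics(nn.Module):
    """Jointly trains an embedding f_theta and an action classifier g_phi."""
    def __init__(self, obs_dim, emb_dim, num_actions):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                               nn.Linear(256, emb_dim))      # f_theta
        self.g = nn.Sequential(nn.Linear(2 * emb_dim, 256), nn.ReLU(),
                               nn.Linear(256, num_actions))  # g_phi

    def loss(self, o_t, o_tp1, a_t):
        e_t, e_tp1 = self.f(o_t), self.f(o_tp1)
        logits = self.g(torch.cat([e_t, e_tp1], dim=-1))
        # Cross-entropy is the negative log-likelihood of Eq. (5).
        return F.cross_entropy(logits, a_t)
```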
While RECODE can be used with an arbitrary embedding function, e.g. one tailored for the domain of interest, the choice of a meaningful representation is also a key factor for the final performance. A major downside of the standard, \(1\)-step action-prediction method is the simplicity of the prediction task, which can often be solved by learning highly localized and low-level features (e.g. how a single object shifts under a transition), which need not be informative of the global environment structure. In contrast, an ideal embedding should capture higher-level information about the environment, such as the agent's position or relative location of previously observed landmarks; which might not be simultaneously present in the individual observations \(o_{t}\) and \(o_{t+1}\). In order to achieve this, a wider context of time-steps may be needed.
However, the prediction task would become even easier if we simply provided the full trajectory to the predictor. In order to address this limitation, we propose to use a _stochastic_ context, \(h_{t}\), where at each timestep \(k\leq t\), either \(f_{\theta}(o_{k})\) or \(a_{k-1}\) is provided.1 The main intuition is that the model can still predict \(a_{t}\) by learning to infer the missing information from \(f_{\theta}(o_{t})\) given \((h_{t-1},a_{t-1})\). In this way, the action predictor would not solely rely on the information provided by \(f_{\theta}(o_{t})\), but it would also construct redundant representations within \(h_{t}\).
Footnote 1: We avoid masking both \(f_{\theta}(o_{k})\) and \(a_{k-1}\) simultaneously as this would increase the likelihood that the prediction task is indeterminable.
From an implementation standpoint, we first build a sequence of observation embeddings and actions, \((f_{\theta}(o_{0}),a_{0},f_{\theta}(o_{1}),\dots,a_{t-1},f_{\theta}(o_{t}))\). Then, inspired by masked language models (Devlin et al., 2018), at each timestep \(t\), we randomly substitute either \(f_{\theta}(o_{t})\) or \(a_{t}\) with a special token indicating missing information. These masked sequences are then fed to a causally-masked transformer, whose output is then projected down to the size of the embedding (\(\dim z_{t}=\dim f_{\theta}(o_{t})\)), and the difference between
Figure 3: Content of an agent memory when it learns to reach _Pitfall_’s end screen.
Figure 2: Coupled Action-State Masking (CASM) architecture used for learning representations in partially observable environments. The transformer takes masked sequences of length \(k\) consisting of actions \(a_{i}\) and embedded observations \(e_{i}=f_{\theta}(o_{i})\) as inputs and tries to reconstruct the missing embeddings in the output. The reconstructed embeddings at time \(t-1\) and \(t\) are then used to build a 1-step action-prediction classifier. The embedding function used as a representation for RECODE is \(f_{\theta}\). Masked inputs are shaded in pink, \(N=4\) masked sequences are sampled during training (indicated by the stacks of \(a\), \(e\) and \(z\) in the diagram).
the two is input into a final MLP classifier \(g_{\phi}\). As with \(1\)-step action prediction, we train the representation using maximum likelihood. We refer to this approach as Coupled Action-State Masking (CASM) in the following. During training, we randomly sample multiple masked sequences per trajectory (\(N=4\)) to help reduce gradient variance. Note that the final embedding that we provide to RECODE is \(e_{t}=f_{\theta}(o_{t})\), i.e. the transformer _inputs_, to avoid leaking information about the agent's trajectory. Figure 2 shows a diagram of the architecture.
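The masked-sequence construction at the heart of CASM can be sketched as follows; the token conventions and tensor shapes are assumptions for illustration.

```python
import torch

def build_casm_inputs(embs, actions, emb_mask_token, act_mask_id, generator=None):
    """Mask either e_t or a_t at every timestep, never both (CASM).

    embs:           (T, D) observation embeddings f_theta(o_t)
    actions:        (T,)   integer action ids a_t
    emb_mask_token: (D,)   special token standing in for a missing embedding
    act_mask_id:    int    special id standing in for a missing action
    """
    T = embs.shape[0]
    mask_emb = torch.rand(T, generator=generator) < 0.5  # coin flip per step
    masked_embs = torch.where(mask_emb[:, None], emb_mask_token, embs)
    # Where the embedding is kept, the action is hidden instead.
    masked_acts = torch.where(mask_emb, actions,
                              torch.full_like(actions, act_mask_id))
    return masked_embs, masked_acts
```

In training, several such masked copies of each trajectory (\(N=4\), as described above) would be sampled and fed to the causally-masked transformer.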
## 5 Experiments
In this section, we experimentally validate the efficacy of our approach on two established benchmarks for exploration in 2D and 3D respectively: a subset of the Atari Learning Environment (ALE, Bellemare et al., 2013) containing eight games such as Pitfall and Montezuma's Revenge which are considered hard exploration problems (Bellemare et al., 2016); and DM-HARD-8 (Gulcehre et al., 2019), a suite of partially observable 3D games. All games pose significant exploration challenges such as very long horizons (\(\mathcal{O}\)(10K) steps), the necessity to backtrack, sparse rewards, object interaction and procedural environment generation. Our method achieves state-of-the-art results across both benchmarks and even solves two previously unsolved games: in Atari's _Pitfall!_ our method is the first to reach the end screen and on DM-HARD-8's _Push Block_ we are the first to achieve super-human performance. We also perform a set of ablations to shed more light on the influence of the representation learning mechanism and the robustness w.r.t. noisy observations.
All candidate architectures evaluated in the following experiments (and in App. K), are composed of three main modules: (1) a base agent, responsible for core RL tasks such as collecting observations and updating the policy, (2) an algorithm responsible for generating the exploration bonus, and (3) an embedding mechanism responsible for learning meaningful representations of observations. Our nomenclature reflects the choice of modules as AGENT-EXPLORATION-EMBEDDING. For example, the MEME agent described in Kapturowski et al. (2022) is denoted as MEME-NGU-AP. We use the MEME agent across all experiments, but vary the exploration and representation mechanisms. For exploration we consider EMM (pure episodic memory), NGU and RECODE whereas for representation we experiment with AP and CASM. We provide a full list of hyper-parameters for all agents and baselines in App. E.
### Atari
The hard-exploration subset of Atari as identified by Bellemare et al. (2016) poses a considerable challenge in terms of optimization horizon with episodes lasting up to \(27,000\) steps using the standard action-repeat of four. Additionally, rewards vary considerably in both scale and density. Across all our experiments in the Atari domain, we set the memory size of our agent to \(5\cdot 10^{4}\) atoms. We evaluate all agents following the regime established in prior work (Mnih et al., 2015; Van Hasselt et al., 2016) using \(30\) random no-ops, no 'sticky actions' (Machado et al., 2018), and averaging performance over six seeds.
We compare the game scores obtained using our exploration bonus, RECODE, against other methods while keeping agent
Figure 4: Comparison of RECODE against other exploration bonuses on Atari's hard exploration games. All agents are based on MEME and use the same representation learning mechanism (AP). Note that the high variance in _Q*bert_ is due to a bug in the game that, when exploited, allows the agent to obtain significantly higher scores (Chrabaszcz et al., 2018).
architecture and representation mechanism fixed. The results presented in Fig. 4 show that our method achieves state-of-the-art, super-human performance across all eight games while using a conceptually simpler exploration bonus compared to MEME-NGU-AP. The MEME-EMM-AP and MEME-RND ablations in Fig. 4 reveal the respective shortcomings of short-term and long-term novelty when used in a standalone fashion. EMM on its own cannot solve _Montezuma's Revenge_ because it requires long-term memory. Conversely, RND on its own cannot solve _Pitfall!_ because of the presence of many uncontrollable features in the observations and its inability to leverage the AP embeddings. In contrast, RECODE is able to leverage the AP representation for short-term and long-term novelty due to the clustering-based memory integrating over a long horizon, which enables solving both games with a single intrinsic reward.
### DM-Hard-8
DM-HARD-8 (Gulcehre et al., 2019) consists of eight exploration tasks, designed to challenge an RL agent in procedurally-generated 3D worlds with partial observability, continuous control, sparse rewards, and highly variable initial conditions. Each task requires the agent to interact with specific objects in its environment in order to reach a large apple that provides reward (cf. Fig. 16 in the Appendix for an example). The procedural generation randomizes object shapes, colors, and positions at every episode. Across all our experiments in the DM-HARD-8 domain, we set the memory size of our agent to \(2\cdot 10^{5}\) atoms. We also use the more powerful CASM representation over AP as the default in these experiments but present an ablation on the influence of the representation in Sec. 5.3. All performances reported for evaluation are averaged across three seeds.
We compare RECODE with NGU and the recently proposed BYOL-Explore (Guo et al., 2022) in this domain. The results presented in Fig. 5 show that our method is able to solve six out of eight tasks with super-human performance, which sets a new state-of-the-art on this benchmark and marks the first time that the human baseline has been beaten on _Push Blocks_. To control for the contribution of the representation, we also run a version of NGU which uses the more powerful CASM representation instead of its default AP one. Switching AP with CASM improves NGU's performance significantly and stresses the importance of incorporating information over longer trajectories in the representation mechanism for this domain to combat the challenge of partial observability. However, only RECODE is able to take full advantage of the representational power afforded by CASM as it is able to leverage it for both short-term and long-term novelty bonuses.
### Ablations
Concluding our experiments, we perform two ablation studies to gauge the sensitivity of our approach to the presence of noisy observations and the choice of the underlying representation mechanism.
Robustness to observation noise.Noise in the observation space is one of the most significant adversarial conditions exploration methods must overcome to deliver utility in practical scenarios, which always feature imperfect sensors. The 'noisy TV problem' (Schmidhuber, 2010; Pathak et al., 2017) is a common metaphor which describes a failure mode of exploration methods getting stuck
Figure 5: Performance of RECODE compared to NGU and BYOL-Explore on the single-task version of DM-HARD-8. The BYOL-Explore results correspond to the final performance reported in Guo et al. (2022) after 1e10 environment frames. All results have been averaged over 3 seeds.
on the prediction of noise as a meaningless signal of novelty. In order to assess our method's robustness w.r.t. observation noise, we construct a noisy version of _Montezuma's Revenge_ by concatenating a frame containing white noise in the range \([0,255]\) to the game's original \(210\times 160\) greyscale observations along the image height dimension. We compare RECODE to NGU in this setting using the same AP backbone to suppress uncontrollable noise on the representation level and assess the sensitivity of the exploration bonus to it. The results of this experiment are presented in Fig. 6. We find that the performance of MEME-NGU-AP deteriorates significantly in the presence of noise. This can be attributed to the fact that NGU relies on RND to compute the long-term exploration bonus, which degenerates to random exploration in the presence of uncontrollable noise (Kapturowski et al., 2018). This effectively restricts the baseline to short-term exploration within one episode. In contrast, RECODE's mean performance is not degraded significantly and achieves a similar score as in Fig. 4, albeit with a higher variance. |
2305.11732 | Black holes and modular forms in string theory | The study of black holes in string theory has led to the discovery of deep
and surprising connections between black holes and modular forms -- which are
two classical, a priori unrelated, subjects. This article explains the main
physical and mathematical ideas behind these connections. It is known from the
pioneering work of J.Bekenstein and S.Hawking in the 1970s that black holes
have thermodynamic entropy, and should therefore be made up of a collection of
microscopic quantum states. Superstring theory provides a framework wherein we
can associate a number of microscopic states that make up the
quantum-statistical system underlying a black hole, thus explaining their
thermodynamic behavior from a more fundamental point of view. The basic
connection to modular forms arises from the observation that, in the simplest
superstring-theoretic construction, the generating function of the number of
microscopic states is a modular form. In one direction, modular symmetry acts
as a powerful guide to the calculation of quantum-gravitational effects on the
black hole entropy. In the other direction, the connection has led to the
discovery of surprising relations between Ramanujan's mock modular forms and a
class of string-theoretic black holes, thus providing an infinite number of new
examples of mock modular forms. | Sameer Murthy | 2023-05-19T15:09:23Z | http://arxiv.org/abs/2305.11732v2 | # Black holes and modular forms in string theory
###### Abstract
The study of black holes in string theory has led to the discovery of deep and surprising connections between black holes and modular forms--which are two classical, a priori unrelated, subjects. This article explains the main physical and mathematical ideas behind these connections.
It is known from the pioneering work of J. Bekenstein and S. Hawking in the 1970s that black holes have thermodynamic entropy, and should therefore be made up of a collection of microscopic quantum states. Superstring theory provides a framework wherein we can associate a number of microscopic states that make up the quantum-statistical system underlying a black hole, thus explaining their thermodynamic behavior from a more fundamental point of view. The basic connection to modular forms arises from the observation that, in the simplest superstring-theoretic construction, the generating function of the number of microscopic states is a modular form. In one direction, modular symmetry acts as a powerful guide to the calculation of quantum-gravitational effects on the black hole entropy. In the other direction, the connection has led to the discovery of surprising relations between Ramanujan's _mock_ modular forms and a class of string-theoretic black holes, thus providing an infinite number of new examples of mock modular forms.
## Introduction
_Black holes_ (BH) are objects that exist in the physical universe: they are regions of spacetime, typically formed by the collapse of heavy stars. At the theoretical level they are described as solutions to the theory of General Relativity (GR). This theory is one of the pillars underlying the paradigm of modern physics--it determines all large-scale structure in our universe and underlies the dynamics of spacetime itself. A black hole is characterized by the existence of a one-way surface surrounding it, called the _event horizon_. The one-way nature of the horizon means that things can fall into the black hole but nothing--not even light--can come out, thus leading to its name. The theoretical description of black holes has been spectacularly confirmed by many recent observations in astronomy. See the textbook Schutz (2022) for an introduction to GR and BHs.
_Modular forms_ are objects that exist in the mathematical universe: they are a class of functions that naturally arise in the field of number theory. They are characterized by their symmetry properties under the action of the _modular group_\(SL(2,\mathbb{Z})\) (the group of \(2\times 2\) integer matrices with unit determinant). The related term _automorphic form_ is used for functions with symmetry properties under more general groups. Starting from the study of theta functions in the 19th century and its classic manifestations in number theory, the field of modular and automorphic forms has grown to include ramifications in topology, algebraic geometry, algebraic and analytic number theory, as well as physics, and has been the subject of intense study over the last 100 years. See Bruinier, van der Geer, Harder, and Zagier (2008) for an introduction to modular forms.
The work of Bekenstein (1973) and Hawking (1975) showed that, when one takes into account quantum mechanical effects, black holes have thermodynamic entropy. This leads to the conclusion that a black hole must be made up of microscopic states, just like the entropy of air in a room is explained by the fact it consists of a large number of molecules. The natural consequent question is: _What are the microstates (the "molecules") of a black hole?_
Since any classical-mechanical process cannot probe the interior of a black hole, the answer to the above question needs a framework which consistently takes into account quantum effects in the presence of a black hole. Precisely such a framework is provided by superstring theory, wherein one can construct models of quantum black holes for which one can count the number of microscopic states. The basic connection to modular forms arises from the observation that, in the simplest string-theoretic constructions, the number of these microscopic states is the Fourier coefficient of a certain modular form.
The underlying modular symmetry allows us to make a simple asymptotic estimate for the logarithm of these Fourier coefficients, which agrees precisely with the Bekenstein-Hawking entropy of the corresponding BHs, as was first discovered in the breakthrough work of Strominger and Vafa (1996). In fact, the power of modular symmetry allows us to go much further. The well-known Hardy-Ramanujan-Rademacher formula of analytic number theory, which expresses
the Fourier coefficient of the modular form as an infinite sum of \(I\)-Bessel functions, turns out to be intimately related to quantum-gravitational corrections of the black hole entropy. The connection to black holes also informs the field of pure modular forms. For example, it has led to new theorems and constructions of mock modular forms (which were first introduced by Ramanujan in his last letter to Hardy in 1920).
## Black holes and their thermodynamics
The structure and evolution of spacetime at large scales in our universe is described by the theory of General Relativity. Spacetime--the set of all points in space and time--is considered to be a pseudo-Riemannian 4-manifold of signature \((-,+,+,+)\), the \(-\) sign indicating the single time coordinate \(x^{0}\) and the three \(+\) signs related to the three spatial coordinates \(x^{1},x^{2},x^{3}\). The spacetime manifold comes equipped with a metric (a rank-2 tensor field with components \(g_{\mu\nu}(x)\), \(\mu,\nu=0,1,2,3\)) which we roughly think of as encoding the local shape and size of the manifold. At large scales, physical matter in the universe is effectively described by a rank-2 tensor field on the spacetime manifold, called the stress-energy tensor with components \(T_{\mu\nu}(x)\).
The basic equations of general relativity, also referred to as the Einstein equations, are written in terms of differential geometric quantities, i.e. the Ricci tensor field \(R_{\mu\nu}(g_{\mu\nu}(x))\) and its trace called the Ricci scalar field \(R(g_{\mu\nu}(x))\), which are measures of the curvature of the spacetime metric. The equations relate the spacetime curvature to the stress-energy tensor (for zero cosmological constant) as follows,
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\ =\ 8\pi G\,T_{\mu\nu}\,, \tag{1}\]
where \(G\) is Newton's constant of gravitation.
All physical structures at large scales in our universe, i.e. galaxies, stars, etc, are solutions to the Einstein equations (1). Black holes are a very special class of solutions to these equations with the property that the resulting spacetime metric has a region from within which no signal can propagate out to an external observer. The boundary of this region is called the event horizon, whose area is a measure of the size of the black hole. Black holes can have very complicated dynamics such as mergers1, but the solutions of Einstein's equations that describe stationary black holes are very simple.2 These solutions are characterized by their conserved charges, namely their mass, spin, and electromagnetic charge.3
Footnote 1: See e.g. [https://www.ligo.org/](https://www.ligo.org/)
Footnote 2: Stationary means that they do not change as a function of time. One can think of these as the end point of the complicated dynamical processes, when the black holes “settle”. This is the limit when we can study them essentially as independent objects.
Footnote 3: Indeed, the dynamics of stationary charge-neutral black holes that we observe in the sky (see e.g. [https://www.cfa.harvard.edu/research/topic/black-holes](https://www.cfa.harvard.edu/research/topic/black-holes)) are well-described by the Kerr BH metric solution which is completely characterized by mass and spin.
Theoretical investigations on black holes in the 1970s, primarily due to Bekenstein (1973) and Hawking (1975) established the following remarkable fact. Although nothing can come out of a black hole in the classical theory of general relativity, a black hole in the quantum theory actually behaves like a thermodynamic object with associated temperature and entropy.
The main arguments to demonstrate the thermodynamic behavior of black holes are simple but profound, and proceed as follows. Consider a system ("the universe") consisting of a black hole of mass \(M\) and a bucket of water (which carries energy and entropy) outside the black hole horizon. Now imagine a process where we throw the water into the black hole, say in a perfectly radial direction so as not to generate angular momentum. After the water has entered the horizon, we lose some energy \(E_{\rm water}\) and some entropy \(S_{\rm water}\) from the external region. Once the perturbations of the system have died down, the system settles into a new state with a new black hole solution and nothing outside. The mass of the black hole can be measured from the outside and it increases to (\(\widetilde{M}=M+E_{\rm water}\)), such that the total energy of the system remains the same (as it should by the principle of conservation of energy!). On the other hand, it would seem that the total entropy of the system decreases by \(S_{\rm water}\)--thus violating the second law of thermodynamics--unless we assign an intrinsic entropy to the black hole itself.
Now, if a black hole has entropy, and given that it has energy (\(=\) its mass \(M\)), it must have a temperature, according to the first law of thermodynamics. Building on the fact that the black hole area always increases in physical processes (Bardeen, Carter, and Hawking (1973)), Bekenstein further argued that the thermodynamic entropy of black holes must be proportional to the area of its event horizon. In a tour de force, Hawking then set up a thought experiment involving scattering of particles (or, equivalently, waves in the quantum theory) off a black hole using the formalism of quantum field theory. Within this setting he calculated the ratio of the amplitude of the reflected wave and the incident wave, which directly leads to the rate of radiation--from which we can deduce the temperature using the Planck distribution formula, and therefore a precise formula for the entropy.
These considerations result in the following succinct formula for the thermodynamic entropy of a BH, that applies universally to any BH solution in general relativity:
\[S_{\rm BH}^{\rm class}\ =\ \frac{1}{4}\frac{A_{H}}{\ell_{\rm Pl}^{2}}\,,\qquad \ell_{\rm Pl}^{2}\ =\ \frac{G\hbar}{c^{3}}\,, \tag{2}\]
where \(A_{H}\) is the area of the BH horizon.4 The Bekenstein-Hawking black hole entropy formula (2) is one of the most profound equations in theoretical physics, involving the three fundamental constants: the speed of light \(c\), Planck's constant \(\hbar\), and Newton's gravitational constant \(G\), which determine the fundamental scales of special relativity, quantum mechanics, and gravitation, respectively, as well as the Boltzmann constant \(k_{B}\) which determines the scale of thermodynamic entropy. We often suppress these constants in the following formulas for ease of presentation.
Footnote 4: The superscript refers to the fact that this is a semi-classical formula, which will later be promoted to a quantum formula.
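As a rough numerical illustration (standard order-of-magnitude estimates, not part of the original discussion): for a black hole of one solar mass,

\[R_{\rm Sch}\,=\,\frac{2GM_{\odot}}{c^{2}}\,\approx\,3\ {\rm km}\,,\qquad A_{H}\,=\,4\pi R_{\rm Sch}^{2}\,\approx\,1.1\times 10^{8}\ {\rm m}^{2}\,,\]

and with \(\ell_{\rm Pl}^{2}\approx 2.6\times 10^{-70}\ {\rm m}^{2}\) one finds \(S_{\rm BH}^{\rm class}=A_{H}/4\ell_{\rm Pl}^{2}\approx 10^{77}\,k_{B}\), an enormous entropy by everyday standards.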
From the theory of quantum-statistical mechanics, we know that the entropy of a physical system is really a statistical property arising from an underlying collection of quantum-mechanical microscopic states (or _microstates_). This property is quantified by the Boltzmann equation which expresses the thermodynamic entropy as the logarithm of the number of microscopic states in which the system can exist for a given macroscopic state. Now, upon combining the black hole entropy formula with the Boltzmann equation we obtain the Boltzmann equation for black holes
\[k_{B}\log d_{\rm micro}\ =\ S_{\rm BH}^{\rm class}+\ldots\,. \tag{3}\]
The dots here indicate that the Boltzmann equation is an approximate equation, which is valid in the so-called _thermodynamic limit_ of very large black hole size. We will return to this important point later in the exposition.
We thus reach the profound conclusion that a black hole should be made up of a collection of a large number \(d_{\rm micro}\) of microscopic states.
Finding a theory of quantum gravity--a theory which brings together Quantum Mechanics and General Relativity into one consistent framework--has remained an important open question of fundamental physics for the last 60 years. One of the reasons it is such a hard problem is the lack of an experimental guide.5 In this situation it is very useful to have a quantitative criterion which can be used to test or falsify a given theory. The Bekenstein-Hawking entropy formula plays this role: a consistent theory of quantum gravity should be able to produce a microscopic quantum-statistical ensemble of black hole microstates which satisfies the Boltzmann equation (3) for black holes.
Footnote 5: Note that it is impossible to build a particle accelerator probing the Planck scale with current technology.
**Two pictures of black holes in string theory**
A series of developments in the 1990s pioneered by Sen (1995) and Strominger and Vafa (1996), building on previous work by Susskind (1993); Susskind and Uglum (1994); 't Hooft (1990), led to a quantum-statistical explanation of the thermodynamic entropy of black holes in string theory. The basic idea is that there are actually two pictures of a black hole in string theory (see Figure 1), which allows us to separately calculate and compare its thermodynamic and statistical entropy.
Let us first consider the basic physical question: what determines whether an object of mass \(M\) is a black hole or not? The criterion, roughly speaking, is whether the intrinsic size of the object is smaller or larger than the size of a black hole of the same mass. Consider, for the sake of simplicity, spherically symmetric objects. If the radius of the object is larger than the _Schwarzschild radius_ \(R_{\rm Sch}=2GM\), then it is _not_ a black hole, while if its radius is smaller than the Schwarzschild radius then there is pure vacuum outside the horizon and we cannot distinguish the object from a black hole. (Recall that we cannot observe anything behind the horizon.) For composite objects like stars, the intrinsic size is determined by the radiation pressure of electrons, and the above logic leads to the so-called Chandrasekhar limit.6
Footnote 6: See [https://en.wikipedia.org/wiki/Chandrasekhar_limit](https://en.wikipedia.org/wiki/Chandrasekhar_limit).
Now we apply this criterion to string theory. The consistency of supersymmetric string theory imposes that spacetime is ten dimensional. In order to obtain an effective theory in \(\mathbb{R}^{1,3}\), six of the dimensions should describe a compact manifold. The resulting geometry is called a string compactification, and it determines the spectrum of particles and their interactions at low energies. (See e.g. the classic textbook Green, Schwarz, and Witten (1988) for more details.) The spectrum is characterized by the conserved charges of the theory, also called _quantum numbers_, which are quantized as integer multiples of fundamental units of charge in the quantum theory. In the simplest case the spectrum is labelled by a single quantum number \(N\in\mathbb{N}\).
The gravitational coupling (the effective Newton's constant as appears in all equations) in string theory is determined by the value of the string coupling \(g_{\rm s}\) as \(G=g_{\rm s}^{2}\ell_{\rm s}^{2}\) where \(\ell_{\rm s}\) is the fundamental scale of a string. In this context, the criterion of whether the collection of states labelled by \(N\) is a black hole or not takes the following form. When \(g_{\rm s}N\gg 1\) the effective description is that of weakly-coupled general relativity interacting with a specified set of matter fields, and we have a black hole solution of this effective theory with quantum number \(N\). This is the _macroscopic_ picture. When \(g_{\rm s}N\ll 1\), there is a completely different dual microscopic picture: we have a set of fundamental excitations of string theory with the same quantum number \(N\), consisting of bound states of fluctuations of strings, branes, and other fundamental objects of string theory.
The microscopic picture gives us the possibility of calculating the statistical entropy. However, for a given value of \(g_{\rm s}\), only one of the pictures should hold, and we still cannot make a comparison between the two types of entropies.7
The breakthrough of Sen (1995); Strominger and Vafa (1996) was to consider the black hole entropy problem in the context of _supersymmetric compactifications_ of string theory. In a class of such compactifications, the string coupling constant \(g_{s}\) turns out to be not fixed by the equations of motion, and therefore is a tunable parameter.
Now we can test whether black hole entropy is a statistical entropy as follows. Start with a particular supersymmetric compactification of string theory which admits black hole solutions with some set of conserved charges. Tune the coupling so that the effect of gravity is arbitrarily small and, therefore, the fluctuations of strings and branes can be described using usual quantum field theoretic methods. Count the number of microstates \(d_{\rm micro}\) in the theory as a function of the quantum numbers. Then crank up the coupling and therefore \(G\), so that the horizon radius is much larger than the fundamental scale of the microscopic constituents. Now the collection of microstates gravitate and form a black hole. Measure its horizon area and obtain the thermodynamic entropy \(S^{\rm class}_{\rm BH}\). Finally, compare \(S^{\rm class}_{\rm BH}\) and \(\log d_{\rm micro}\) as a function of the quantum numbers.
However, we are faced with yet another problem--the number of states of the theory can change as we change the coupling, so it is not clear that we should be comparing the entropy of states calculated at weak coupling with the black hole entropy calculated at strong coupling. The resolution of (Sen (1995); Strominger and Vafa (1996)) is to consider a special set of states which exist in supersymmetric theories called BPS (after Bogomolny (1976), Prasad and Sommerfield (1975)) states which are "protected". This means that the number of BPS states--counted with a certain weighting, called the supersymmetric index--does not change under change of small parameters of the theory. In the following sections we explain some details of supersymmetric theories, states, and BHs.
**Supersymmetric states and supersymmetric black holes**
In order to explain the supersymmetric index we first briefly review the basic ideas underlying supersymmetry. All particles in nature come in two types--bosons and fermions. The bosonic or fermionic nature of particles determines their fundamental properties such as their spin--which can be integer multiples of \(\hbar\) for bosons and integer+\(\frac{1}{2}\) multiples for fermions, and statistical properties: bosons tend to cluster in the same state, i.e. the probability for a boson to be in a certain quantum state increases with the number of bosons already present in that state, while fermions repel each other, i.e. the probability of two fermions to be in the same quantum state is zero. Supersymmetry is a symmetry, first proposed by particle physicists in the 1970s, that relates fermions and bosons.8
Footnote 8: Although it has not been observed in nature, it is very useful to construct model systems where one can perform controllable analytic calculations (which are usually out of reach) in order to probe physical properties of quantum theories e.g. Seiberg and Witten (1994). Here we use it to test the Boltzmann equation for black holes.
The main ideas of supersymmetric state counting can be illustrated in the context of a simple type of system called supersymmetric quantum mechanics (Witten (1982)). Recall that a quantum mechanical system has a Hilbert space \({\cal H}\) whose elements are called (quantum) states, and a self-adjoint operator \(H\) on the Hilbert space, called the Hamiltonian. The eigenvalues of the Hamiltonian are interpreted as possible energies of the states. Now, consider a quantum mechanical system whose Hilbert space \({\cal H}\) consists of states which can be bosonic or fermionic. The _fermion number operator_ denoted \((-1)^{F}\) is a multiplicative operator defined to have value \(+1\) for bosonic states and \(-1\) for fermionic states. The supersymmetric quantum mechanical system comes equipped with a pair of complex conjugate fermionic operators \(Q,\overline{Q}\), called the supercharges, which are nilpotent (i.e. \(Q^{2}=\overline{Q}^{2}=0\)), and obey the algebra
\[[Q,\overline{Q}]\ =\ H\,,\qquad[H,Q]\ =\ 0\,,\qquad[H,\overline{Q}]\ =\ 0\,. \tag{4}\]
The spectrum of states of the supersymmetric system falls into representations of this algebra, which come in two types. The first type is a two-dimensional _long representation_\((|b\rangle,|f\rangle)\) with \(Q|b\rangle=|f\rangle\), with \(Q|f\rangle=0\) by the nilpotence of \(Q\). Since \(Q\) has fermion number \(-1\), the two states have opposite fermion numbers, which we have denoted by \(b\) (bosonic) and \(f\) (fermionic). If \(|b\rangle\) is an eigenstate of \(H\) with eigenvalue \(E\), the algebra (4) implies that \(E\geq 0\) and that \(|f\rangle\) is also an eigenstate with eigenvalue \(E\). The second type is a one-dimensional _short representation_\(|\cdot\rangle\), which can be either bosonic or fermionic, and which obeys \(Q|\cdot\rangle=\overline{Q}|\cdot\rangle=0\). The algebra (4) implies that \(|\cdot\rangle\) is an eigenstate of \(H\) with eigenvalue \(E=0\). Thus the complete spectrum of the theory consists of positive energy states which come in boson-fermion pairs, and zero energy states which need not be paired.
The basic supersymmetric index, called the _Witten index_
Figure 1: Microscopic and macroscopic pictures of a black hole in string theory
(Witten (1982)), is defined as
\[Z\ =\ {\rm Tr}_{\cal H}\,(-1)^{F}\ {\rm e}^{-\beta H}\,. \tag{5}\]
This quantity is similar to the thermal partition function that one encounters in statistical physics, with \(\beta\) being the inverse temperature. In particular, states with large energies are suppressed by the exponential factor, which usually leads to a convergent trace. There is, however, a big difference compared to the thermal partition function. Since states with \(E>0\) come in pairs with opposite values of \((-1)^{F}\), they cancel pairwise and do not contribute to the trace. The only contribution to the index comes from zero-energy states, i.e.
\[Z\ =\ n_{0}^{b}-n_{0}^{f}\,, \tag{6}\]
the difference between the number of bosons and fermions with zero energy, which, in particular, is independent of \(\beta\)!
Now consider the effect of changing some parameter of the system in a supersymmetric manner, that is to say, the supercharges and the Hamiltonian are functions of the parameter, but obey the algebra (4) for all values of the parameter. The energy levels (the eigenvalues of the Hamiltonian) correspondingly change, but bosons and fermions with \(E>0\) are still paired. Any change in the \(E=0\) spectrum consists of states moving down from \(E>0\) to \(E=0\), or leaving it in the opposite direction, but this can only happen in boson-fermion pairs. Thus, although the total number of (bosonic + fermionic) states at \(E=0\) could change, their difference is invariant.
In this simple example, we see that the supersymmetric index only receives contributions from the one-dimensional zero-energy representations, which are annihilated by the supercharge operators. In the more general setting of supersymmetric quantum field theories, one has multiple supercharges. The types of representations of the supersymmetry algebra in this case are labelled by the fraction of the total number of supercharge operators that are annihilated in that representation. This fraction can range from 0, the so-called non-BPS or long representations, to 1, which are the zero energy vacuum states as in the simple example above. However, now we also have BPS states which preserve a proper fraction of the supersymmetry, which have non-zero energy. Correspondingly, we have more general types of supersymmetric indices, which receive contributions only from states that are annihilated by at least the same fraction of the supercharges. The precise details of these supersymmetric indices depends on the full symmetry algebra of the theory. In the context of asymptotically flat space (that we discuss here) one has a super-Poincare symmetry algebra, in which case these indices are called helicity supertraces (Kiritsis (1998)).
**The micro/macro picture revisited**
Now we consider superstring theory and some particular supersymmetric index which counts a certain type of BPS state. Since the index does not change under changes of coupling, we can revisit the idea of Figure 1 and ask what happens to these states in the macroscopic regime \(g_{s}N\gg 1\)? The complete answer depends on the details of the system but, happily, there are situations where one has a black hole solution of string theory with the same charges as those of the original microscopic quantum states. In such a situation, we can try to compare the microscopic and macroscopic pictures more carefully.
If we review the above logic carefully, we seem to have changed the original problem at two levels. Firstly, the states that we consider preserve (are annihilated by) a certain fraction of supercharges, and this should be reflected in the gravitational theory. Indeed, the effective macroscopic theory--called supergravity--also has a notion of supersymmetry that applies to the gravitational variables, and there are black hole solutions to supergravity--called BPS black holes--that are annihilated by some fraction of supercharges. BPS black holes are a very special type of black hole solution. One of their peculiarities is that they necessarily have zero temperature. In this respect they are different from Schwarzschild black holes (as we initially mentioned in the introduction). Nevertheless, BPS black holes have an event horizon and do carry entropy!9
Footnote 9: The apparent violation of the lore/notion of the third law of thermodynamics is avoided because the supersymmetry of such solutions allows for a large degeneracy.
The second level at which the initial problem seems to have changed is that we are no longer discussing the total number of states (and the consequent Boltzmann entropy), but a modified counting of the index of the type (6). In the early days of black hole microstate counting in string theory this aspect was not quite understood fully and it was even questioned whether the agreement obtained by Strominger and Vafa (1996) was only some kind of accidental or approximate agreement in the limit of large charges. More recent developments have made it clear that this is not an accident and there is a more detailed and beautiful mechanism underlying the system.
The basic physics picture is as follows. Typically, boson-fermion pairs do pair up and leave the spectrum when we reach strong coupling. Assuming for the moment that the initial microscopic system has more bosons than fermions, we are left with a final ensemble in which \(n^{f}=0\) and the index only counts bosonic states, i.e. a genuine count in the sense of Boltzmann. On the macroscopic side, it is a non-trivial fact (Sen (2009a), Dabholkar, Gomes, Murthy, and Sen (2011)) that the BPS black holes are indeed made up of only bosonic states (this is sometimes referred to as a vanishing theorem)!10 Thus, we have a logic of the following
sort:
\[\begin{array}{c}\mbox{microscopic index = gravitational index = BH entropy,}\\ \mbox{(supersymmetry)}\qquad\mbox{(vanishing theorem)}\end{array} \tag{8}\]
so that the supersymmetric index should indeed capture the Boltzmann entropy of supersymmetric black holes.
With the formalism in place, we now proceed to calculate the supersymmetric index at large charges and check whether it agrees with the Bekenstein-Hawking area formula for the entropy.
### The counting of black hole microstates in string theory
As mentioned in the introduction, black hole microstates in string theory are quantum-mechanical bound states of fluctuations of fundamental objects of string theory like strings and branes. We first consider a toy model, consisting of a string wrapping a circle of radius \(R\), which nicely illustrates a number of features of the microscopic counting of black hole microstates in string theory.
The string is a a 1-space-dimensional object moving in time, whose points we label by the coordinates \(x^{0}\) (time) and \(x^{1}\) (space). The position on the circle is parameterized by the map \(X(x^{0},x^{1})\in\mathbb{R}/2\pi\mathrm{i}R\mathbb{Z}\). The classical equation of motion on the free string is the free Laplacian equation (with \(\partial_{i}\equiv\partial/\partial x^{i}\)),
\[(\partial_{0}^{2}-\partial_{1}^{2})X(x^{0},x^{1})\ =\ 0\,. \tag{9}\]
The general solution of this equation is
\[\begin{array}{rcl}X(x^{0},x^{1})&=&\frac{n}{R}x^{0}\,+wRx^{1}\\ &&+\,\sum_{k\in\mathbb{Z}\atop k\neq 0}\,\frac{\mathrm{i}}{\sqrt{k}} \Big{(}\alpha_{k}\,\mathrm{e}^{-\mathrm{i}k(x^{0}-x^{1})}\,+\,\widetilde{ \alpha}_{k}\,\mathrm{e}^{-\mathrm{i}k(x^{0}+x^{1})}\,\Big{)}\,.\end{array} \tag{10}\]
Here \(n,w\in\mathbb{Z}\) are interpreted as the momentum of the center of mass of the string around the circle and the winding of the string around the circle, respectively. The terms proportional to \(\mathrm{e}^{-\mathrm{i}k(x^{0}-x^{1})}\) and \(\mathrm{e}^{-\mathrm{i}k(x^{0}+x^{1})}\) are called left- and right-moving excitations, respectively. We focus on the left-moving excitations, and the right-movers are treated in an analogous manner In the classical-mechanical theory, the coefficients \(\alpha_{k}\) are complex numbers obeying \(\widetilde{\alpha_{k}}=\alpha_{-k}\). In the quantum theory, the field \(X\) and therefore the coefficients \(\alpha_{k}\), \(\widetilde{\alpha}_{k}\) are promoted to operators on a Hilbert space obeying \(\alpha_{k}^{\dagger}=\alpha_{-k}\). The quantization process leads to the following commutation relations
\[[\alpha_{k},\alpha_{-k}]\ =\ 1\,,\quad k\ =\ 1,2,3,\ldots\,, \tag{11}\]
with all other commutators vanishing.
The algebra obeyed by the pair \((\alpha_{k},\alpha_{-k})\), defines the quantum bosonic oscillator, which one of the simplest known quantum-mechanical systems. The Hilbert space \(\mathcal{H}_{k}\) associated to this system is described as follows. There is a special _vacuum_ state \(|0\rangle_{k}\) of this system obeying \(\alpha_{k}|0\rangle_{k}=0\), \(k>0\), and the Hilbert space is given by the span over \(\mathbb{C}\) of the following tower of states,
\[|m\rangle_{k}\ =\ \alpha_{-k}^{m}|0\rangle_{k}\,,\qquad m=0,1,2,\ldots\,. \tag{12}\]
The Hamiltonian of this system is given by
\[H_{k}\ =\ k(\alpha_{k}\,\alpha_{-k}+\tfrac{1}{2})\,, \tag{13}\]
and the energy eigenvalues of the states (12) can be easily calculated using the commutation relation in (11) to be
\[H_{k}|m\rangle_{k}\ =\ k(m+\tfrac{1}{2})|m\rangle_{k}\,. \tag{14}\]
The thermal partition function of this harmonic oscillator is given by
\[\mathrm{Tr}_{\mathcal{H}_{k}}\ \mathrm{e}^{-\beta H_{k}}\ =\ \sum_{m=0}^{\infty} \mathrm{e}^{-\beta\mathrm{i}k(m+\frac{1}{2})}\ =\ \frac{\mathrm{e}^{-\beta k/2}}{1-\mathrm{e}^{-\beta k}}\,. \tag{15}\]
The full left-moving Hilbert space \(\mathcal{H}\) of the theory is the Fock space built out of these individual Hilbert spaces \(k\in\mathbb{Z}^{+}\), and the total left-moving Hamiltonian is given by \(H=\sum_{k=1}^{\infty}H_{k}\) where \(H_{k}\) is now interpreted to act as the identity operator on all the Hilbert subspaces \(k^{\prime}\neq k\). The thermal partition function on this Hilbert space is the product over all \(k>0\) of the thermal partition functions (15). The exponents of the numerators of (15) add up to give the energy eigenvalue of the vacuum state, which, according to the above discussion, is naively given by \(\sum_{k=1}^{\infty}\frac{k}{2}\). This sum is interpreted using the zeta-function regulator10 to be \(\zeta(-1)=-1/24\). Setting \(\beta=-2\pi\mathrm{i}\tau\), \(q=\mathrm{e}^{2\pi\mathrm{i}\tau}\), we obtain,
Footnote 10: Recall that \(\zeta(s)=\sum_{k=1}^{\infty}k^{-s}\) is defined for \(\mathrm{Re}(s)>1\), and can be analytically continued to a meromorphic function on the whole complex plane.
\[\begin{array}{rcl}\mathrm{Tr}_{\mathcal{H}}\,q^{H}&=&q^{-\frac{1}{24}}\prod _{k=1}^{\infty}\,\frac{1}{1-q^{k}}\ =\ \sum_{n=0}^{\infty}\,p(n)\,q^{n-\frac{1}{24}}\\ &=&q^{-\frac{1}{24}}(1+q+2q^{2}+3q^{3}+5q^{4}+\ldots)\,.\end{array} \tag{16}\]
The coefficient \(p(n)\) in the above expansion is the number of partitions of the integer \(n\), and can be expressed as the integral
\[p(n)\ =\ \oint_{C}\frac{dq}{q^{n+1}}\,\prod_{k=1}^{\infty}\,\frac{1}{1-q^{k}} \tag{17}\]
over a contour lying inside the unit disk and surrounding the origin. The asymptotic behavior of \(p(n)\) as \(n\to\infty\) is well-known, and can be calculated using the well-known relation of the above infinite product to the Dedekind \(\eta\)-function
\[\frac{1}{\eta(\tau)}\ =\ q^{-\frac{1}{24}}\prod_{k=1}^{\infty}\,\frac{1}{1-q^{k}}\,, \qquad q=\mathrm{e}^{2\pi\mathrm{i}\tau}\,. \tag{18}\]
As \(n\to\infty\), the integral (15) is dominated by the singular behavior of the integrand near the unit circle, which is governed by the amazing symmetry property obeyed by the \(\eta\)-function, which is expressed as the functional equation
\[\eta(-1/\tau)\ =\ \sqrt{-{\rm i}\tau}\,\eta(\tau)\,, \tag{17}\]
Using this, one obtains
\[\log p(n)\sim 2\pi\sqrt{n/6}+\ldots \tag{18}\]
The actual counting of supersymmetric microstates in string theory is more complicated than the above toy example, but retains many of its basic features. As mentioned above, we consider compactifications of ten-dimensional string theory on a six-manifold \({\mathcal{M}}_{6}\) to obtain a geometry of the form \({\mathbb{R}}^{1,3}\times{\mathcal{M}}_{6}\). There are, in fact, multiple versions of string theory, and we focus on one theory called Type IIB string theory. It is known that Type IIB string theory compactified on a Calabi-Yau 3-fold leads to a supersymmetric macroscopic theory in four dimensions described by General Relativity coupled to multiple gauge fields, scalar fields, and fermions.
This theory admits supersymmetric black hole solutions, labelled by their charges under all the gauge fields of the theory. The corresponding microscopic states involves excitations of string theory called branes, which are higher-dimensional generalizations of strings, wrapping cycles of the Calabi-Yau manifold. Counting the number of such microscopic states for a given vector of charges is a complicated problem, and in general one only has estimates for this number given in Maldacena, Strominger, and Witten (1997). However, it can be solved exactly when the Calabi-Yau is simple. Typically, in these cases, the generating functions of microstates consist of combinations of Dedekind \(\eta\)-functions and classical \(\theta\)-functions.
The simplest situation is when the Calabi-Yau manifold is \(T^{6}\). The resulting macroscopic theory is called \({\mathcal{N}}=8\) supergravity, which admits supersymmetric black hole solutions labelled by one positive integer \(N\) (cf. Fig. 1), whose Bekenstein-Hawking entropy is given by
\[S^{\rm class}_{\rm BH}(N)\ =\ \pi\sqrt{N}\,, \tag{19}\]
The corresponding microstates are described by left-moving oscillations of an effective string wrapping some cycles in \(T^{6}\). The single circle in the toy example is now replaced by multiple free bosons and also fermions, and the generating function of the microscopic number of states12 is given by
Footnote 12: As explained above, one really calculates the microscopic index. In this case, it is known that the microscopic index is related to the number of black hole microstates as \((-1)^{8+1}d_{\rm micro}(N)\), due to fermionic zero-energy modes that one has to factor out (Sen (2009a), Dabholkar, Gomes, Murthy, and Sen (2011)). The resulting generating functions \((-1)^{\nu}\theta_{a}(\tau)/\eta(\tau)^{6}\), \(a=1,2\) transform as a vector under \(SL(2,{\mathbb{Z}})\) modular transformations, and they can be assembled into a single Jacobi form (Eichler and Zagier (2013)).
\[\sum_{N=-1\atop N\in{\mathbb{Z}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\
Taking the \(SL(2,\mathbb{Z})\) matrix in (22) to be \(\left(\begin{smallmatrix}1&1\\ 0&1\end{smallmatrix}\right)\), we obtain the periodicity property \(f(\tau+1)=f(\tau)\) which, together with the holomorphy, leads to the Fourier expansion
\[f(\tau)\ =\ \sum_{n\in\mathbb{Z}}d(n)\,q^{n}\,,\qquad q\ =\ \mathrm{e}^{2\pi \mathrm{i}\tau}\,. \tag{23}\]
This expansion, called the \(q\)-expansion in this context, is at the heart of many of the relations of modular forms with other problems in mathematics and physics. In particular, the coefficients \(d(n)\) are often integers, and can sometimes be interpreted as (virtual) dimensions of certain Hilbert spaces. Indeed, in the context of our main interest here, the coefficients are supersymmetric indices corresponding to black hole microstates in string theory, as explained in the previous section.
When the sum in (23) is restricted to \(n\geq 0\), the corresponding functions are holomorphic on the upper half-plane, and are called holomorphic modular forms. Those with the stronger restriction \(n>0\) are called cusp forms. However, it can be proved that the growth of coefficients \(d(n)\) as \(n\to\infty\) of holomorphic modular forms is bounded by a polynomial function. The exponential growth of \(d(n)\) that is characteristic of black hole microstates appears in modular forms with negative powers of \(n\) in its \(q\)-expansion (such functions are called _weakly holomorphic_) in which case the function itself diverges as \(\tau\to\mathrm{i}\infty\). In order to obtain a controllable analytic theory of such functions, one restricts the sum in (23) in the negative direction to \(n\geq n_{0}\) for some fixed \(n_{0}<0\). This is indeed the case for the examples in the previous section, e.g. \(n_{0}=-1/24\) in (16)13.
Footnote 13: In fact, it is \(1/\eta(\tau)^{24}\) that is a modular form on \(SL(2,\mathbb{Z})\), and the transformation (22) of \(1/\eta(\tau)\) involves a \(24^{\mathrm{th}}\) root of unity.
### Asymptotic expansion of coefficients of modular forms
The leading asymptotic behavior of the coefficients of weakly holomorphic modular forms can be estimated quite easily, as seen in the examples of the previous section. In fact, the modular symmetry is much more powerful, and leads to an exact analytic formula for the coefficients of functions of the type (16) and (20), called the _Hardy-Ramanujan-Rademacher formula_ after Hardy and Ramanujan (1918), Rademacher (1937). The formula for the coefficients of (20) is:
\[d_{\mathrm{micro}}(N)\ =\ 2\pi\Big{(}\frac{\pi}{2}\Big{)}^{7/2}\ \sum_{c=1}^{\infty}c^{-9/2}\,K_{c}(N)\ \widetilde{T}_{\!\!\!\!/\,2}\Big{(}\frac{\pi\sqrt{N}}{c}\Big{)}\,. \tag{24}\]
Here \(\widetilde{I}_{\!\!\!\!/\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \
from quantum gravitational corrections to the Bekenstein-Hawking area formula for the thermodynamic entropy of the black hole.
Recalling that \(I_{p}(x)\sim\mathrm{e}^{x}\), \(x\to\infty\), it is clear from the form of the expansion (24) that the \(c>1\) terms are all exponentially suppressed compared to the \(c=1\) term. We already saw that the leading asymptotic growth of \(d_{\mathrm{micro}}(N)\), given in Equation (21), agrees precisely with the Bekenstein-Hawking entropy formula (19) for the BH entropy.
The starting point for the quantum corrections to this formula is the observation that the spacetime geometry has a special form near the horizon of supersymmetric black holes. In particular, the manifold is a product of AdS\({}_{2}\) (two-dimensional constant negative curvature spacetime) and a compact manifold (S\({}^{2}\) for the BH under consideration here). Recall that the equations of general relativity are a set of coupled non-linear second-order PDEs. Black hole solutions usually depend on the values of the various fields at the asymptotic boundary of spacetime as well as the conserved charges carried by the black hole (which play the role of integration constants). The near-horizon geometry of supersymmetric black holes, by contrast, is governed by the fixed points of a set of first-order differential equations, and is completely fixed by the charges of the black hole. This is the _black hole attractor mechanism_ of Ferrara, Kallosh, and Strominger (1995).
The leading-order quantum correction comes from summing the virtual loops of all the quantum excitations (graviton, photons,...) in the AdS\({}_{2}\times\) S\({}^{2}\) background, using a non-trivial adaptation of standard quantum field theory methods in Banerjee, Gupta, and Sen (2011). The result agrees precisely with the first non-trivial term in the asymptotic expansion of the Bessel function in (24) that gives
\[\log\,d_{\mathrm{micro}}(N)\ =\ \pi\sqrt{N}-2\log N+\mathrm{O}\!\left(1/\sqrt{N} \right). \tag{27}\]
The next corrections to \(d_{\mathrm{micro}}\) coming from (24) are in the form of negative powers of \(N\). The physical origin of these corrections are corrections to the local effective action of two-derivative supergravity (Lopes Cardoso, de Wit, and Mohaupt (1999)) arising from integrating out the massive modes of the string theory in the background of the black hole. They affect the BH entropy in two ways: firstly, in the presence of higher-derivative operators in the gravitational effective action the Bekenstein-Hawking formula needs to be replaced by the Wald entropy formula (Iyer and Wald (1994); Wald (1993)) and, secondly, the black hole solution itself gets corrected by the higher-order effects. We refer the reader to Sen (2008a) for a nice review of these ideas and calculations. The summary is that all such quantum-gravitational calculations in string theory agree with the corresponding microscopic results!
Continuing on to a better approximation to the integer degeneracy, we consider the first (\(c=1\)) term in the Rademacher expansion (24), i.e.,
\[d_{\mathrm{micro}}(N)\ =\ \frac{\pi^{9/2}}{8}\,\overline{I}_{7/2}\!\left(\pi \sqrt{N}\right)+\mathrm{O}\!\left(\mathrm{e}^{\sqrt{N}/2}\right). \tag{28}\]
This result is interpreted in gravity as the result of summing up the entire perturbation series for the quantum entropy of the \(\frac{1}{8}\)-BPS BH. Such an interpretation is made possible in the framework of the quantum black hole entropy expressed as a functional integral over the fluctuating fields of supergravity around the near-horizon AdS\({}_{2}\times\) S\({}^{2}\) background (Sen (2008b, 2009b)). The calculation of this functional integral to all orders in perturbation theory is performed by using the technique of supersymmetric localization applied to the gravitational path integral (Dabholkar, Gomes, and Murthy (2011, 2013), de Wit, Murthy, and Reys (2018), Jeon and Murthy (2019)), which reduces the integral over the infinite-dimensional field space to (in this case) a one-dimensional integral which agrees precisely with the representation (28) of the \(I\)-Bessel function.
Now that the asymptotic series coming from perturbation theory has been summed up into the Bessel function, we are now in a position to rigorously discuss the exponentially suppressed corrections coming from the terms \(c=2,3,\ldots\) in (24). It was shown in Dabholkar, Gomes, and Murthy (2015) that the \(c^{\mathrm{th}}\) term has the same form as the contribution of a \(\mathbb{Z}/c\mathbb{Z}\) orbifold in string theory which contribute to the functional integral around asymptotic AdS\({}_{2}\times\) S\({}^{2}\). The Kloosterman sum \(K_{c}(N)\) is precisely reproduced by the functional integral of a certain Chern-Simons theory which emerges in this background. Finally, Iliesiu et al. (2022) applied the formalism of supersymmetric localization to the fluctuations of the gravitational fields in the background of the \(\mathbb{Z}_{c}\) orbifold. It is important in this calculation to take into account the subtle effects of a certain mode of the gravitational called the Schwarzian mode, which gives a contribution \(1/c\) to the functional integral (and therefore does not contribute to the \(c=1\) answer, even though it is present). Upon putting everything together, one obtains precisely the Bessel function \(\overline{I}_{7/2}(\pi\sqrt{N/c})\). In this manner, the physical quantum-gravitational calculation reproduces, term-by-term, the Hardy-Ramanujan-Rademacher expansion.
## 3 Mock modular forms
The results described in the previous section show that the modular symmetry of the microscopic partition function acts as a powerful guiding principle for the quantum-gravitational corrections to the macroscopic black hole. This power is based on the agreement of the microscopic and macroscopic pictures, which in turn relies on the invariance of the supersymmetric index under a change of parameters of the theory as explained earlier. Thus we have a perfect one-to-one correspondence between the microstates of the strings and branes, and those of the black hole.
However, there is a subtlety in the statement of invariance of the index. Recall that the main argument underlying its invariance is that a change of parameters shifts the energy levels of the supersymmetric theory, but states enter or exit the BPS spectrum in boson-fermion pairs, thus not affecting the index \(n^{B}-n^{F}\). of the parameter space of the theory called _walls_, upon crossing which new BPS solutions enter the spectrum of the theory, which could be purely bosonic or fermionic. Typically, the corresponding field configurations are normalizable states in the Hilbert space on one side of the wall and become non-normalizable on the other side. The index therefore jumps at these walls and this phenomenon is called _wall-crossing_.
The wall-crossing phenomenon manifests itself in supergravity as multi-black hole bound-state configurations carrying the same total charges as the original black hole (Fig. 2), that appear on moving across certain co-dimension-one surfaces (walls) in the gravitational parameter space (Denef and Moore (2011); Sen (2008a)). Our desired single-black-hole states are therefore only part of the full partition function, and therefore are not, a priori, expected to preserve the symmetries of the full theory. For the highly symmetric situation of black holes in \(T^{6}\) compactifications of string theory discussed earlier, it was shown in Dabholkar, Guica, Murthy, and Nampuri (2010) that BH bound state solutions do not contribute to the relevant supersymmetric index, but in general the wall-crossing phenomenon breaks the modular symmetry of the theory. This seems like a potential disaster for the guiding principle!
The natural question is: can one calculate the supersymmetric index corresponding to the single BH? Does that have any interesting modular-like property? In general, this is a very difficult question to answer, but in the context of the next simplest theory called \(\mathcal{N}=4\) supergravity (coming from string compactifications on \(K3\times T^{2}\)), one has a complete solution to this problem (see the review Sen (2008a)). In this case one can show that there is only a specific type of 2-center BH bound state that contributes to the index (Dabholkar et al. (2010)). In fact, at any point in the parameter space of string theory, the total index can be written as
\[d_{\text{micro}}(n)\ =\ d_{\text{1-BH}}(n)+d_{\text{2-BH}}(n) \tag{29}\]
where we can explicitly calculate the three components of this equation separately.
The remarkable fact found in Dabholkar, Murthy, and Zagier (2012) is that \(d_{\text{1-BH}}(n)\) is the Fourier coefficient of a _mock modular form_. What this means is that one can add a specific correction term (called the _shadow_) to recover modular symmetry, analogous to quantum anomalies in physics. Examples of such functions (without a definition!) were first discovered by Ramanujan in his famous last letter to Hardy, and S. Zwegers in his PhD thesis (Zwegers (2008)) gave a definition and explained the structure hidden in Ramanujan's examples (see Zagier (2009), Bringmann, Folsom, Ono, and Rolen (2017)).
The relations of black hole partition functions and mock modular forms were formalized and generalized in Dabholkar et al. (2012) into a set of theorems that apply to a very large class of functions called meromorphic Jacobi forms (one sub-family of which is given by the black hole partition functions of \(\mathcal{N}=4\) string theory). These theorems have been used to provide exact analytic formulas for the black hole degeneracies extending the Hardy-Ramanujan-Rademacher expansion (Bringmann and Mahlburg (2011), Murthy and Reys (2016), Ferrari and Reys (2017), Chowdhury, Kidambi, Murthy, Reys, and Wrase (2020), Lopes Cardoso, Nampuri, and Rossello (2021), Cardoso, Nampuri, and Rossello (2021)), and to prove and analyze conjectures about the positivity of black hole microstates (Bringmann and Murthy (2013), Chattopadhyaya and David (2019), Chattopadhyaya and David (2021)).
These theorems also led to the construction of an infinite number of mock modular forms in Dabholkar et al. (2012), generalizing the known special examples developed in mathematics starting from the \(q\)-series of Ramanujan. The above developments also played a role in the remarkable discoveries in Cheng, Duncan, and Harvey (2014) of _Umbral moonshine_ relations between mock modular forms and discrete groups.
## Conclusion, and broader relations of modular forms and string theory
We have described one small corner of the relations of modular forms and modular symmetry to the physics of black holes in string theory, and discussed how they (i) provide a guiding principle for the quantum-gravitational effects in
Figure 2: An example of wall-crossing involving 2-centered BH bound states
Figure 1: On the left side of the wall, the only inside configuration is a. On the right side, in addition to A. a new stable configuration is with the same total charge (\(q\)) apparent.
black holes, (ii) are a powerful tool for calculating black hole entropy, and (iii) have fed back and led to new developments in mathematics.
There are many other very interesting relations of modular forms and string theory (and, more generally, physics) that we could not cover in this brief article. In particular, modular forms have made an appearance in string theory from the very early days through the world-sheet torus amplitudes of free strings, and to estimate the Hagedorn growth of states of strings (see Green et al. (1988)). Even before that, the simplest manifestation of modular invariance had appeared in the analysis of 2d CFT in Cardy (1986) and led to the famous Cardy formula.
In the 35 years since then, modular, and more generally automorphic forms, have made multiple appearances in string theory (see D'Hoker and Kaidi (2022) for a recent summary). They have been used as a powerful tool e.g. in the description of string worldsheet amplitudes (see Fleig, Kleinschmidt, and Persson (2018)). The corresponding symmetry has also played the role of a fundamental principle as dualities e.g. the \(SL(2,\mathbb{Z})\) duality of supersymmetric Yang-Mills theory (generalizing Montonen-Olive duality) in Vafa and Witten (1994), and of type-IIB string theory in Sen (1994), which has given rise to strong constraints on the structure of the effective action of string and M-theory (see Green and Vanhove (2006), D'Hoker, Green, and Pioline (2019), Pioline (2015)).
We have already seen how black hole microstate degeneracies are captured precisely by a modular form in \(T^{6}\) compactifications of string theory (\(\mathcal{N}=8\) supergravity) and by mock modular forms in \(K3\times T^{2}\) compactifications (\(\mathcal{N}=4\) supergravity). Relations between black hole partition functions in more general Calabi-Yau compactifications in string theory (\(\mathcal{N}=2\) supergravity) and functions called "indefinite theta series" (which are also closely related to mock modular forms) have been found in Manschot (2010), Manschot, Pioline, and Sen (2011), Alexandrov, Manschot, and Pioline (2013), Alexandrov and Pioline (2019).
Mock modular forms have also made other appearances in conformal field theory and string theory, seemingly unrelated to black holes. This includes the early work on \(\mathcal{N}=4\) superconformal characters (Eguchi and Taormina (1988)), Mathieu moonshine (Eguchi, Ooguri, and Tachikawa (2011)) and Umbral moonshine and Niemeier lattices (Cheng, Duncan, and Harvey (2013); Cheng et al. (2014)), and relations to K3 surfaces (Gaberdiel, Hohenegger, and Volpato (2012), Taormina and Wendland (2015)). Another important physical situation where mock modular forms have made an appearance is in the context of the completion of the partition function of Vafa-Witten theory (Dabholkar, Putrov, and Witten (2020)), and more generally four-dimensional \(\mathcal{N}=2\) gauge theories (Korpas, Manschot, Moore, and Nidaiev (2019), Manschot and Moore (2021)).
Physically, the appearance of mock modular behavior is related to the non-compactness of field space, and consequent holomorphic anomalies. This has been studied in the context of supersymmetric quantum mechanics with non-compact target (Murthy and Pioline (2018), Dabholkar, Jain, and Rudra (2019)), non-compact sigma models and CFT (Eguchi and Sugawara (2011), Troost (2010),Ashok and Troost (2011), Murthy (2014), Ashok, Doroud, and Troost (2014), Harvey, Lee, and Murthy (2015), Gupta and Murthy (2017), Nazaroglu (2018), Kumar Gupta, Murthy, and Nazaroglu (2019)), and string theory (Harvey and Murthy (2014), Cheng and Harrison (2015), Harvey, Murthy, and Nazaroglu (2015)), and AdS/CFT (Manschot and Moore (2010)).
More broadly, there are other extremely interesting topics at the modular-forms-physics interface, which are slowly being uncovered. These include the discussion of black holes and class groups (Benjamin, Kachru, Ono, and Rolen (2018)), the relation of quantum modular forms and 3-manifold invariants (Gukov, Pei, Putrov, and Vafa (2020), Cheng, Chun, Ferrari, Gukov, and Harrison (2019), Garoufalidis and Zagier (2021)), modularity at special points of Calabi-Yau moduli spaces (Moore (1998), Candelas, de la Ossa, Elmi, and Van Straten (2020)), and probably many other treasures that are waiting to be discovered!
|
2310.04837 | Federated Self-Supervised Learning of Monocular Depth Estimators for
Autonomous Vehicles | Image-based depth estimation has gained significant attention in recent
research on computer vision for autonomous vehicles in intelligent
transportation systems. This focus stems from its cost-effectiveness and wide
range of potential applications. Unlike binocular depth estimation methods that
require two fixed cameras, monocular depth estimation methods only rely on a
single camera, making them highly versatile. While state-of-the-art approaches
for this task leverage self-supervised learning of deep neural networks in
conjunction with tasks like pose estimation and semantic segmentation, none of
them have explored the combination of federated learning and self-supervision
to train models using unlabeled and private data captured by autonomous
vehicles. The utilization of federated learning offers notable benefits,
including enhanced privacy protection, reduced network consumption, and
improved resilience to connectivity issues. To address this gap, we propose
FedSCDepth, a novel method that combines federated learning and deep
self-supervision to enable the learning of monocular depth estimators with
comparable effectiveness and superior efficiency compared to the current
state-of-the-art methods. Our evaluation experiments conducted on Eigen's Split
of the KITTI dataset demonstrate that our proposed method achieves near
state-of-the-art performance, with a test loss below 0.13 and requiring, on
average, only 1.5k training steps and up to 0.415 GB of weight data transfer
per autonomous vehicle on each round. | Elton F. de S. Soares, Carlos Alberto V. Campos | 2023-10-07T14:54:02Z | http://arxiv.org/abs/2310.04837v1 | # Federated Self-Supervised Learning of Monocular Depth Estimators for Autonomous Vehicles
###### Abstract
Image-based depth estimation has gained significant attention in recent research on computer vision for autonomous vehicles in intelligent transportation systems. This focus stems from its cost-effectiveness and wide range of potential applications. Unlike binocular depth estimation methods that require two fixed cameras, monocular depth estimation methods only rely on a single camera, making them highly versatile. While state-of-the-art approaches for this task leverage self-supervised learning of deep neural networks in conjunction with tasks like pose estimation and semantic segmentation, none of them have explored the combination of federated learning and self-supervision to train models using unlabeled and private data captured by autonomous vehicles. The utilization of federated learning offers notable benefits, including enhanced privacy protection, reduced network consumption, and improved resilience to connectivity issues. To address this gap, we propose FedSCDepth, a novel method that combines federated learning and deep self-supervision to enable the learning of monocular depth estimators with comparable effectiveness and superior efficiency compared to the current state-of-the-art methods. Our evaluation experiments conducted on Eigen's Split of the KITTI dataset demonstrate that our proposed method achieves near state-of-the-art performance, with a test loss below 0.13 and requiring, on average, only 1.5k training steps and up to 0.415 GB of weight data transfer per autonomous vehicle on each round.
Monocular Depth Estimation Self-Supervised Learning Federated Learning
## 1 Introduction
Because of the adverse impact of a poorly managed mobility system on the quality of life, Smart Mobility is often presented as one of the main options to seek more sustainable transport systems [1]. It could also be seen as a set of coordinated actions aimed at improving cities' efficiency, effectiveness, and environmental sustainability. One of these actions is the development of Intelligent Transportation Systems (ITS), which has been occurring since the beginning of the 1970s and can be seen as the integration of advanced technologies, which include electronic sensor technologies, data transmission technologies, and intelligent control technologies, into the transportation systems [2]. Nonetheless, the primary purpose of ITS is to provide better services for drivers and riders [3].
In the last few years, a large amount of research effort has been made to apply Big Data Analytics and other advanced Artificial Intelligence (AI) techniques to improve ITS [4]. In contrast, a smaller amount has been focused on developing intelligent agents to support ITS. The primary efforts made in that sense are those focused on developing Autonomous Vehicles (AVs), which are now one of the most prominent topics in the ITS initiative [5]. Research on AVs has also applied advanced AI techniques to tackle its most critical tasks, such as Computer Vision (CV). Scene Depth Estimation (DE) plays an essential role in CV as it enables the perception and understanding of three-dimensional scenes [6]. Lasers, structured light, and other reflections on the object surface have traditionally been applied in active DE methods [6]. To enable these approaches, elevated costs of human labor and computational resources are usually required for obtaining dense and accurate depth maps [7].
Thus, image-based DE has become one of the main focuses of recent research in CV for AVs due to its lower deployment cost and a wider range of application scenarios [6]. Image-based DE methods traditionally calculate the disparity between two 2D images a binocular camera takes to obtain a depth map [8]. However, binocular DE methods require at least two fixed cameras, and it is difficult to capture enough features in the image to match when the scene has less or no texture [9].
Therefore, research began focusing on Monocular DE (MDE) [10]. Since MDE uses a single camera to obtain an image or video sequence, which does not require additional specialized equipment, it has an even wider applicability [6]. Nonetheless, as monocular images lack a reliable stereoscopic visual relationship, the regression of depth in 3D space from it is an ill-posed problem [6]. More specifically, monocular images adopt a 2D form to reflect the 3D world. However, the depth of the scene is not captured by the imaging process, making it impossible to judge the size and distance of an object in the scene or whether it is occluded by another object [6].
Thus, we need to estimate the depth of each pixel from the monocular image. Based on the pixel depth map, we can judge the size and distance of the objects contained in that scene. When the estimated depth map can accurately reflect the 3D structure of the scene, we can consider the estimation method used to be effective [6]. Several State-of-The-Art (SoTA) solutions for MDE make use of Self-Supervised Learning (SSL) of Deep Neural Networks (DNNs) for this task in combination with other CV tasks, such as ego-motion/pose estimation (PE) and semantic segmentation (SS) [11, 12]. Nonetheless, to the best of our knowledge, none of the SoTA solutions for MDE combines the use of Federated Learning (FL) [13] with SSL to learn MDE models from unlabeled and private data captured by AVs.
The use of FL has been explored in many recent works on ITS and AVs [14, 15]. The main advantages of FL [16] include: (1) increased privacy protection, as there is no longer the need to share the raw data collected by each vehicle with a central server or other vehicles; (2) reduced network consumption, as the size of the model updates that need to be shared in the FL process is significantly smaller than the raw datasets; (3) increased resiliency to connectivity loss when compared to the centralized approach; and (4) increased robustness to Non-IID (independent and identically distributed) data [17]. Thus, we hypothesize that combining FL and SSL can enable learning models with comparable effectiveness and superior efficiency to the SoTA methods in MDE for AVs. Also, several works have explored the combination of SSL and FL on CV tasks with promising results [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. Nonetheless, none of them were evaluated on datasets of images collected by vehicles, such as the SoTA benchmarks for MDE models [33].
Thus, this work's main objective is to develop a solution for the problem of MDE for AVs. This solution must be able to generate depth maps of images captured by monocular cameras in moving AVs with high effectiveness and efficiency. MDE effectiveness is essential for scene understanding by AVs, as the depth information will help identify the distance of obstacles as well as estimate the speed and acceleration of other moving vehicles [34]. Meanwhile, high efficiency is another critical requirement of the ideal solution because it cannot consume a high proportion of the computational resources available on the vehicle, as these are already disputed by the other tasks the vehicle must do in real-time. Besides, the ITS network infrastructure might be unable to support the sharing of all the training data between AVs and/or a central server; therefore, mitigating the bandwidth consumption can increase the ITS infrastructure's scalability.
To the best of our knowledge, this is the first work to present and discuss empirical evidence of the applicability of Self-Supervised Federated Learning (SSFL) to MDE for AVs.
This work tackles the following Research Questions (RQs):
* Is the _efficiency_ of the SSL of MDE models _higher_ when applying FL (with IID and Non-IID data) or a centralized approach 1, in the AVs use case? Footnote 1: Training a model on a central server with data collected by all vehicles.
* Is the _effectiveness_ of SSL MDE models _equivalent_ when applying FL (with IID and Non-IID data) instead of a centralized approach in the AVs use case?
To answer the RQs, we provide the following contributions:
1. We propose the FedSCDepth method to solve the problem of MDE in AVs using SSFL for collaboratively training a depth estimator using unlabeled data captured by vehicles with high effectiveness, efficiency, and privacy;
2. We present an empirical evaluation of a prototype of the proposed method using a real dataset for MDE in AVs.
3. We show that FedSCDepth reaches comparable performance with the SoTA on MDE, with lower computation and communication costs per vehicle per round than centralized training, using both IID and Non-IID data;
Section 2 discusses the related work. Section 3 presents the proposed method, and Section 4 details the evaluation experiments. Section 5 discusses the results obtained, and Section 6 presents conclusions and future work.
## 2 Related Work
In this section, we present the theoretical background of the methods proposed for solving the MDE problem, the methods that leveraged FL in the ITS and AV domains, and the recent works that combined SSL and FL for CV tasks.
### Evolution of Monocular Depth Estimation Methods
During the early phase of DE research, depth maps were primarily estimated using various depth cues such as vanishing points, focus and defocus, and shadows. However, most of these methods were limited to constrained scenes [6]. In the subsequent Machine Learning (ML) period of DE research, researchers proposed several handcrafted features and probabilistic graph models. These models were utilized for MDE using parametric and non-parametric learning within the ML framework [6]. The emergence of Deep Learning (DL) marked a new period in DE research in which MDE became a task of inferring depth maps from single 2D color images using DNNs. Eigen et al. [35] pioneered this approach by introducing a coarse-to-fine framework.
DL techniques for MDE commonly employ encoder-decoder networks to generate depth maps from RGB images. The encoder captures depth features using convolution and pooling layers, while the decoder estimates pixel-level depth maps using deconvolution layers. Skip connections preserve features at different scales. Training involves minimizing a depth loss function until a predefined threshold is reached [6]. Gradient descent variants are commonly used, but their quality depends on hyperparameters and network initialization. Image resizing is often necessary during initialization.
Supervised and semi-supervised approaches to MDE will typically require some amount of labeled data [6], which might not represent training truly general models for DE in the heterogeneous domains where AVs will be deployed. To address this problem, several unsupervised methods have been proposed for learning visual features from large datasets of unlabeled images or videos without relying on human annotations [11]. These methods, often called self-supervised, utilize pseudo-labels generated from raw data. Typically, they employ one or more pretext tasks to learn from unlabeled data. By optimizing the objective functions of pretext tasks, DNNs acquire higher-order representational features, enabling them to predict desired visual features such as image depth [11].
### Self-Supervised Learning for Monocular Depth Estimation
SSL has introduced various pretext tasks, including colorizing grayscale images, image inpainting, and image jigsaw puzzles [6]. These pretext tasks have been explored in conjunction with other training paradigms. Similarly, in addition to single-task learning, which involves training a single network for DE, combining DE with other tasks such as PE, SS, and optical flow prediction can lead to the acquisition of shared representations beneficial for multiple related tasks [6, 11].
A notable series of works that incrementally enhanced SSL for MDE was the SC-Depth series methods [36, 37, 38]. In SC-DepthV1 [36], authors focused on the scale inconsistency issue of preexisting solutions and proposed a method to enable scale-consistent DE over video. In their following work, SC-DepthV2 [37], they focused on the rotation issue in videos that are captured by handheld cameras and proposed an auto-rectify network to handle large rotations. Finally, in SC-DepthV3 [38], they focused on the issue of dynamic objects and blurred object boundaries. Provided that, they proposed a method that leverages an externally pretrained MDE model for generating single-image depth prior, namely pseudo-depth, based on which novel losses are computed to boost SSL. As a result, the models trained through this method can predict sharp and accurate depth maps, even when trained from monocular videos of highly dynamic scenes.
In the present work, we use SC-DepthV3 [38] as our baseline method for centralized SSL of MDE models since it presented great results on two popular benchmarking datasets for MDE in AVs: KITTI [39] and DDAD [40]. Additionally, the well-documented source code2 provided by its authors enabled us to quickly reproduce their experiments and integrate them within our FL solution.Besides SC-DepthV3, we will also compare our results with DepthFormer [41] and MonoFormer [42], two recent transformer-based method that, to the best of our knowledge, currently hold the best results on KITTI Eigen's Split among the SSL-based methods. DepthFormer's main characteristic is that it performs multi-frame SSL-based MDE by improving feature matching across images during cost volume generation [41], while MonoFormer uses a CNN-Transformer hybrid network to increase shape bias by employing Transformers while compensating for the weak locality bias of Transformers by adaptively fusing multi-level representations [42].
Footnote 2: [https://github.com/JiawangBian/sc_depth_pl](https://github.com/JiawangBian/sc_depth_pl)
### Federated Learning (FL)
Big Data-based ML systems usually collect, clean, and aggregate data into one or multiple central servers deployed in the cloud for model training [15]. However, privacy has become a critical aspect of deploying these platforms in recent years. The data used for training typically belongs to different parties that might require different policies and privacy restrictions for sharing data with the platform. In addition, while cloud servers provide highly scalable computational power and storage, transferring data from distributed agents to the cloud might demand high bandwidth from the network infrastructure and incur high communication delays [15].
To tackle these issues, Google proposed FL to allow joint model training by multiple parties [13]. In their approach, the model is assumed to be a neural network whose parameter updates can be shared with a central server without transferring the raw data through the network [15]. Usually, the central server, also called Aggregator Agent (AA) [14], orchestrates the training process and determines how often and how many distributed agents, also called Federated Nodes (FN) [14], will contribute to the global model update.
### Federated Learning with Non-IID data
The problem of Non-IID data (or heterogeneous data) exists in many ML applications and distributed learning methods [17]. ML models are usually trained under the assumption that the training data is IID [17]. Thus, when the data of the FL clients or participants is Non-IID with regards to feature values, categorical labels, or even just the quantity of samples, the trained models' performance might be reduced [17].
To comprehend the challenge posed by Non-IID data to FL, we need to consider the SGD algorithm. Many DNN training algorithms depend largely on SGD for optimization [17]. SGD updates the gradient of each sample every time [17]. Thus, the SGD algorithm converges faster to a local minimum, has a faster update speed, and can be seamlessly applied to FL [17].
In Google's seminal work [13], its authors claimed that FedAvg could make FL more robust to Non-IID data, which was put in check by subsequent research that presented evidence that, in some Non-IID data scenarios, FedAvg might be unstable or even divergent [17]. Nonetheless, FedAvg is still regarded as a baseline aggregation algorithm for FL, with good results on recent Non-IID data experiments [22].
### Self-Supervised Federated Learning
\begin{table}
\begin{tabular}{|c|c|c|c|c|l|} \hline
**Ref.** & **AV** & **CV** & **IID** & **NIID** & **Datasets** \\ \hline [43] & ✗ & ✗ & ✓ & ✗ & Sleep-EDF, HHAR, MobiAct, WiFi-CSI, WESAD \\ \hline [44] & ✗ & ✗ & ✓ & ✗ & HHAR, MobiAct, HAPT \\ \hline [45] & ✗ & ✗ & ✗ & ✓ & Custom \\ \hline [18, 20] & ✗ & ✓ & ✓ & ✓ & CIFAR, Mini-ImageNet \\ \hline [19, 21, 26] & ✗ & ✓ & ✓ & ✓ & CIFAR \\ \hline [22] & ✗ & ✓ & ✓ & ✓ & ImageNet, CIFAR, MS-COCO, Amazon \\ \hline [23] & ✗ & ✓ & ✓ & ✓ & CIFAR, Tiny-ImageNet, LEAF \\ \hline [24] & ✗ & ✓ & ✗ & ✓ & CIFAR, SVHN \\ \hline [25] & ✗ & ✓ & ✓ & ✓ & CIFAR, Fashion-MNIST \\ \hline [27] & ✗ & ✓ & ✓ & ✓ & Retina, CIFAR, CelebA \\ \hline [28] & ✗ & ✓ & ✗ & ✓ & CIFAR, Tiny-ImageNet \\ \hline [29] & ✗ & ✓ & ✗ & ✓ & TCIA PET-CT, MICCAI 2015, CTI ICH D\&S \\ \hline [30] & ✗ & ✓ & ✓ & ✓ & MNIST, CIFAR \\ \hline [31] & ✗ & ✓ & ✓ & ✓ & (Mini-)ImageNet, CIFAR, Mini-INAT2021 \\ \hline [32] & ✗ & ✓ & ✓ & ✓ & CIFAR, SVHN, STL-10, COVID-19, Mini-ImageNet \\ \hline
**Ours** & ✓ & ✓ & ✓ & ✓ & KITTI \\ \hline \end{tabular}
\end{table}
Table 1: Characterization of SSFL works with regards to their applicability to AVs, evaluation on CV tasks, experimentation with IID and NIID, and Datasets used.
Several recent works have explored the combination of SSL and FL with promising results. In Table 1, we characterize those works concerning key aspects, including their applicability for AV use cases, based on the datasets in which they were evaluated. Although most of them tackled CV tasks [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32], none of them were evaluated on vehicular datasets, such as the SoTA MDE benchmarks. Thus, although the approaches proposed by those works could be adapted to AV use cases, none of them provided empirical evidence that SSFL could be adopted successfully in AV use cases, which is precisely the gap we intend to fill in the literature.
Regarding the CV tasks for which SSFL was used, most works focused on image classification tasks. The most frequent datasets were variations of CIFAR [46] and ImageNet [47]. Also, most works were evaluated with IID and Non-IID data. Non-IID data is usually generated synthetically based on the number of images containing a given object class. That is another aspect in which the present work differentiates itself since we preserve the natural unbalance of samples inherent to the data collection instead of synthetically generating one based on some assumed distribution skew [22].
## 3 Proposed Method
In this section, we detail the proposed method, namely FedSCDepth, which combines an SSL-based MDE component and an FL component, presented in the following subsections.
### Self-Supervised Monocular Depth Estimation
In this section, we describe the MDE model and the formalization of frame warping and self-supervision losses.
#### 3.1.1 Model Architecture
As in [36, 37, 38, 48], the core of the model architecture is composed of an MDE network (DepthNet) and a PE network (PoseNet).Both the DepthNet and PoseNet used ResNet18 [49] as their backbone.A fully convolutional U-Net architecture is used for DepthNet [50] with a DispNet [51] as the decoder. The activations are ELU nonlinearities at every layer except the output, where sigmoids are used. The sigmoid output \(\sigma\) is converted to depth with \(D=\frac{1}{(a\sigma+b)}\), where \(a\) and \(b\) are chosen to constrain \(D\) between \(0.1\) and \(100\) units [48]. The PoseNet is a ResNet18 [52] modified to accept a pair of color images (or six channels) as input and to predict a single 6-DoF (Degrees of Freedom) relative pose [38, 53]. Also, as proposed in SC-DepthV3 [38] we leverage a pre-trained MDE network (PseudolDepthNet) to generate pseudo-depth. During the training of the DepthNet and PoseNet, the PseudolDepthNet generates a single-image depth prior, which is used to boost SSL.
Figure 1: DepthNet architecture. Illustration adapted from [50] and [54].
Figure 2: PoseNet architecture. Illustration adapted from [55] and [56].
An overview of the DepthNet and PoseNet architectures and their combination with PseudoDepthNet in the SSL component is presented in Fig. 1, Fig. 2, and Fig. 3, respectively.
#### 3.1.2 Frame Warping
Given a sequence of image frames captured by a moving monocular camera, the reconstruction \(I^{\prime}_{s\to t}\) of a target frame \(I_{t}\) at time \(t\) can be obtained from a source frame \(I_{s}\) at time \(s\) by performing a bi-linear interpolation over the reprojected frame coordinates. This interpolation, also referred to as warping flow (\(W\)), can be formalized as [57]:
\[W=I^{\prime}_{s\to t}(p_{t})=I_{s}(\hat{p}_{s}) \tag{1}\]
where \(\hat{p}_{s}\) is the reprojection of point \(p_{t}\) into frame \(I_{s}\). To obtain the mapping from \(p_{t}\) to \(\hat{p}_{s}\), \(p_{t}\) needs to be back-projected into 3D point \(X\) using the camera's intrinsic matrix \(K\) and the depth map \(D_{t}\) corresponding to \(I_{t}\). Then \(X\) is transformed to account for camera movement \(C_{t\to s}\) and projected onto the image plain [57]. This transformation is formalized as:
\[\hat{p}_{s}\sim KC_{t\to s}\underbrace{D_{t}(p_{t})K^{-1}}_{X}. \tag{2}\]
#### 3.1.3 Photometric Loss
For a consecutive pair of images (\(I_{a}\), \(I_{b}\)) randomly sampled from a sequence of monocular images, their depths (\(D_{a}\), \(D_{b}\)) and their 6-DoF camera pose (\(P_{ab}\)) are predicted by forwarding the DepthNet and PoseNet, respectively [38]. Provided that, the warping flow (\(W_{ab}\)) between \(I_{a}\) and \(I_{b}\) can be generated using \(D_{a}\), \(D_{b}\), and \(P_{a}b\), and a synthetic reconstruction of \(I_{a}\) (\(I^{\prime}_{a}\)) can be generated using \(W_{ab}\) and \(I_{b}\) via bi-linear interpolation [51]. Thus, the photometric loss (\(L_{p}\)) between \(I_{a}\) and \(I^{\prime}_{a}\) can be used as a self-supervision signal for both networks [36]. Formally,
\[L_{p}=\frac{1}{|V|}\sum_{p\in V}||I_{a}(p)-I^{\prime}_{a}(p)||_{1}, \tag{3}\]
where \(V\) corresponds to the valid points that are successfully projected from \(I_{a}\) to the image plane of \(I_{b}\), and \(|V|\) stands for the number of points in \(V\)[36]. \(L_{1}\) loss is used to reduce the impact of outliers, nonetheless, as it is not invariant to illumination changes, an additional image dissimilarity loss (SSIM [58]) is used, as it normalizes the pixel
Figure 3: SSL component overview. Given a training sample (_i.e._, image pair \(I_{a}\) and \(I_{b}\)) the combined self-supervision loss (\(L_{self}\)) is computed. Meanwhile, a pseudo-depth map (\(PD_{a}\)) is generated using PseudoDepthNet, while depth maps (\(D_{a}\) and \(D_{b}\)) are produced by DepthNet, and PoseNet outputs the pose estimate (\(P_{a}b\)). \(PD_{a}\) and \(D_{a}\) are also fed to the Dynamic Region-Refinement (DRR) and Local Structure Refinement (LSR) modules.
illumination [36]. The modified \(L_{p}\) is formally defined as,
\[L_{p}=\frac{1}{|V|}\sum_{p\in V}(\lambda_{i}||I_{a}(p)-I^{\prime}_{a}(p)||_{1}+ \lambda_{s}\frac{1-SSIM_{aa^{\prime}}(p)}{2}), \tag{4}\]
where \(SSIM_{aa^{\prime}}\) is the element-wise similarity between \(I_{a}\) and \(I^{\prime}_{a}\) by the SSIM function [58]. \(\lambda_{i}=0.15,\lambda_{s}=0.85\)[48].
#### 3.1.4 Mask-Weighted Photometric Loss
To mitigate the adverse impact of moving objects and occlusions, a weight mask \(M=1-D_{diff}\) is used to assign low weights to inconsistent pixels and high weights to consistent pixels [36]. Thus, a mask-weighted photometric loss (\(L_{p}^{M}\)) can be formalized as,
\[L_{p}^{M}=\frac{1}{|V|}\sum_{p\in V}(M(p)\cdot L_{p}(p)). \tag{5}\]
By replacing \(L_{p}\) with \(L_{p}^{M}\), the gradients of inaccurately predicted regions have a lower impact on back-propagation [36].
#### 3.1.5 Combined Self-Supervision Loss Function
As in [38], the edge-aware smoothness loss (\(L_{s}\)) [59] is used to regularize the estimated depth maps since \(L_{p}\) is neither very informative in low-texture images nor in homogeneous regions. Also, to enforce that \(D_{a}\) and \(D_{b}\) conform to the same 3D scene structure, another loss was introduced in [36], based on a depth inconsistency map. In addition, to mitigate the impact of moving objects and occlusions, a weight mask is used to assign low weights to inconsistent pixels and high weights to consistent ones [36]. Thus, by replacing \(L_{p}\) with a mask-weighted photometric loss (\(L_{p}^{M}\)), the gradients of inaccurately predicted regions have less impact in back-propagation [36].
Finally, as in [38] the signals produced by the PseudoDetphNet are used to compute additional losses that help regularize the SSL: the Confident Depth Ranking Loss (\(L_{cdr}\)) and the normal matching loss (\(L_{n}\)) that replaces \(L_{s}\)[38]. Also, the edge-aware relative normal loss (\(L_{ern}\)) helps constrain the relative normal angles of sampled point pairs to be consistent with pseudo-depth [38]. Thus, by combining these losses, a robust self-supervision signal is obtained. Formally,
\[L_{Self}=\alpha L_{p}^{M}+\beta L_{g}+\gamma L_{n}+\delta L_{cdr}+\epsilon L _{ern}. \tag{6}\]
As in [38], \(\alpha=1\), \(\beta=0.5\), \(\gamma=0.1\). Auto-masking and per-pixel minimum reprojection loss are used to filter stationary and non-best points during training [53] and \(\gamma=\bar{\delta}=\epsilon\)[38].
### Federated Learning
The main goal of FL is to learn a global model from highly distributed and heterogeneous data by aggregating locally trained models on remote devices [43], such as AVs. Considering that our MDE model (DepthNet) is represented as \(\varepsilon_{D}^{\theta}(I)=D\), our FL goal can be formally defined as:
\[\underset{\theta}{min}_{D}^{\theta},\text{ where }\varepsilon_{D}^{\theta}:= \sum_{c}^{C}\frac{m_{c}}{m}\varepsilon_{D_{c}}^{\theta}. \tag{7}\]
where \(C\) represents the number of participating client devices (participants) in an FL round, \(m_{c}\) is the total number of instances available for client \(c\) with \(m=\sum_{c}m_{c}\). Lastly, \(\varepsilon_{D_{c}}^{\theta}\) denotes the local MDE model parameterized with weights \(\theta\). To produce a global model, FedAvg [13] is applied to accumulate client updates after every FL round. Fig. 4 illustrates the FL of a self-supervised MDE model proposed in this work. At the same time, Algorithm 1 details how FedAvg is applied for generating the global models for MDE and PE.
**Algorithm 1** FedAvg of DepthNet and PoseNet. The \(C\) participating AVs are indexed by \(c\). \(F\) is the fraction of AVs active on each FL round. \(E\) is the number of training passes each AV makes over its local dataset on each round (local epochs), and \(l\) is the number of AVs selected for each round.
**Input:** DepthNet \(\varepsilon_{D}^{\theta}(I)=D\);
**Input:** PoseNet \(\varepsilon_{P}^{\theta}(I_{i},I_{j})=P_{i\to j}\);
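To make the aggregation step of Eq. (7) concrete, below is a minimal FedAvg sketch over PyTorch state dicts; all names are illustrative and the actual implementation may differ.

```python
import copy
import torch

def fed_avg(local_states, num_samples):
    # Weighted parameter average: theta_global = sum_c (m_c / m) * theta_c.
    m = float(sum(num_samples))
    global_state = copy.deepcopy(local_states[0])
    for key in global_state:
        global_state[key] = sum(
            (m_c / m) * state[key].float()
            for state, m_c in zip(local_states, num_samples)
        )
    return global_state

# One FL round (sketch): select a fraction F of the C AVs, let each train
# for E local epochs, then aggregate DepthNet and PoseNet separately:
# depth_global = fed_avg([av.depthnet.state_dict() for av in selected], sizes)
# pose_global  = fed_avg([av.posenet.state_dict() for av in selected], sizes)
```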
## 4 Evaluation Experiments
### Datasets and Scenarios
Our evaluation experiments are conducted with the publicly available KITTI dataset [33], which contains monocular images and 3D scans from scenes captured by cameras and sensors mounted on top of a moving vehicle. Following the approach of [36, 37, 38], we also adopt Eigen's split [35], with the maximum depth set to 80 meters and images resized to a resolution of \(832\times 256\) pixels for training.
In our experiments, we assume that the 34 drives present in the training dataset correspond to distinct AVs (although some were collected by the same vehicle in different drives). Based on this assumption, we characterize three base experimental scenarios: Centralized Training, Federated Training with IID samples, and Federated Training with Non-IID samples.
#### 4.1.1 Centralized Training (CT)
All vehicles upload their samples to a central server that will train the depth prediction model and distribute the final version to all participants.
#### 4.1.2 Federated Training with IID samples (FT-IID)
The train samples are randomly distributed across the participants, preserving an equal number of samples across all participants, as depicted in Fig. 5. All participants share all validation samples, acting as a gold standard. Each participant trains their local model, which is initialized with the downloaded global model, using their random subset of train samples and computes the validation losses against the gold standard at the end of every epoch. After each FL round, each participant (that was selected for that round) uploads its local model to the aggregation server, which computes the FedAvg, and then distributes the updated global model to all participants.
#### 4.1.3 Federated Training with Non-IID samples (FT-NIID)
This scenario is similar to the previous one, except that the train samples are distributed according to the drives in which they were collected, reflecting the natural unbalance of the data collection. Also, since the number of samples per drive was highly skewed, when selecting a subset of the 34 drives we first picked the ones with the most train samples. Nonetheless, the participants selected for each FL round were picked randomly, without replacement, ensuring that the training would go over every participant at least once, given a sufficient number of FL rounds. In addition, to avoid creating too much advantage for the IID scenario, we randomly redistributed the remaining samples (from the participants with the least number of train samples) across the selected participants. Thus, both the IID and the Non-IID scenarios had access to all the samples, changing only their distribution across participants. As shown in Fig. 5, this redistribution did not remove the great unbalance present in the original distribution by drive. However, it substantially increased the lower bound for the number of train samples.
### Metrics
For assessing the effectiveness of the final models, we adopt standard depth evaluation metrics [36, 37, 38] that include the mean absolute relative error (\(AbsRel\)), root mean squared error (\(RMS\)), root mean squared log error (\(RMSlog\)), and
Figure 5: Number of samples per participant for each of the sample distribution strategies considered, across 34 (a), 10 (b), and 9 (c) participants, with and without random redistribution of remaining samples.
accuracy under threshold (\(\delta_{i}<1.25^{i},i=1,2,3\)), which are defined in detail in Eigen's seminal work [35]. Also, as in [38], the predicted depth maps are multiplied by a scalar matching the median with the ground truth for evaluation.
For assessing the efficiency of the training methods, we consider the \(AbsRel\) computed over the validation set during the training as our reference for effective learning, and, adapting a communication cost estimation approach proposed in [60] for FL, we formally compute the communication cost upper bound (\(W_{max}\)) as
\[W_{max}=2T(C\times\omega_{B}^{*}). \tag{8}\]
where \(C\) is the total number of participants, \(T\) is the total number of communication rounds (or FL rounds), and \(\omega^{*}\) is the number of model parameters. In our formulation, we replace \(\omega^{*}\) with \(\omega_{B}^{*}\) to make it explicit that what we consider is the number of model parameters in Bytes (B). The main motivation for this is to make the comparisons with the estimated cost for CT more direct since these will be estimated based on the dataset size, which is also measured in Bytes. Additionally, we estimate its lower bound (\(W_{min}\)) considering only the participants selected for training the model as,
\[W_{min}=2T(C\times F\times\omega_{B}^{*}). \tag{9}\]
where \(F\) is the fraction of participants that were selected for training the model locally on each FL round. In this estimate, we assume that instead of updating the global model for every participant on every round, only those that will perform local training will have access to the latest global model version.
Finally, we also analyze the number of training steps as a proxy for a computational cost estimate since these represent the number of batches the model has "seen" during the learning process (including images repeated across epochs). Formally, the number of training steps at a given epoch of the CT (\(\#Steps\)) can be computed as,
\[\#Steps=\#Epochs\times\#Batches \tag{10}\]
where \(\#Epochs\) is the number of epochs the model has already been trained on and \(\#Batches\) is the number of batches per epoch, assuming every epoch was trained over the same number of batches. \(\#Batches\) was fixed as \(1k\), to enable comparison with SC-DepthV3's original results [38].
For the FL scenarios, the computation of the training steps is adjusted to account for the number of participants. Thus, we define the total number of training steps after \(\hat{T}\) FL rounds (\(\#Steps_{FL}\)) as,
\[\#Steps_{FL}=\sum_{t=1}^{\hat{T}}\#Steps_{FL}^{t}, \tag{11}\]
\[\#Steps_{FL}^{t}=\sum_{p\in P_{t}}\#Steps(p) \tag{12}\]
\[\#Steps(p)=\#Epochs_{p}\times\#Batches_{p} \tag{13}\]
where \(\hat{T}\) is the number of FL rounds elapsed, \(P_{t}\) are the participants that performed training on the round \(t\) (which are not necessarily all \(P\)), and \(\#Epochs_{p}\) is the number of epochs through which participant \(p\) iterated over \(\#Batches_{p}\) batches.
Thus, \(\#Steps(p)\) is the number of training steps for participant \(p\), and \(\#Steps_{FL}^{t}\) is the total number of training steps for FL round \(t\). Although there might be fewer batches available for a given \(p\) in the Non-IID scenarios, we have configured the training to randomly resample the available batches until the maximum number of batches per epoch (\(\#Batches_{p}=\#Batches\)) is reached.
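Since Eqs. (8)-(13) reduce to simple arithmetic, the estimates can be reproduced with a few lines of Python; the example values below correspond to the FT setup with \(C=10\), \(F=1/2\), \(E=3\), \(T=12\), and 1k batches per epoch.

```python
def comm_cost_bounds(T, C, F, model_bytes):
    # Eqs. (8)-(9): each model update is transmitted twice (download + upload).
    w_max = 2 * T * C * model_bytes            # all participants, every round
    w_min = 2 * T * int(C * F) * model_bytes   # only locally-training clients
    return w_max, w_min

def fl_training_steps(T, participants_per_round, epochs, batches):
    # Eqs. (11)-(13), assuming each selected participant performs
    # epochs x batches steps on every round it takes part in.
    return T * participants_per_round * epochs * batches

print(fl_training_steps(12, 5, 3, 1000))  # -> 180000, the total reported
                                          # for the best FT configuration
```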
### Implementation Details
Our FedSCDepth prototype implementation was based on SC-DepthV3 [38] and Dec-SSL [22] source codes, which were made publicly available on GitHub by their authors. Our implementation was also shared publicly on GitHub 3.
Footnote 3: [https://github.com/eltonfss/federated-sc-depth](https://github.com/eltonfss/federated-sc-depth)
As in SC-DepthV3 [38], the DNN implementation used PyTorch Lightning 4, with the Adam optimizer, and the learning rate set to \(10^{-4}\). The DNN encoder was initialized using ImageNet [47] pre-trained weights. As previously mentioned, the maximum number of batches per epoch was set to \(1k\) and the batch size to \(4\) to allow comparing CT and FT results.
FT experiments ran for 12 rounds with a total of 18 different setups resulting from the combination of the following parameter values: \(C=\{10,9\}\); \(F=\{1,\frac{1}{2},\frac{1}{3}\}\); \(E=\{1,2,3\}\).
As in Dec-SSL [22], the local updates were simulated on the same process in which the FedAvg aggregation was computed. Although this simulation strategy might not be sufficiently realistic for estimating all possible metrics, it does not impact the metrics we adopt for estimating the effectiveness of the trained models and the efficiency of the training.
The experiments were deployed and executed on a bare metal server with 1 x CPU i3-12100F 4.3 GHz (4 cores, 8 threads), 2 x 16GB DDR4 RAM (3200 MHz), 1 x GeForce RTX 2060 GPU with 12 GB GDDR6 RAM, and 1 x SSD M.2 2280 1TB (93GB configured as swap memory). The server was configured with Ubuntu 22.04.2 LTS, Python 3.8.15, Conda 4.12.0, Pip 22.3.1, and CUDA version 12.1.
The experiments were executed directly on the server without using any virtualization. Python dependencies were installed in a Conda environment, as described in the sources.
### Ablation Studies
In this section, we analyze the impact of the number of participants per round (\(C\times F\)), the number of local training epochs (\(E\)), the number of communication/federation rounds (\(T\)), and the data heterogeneity (IID vs. NIID) on the lowest global validation losses (\(AbsRel\)) and their corresponding communication and computational costs, depicted in Figure 6.
#### 4.4.1 Impact of number of participants per round
In Fig. 6 (a), we find that the Validation Loss (VL) is lower when \(C\times F=5\), between \(4.2\%\) to \(5.5\%\) lower than the highest. Meanwhile, in Fig. 6 (c), the \(W_{max}\) corresponding to the best VL is lower when \(C\times F=10\), \(34\%\) to \(42\%\) lower than the highest, while in Fig. 6 (e), \(W_{min}\) is lower when \(C\times F=3\), \(51.7\%\) to \(66.7\%\) lower than the highest. Finally, in Fig. 6 (g), the number of training steps corresponding to the best VL is lower when \(C\times F=3\), \(52.9\%\) to \(77.5\%\) lower than the highest.
#### 4.4.2 Impact of number of local epochs
In Fig. 6 (b), we find that the best VL is lower when \(E=3\), \(7.2\%\) to \(8.6\%\) lower than the highest. Meanwhile, in Fig. 6 (d), the \(W_{max}\) corresponding to the best VLs is lower when \(E=1\), \(8\%\) to \(10\%\) lower than the highest, while in Fig. 6 (f), the \(W_{min}\) is also lower when \(E=1\), \(40\%\) to \(54\%\) lower than the highest. Finally, in Fig. 6 (h), the number of training steps corresponding to the best VLs is also lower when \(E=1\), \(77.1\%\) to \(80\%\) lower than the highest.
Figure 6: Lowest global validation loss (a), its estimated communication cost upper bound (c), and lower bound (e), and training steps to obtain it (g), by number of participants. Lowest global validation loss (b), its estimated communication cost upper bound (d), and lower bound (f), and training steps to obtain it (h), by number of local epochs.
#### 4.4.3 Impact of number of federation rounds
In Fig. 7 (a), we find that the best VL decreases rapidly until \(T=3\), with a modest decrease afterwards. Nonetheless, the lowest values are observed with \(T=12\). Meanwhile, in Fig. 7 (b), the number of training steps corresponding to the best VLs increases almost linearly with \(T\) up to about 240k steps, at \(T=12\), while in Fig. 7 (c) and (d), we observe that the \(W_{max}\) and \(W_{min}\) corresponding to the best VLs follow a similar trend, scaling up to about 50GB and 33GB, respectively.
#### 4.4.4 Impact of data heterogeneity
Analyzing Fig. 6 and 7, we find that the proposed method showed robustness to the data heterogeneity inherent to the data collection, obtaining the lowest VL with NIID data (about \(3\%\) lower than the lowest VL with IID data). Meanwhile, the impact on communication costs was not very high since the maximum cost (about 50GB) was the same for both. Also, the maximum number of training steps when using NIID data was the same as IID, about 240k.
### Comparison with Centralized Training
After analyzing the different FT configurations, we concluded that the best-performing one was that with \(C\times F=5\), \(E=3\), \(T=12\), and IID data: it produced the lowest VL and, although the additional communication and computation cost it required was not negligible, this cost remained within reasonable values for AV use cases. Therefore, in Fig. 8 (a), we observe that the selected FT configuration reached a VL about \(7\%\) worse than the best VL obtained with CT, with an additional total computational cost of about \(80\%\). One thing to note here is that, as the VLs are computed at the end of every epoch for the CT, its first data point was obtained at 1k training steps. Meanwhile, in the FT, the first VL is computed after the first FL round is complete. Thus, the number of steps of its first data point will depend on the values of \(C\times F\), \(E\), and \(\#Steps=1000\).
Meanwhile, in Fig. 8 (b) and (c), we find that the final \(W_{min}\) and \(W_{max}\) were about \(1.93\times\) and \(2.85\times\) higher with FT than CT, respectively. Nonetheless, the total CT communication cost needs to be paid right at the first round, while in FT, this cost is split across 12 rounds. Considering that there would be 10 AVs involved in the data collection and training, we would have to transmit, on average, about 1.293GB per AV with CT (in the first round only) and something between 0.208GB and 0.415GB per AV with FT (on each round), which indicates that the communication cost paid by each AV on the first round would be on average \(67.9\%\) to \(83.9\%\) lower with FT. Also, if we consider that in CT,
Figure 8: Lowest validation loss (a) and its estimated communication cost upper bound (b), and lower bound (c), by number of training steps.
Figure 7: Lowest global validation loss (a), its communication cost upper bound (b), and lower bound (c), and steps to obtain it (d), by number of rounds.
the computational cost of 100k training steps has to be paid by the central server at the first round, while in FT, the computational cost of 180k training steps is shared by the 10 AVs, with an average of about 1.5k training steps being performed by each AV on each round, we conclude that FT promotes a more efficient cost distribution overall.
Finally, in Table 2 we observe that the efficacy metrics obtained on the test set with the best model obtained with FT were very close to the ones obtained with CT, even matching the \(RMSlog\) and obtaining a slight advantage on the \(SqRel\) metric calculated over dynamic regions (image regions classified as vehicles or pedestrians [38]).
### Comparison with State-of-the-Art on SSL-based MDE
In Table 3, we present the MDE efficacy metrics obtained in the test dataset with the best-performing FT configuration of the FedSCDepth. We also compare its results with the results reported by three SoTA SSL-based (centralized) MDE methods: SC-DepthV3 (SCD) [38], which was the baseline of our SSL MDE component; MonoFormer (MF) [42] and DepthFormer (DF) [41], the best performing SSL-based MDE methods in the KITTI Eigen Split [35].
Analyzing Table 3, we can observe that the \(AbsRel\) obtained by FedSCDepth is about \(8.5\%\), \(23.1\%\), and \(42.2\%\) worse than SCD, MF, and DF, respectively. Meanwhile, \(\delta_{3}\) was the same for all except MF, and the \(RMSlog\) with FedSCDepth is about \(4.8\%\), \(7.6\%\), and \(12.6\%\) worse than with SCD, MF, and DF, respectively. Based on those results, we conclude that most of the difference between FedSCDepth and DF, which was the best performing overall, is due to the transformer-based technique employed by the latter, which produced highly superior efficacy than SCD. This difference is also visible when we compare them with MF, which presents a much closer performance to DF than SCD. Thus, our results were very close to SCD, which represents the pre-transformer SSL-based MDE SoTA and is considered our main baseline.
## 5 Discussion
After analyzing the different efficiency metrics of the federated and centralized training methods, we consider the proper answer for _RQ1_ to be the following: FT is more efficient than CT when we accept a less strict loss threshold (such as \(AbsRel\) below \(0.13\)). Meanwhile, to reach optimal depth prediction loss (below \(0.12\)), the CT will be more efficient concerning the total computational and communication costs. Nonetheless, it should be noted that while in CT, the communication cost has to be paid right at the beginning, in FT, this cost is paid across several rounds (at most, 4.15 GB of data are transferred on each round, totaling an average of 0.415 GB per participant on each round and 4.98 GB per participant after 12 rounds).
Also, while the computational cost is entirely paid by the central server in the CT (100k training steps), this cost is shared by the participating AVs in the FT scenarios (on average, 1.5k training steps are performed by each participant at each round, totaling 18k steps of training by each participant at the 12th round). Finally, when comparing the FT efficiency with IID and Non-IID data, it was better with IID data in most scenarios. Nonetheless, there were no significant differences in efficiency in those two FT setups overall, which is an indication that the proposed solution would perform well in realistic FT with AVs, which usually presents Non-IID data.
| **Scn.** | Dyn. \(AbsRel\) | Dyn. \(SqRel\) | Dyn. \(RMS\) | Dyn. \(RMSlog\) | Sta. \(AbsRel\) | Sta. \(SqRel\) | Sta. \(RMS\) | Sta. \(RMSlog\) |
|---|---|---|---|---|---|---|---|---|
| FT | 0.202 | **1.933** | 7.248 | **0.282** | 0.119 | 0.723 | 4.861 | 0.177 |
| CT | **0.191** | 2.072 | **7.111** | **0.282** | **0.106** | **0.638** | **4.275** | **0.159** |

Table 2: Effectiveness Comparison with CT on KITTI (Eigen Split), over dynamic (Dyn.) and static (Sta.) regions.
| **Method** | **Resolution** | \(AbsRel\) | \(SqRel\) | \(RMS\) | \(RMSlog\) | \(\delta_{1}\) | \(\delta_{2}\) | \(\delta_{3}\) |
|---|---|---|---|---|---|---|---|---|
| DF [41] | \(640\times 192\) | **0.090** | **0.661** | **4.149** | **0.175** | **0.905** | **0.967** | **0.984** |
| MF [42] | \(640\times 192\) | 0.104 | 0.846 | 4.580 | 0.183 | 0.891 | 0.962 | 0.982 |
| _SCD [38]_ | \(832\times 256\) | 0.118 | 0.756 | 4.709 | 0.188 | 0.864 | 0.960 | **0.984** |
| _Ours_ | \(832\times 256\) | 0.128 | 0.803 | 5.015 | 0.197 | 0.836 | 0.956 | **0.984** |

Table 3: Effectiveness Comparison with SoTA on KITTI (Eigen Split).
Meanwhile, after comparing the effectiveness of the best FT model with the SoTA, we consider the proper answer for _RQ2_ to be the following: MDE models obtained with CT are more effective than those learned with FT. Nonetheless, the effectiveness lost when using FT is minimal, with the models obtained with FT reaching near-SoTA performance in only 12 rounds. Also, the effectiveness of the models obtained with FT is significantly better when working with NIID data for most scenarios, which indicates that this approach is highly applicable to realistic AV deployments, where data collection is typically unbalanced.
## 6 Conclusion
In this paper, we tackle the problem of monocular depth estimation for autonomous vehicles. The key to our method is using federated and self-supervised learning to collaboratively train a depth estimator using unlabeled data captured by vehicles with high effectiveness, efficiency, and privacy preservation. We evaluate a prototype implementation of this method using the KITTI dataset and show that it can achieve near-SoTA performance with a low computation cost per vehicle and a lower communication cost per round per vehicle than centralized training. Additionally, the experimental results indicate that the proposed method is robust to Non-IID data, even using simple FedAvg aggregation. Future work includes exploring other aggregation functions and optimization strategies to further reduce the proposed method's computational and communication costs, as well as evaluating its generalizability with other public benchmark datasets.
|
2306.07400 | Neural Embeddings for Web Testing | Web test automation techniques employ web crawlers to automatically produce a
web app model that is used for test generation. Existing crawlers rely on
app-specific, threshold-based, algorithms to assess state equivalence. Such
algorithms are hard to tune in the general case and cannot accurately identify
and remove near-duplicate web pages from crawl models. Failing to retrieve an
accurate web app model results in automated test generation solutions that
produce redundant test cases and inadequate test suites that do not cover the
web app functionalities adequately. In this paper, we propose WEBEMBED, a novel
abstraction function based on neural network embeddings and threshold-free
classifiers that can be used to produce accurate web app models during
model-based test generation. Our evaluation on nine web apps shows that
WEBEMBED outperforms state-of-the-art techniques by detecting near-duplicates
more accurately, inferring better web app models that exhibit 22% more
precision, and 24% more recall on average. Consequently, the test suites
generated from these models achieve higher code coverage, with improvements
ranging from 2% to 59% on an app-wise basis and averaging at 23%. | Andrea Stocco, Alexandra Willi, Luigi Libero Lucio Starace, Matteo Biagiola, Paolo Tonella | 2023-06-12T19:59:36Z | http://arxiv.org/abs/2306.07400v1 | # Neural Embeddings for Web Testing
###### Abstract
Web test automation techniques employ web crawers to automatically produce a web app model that is used for test generation. Existing crawers rely on app-specific, threshold-based, algorithms to assess state equivalence. Such algorithms are hard to tune in the general case and cannot accurately identify and remove near-duplicate web pages from craw models. Failing to retrieve an accurate web app model results in automated test generation solutions that produce redundant test cases and inadequate test suites that do not cover the web app functionalities adequately. In this paper, we propose WeEmbed, a novel abstraction function based on neural network embeddings and threshold-free classifiers that can be used to produce accurate web app models during model-based test generation. Our evaluation on nine web apps shows that WeEmbed outperforms state-of-the-art techniques by detecting near-duplicates more accurately, inferring better web app models that exhibit 22% more precision, and 24% more recall on average. Consequently, the test suites generated from these models achieve higher code coverage, with improvements ranging from 2% to 59% on an app-wise basis and averaging at 23%.
Web Testing, Neural Embeddings, GUI Testing, Doc2Vec.
## 1 Introduction
Test automation is used to enable end-to-end (E2E) functional testing of web apps. In this approach, testers exercise the application under test (AUT) with automated scripts [1, 2, 3] that imitate the end-user interactions with the web pages by simulating user-like events, such as clicks, scrolls, and form submissions. Therefore, the web app is tested for its ability to provide correct functionalities to the end user, through its graphical user interface (GUI).
The manual development of E2E test automation scripts is a costly endeavor in practice, and so is the maintenance of such scripts over time [4]. For this reason, researchers have proposed automated test generation solutions, most of which rely on a model of the web app [5, 6, 7, 8, 9, 10]. Model-based web testing techniques systematically build a web app model by exploring the functionalities of a given web app by means of a crawler [9, 11, 10]. The model is represented in terms of web app states, i.e., logical functional units, and transitions between states triggered by events (e.g., clicks). Ideally, a good web app model should contain all possible logical web pages--i.e., it should be _complete_--without representing the same logical page multiple times--i.e., it should be _concise_[12, 13].
To automatically determine the logical web app states, model-based techniques use a scoring function, called _state abstraction function_ (SAF). When the SAF is ineffective, it causes clone or _near-duplicate_ states that pollute the model. Near-duplicates are concrete instances of the same logical state, differing only by minor changes [12].
The presence of near-duplicates makes a crawl model not concise (i.e., the same state appears multiple times) and incomplete because, in the presence of duplicated states, the crawler will waste part of its finite exploration budget re-exploring the same state many times, possibly missing other important, not yet discovered, states. A web app model containing near-duplicates undermines the effectiveness of the test suites generated from it in terms of completeness and adequacy [12, 13]. In fact, missing states will remain untested, potentially reducing the test suite adequacy (e.g., code coverage). Moreover, duplicated states might lead to the generation of redundant test cases that do not contribute to increasing the code coverage of the AUT [7].
A recent study [12] shows that current state-of-the-art SAF implementations, based on similarity algorithms, hashing algorithms, or visual resemblance of web app snapshots, are application dependent, as no algorithm is comparably effective across different web apps. Moreover, the study reports that, even for an effective SAF within the same web app, it is challenging for developers to find the optimal threshold that can detect near-duplicate states without collapsing logically distinct states into the same one [12].
This paper investigates the problem of building a robust SAF using _neural network embeddings_ of web pages. An embedding is a mapping of an input belonging to a complex input space (e.g., natural language, images, or web pages) to a low-dimensional and continuous vector representation belonging to a _latent space_[14]. Embeddings are useful because of their capability to preserve the semantic similarity that holds in the original complex input space [14], which makes them suitable to address the web app similarity problem.
Our approach, implemented in a tool called WebEmbed, consists of a novel SAF that turns multiple intermediate token-sequence representations of web pages into \(n\)-dimensional vector representations used by a classifier to estimate the similarity, or lack thereof, of web pages. Although there are many methods to produce vector embeddings [15, 16, 17, 18, 19], this paper focuses on the vector representation produced by Doc2Vec [20], as web pages are a mixture of text and tags. More specifically, WebEmbed trains three Doc2Vec models on a large corpus of web pages. The input to each of the three models is respectively the token-sequence representation of web pages built from their tags, from their textual content, or from the union of the two. Once trained, WebEmbed uses the three Doc2Vec models within a web crawler: upon exploration, for each web page, its three token-sequence representations are retrieved, and the corresponding three neural embeddings are computed and compared with the states already present in the web app model, resulting in three similarity scores. The considered web page is retained only if it is not a near-duplicate state, according to a pre-trained classifier that maps the three similarity scores of two web pages into class 1 (near-duplicates) or 0 (distinct).
We have evaluated WebEmbed empirically using benchmarks available from the literature containing a diverse set of web apps. We assessed three different tasks, namely near-duplicate detection, model coverage, and test generation, on three use cases with different requirements in terms of labeling cost for developers. In our experiments, accounting for more than 450 configurations, WebEmbed achieved high accuracy scores on all use cases for all tasks, with a statistically significant margin over two existing state-of-the-art SAFs. Quite notably, our approach is threshold-free and can be applied even without defining any app-specific corpus of labeled data, still achieving satisfactory performance (75% accuracy on near-duplicate detection and 82% on model coverage). When a corpus of labeled data is available for a given web app or sufficient labeling cost is allowed, the accuracy of WebEmbed is further improved (93% on near-duplicate detection--a 22% increment--and 92% concerning model coverage--10% increment). Lastly, by employing WebEmbed, tests generated based on crawled models result in higher code coverage, with an average increase of 23% compared to current state-of-the-art techniques (ranging from 2% to 59% on an app-wise basis).
Our paper makes the following contributions:
**Technique.**: A novel approach, implemented in the publicly available tool WebEmbed[21], which uses neural embeddings and classifiers for web crawling and testing.
**Evaluation.**: An empirical study showing that WebEmbed is more effective than two state-of-the-art SAFs in the near-duplicate detection, model coverage, and test generation tasks under different configurations/use cases.
## 2 Background
In this section, we describe the problem of automatically retrieving an accurate web app model for test generation and its challenges. Then, we introduce the concept of neural embeddings to overcome such challenges.
We use as a running example a simple e-commerce web app showing a product catalog. A user can view the details of a product, add a review and buy it (Figure 1 (left)).
### _Automated Web Model Inference_
Automated web model inference techniques, such as crawling, operate through state exploration by triggering events (e.g., clicks) and by generating inputs that cause state transitions in the web app. Whenever significant changes in the current web page are detected, a new _state_ is added to the model. A state can be viewed as an abstraction of all the dynamic, runtime versions of the same logical web page, often represented by their Document Object Models (DOMs). The final model is a set of states, i.e., the set of abstract web pages of the web app, and edges that represent transitions between states.
From a functional testing viewpoint, the optimal web app model, in terms of logical states and functionalities, is shown in Figure 1 (center). The model includes three states, namely _Catalog page_, _Detail page_ and _Buy page_. From the Catalog page, it is possible to navigate to the Detail page by clicking on a product. From a product Detail page, it is possible to either write a review for the product, which leads back to the same page, or buy the product, which causes a transition to the Buy page. From both the Detail and Buy pages, users can navigate back to the initial Catalog page.
### _Near-duplicate States_
Figure 1 (right) shows a crawl model produced by the state-of-the-art crawler Crawljax [9] with its default configuration, which consists of: (1) no state abstraction capability, i.e., all dynamic states are regarded as new states; (2) an ordered GUI events queuing strategy that considers HMTL elements from top to bottom and from left to right; (3) a depth-first exploration strategy.
In particular, once the web app is loaded, the crawler saves the initial home page (also called _index_ page) as the first state of the crawl model (i.e., state ). In our running example, the index page is the Catalog page. Then, the crawler clicks on the first displayed product, i.e., Item A, which leads to the web page showing the item details. Such page is saved as a state into the crawl model (Detail page
Fig. 1: Left: E-commerce web application. Center: Optimal web app model. Right: Incomplete crawl model w.r.t. the “buy” functionality due to redundant near-duplicate states for the same “Detail page”: several of its states are functional near-duplicates of the Detail page state.
A) and marked as _unvisited_ (i.e., state ). Since a new state is added, the crawler returns to the index page (not shown in Figure 1) and crawls back to the newly added unvisited state. Next, the crawler clicks on the Add Review button, adds Review 1 to the page, either with random or manually provided input data, and stores another state to the crawl model (Detail page A + Review 1), marking it as unvisited since it is regarded as a different state. Similarly to the previously found state (i.e., state ), the crawler returns to the index page and crawls back to state by clicking on Item A. Since state is a new state and the "Add review" clickable is unfired, the crawler adds a new review creating a new state (i.e., state ); the process repeats until the crawler runs out of budget.
From a testing viewpoint, all web pages containing the details of the selected product and its reviews should be collapsed into the same logical page, which is not the case with our crawl model in which many similar replicas of the Detail page for Item A are present. In the literature, concrete instances of the same logical page, such as the detail pages of our example, are known as clones, or near-duplicate states [12]. The presence of near-duplicate states in web app models has a detrimental effect on the effectiveness of model-based test generation techniques, in terms of _conciseness_ and _completeness_. Concerning the former, the presence of near-duplicates typically leads to test suites containing many redundant tests exercising the same functionality. In our running example, it would be sufficient to cover the Detail page only once with a test case, as covering all potential detail pages with many redundant test cases is unlikely to increase the code coverage achieved by the test suite [12]. As for completeness, when exploring large web apps, crawlers may waste a considerable part of their time budget visiting near-duplicate states, without exploring other relevant parts of the application. This harms the completeness of the inferred models, and thus the associated test suites. In the running example, the crawler failed to recognize that the 'new' updated Detail page for Item A, featuring the reviews, was a _near-duplicate_ of the previously-visited Detail page. Therefore, it consumed the entire time budget failing to explore other significant parts of the application, such as the "buy" functionality, leading to an incomplete model.
The SAF used by the crawler is the main root cause for the lack of conciseness and completeness of automated crawl models [12]. Yandrapally et al. [12] showed that state-of-the-art structural and visual SAF implementations produce near-duplicate states. Moreover, the study highlighted the challenge of selecting optimal thresholds to distinguish near-duplicate from distinct web pages.
Motivated by these findings, in this paper we propose a novel SAF based on the usage of neural embeddings, paired with machine learning classifiers that require no thresholds.
### _Neural Embeddings and Doc2Vec_
Neural embeddings have shown to be useful for many code analysis tasks such as code completion [22], log statement generation [23], code review [24] and other code-related tasks [25, 26]. In this work, we evaluate whether embedding models produced by Doc2Vec [20], a popular document embedding technique, can be useful to target the equivalence problem between web pages, possibly with some adaptation and fine-tuning to take into account the semi-structured nature of HTML documents. We focus on Doc2Vec because it has been applied to compute embeddings for large corpora of textual data [27], document classification [28], sentiment analysis [29, 30], and disease detection studies [31]. However, its application to web testing is still unexplored. We hypothesize that Doc2Vec can produce meaningful embeddings also for HTML pages since their textual representation contains both tags and text.
Doc2Vec aims to find an optimal embedding model such that similar text documents would produce embeddings that lie close in the vector space. Given a document, Doc2Vec creates and projects paragraph embeddings, as well as word embeddings, into the vector space and then uses a trained deep neural network model to predict words of paragraphs or documents in a corpus [20]. Instead of computing an embedding for each word like Word2Vec [32], Doc2Vec creates a different embedding for an entire paragraph or even a document. At inference time, the input paragraph id vector (a one-hot encoded vector) is unknown, hence it is first derived by gradient descent given the input and output words and it is concatenated with the one-hot encoded vectors of the paragraph words to predict the next word in the paragraph. The internal representation used to make such a prediction is averaged or concatenated across predictions to get the final document embedding [20].
Doc2Vec can be configured to use two different models: Paragraph Vector Distributed Memory (PV-DM) or Distributed Bag Of Words (DBOW). The former randomly picks a set of consecutive words in the paragraph and tries to predict the word in the middle, using the surrounding words (i.e., context words) and the paragraph id. The latter is similar to a Skip-gram model, in which, given a paragraph id, the model tries to predict the next word of a randomly picked sequence of words from the chosen paragraph [20].
## 3 Approach
The goal of our approach, which we call WebEmbed, is to automatically detect the occurrence of near-duplicate web pages during crawling, discard them from the web app crawl model, and generate test suites. In a nutshell, during crawling, our approach uses a novel neural embedding model for web pages built on top of Doc2Vec [20]. Although Doc2Vec was initially proposed for textual documents, we explore its applicability to HTML web pages containing both tags and text. The DOM tags and the text tokens of the retrieved web app states are represented as vectors in an \(n\)-dimensional embedding space and used by a novel SAF to assess the similarity between web pages.
Figure 2 illustrates our approach, which consists of four phases, namely (1) Training Doc2Vec, (2) Training the State Abstraction Function, (3) Crawling, and (4) Test Creation.
In the first phase, different Doc2Vec models are trained on an unlabeled corpus of web pages. From each web page, our approach extracts different token-sequence representations of the DOM, namely its textual _content_, its _tags_, or a combined representation of _content+tags_. Then, for each representation, a Doc2Vec model is trained. In the second phase, we use a labeled corpus of web pages in which
all pairs of web pages are annotated as being distinct or clone/near-duplicate. For each pair of web pages, we use the Doc2Vec models to compute their vector embeddings such that their similarity can be calculated. Once all similarities are computed, we train a classifier to distinguish clone/near-duplicate web pages based on such similarities. In the third phase, we use the Doc2Vec models at runtime during crawling as a SAF. When two web pages are available, our approach computes their embeddings based on a given representation (either content, tags, content+tags), computes their similarity, and uses the classifier to predict whether the two web pages are distinct or near-duplicate. In the fourth phase, each crawl path is turned into a web test case. We now detail each phase of our approach.
### _Training Doc2Vec_
Our approach requires computing embeddings for HTML web pages. While originally conceived for general-purpose textual documents, in this work we extend Doc2Vec [20] to support web pages.
#### 3.1.1 Token sequence extraction
The first phase requires an unlabeled corpus of HTML web pages, from which we extract a convenient representation for training a Doc2Vec model. Let us consider the HTML of the Detail page for Item A (Listing 1). We first retrieve a sequence of tokens of the DOM representing either its _content, tags_ or _content+tags_. The procedure extractTokens, outlined in Algorithm 1 (lines 1-7), starts from the root node of the DOM and proceeds as follows: (1) the sequence of tokens (either tags, content, or content+tags) for the current node are extracted (line 3); (2) the extraction procedure is recursively called, in a depth-first fashion, on all children of the current node, from left to right. The result of these calls is then appended to the list of extracted tokens (lines 4-6); (3) the sequence of extracted tokens is returned (line 7).
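A compact Python sketch of this traversal is shown below, using BeautifulSoup as the HTML parser; the library choice and names are ours for illustration, and the paper's actual implementation may differ.

```python
from bs4 import BeautifulSoup, Comment, NavigableString, Tag

SKIPPED = {"script", "style"}

def extract_tokens(node, mode):
    # Depth-first token extraction; mode in {'tags', 'content', 'content+tags'}.
    tokens = []
    for child in node.children:
        if isinstance(child, Comment):
            continue                                    # drop comments
        if isinstance(child, NavigableString):
            if mode in ("content", "content+tags"):
                tokens.extend(child.lower().split())    # textual content tokens
        elif isinstance(child, Tag) and child.name not in SKIPPED:
            if mode in ("tags", "content+tags"):
                tokens.append(child.name)               # tag token
            tokens.extend(extract_tokens(child, mode))  # recurse left-to-right
    return tokens

# soup = BeautifulSoup(page_source, "html.parser")
# print(extract_tokens(soup, "content+tags"))
```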
**Tags Token Sequence.** The first extraction function considers only the tags of an HTML page while discarding comments, scripts, and CSS. The intuition is that tags indicate the general layout of an HTML document and may be effective for detecting structurally similar web pages [33].
```
<html lang="en">
<head>
  <title>Item A detail page</title>
  <link rel="stylesheet" href="styles.css">
  <script> ... </script>
</head>
<body>
  <img src="item_a.jpg" class="item_pic"/>
  <h1>Item A</h1>
  <img src="three-stars.png"/>
  <p class="price">9.99 $</p>
  <a href="buy_item_a" class="btn">BUY</a>
  <p class="desc">Detailed description for item A.</p>
  <h2>Reviews</h2> <!-- Reviews listed here -->
  <a href="add_review">+ Add Review</a>
  <table>
    <tr><td>"Quite good" by <a href="#">Alice</a></td><td><img src="..."/></td></tr>
    <tr><td>"Does its job" by <a href="#">Bob</a></td><td><img src="..."/></td></tr>
  </table>
</body>
</html>
```

Listing 1: HTML of the Detail page for Item A.

The HTML in Listing 1 is converted to the following tags token sequence: [html, head, title, link, body, img, h1, img, p, a, p, h2, a, table, tr, td, a, td, img, tr, td, a, td, img].

**Content Token Sequence.** The second extraction function considers only the textual content of an HTML page, discarding all markup. The HTML in Listing 1 is converted to the following content token sequence: [_item_, _a_, _detail_, _page_, _item_, _a_, _9.99_, _$_, _buy_, _detailed_, _description_, _for_, _item_, _a_, _reviews_, _+_, _add_, _review_, _quite_, _good_, _by_, _alice_, _does_, _its_, _job_, _by_, _bob_].

**Content+Tags Token Sequence.** The third extraction function combines the
output of the two previous extraction functions. This can be effective in cases where using the tags or the content only is not enough to accurately classify two web pages. The HTML in Listing 1 is converted to the following content+tags token sequence: [html, head, title, _item_, _a_, _detail_, _page_, link, body, img, h1, _item_, _a_, img, p, _9.99_, _$_, a, _buy_, p, _detailed_, _description_, _for_, _item_, _a_, h2, _reviews_, a, _+_, _add_, _review_, table, tr, td, _quite_, _good_, _by_, a, _alice_, td, img, tr, td, _does_, _its_, _job_, _by_, a, _bob_, td, img].
In our empirical study, we evaluate the effectiveness of all three token sequences (tags, content, and content+tags) for near-duplicate web page detection.
```
 1  Function extractTokens(n, ct):                ▷ ct: content, tags, or both
 2      let tokens be an empty list;
 3      tokens.append(getTokens(n, ct));
 4      foreach child c of n, from left to right do
 5          if c is not a script, style, or comment node then
 6              tokens.append(extractTokens(c, ct));
 7      return tokens;

 8  Function Crawl(initialURL):
 9      s1 ← getState(initialURL);
10      model ← initializeModel(s1);
11      while ¬timeout do
12          next ← nextStateToExplore(model);
13          if next = nil then                    ▷ app exhaustively explored
14              break;
15          s ← goToState(next);
16          for e ∈ getCandidateEvents(s) do
17              fireEvent(e);
18              sc ← current state after firing the event e;
19              if ¬isDuplicate(sc, model) then add sc to model;
20      return model;

21  Function isDuplicate(sc, model):
22      foreach state s in model do
23          if Classify(sc, s, ET) = clone then
24              return True;                      ▷ sc is a duplicate of s
25          end if
26      return False;

27  Function Classify(p1, p2, ET):                ▷ ET: embedding types
28      let s be an empty list;
29      foreach embedding type et in ET do
30          r1 ← extractTokens(getRootNode(p1), et);
31          r2 ← extractTokens(getRootNode(p2), et);
32          doc2vec ← getDoc2VecModel(et);
33          e1 ← doc2vec.infer(r1);
34          e2 ← doc2vec.infer(r2);
35          s.append(cosineSimilarity(e1, e2));
36      return classifier.classify(s);
```
**Algorithm 1** Web App Crawling with WebEmbed
#### 3.1.2 Model Implementation and Training
Once the pre-processing for token sequence extraction is done, three different Doc2Vec models are trained, i.e., one model for each token-sequence type (using the DBOW model) [20]. Hence, we obtain three Doc2Vec models that allow us to compare pairs of web pages and compute their similarity based on one token-sequence representation of the pair at a time. For example, the following embeddings are produced for the HTML of Listing 1:
\[\mathit{doc2vec}(tags) =[-0.25,0.48,...,0.03]\] \[\mathit{doc2vec}(content) =[-0.55,0.17,...,0.90]\] \[\mathit{doc2vec}(content+tags) =[-0.40,0.33,...,0.44]\]
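For illustration, training one of the three models (e.g., on tag token sequences) with gensim looks roughly as follows; `corpus` is assumed to be a list of token sequences (one per web page), and the hyperparameters shown are placeholders, not the tuned values used in our experiments.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# corpus: one token sequence per web page, e.g., produced by extract_tokens
documents = [TaggedDocument(words=tokens, tags=[i])
             for i, tokens in enumerate(corpus)]

model = Doc2Vec(documents, dm=0,          # dm=0 selects the DBOW variant
                vector_size=100, min_count=2, epochs=30, workers=4)
model.save("doc2vec_tags.model")

# Unseen pages are embedded at inference time via gradient descent:
embedding = model.infer_vector(new_page_tokens)
```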
### _Training State Abstraction Function_
In the second phase, we train a SAF. This task requires a _labeled_ corpus of web pages, in which each pair of web pages is manually labeled to indicate whether the web pages in the pair are clones/near-duplicates.
For each pair of web pages in such corpus, we use one of the different Doc2Vec models to compute their embeddings. Then, we compute the cosine similarity [34], a widely used metric to assess vector similarity. A combination of the three similarity scores, based on content, tags, or content+tags neural embeddings, is used to train a classifier to discriminate two web pages as being distinct or clones. In preliminary experiments, we also used the embeddings as input to the classifier without noticing any significant improvement. For performance reasons, we eventually used similarity scores, as a smaller input vector makes both the training and inference of the classifier faster. In particular, the inference time is critical as it directly impacts the time budget of the crawler.
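Training the threshold-free classifier on the labeled pairs then amounts to fitting any standard scikit-learn model on the three similarity scores; a decision tree is shown below as one possible choice, with `similarity_features` and `labels` assumed to come from the labeled corpus.

```python
import numpy as np
from numpy.linalg import norm
from sklearn.tree import DecisionTreeClassifier

def cosine_similarity(e1, e2):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(e1, e2) / (norm(e1) * norm(e2)))

# X: one row per labeled state pair -> [sim_content, sim_tags, sim_content_tags]
# y: 1 = clone/near-duplicate, 0 = distinct
X = np.array(similarity_features)
y = np.array(labels)
clf = DecisionTreeClassifier().fit(X, y)
```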
### _Crawling_
The third phase consists of using the trained SAF during crawling, to infer crawl models that can be used for automated test generation.
#### 3.3.1 The Crawler
The crawler loads the web pages in a web browser and exercises client-side JavaScript code to simulate user-like interactions with the web app. This allows the crawler to support modern, client-side intensive, single-page web applications. The main conceptual steps performed when exploring a web application are outlined in the Crawl function of Algorithm 1 (lines 8-20).
Crawling starts at an initial URL, the homepage is loaded into the browser and the initial DOM state, called index, is added to the model (line 10). Subsequently, the main loop (lines 11-19) is executed until the given time budget expires or there are no more states to visit (i.e., the web app has been exhaustively explored according to the crawler). In each iteration of the main loop, the first unvisited state in the model is selected (line 12), and the crawler puts in place adequate actions to reach said state. If the state cannot be reached directly, it retrieves the path from the index page and fires the events corresponding to each transition in the path. Upon reaching the unvisited state, the clickable web elements are collected (i.e., the web elements on which interaction is possible, line 16), and user events such as filling forms or clicking items are generated (line 17). After firing an event, the current DOM state \(s_{c}\) is captured (line 18). The \(\texttt{isDuplicate}\) function supervises the construction of the model and checks whether \(s_{c}\) is a duplicate of an existing state (lines 22-26) by computing pairwise comparisons with all existing states in the model using the WebEmbed SAF.
The state \(s_{c}\) is eventually added to the model if the SAF regards it as a distinct state, i.e., _a state that is not a duplicate of another existing state in the model_ (lines 23-26). Otherwise, it is rejected and the crawler continues its exploration from the next available unvisited state until the timeout is reached.
#### 3.3.2 Usage of the State Abstraction Function
The Classify procedure (lines 27-36) illustrates our neural-based SAF. Given two web pages \(p_{1}\), \(p_{2}\) and a set of embedding types \(ET\), we first extract the token-sequence representations from each page based on the selected embedding types (\(ET\) can be any non-empty subset of {_content_, _tags_, _content+tags_}), obtaining one list of tokens for each web page (lines 30-31). Each of the two token sequences \(r_{1}\) and \(r_{2}\) is then fed to the appropriate Doc2Vec model (line 32) to compute an embedding (lines 33-34). Then, the cosine similarity between the two resulting embeddings \(e_{1}\) and \(e_{2}\) is computed, obtaining a similarity score that is appended to the list \(s\) of similarities computed so far (line 35). Next, the classifier marks the two pages as either distinct or clones based on the list \(s\) of similarity scores and determines the SAF return value (line 36), which is _'clone'_ in case of near-duplicate detection or _'distinct'_ otherwise.
**Example.** Consider the following embeddings produced for our running example, for the embedding type _'tags'_ (i.e., \(ET\) = [_'tags'_]):
\[\begin{array}{ll}p_{1}=\text{Catalog Page}&e_{1}=[-0.45,0.56,...,0.30]\\ p_{2}=\text{Detail Page A}&e_{2}=[-0.55,0.17,...,0.90]\\ p_{3}=\text{Detail Page A + Review 1}&e_{3}=[-0.56,0.19,...,0.95]\end{array}\]
During crawling, let us assume that a decision tree classifier flags a pair of pages as '_clone_' when the cosine similarity between their embeddings satisfies the root decision node condition (\(s>0.8\)). If \(sim(e_{1},e_{2})=0.56\), \(p_{2}\) is added to the model, as \(p_{2}\) is not too similar to \(p_{1}\). Then, when exploring \(p_{3}\), we obtain \(sim(e_{3},e_{1})=0.58\) and \(sim(e_{2},e_{3})=0.95\). Hence, page \(p_{3}\) is not added to the model as it is recognized as a near-duplicate (_'clone'_) of \(p_{2}\).
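At crawl time, the Classify step of Algorithm 1 then reduces to a few lines, reusing the artifacts sketched above (`doc2vec_models` maps each embedding type to its trained Doc2Vec model, and `clf` is the fitted classifier); this is an illustrative sketch rather than the tool's exact code.

```python
def classify_pair(page1, page2, embedding_types, doc2vec_models, clf):
    # Returns 'clone' or 'distinct' for a pair of DOM states.
    sims = []
    for et in embedding_types:      # e.g., ['content', 'tags', 'content+tags']
        e1 = doc2vec_models[et].infer_vector(extract_tokens(page1, et))
        e2 = doc2vec_models[et].infer_vector(extract_tokens(page2, et))
        sims.append(cosine_similarity(e1, e2))
    return "clone" if clf.predict([sims])[0] == 1 else "distinct"
```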
### _Test Creation_
Our approach automatically generates a test suite during crawling through _segmentation_[5]. The crawl sequence of states is segmented into test cases when (1) the current DOM state no longer contains any candidate clickable elements to be fired and the crawler is reset to the index page; (2) no new states are present on the current path. In the case of Figure 1 (right), four (redundant) test cases are generated, one for each state representing the Detail page for item A. With WebEmbed, the output model only has one state for the Detail page. Hence, only one test would be generated, reducing redundancy while keeping model and code coverage the same.
### _Implementation_
We trained Doc2Vec models using the gensim[35] Python library and used the classifiers implementations available in the scikit-learn[36] Python library. We integrated WebEmbed within Crawljax[9]. To automatically generate a test suite during crawling, we use the state-of-the-art DANTE web test generator [5]. DANTE generates fully compilable and functioning Selenium test cases [37] by segmenting a crawling session and by re-using the same inputs used during crawling.
## 4 Empirical Study
### _Research Questions_
To assess the practical benefits of neural embeddings for web testing, we consider the following research questions:
**RQ1 (near-duplicate detection).**_How effective is WebEmbed in distinguishing near-duplicate from distinct web app states?_
**RQ2 (model quality).**_How do the web app models generated by WebEmbed compare to a ground truth model?_
**RQ3 (code coverage).**_What is the code coverage of the tests generated from WebEmbed web app models?_
RQ1 aims to assess what configuration of WebEmbed, in terms of web embedding and classifier, is more effective at detecting near-duplicates through state-pair classification. RQ2 focuses on the crawl model quality in terms of completeness and conciseness. RQ3 evaluates WebEmbed when used for web testing, specifically assessing the test suites generated by WebEmbed crawl models in terms of code coverage of the web apps under test.
### _Datasets_
We use three existing datasets available from the study by Yandrapally et al. [12], plus an additional dataset of web pages collected by the _Common Crawl_ project [38]. Table I shows analytics information about the web pages of the considered datasets in terms of DOM size, length of the HTML source, and amount of text content.
The first dataset \(\mathcal{DS}\) contains 493,088 state-pairs derived from automated crawls (using Crawljax [10]) of 1,031 randomly selected websites from the top one million provided by Alexa, a popular service that ranked sites based on their global popularity (retired as of May 1, 2022). For training Doc2Vec, we used an additional dataset (listed third in Table I) of 368,927 web pages available from the _Common Crawl_ project [38], also used in previous research [19]. We refer to this dataset as \(\mathcal{CC}\). Similarly to \(\mathcal{DS}\), the web pages in \(\mathcal{CC}\) are also collected by crawling real-world websites.
The second dataset in Table I, referred to as \(\mathcal{RS}\), contains 1,000 state-pairs from \(\mathcal{DS}\) that Yandrapally et al. [12] manually labeled as either clone, near-duplicate or distinct.
The fourth dataset \(\mathcal{SS}\) contains 97,500 state-pairs of nine subject apps (Table II), which were also manually labeled
\begin{table}
\begin{tabular}{l r r r r r r r} \hline \hline & & \multicolumn{6}{c}{**Web page metrics**} \\ \cline{3-8} & & \multicolumn{2}{c}{**DOM**} & \multicolumn{2}{c}{**Source**} & \multicolumn{2}{c}{**Text content**} \\ & & \multicolumn{2}{c}{(\# nodes)} & \multicolumn{2}{c}{(\# chars)} & \multicolumn{2}{c}{(\# chars)} \\ \cline{2-8}
**Dataset** & \# pages & Mean & Std. & Mean & Std. & Mean & Std. \\ \hline \(\mathcal{DS}\) & 33,394 & 821 & 960 & 107,055 & 160,897 & 7,309 & 10,503 \\ \(\mathcal{RS}\) & 1,826 & 665 & 687 & 91,124 & 127,116 & 5,964 & 8,487 \\ \(\mathcal{CC}\) & 368,927 & 401 & 913 & 51,097 & 70,541 & 6,139 & 14,642 \\ \(\mathcal{SS}\) & 1,313 & 212 & 287 & 16,234 & 17,320 & 1,335 & 1,262 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Web page characteristics across the datasets
by Yandrapally et al. [12] as clone, near-duplicate or distinct. These nine web apps (Table II) have been used as subjects in previous research on web testing [33, 39, 8, 7]. Despite being developed with different frameworks, they all provide CRUD functionalities (e.g., login, or add user), which makes them functionally similar. Five apps are open-source PHP-based applications, namely Addressbook (App\({}_{1}\), _v. 8.2.5_) [40], Claroline (App\({}_{2}\), _v. 1.11.10_) [41], PPMA (App\({}_{3}\), _v. 0.6.0_) [42], MRBS (App\({}_{4}\), _v. 1.4.9_) [43] and MantisBT (App\({}_{5}\), _v. 1.1.8_) [44]. Four are JavaScript single-page applications--Dimeshift (App\({}_{6}\), _commit 2611664_) [45], Pagekit (App\({}_{7}\), _v. 1.0.16_) [46], Phoenix (App\({}_{8}\), _v. 1.1.0_) [47] and PetClinic (App\({}_{9}\), _commit 601045_) [48]--developed using popular JavaScript frameworks such as _Backbone.js_, _Vue.js_, _Phoenix/React_, and _AngularJS_.
### _Baselines_
Based on the study by Yandrapally et al. [12], we selected two algorithms as baselines for WebEmbed, one structural and one visual. The structural algorithm is RTED (Robust Tree Edit Distance) [49], a DOM tree edit distance algorithm. The visual algorithm is PDiff [50], which compares two web page screenshots based on a human-like concept of similarity that accounts for spatial, luminance, and color sensitivity. We chose them as baselines for WebEmbed for the following reasons: (1) they were the best structural and visual algorithms for near-duplicate detection [12]; (2) they have been used as SAFs for web testing purposes within Crawljax.
### _Use Cases_
To evaluate the effectiveness of WebEmbed, we consider three use cases, summarized in Table III. For all use cases, WebEmbed relies on the embeddings computed by a common Doc2Vec model trained on the non-annotated pages of the considered datasets, namely \(\mathcal{DS}\cup\mathcal{CC}\). The differences among the use cases are the datasets used to train the WebEmbed classifiers and the associated labeling cost for developers. To avoid confounding factors, we used the Python difflib module to assess the presence of cloned pages across datasets; the results indicate that no such clones exist.
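As a rough indication of how such a cross-dataset clone check can be run with the standard-library difflib, consider the sketch below; the 0.95 similarity cutoff is an assumed value, not the one used in our experiments.

```python
# Sketch of a pairwise clone check between pages of two datasets.
import difflib

def is_clone(html_a, html_b, cutoff=0.95):
    """Flag two pages as clones when their character-level similarity
    ratio exceeds the cutoff."""
    return difflib.SequenceMatcher(None, html_a, html_b).ratio() >= cutoff
```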
#### 4.4.1 Beyond apps
This use case aims at investigating the feasibility of a general-purpose model trained on web pages that are different from the ones it is tested on. Therefore, we train the WebEmbed classifiers on \(\mathcal{RS}\) and test them on \(\mathcal{SS}\). This use case requires _no labeling cost_ for web developers, as the classifier we train on \(\mathcal{RS}\) is supposed to be re-used as-is on any new web app.
#### 4.4.2 Across apps
This use case investigates the generalizability of WebEmbed when applied to web apps similar to the ones the classifier was trained on (similarity refers to having analogous CRUD functionalities, see Section 4.2). We train a classifier for each of the nine web apps in \(\mathcal{SS}\) in a leave-one-out fashion: the training set contains the annotated state-pairs of eight web apps, and the ninth web app is used as the test set. We iteratively vary the test web app until all nine subject apps are accounted for. In this use case, developers are supposed to find and manually label all pages of web apps similar to the ones under test. A company may develop a few web apps in a given domain, investing in manual labeling of the near-duplicates of such apps to save the near-duplicate detection effort later, when a new app is developed in the same domain.
#### 4.4.3 Within apps
We train an app-specific classifier for each of the nine web apps. For each app in \(\mathcal{SS}\), we use 80% of the state pairs for training the classifier and the remaining 20% for testing. In this use case, developers are required to label a significant portion of the near-duplicate pages of the web app under test before a classifier can be trained and applied to the other pages of the same web app.
### _Procedure and Metrics_
#### 4.5.1 RQ\({}_{1}\) (near-duplicate detection)
For each use case of WebEmbed (Section 4.4), we evaluate different WebEmbed implementations, varying (1) the token sequence used to train Doc2Vec and (2) the classifier used to enable the SAF. Concerning the token sequences, we trained three different Doc2Vec models, one for each representation of the pages in the dataset \(\mathcal{DS}\cup\mathcal{CC}\) (tags, content, content+tags). Concerning the training hyperparameters, we used the default parameters of the gensim[35] Python library and fitted the models for 100 epochs using a vector size of 100.
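A minimal gensim training sketch matching the configuration above (default hyperparameters, vector size 100, 100 epochs) is shown below; the toy corpus and the `min_count=1` setting are only there to make the snippet self-contained.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus standing in for the tags/content/content+tags token sequences.
token_sequences = [["html", "body", "div", "a"], ["html", "body", "table"]]
corpus = [TaggedDocument(words=toks, tags=[i])
          for i, toks in enumerate(token_sequences)]

model = Doc2Vec(vector_size=100, min_count=1)  # min_count=1 only for the toy corpus
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=100)

embedding = model.infer_vector(["html", "body", "div"])  # embed an unseen page
```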
Concerning the classifiers, we evaluate a total of eight classifiers. We consider six machine learning classifiers from the scikit-learn [36] Python library, namely Decision Tree, Nearest Neighbour, SVM, Naive Bayes, Random Forest, and Multi-layer Perceptron.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
 & \multicolumn{3}{c}{WebEmbed} \\ \cline{2-4}
 & Doc2Vec & \multicolumn{2}{c}{Classifiers} \\ \cline{2-4}
**Use case** & **Train Set** & **Train Set** & **Test Set** \\ \hline
Beyond apps & \(\mathcal{DS}\cup\mathcal{CC}\) & \(\mathcal{RS}\) & \(\mathcal{SS}\) \\
Across apps (for each App\({}_{i}\)) & \(\mathcal{DS}\cup\mathcal{CC}\) & \(\mathcal{SS}\setminus\mathrm{App}_{i}\) & \(\mathrm{App}_{i}\) \\
Within apps (for each App\({}_{i}\)) & \(\mathcal{DS}\cup\mathcal{CC}\) & 80\% App\({}_{i}\) & 20\% App\({}_{i}\) \\ \hline \hline
\end{tabular}
\end{table} TABLE III: Use cases and variants of WebEmbed
We also consider their ensemble with majority voting and an additional threshold-based classifier.
The quality of near duplicate detection is measured using accuracy, precision, recall, and F\({}_{1}\), where the last three metrics are computed under the assumption that the positive class (output 1 of the classifier) is 'near-duplicate' ('distinct' being the negative class). Overall, we evaluate 456 WebEmbed configurations (8 classifiers \(\times\) 3 token sequences \(\times\) 19 configurations, one for Beyond apps, nine for Across apps, and nine for Within apps).
#### 4.5.2 RQ\({}_{2}\) (model quality)
The crawl models contain redundant concrete states that Yandrapally et al. [12] aggregated into the corresponding logical pages. Logical pages represent clusters of concrete pages that are semantically the same. To measure WebEmbed's model quality w.r.t. the ground truth, we compute the precision, recall, and \(F_{1}\) scores, considering the intra-pairs (\(IP\)) shared between the given model and the intra-pairs within each manually identified logical page of the ground truth (GT) [51]:
\[p=\frac{|IP_{GT}\cap IP_{\text{WebEmbed}}|}{|IP_{\text{WebEmbed}}|}\qquad r=\frac{|IP_{GT}\cap IP_{\text{WebEmbed}}|}{|IP_{GT}|}\]
We also consider the \(F_{1}\) score as the harmonic mean of (intra-pair) precision and recall. As an example, consider a set of six web pages \(\{p_{1},p_{2},p_{3},p_{4},p_{5},p_{6}\}\) with the following ground truth (GT) assignment: \(\{p_{1},p_{2}\},\{p_{3}\},\{p_{4},p_{5},p_{6}\}\). Suppose WebEmbed produces the following assignment: \(\{p_{1},p_{3}\},\{p_{2}\},\{p_{4},p_{5}\},\{p_{6}\}\). The intra-pairs for GT are \(\langle p_{1},p_{2}\rangle\), \(\langle p_{4},p_{5}\rangle\), \(\langle p_{4},p_{6}\rangle\), \(\langle p_{5},p_{6}\rangle\), whereas the intra-pairs for WebEmbed are \(\langle p_{1},p_{3}\rangle\), \(\langle p_{4},p_{5}\rangle\). Thus, \(p=|\{\langle p_{4},p_{5}\rangle\}|/|\{\langle p_{1},p_{3}\rangle,\langle p_{4},p_{5}\rangle\}|=0.5\), \(r=|\{\langle p_{4},p_{5}\rangle\}|/|\{\langle p_{1},p_{2}\rangle,\langle p_{4},p_{5}\rangle,\langle p_{4},p_{6}\rangle,\langle p_{5},p_{6}\rangle\}|=0.25\), and \(F_{1}=2pr/(p+r)\approx 0.33\).
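The computation can be reproduced with a few lines of Python; the clustering literals below encode the example just discussed.

```python
# Intra-pair precision/recall on the worked example above.
from itertools import combinations

def intra_pairs(clusters):
    return {frozenset(pair) for c in clusters for pair in combinations(sorted(c), 2)}

gt   = [{"p1", "p2"}, {"p3"}, {"p4", "p5", "p6"}]
pred = [{"p1", "p3"}, {"p2"}, {"p4", "p5"}, {"p6"}]

ip_gt, ip_we = intra_pairs(gt), intra_pairs(pred)
p = len(ip_gt & ip_we) / len(ip_we)   # 0.5
r = len(ip_gt & ip_we) / len(ip_gt)   # 0.25
f1 = 2 * p * r / (p + r)              # ~0.33
```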
#### 4.5.3 RQ\({}_{3}\) (code coverage)
To assess the effectiveness of WebEmbed when used for web testing, we crawl each web application in \(\mathcal{SS}\) multiple times, each time varying the SAF. For all tools and all use cases, we set the same crawling time of 30 minutes. We use DANTE to generate Selenium web test cases from the crawl sequences, execute the tests, and measure the web app code coverage. For JavaScript-based apps (Dimeshift, Pagekit, Phoenix, PetClinic), we measure _client-side_ code coverage using cdp4j (v. 3.0.8) library, i.e., the Java implementation of Chrome DevTools. For PHP-based apps (Claroline, Addressbook, PPMA, MRBS, MantisBT), we measure the _server-side_ code coverage using the xdebug (v. 2.2.4) PHP extension and the php-code-coverage (v. 2.2.3) library. We addressed randomness in our experiments by manually adding delays where appropriate in the web test suites, in order to mitigate flaky executions. Before measuring coverage, we executed each test suite three times to ensure comparable outcomes across executions of different test suites.
We assess the statistical significance of the differences between WebEmbed and the baselines using the non-parametric Mann-Whitney U test [52] (with \(\alpha=0.05\)) and the magnitude of the differences, if any, using Cohen's \(d\) effect size [53].
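For reference, the sketch below shows one way to run this analysis in Python; the coverage arrays are placeholder values, not our experimental data, and the pooled-variance Cohen's \(d\) is one common formulation rather than a prescribed one.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled

# Placeholder per-app coverage scores (illustrative only).
cov_webembed = [0.62, 0.71, 0.55, 0.68, 0.74, 0.59, 0.66, 0.70, 0.63]
cov_baseline = [0.50, 0.58, 0.47, 0.55, 0.61, 0.49, 0.54, 0.57, 0.52]

stat, p_value = mannwhitneyu(cov_webembed, cov_baseline)
print(p_value < 0.05, cohens_d(cov_webembed, cov_baseline))
```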
### _Results_
#### 4.6.1 RQ\({}_{1}\) (near-duplicate detection)
Table IV shows the results for the tools being compared on the task of near-duplicate detection. For space reasons, we only present the scores of WebEmbed when using the SVM classifier, which proved to be the best in our experiments across all use cases. For the Across apps and Within apps use cases, we present the scores averaged over all nine apps. All results are available in our replication package [21].
For each technique being compared, Table IV shows average accuracy (Acc.), precision (Pr.), recall (Rec.), and \(F_{1}\) scores, divided by use case. The scores for the baselines RTED and PDiff are also reported. In the Beyond apps use case, WebEmbed and RTED have similar accuracy (resp. F\({}_{1}\)) values, whereas WebEmbed has a +56% (resp. +44%) increase w.r.t. PDiff. For the Across apps use case, WebEmbed scores higher accuracy and F\({}_{1}\) w.r.t. the baselines (e.g., for accuracy, +8% increase w.r.t. RTED and +12% w.r.t. PDiff). In the Within apps use case, WebEmbed scores higher accuracy and F\({}_{1}\) than the baseline approaches as well (e.g., for accuracy, +11% increase w.r.t. RTED and +8% increase w.r.t. PDiff).
Statistical tests confirmed that the differences in accuracy and F\({}_{1}\) between WebEmbed and the best baseline (either RTED or PDiff, depending on the use case) are statistically significant (\(p\)-value \(<\) 0.05) with a _large_ effect size in both the Across and Within apps use cases.
#### 4.6.2 RQ\({}_{2}\) (model quality)

Overall, WebEmbed produces more accurate models (i.e., models more similar to the ground truth) than the competing techniques across all use cases, as summarized by the intra-pairs \(F_{1}\) scores.
In the Beyond apps use case, WebEmbed scores +18% and +37% average \(F_{1}\) w.r.t. RTED and PDiff, respectively. In the Within apps use case, WebEmbed scores +21% and +34% average \(F_{1}\) w.r.t. RTED and PDiff, respectively. In the Across apps use case, WebEmbed scores an average \(F_{1}\) of 92%, a +35%, and +31% increase w.r.t. RTED and PDiff, respectively. Statistical tests confirmed that the differences in accuracy are statistically significant (_p_-value \(<\) 0.05) with a _large_ effect size in all use cases, except Across apps, in which the differences between WebEmbed and RTED are statistically significant with a _medium_ effect size.
**RQ2:**: WebEmbed _achieves the highest \(F_{1}\) scores (84-92%, on average) over all use cases: neural embeddings are able to approximate the ground truth model better than structural and visual techniques. The differences with the baseline approaches are statistically significant in all use cases, with a medium to large effect size._
#### 4.6.3 RQ3 (code coverage)
Table VI shows the code coverage results for each tool, grouped by use case. Considering the average scores over all nine apps, the scores for WebEmbed (WE) are consistently the best across all use cases.
For the Beyond Apps use case, WebEmbed achieves +6-14% code coverage w.r.t. RTED and PDiff. For the Across Apps use case, WebEmbed achieves +12-13% code coverage w.r.t. RTED and PDiff. For the Within Apps use case, WebEmbed achieves +20-36% code coverage w.r.t. RTED and PDiff. The differences in code coverage between WebEmbed and PDiff are statistically significant for all use cases (i.e., _p_-value \(<\) 0.05, with _small_/_negligible_/_medium_ effect sizes). The differences in code coverage between WebEmbed and RTED are significant only for the Within apps use case, with a _small_ effect size.
**RQ3:**: _The tests generated from WebEmbed crawl models achieve the highest code coverage scores over all use cases (up to +36% improvement) thanks to the more accurate and complete web app models generated using neural embeddings._
### _Final Remarks_
Overall, WebEmbed was more effective than the considered baseline approaches across all use cases. From a practical point of view, looking at the accuracy scores in conjunction with code coverage, we suggest: (1) using WebEmbed (Beyond apps) if no labeling budget is available for developers; indeed, the effectiveness of this configuration is close to that of WebEmbed (Across apps), which instead requires a non-negligible labeling cost; (2) using WebEmbed (Within apps) in all other cases, especially if the labeling cost is affordable, since the gain in code coverage is significant (+29% and +26% w.r.t. the Beyond and Across apps use cases, respectively).
### _Threats to Validity_
#### 4.8.1 Internal validity
We compared all variants of WebEmbed and baselines under identical experimental settings and on the same evaluation set (Section 4.2). In our experiments, a crawling time of 30 mins allowed all crawls to explore all logical
pages of the AUTs within the timeout. Setting a shorter crawling time (\(<\)30mins) would favor the techniques that make better use of a limited crawling budget (i.e., WebEmbed and RTED). The test generation budget refers to the crawling time allowed for model inference, as the tests are extracted directly from the crawl sequences. The main threat to internal validity concerns our implementation of the testing scripts to evaluate the results, which we tested thoroughly.
#### 4.8.2 External validity
The limited number of subjects in our evaluation poses a threat in terms of the generalizability of our results to other web apps. Moreover, we considered only the embeddings produced by Doc2Vec [20], and WebEmbed's effectiveness may change when considering other algorithms.
#### 4.8.3 Reproducibility
All our results, the source code of WebEmbed, and all subjects are available [21].
## 5 Discussion
## 6 Related Work
### _End-to-End Web Test Automation_
Andrews et al. [54] propose a test generation approach based on a hierarchical finite state machine model to achieve full transition coverage. Biagiola et al. [6, 8] use Page Objects to guide the generation of tests. Marchetto et al. [55] propose a combination of static and dynamic analysis to model the AUT into a finite state machine and generate tests based on multiple coverage criteria.
Mesbah et al. [9] propose ATUSA, a tool that leverages the model of the AUT produced by Crawljax to automatically generate test cases to cover all the transitions of the model. Biagiola et al. [5] propose DANTE, an automated approach to test generation, aimed at producing minimal test suites from web app crawlings. DANTE turns the raw output of a crawler into executable test cases by reusing the same inputs used upon crawling, resolving dependencies among tests and eliminating redundant tests. Sunman et al. [56] propose AWET, an approach that leverages existing test cases, possibly obtained via capture-and-replay in an exploratory testing fashion, to guide crawling.
These works do not address the redundancy in the web app model during crawling due to an ineffective SAF. These test generators can be used in conjunction with WebEmbed, to increase the accuracy of the inferred web app models.
### _Empirical Studies on Near-Duplicates_
Fetterly et al. [57] study the nature of near-duplicates during software evolution, reporting their low variability over time. Yandrapally et al. [12] compare different near-duplicate detection algorithms as SAFs in a web crawler. The paper reports on the impossibility of finding an optimal threshold that can accurately detect functional near-duplicates across web apps. Motivated by these findings, in our paper we use ML classifiers instead of threshold-based classifiers. Moreover, we adopt neural embeddings applied to web pages and use the best detection algorithms from the study by Yandrapally et al. [12].
### _Automated Near-Duplicate Detection_
Regarding detection of near-duplicates _within_ the same AUT, Crescenzi et al. [58] propose a structural abstraction for web pages and a clustering algorithm based on such abstraction. Di Lucca et al. [60, 59] evaluate the Levenshtein distance and the tag frequency for detecting near-duplicate web pages. Stocco et al. [33] use clustering on structural features as a post-processing technique to discard near-duplicates in crawl models. Corazza et al. [61] propose the usage of tree kernels, functions that compute the similarity between tree-structured objects, to detect near-duplicates.
Concerning detection of near-duplicates _across_ AUTs, researchers mainly considered clustering techniques on raw structural features [62, 63, 57, 64, 59, 60, 58, 33]. Other works, such as the one by Henzinger [62], use shingles, i.e., \(n\)-grams composed of contiguous subsequences of tokens, to ascertain the similarity between web pages. Manku et al. [63] use simhash to detect near-duplicates in the context of information retrieval, plagiarism, and spam detection. Yandrapally and Mesbah [13] use web page fragments in which they combine both visual and structural features to detect near-duplicates.
In this paper, we consider HTML neural embeddings to train an ML classifier for near-duplicate detection and we illustrate that its usage for functional testing of web apps outperforms state-of-the-art techniques [12].
### _Embeddings in Software Engineering_
Alon et al. [15] present code2vec, a neural model for learning embeddings for source code, based on its representation as a set of paths in the abstract syntax tree. Hoang et al. [65] propose CC2Vec, a neural network model that learns distributed representations of code changes. The model is applied for log message generation, bug fixing patch identification, and just-in-time defect prediction.
Feng et al. [66] use representation learning applied across web apps for phishing detection. Similarly, we use embeddings produced by Doc2Vec on HTML features to learn a neural representation of web pages, beyond, across, and within web apps. Lugeon et al. [19] propose Homepage2Vec, an embedding method for website classification. Namavar et al. [67] performed a large-scale experiment comparing different code representations to aid bug repair tasks. In this work, we propose an embedding method that works at a finer granularity level and that can integrate both structural (HTML tags) and textual (content) information. We study this embedding in the context of automated crawling and web testing.
Among the grey literature, Ma et al. [16] propose GraphCode2Vec, a technique that combines code analysis and graph neural networks to learn lexical and program-dependence features to support method name prediction. Dakhel et al. [17] propose dev2vec, an approach to embed developers' domain expertise within vectors for the automated assessment of developers' specialization. Jabbar et al. [18] propose to encode test execution traces for test prioritization.
Differently, we use the embeddings of Doc2Vec to train an ML classifier that is used as SAF within a crawl-based test generator for functional testing.
## 7 Conclusions and Future Work
In this paper, we aim to improve the crawlability of modern web applications by designing and evaluating WebEmbed, a novel state abstraction function for web testing based on neural embeddings of web pages. Neural embeddings are used to train machine learning classifiers for near-duplicate detection. We demonstrate their effectiveness in inferring accurate models for functional testing of web apps, while also discussing their cost for developers in three settings, namely beyond, across and within web apps. Our results show that crawl models produced with WebEmbed have higher precision and recall than those produced with existing approaches. Moreover, test suites generated from these models achieve higher code coverage.
Future work includes exploring other forms of embeddings to further improve the accuracy of WebEmbed. For example, we will explore visual embeddings of web page screenshots (e.g., with autoencoders), as well as hybrid solutions.
## Acknowledgments
This work was partially supported by the H2020 project PRECRIME, funded under the ERC Advanced Grant 2017 Program (ERC Grant Agreement n. 787703).
|
2305.12213 | Taming Resource Heterogeneity In Distributed ML Training With Dynamic
Batching | Current techniques and systems for distributed model training mostly assume
that clusters are comprised of homogeneous servers with a constant resource
availability. However, cluster heterogeneity is pervasive in computing
infrastructure, and is a fundamental characteristic of low-cost transient
resources (such as EC2 spot instances). In this paper, we develop a dynamic
batching technique for distributed data-parallel training that adjusts the
mini-batch sizes on each worker based on its resource availability and
throughput. Our mini-batch controller seeks to equalize iteration times on all
workers, and facilitates training on clusters comprised of servers with
different amounts of CPU and GPU resources. This variable mini-batch technique
uses proportional control and ideas from PID controllers to find stable
mini-batch sizes. Our empirical evaluation shows that dynamic batching can
reduce model training times by more than 4x on heterogeneous clusters. | Sahil Tyagi, Prateek Sharma | 2023-05-20T15:33:06Z | http://arxiv.org/abs/2305.12213v1 | # Taming Resource Heterogeneity In Distributed ML Training With Dynamic Batching
###### Abstract
Current techniques and systems for distributed model training mostly assume that clusters are comprised of homogeneous servers with a constant resource availability. However, cluster heterogeneity is pervasive in computing infrastructure, and is a fundamental characteristic of low-cost transient resources (such as EC2 spot instances). In this paper, we develop a dynamic batching technique for distributed data-parallel training that adjusts the mini-batch sizes on each worker based on its resource availability and throughput. Our mini-batch controller seeks to equalize iteration times on all workers, and facilitates training on clusters comprised of servers with different amounts of CPU and GPU resources. This variable mini-batch technique uses proportional control and ideas from PID controllers to find stable mini-batch sizes. Our empirical evaluation shows that dynamic batching can reduce model training times by more than \(4\times\) on heterogeneous clusters.
## I Introduction
Distributed training of machine learning models using large clusters of servers is a popular technique to decrease model training time. Techniques and system architectures for distributed ML training, such as Stochastic Gradient Descent (SGD) and parameter servers, are widely used in data centers and cloud platforms and provide reasonable parallel speedup.
However, current techniques and systems for distributed model training mostly assume that the workers (i.e., the servers) all have the same performance and resource configuration, i.e., are homogeneous. In practice, virtual clusters in data centers and especially clouds _do not_ always exhibit this resource homogeneity. The performance of different workers can be affected by performance interference with co-located applications; workers may be throttled by the cloud or data center provider; or the cluster may have servers with vastly different resource configurations.
This _resource heterogeneity_ is a key characteristic of cloud-based applications, and distributed ML model training must be able to tolerate and perform well even in heterogeneous environments. However, heterogeneity presents many fundamental challenges to distributed training: synchronous model updates result in stragglers causing poor parallel efficiency, and asynchronous updates result in gradient and model staleness causing poor statistical efficiency [1].
In this paper, we address the challenges of distributed ML training in heterogeneous environments. Our goal is to make model training "omnivorous", and be able to run efficiently on dynamic and heterogeneous cluster configurations in shared data center and cloud environments. Our key insight is that having variable, instead of uniform mini-batch sizes on different workers, is a simple yet powerful technique for alleviating many of the performance degradation problems in heterogeneous environments.
Our dynamic batch sizing mechanism adjusts the mini-batch size on each worker based on the worker's throughput, by using a proportional controller [2] that minimizes the differences in the workers' iteration times. This dynamic batch sizing technique permits training on clusters made up of servers with vastly different resource configurations, and on clusters with dynamic resource availability due to resource elasticity, over-commitment, or preemption. The technique enables us to train models efficiently on clusters comprising servers with different CPUs and GPUs, which is a key differentiator from prior work in heterogeneous distributed training [3, 4] that instead focuses on random worker slowdowns. Prior work has shown that even small random slowdowns can increase training times by an order of magnitude. This is only exacerbated by the systemic heterogeneity that we aim to alleviate.
The dynamic batching mechanism is able to reduce stragglers in Bulk Synchronous Parallel (BSP) training, and is designed as zero-configuration, black-box approach that can effectively work with different training, model, and resource configurations. Our approach allows distributed training on clusters with dynamic resource availability that are ubiquitous in cloud environments. By mitigating the performance degradation due to heterogeneity, our contributions enable low-cost training on heterogeneous collections of transient cloud servers such as EC2 spot instances [5] and Google Preemptible VMs [6], that are up to \(10\times\) cheaper than conventional cloud servers. We implement our dynamic batching mechanism and policies in TensorFlow, and make the following contributions:
1. We develop a dynamic batching mechanism for data-parallel training that continuously balances the load between workers by assigning them different sized mini-batches based on their throughput. Our proportional-control based technique reduces stragglers in BSP, and allows the mixing of CPU and GPU servers. It is able to ameliorate both static as well as dynamic heterogeneity.
2. We implement all our mechanisms and policies in TensorFlow using the estimator API, which allows most models to directly run in heterogeneous environments without any modifications.
3. We conduct a large scale study of training performance in various static and dynamic heterogeneity environments using popular ML workloads. Our techniques can reduce
the training times by as much as \(4\times\) compared to existing uniform-batching.
## II Background & Motivation
### _Heterogeneity in Data Centers and Clouds_
Resource heterogeneity is pervasive in modern data centers and clouds. In cloud environments, applications are often deployed on clusters composed of servers (i.e, VMs) of different resource capacities and sizes. This _static heterogeneity_ is _necessary_ for effectively using low-cost transient servers such as Amazon EC2 spot instances [5], Google Preemptible VMs [6], etc.
Since distributed model training is highly computationally intensive, using low-cost transient VMs or low-priority data center resources is a key technique for reducing training costs [7, 8]. Transient VMs can be unilaterally preempted by the cloud provider, which are akin to fail-stop failures. Distributed applications that can tolerate a failure of a (small) subset of their servers failing can benefit greatly from running on VMs of different sizes. Past work on transient computing [9, 10] has found that transient VMs of different sizes are usually _uncorrelated_ in their preemptions, and this diversification significantly reduces the risk of all the VMs preempted at the same time. Thus distributed training needs to be "omnivorous", capable of using different types of low-cost low-priority servers and cannot assume homogeneous clusters with constant resource availability.
### _Distributed Training_
Training of machine learning models entails learning the model parameters (a.k.a weights) of a given model (such as a deep neural network) over an input training dataset. This process is typically done through an iterative-convergent process that gradually minimizes some loss function of the model over the dataset, by using an optimization technique such as Stochastic Gradient Descent (SGD) [11].
Since ML training is highly compute intensive, parallelizing it using computational accelerators such as GPUs and TPUs via distributed training is vital [12, 13]. In distributed training, multiple _workers_ participate to iteratively refine the model. A common parallelization approach is _data-parallelism_, where training is launched on multiple workers, and each worker learns and updates the model parameters by processing a small batch of the training data [14]. Each iteration comprises of computing model updates to the previous model parameters, and sharing the updates with other workers to form a new global model. Training a popular image recognition model such as ResNet [15] requires tens of thousands of iterations until the model's error converges to a low-enough target.
Conventionally, workers send their updates to a smaller number of parameter servers that apply the updates and compute an "averaged" model that is broadcasted to workers before the next iteration [16]. Concretely, the learning process involves iteratively computing the model parameters over \(K\) workers, each processing a mini-batch of \(b\) items at iteration \(t\) and computing the gradient \(\nabla f(\mathbf{x}_{k,t})\). The gradients from all the workers are then collected and averaged by the parameter servers, and the update rule for the model parameters \(\mathbf{x}\) is given by :
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}-\eta\frac{1}{K}\frac{1}{b}\sum_{k=1}^{k=K} \nabla f(\mathbf{x}_{k,t}), \tag{1}\]
where \(\eta\) is the learning rate parameter which is one of the "hyperparameters" of the model that is found through empirical search techniques.
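To make the update rule concrete, here is a minimal NumPy sketch of one synchronous step of Equation 1; the gradients are toy arrays, and each \(\nabla f(\mathbf{x}_{k,t})\) is assumed to be the worker's gradient summed over its mini-batch of \(b\) samples.

```python
import numpy as np

def bsp_update(x, worker_grads, eta, b):
    """One step of Eq. (1): average the K workers' mini-batch gradients
    (each assumed summed over b samples) and descend."""
    K = len(worker_grads)
    return x - eta * sum(worker_grads) / (K * b)

x = np.zeros(4)
grads = [g * np.ones(4) for g in (1.0, 2.0, 3.0)]  # toy gradients from K=3 workers
x = bsp_update(x, grads, eta=0.1, b=32)
```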
### _Training Challenges in Heterogeneous Environments_
If the computing capacities of the workers are not uniform and constant, then data-parallel training suffers from severe performance degradation. The performance and model quality (i.e., model accuracy) of distributed training are _highly_ dependent on the communication and synchronization of the gradient updates to compute new model parameters. In bulk synchronous parallel (BSP) SGD, new model parameters are computed after gradients from _all_ workers have been received. Even in homogeneous conditions, stragglers are an important concern in synchronous data-parallel training. In heterogeneous environments, straggler workers with lower computational resources take much longer to process their mini-batches. Thus, BSP suffers from poor parallel efficiency in heterogeneous environments because of stragglers that significantly increase the total training time.
## III Dynamic Mini-Batching
In this section, we describe our dynamic batching mechanism and policies for distributed training. Our focus is on data-parallel training on heterogeneous clusters of data center or cloud servers.
Fig. 1: Increase in the total training time compared to a homogeneous cluster for three popular ML workloads. Both the homogeneous and heterogeneous clusters have the same total amount of computing resources.
Fig. 2: With variable batching, we can decrease the batch size on the slower worker, and increase the batch size on the larger worker, so that no worker “waits” for another.
### _Key Idea: Variable Mini-Batch Sizes_
Conventional data-parallel training uses mini-batch SGD for distributing and parallelizing the model training process. This approach entails each worker processing a mini-batch of training samples independently and computing the gradients. The gradients are computed over the mini-batch of size \(b\) by all the workers. Due to resource heterogeneity, the mini-batch processing times (i.e., iteration times) across workers can be different. This results in stragglers in the case of BSP and staleness in the case of ASP--both of which cause a significant increase in the model training time to a desired accuracy level.
The main insight is that the mini-batch sizes need not be uniform across workers--instead, the mini-batch sizes should be proportional to the server resource availability. This _variable_ batching allows workers to process different amount of data. The goal is to reduce the differences in the workers' iteration times to reduce stragglers and staleness. This is illustrated in Figure 2, which shows the use of variable batching during the training process to adjust the worker batch sizes to minimize stragglers.
Such variable batching is compatible with distributed SGD--we assign a mini-batch size of \(b_{k}\) to worker \(k\). Because workers are processing different amounts of training data, their contribution in the training process is no longer uniform. In conventional SGD, the gradients from all workers are averaged as per Equation 1. However, with variable batching, the gradients computed by workers with larger batch sizes need to be "weighted" more than the gradients computed using smaller batches. Thus we _scale_ the gradients computed by each worker based on its mini-batch size, and the final gradients are computed using a weighted average.
We use linear gradient scaling: gradients on worker \(k\) are multiplied by \(\lambda_{k}\) such that \(\lambda_{k}\propto b_{k}\). To maintain equivalence with conventional uniform batching, we also require that \(\sum_{k}\lambda_{k}=1\). This yields \(\lambda_{k}=\frac{b_{k}}{\sum_{i=1}^{i=K}b_{i}}\). The new weights for the next iteration are then computed by taking a weighted average of the gradients:
\[g_{k,t}=\lambda_{k}\nabla f(\mathbf{x}_{b_{k},t}) \tag{2}\]
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}-\frac{1}{K}\eta\sum_{k=1}^{k=K}g_{k,t}, \tag{3}\]
where \(\nabla f(\mathbf{x}_{b_{k},t})\) is the gradient computed using mini-batch \(b_{k}\) by worker \(k\). The weighted averaging is done by the parameter server, and preserves the convergence properties of SGD [17].
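A minimal sketch of the weighted averaging in Equations 2 and 3 follows; as above, the gradients are assumed to be plain arrays rather than framework tensors.

```python
import numpy as np

def weighted_update(x, grads, batch_sizes, eta):
    """Eqs. (2)-(3): scale each worker's gradient by lambda_k = b_k / sum(b_i)
    (the lambdas sum to one) and apply the averaged update."""
    total = float(sum(batch_sizes))
    g = sum((b / total) * g_k for b, g_k in zip(batch_sizes, grads))
    return x - (eta / len(grads)) * g
```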
Ideally, we want perfect load balancing, and assign mini-batch-sizes to workers such that all workers finish their iterations at the same time. Due to servers of different sizes and dynamic resource availability due to interference or over-commitment, the processing power on workers also varies. In the next two subsections, we describe two approaches for assigning mini-batches to workers--a simple static open-loop allocation technique, and a closed-loop dynamic allocation that can respond to cluster resource dynamics.
### _Static Mini-batch Allocation Policy_
Instead of uniform mini-batches for all workers, our _static assignment_ policy computes mini-batch sizes proportional to the worker's computing power. Because model training is highly computation-bound, the throughput of workers is proportional to the available CPU and GPU resources. Thus given a heterogeneous cluster of \(K\) workers, we want \(b_{k}\propto X_{k}\), where \(X_{k}\) is the throughput (i.e., training samples processed per second) of server \(k\).
We seek to maintain the initial average mini-batch size \(b_{0}\) that is provided for a given ML model. We then allocate the mini-batches to the workers such that the batch sizes are proportional to the worker throughput and the global batch size is maintained: \(b_{k}=\frac{Kb_{0}X_{k}}{\sum_{i}X_{i}}\). This ensures that \(\sum_{i}b_{i}=Kb_{0}\), where \(b_{0}\) corresponds to the conventional uniform mini-batch size. Importantly, this approach keeps the total global batch size constant and invariant to variable batching.
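The static allocation reduces to a one-liner; in the sketch below the throughputs may equally be FLOPs-based estimates, and rounding to integers is an implementation detail we gloss over.

```python
def static_batches(b0, throughputs):
    """Throughput-proportional batch sizes that preserve the global
    batch size K * b0."""
    K, total = len(throughputs), sum(throughputs)
    return [round(K * b0 * x / total) for x in throughputs]

# e.g., static_batches(64, [3, 5, 12]) -> [29, 48, 115], summing to ~192 = 3 * 64
```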
Static mini-batch allocation seeks to “equalize” the iteration times on different workers, as illustrated in Figure 3, which shows the distribution of iteration times for ResNet-50 (BSP)1, with three workers in a heterogeneous cluster with \((3,5,12)\) CPU cores respectively. With uniform batching in Figure 3(a), the iteration times for the workers are different, due to the differences in their processing powers. In contrast, with the variable mini-batching approach, the iteration times for all the workers have similar frequency distributions, as seen in Figure 3(b).
Footnote 1: We train ResNet with BSP for all the examples and figures in this section.
By reducing the gap between iteration times among workers, the variable batching technique can reduce stragglers in the case of BSP and thus the total training time in heterogeneous environments. Unlike for BSP, our approach does not _directly_ address the root cause of slowdowns for ASP training. With ASP, the slowdown is a result of the statistical inefficiency arising due to multiple factors, including gradient update staleness. However the relation between staleness and training time is not as simple to model as the effect of stragglers on BSP [18, 19], and is not necessarily linear. Nevertheless, reducing the iteration gap allows us to ameliorate the staleness and improve the total training time even for ASP, albeit not as effectively as BSP.
**Estimating throughput.** We can estimate the worker throughput required for the variable batch allocation based on the server's resource configuration. When workers are using only CPU resources, a simple way is to assign batch sizes proportional to the number of CPU cores. In case a distributed
Fig. 3: Frequency distributions of iteration times across workers in a heterogeneous cluster. Worker 3 is 3x larger than worker 1, which is 2x larger than worker 2. Variable batching ensures that the iteration times across workers are similar.
training job is running on both CPU and GPU servers, we assign batch sizes proportional to the half-precision FLOPs (Floating Point Operations per Second) of each server. This is a one-shot, black-box method that requires no adjustment and is "open-loop".
However, throughput may not be exactly proportional to the server FLOPs. This estimation error can cause imperfect load balancing and can result in sub-optimal mini-batching. Compared to the ideal batch sizes that equalize all iteration times, some workers may get smaller batches and wait for workers with larger batches. We address this problem by _dynamically_ assigning mini-batch sizes, which we describe in the next subsection.
### _Proportional-Control Based Dynamic Policy_
To mitigate stragglers and staleness, it is crucial for workers to finish processing their mini-batches simultaneously. In the previous static technique, the mini-batches were allocated based on the estimated relative throughput of different workers.
This open-loop estimation, based on the hardware FLOPs, is not accurate in predicting the training throughput, in two major situations. First, the training throughput depends on intra-worker parallel scaling characteristics governed by Amdahl's law. Thus the observed throughput on large workers (with more CPUs) may be lower than what is indicated by their core counts. Second, many scenarios yield _dynamic_ resource availability, which the static mini-batching approach is ill-suited for.
Our _dynamic_ mini-batching technique is designed to handle throughput estimation errors, as well as dynamic resource availability due to server over-commitment or intermittent performance interference that leads to variable effective throughput on the affected workers. The key idea is to continuously adjust the mini-batch sizes on the workers. The goal is to equalize the iteration times among all the workers. Let worker \(k\) finish computing gradients for its mini-batch in time \(t_{k}\). Ideally, we want \(t_{i}=t_{j}\) for all workers \(i,j\).
The dynamic mini-batch adjustment uses a simple proportional-control approach to compute the mini-batch size of all workers. Since the goal is to equalize the batch processing times, the "error" is \(\tau_{k}=t_{k}-\bar{t}\), where \(\bar{t}\) is the average iteration time across all the workers. To minimize this error, the mini-batch size is updated by \(\Delta(b)\) by the following proportional control rule:
\[\Delta(b_{k})=-X_{k}\tau_{k}, \tag{4}\]
where, \(X_{k}\) is the throughput of worker \(k\), which can be empirically determined as \(X_{k}=b_{k}/t_{k}\). The new batch size for iteration \(i+1\) is computed as follows:
\[b_{k}^{i+1}=b_{k}^{i}+\Delta(b_{k}^{i}), \tag{5}\]
Thus slower workers (\(t>\bar{t}\)) will have a positive error \(\tau\), and their batch sizes will be decreased. Workers whose batch processing times are faster than average, are capable of handling a higher load, and will get a larger batch size after the dynamic adjustment. Simplifying the above two equations, we can compute the new batch size \(b_{k}^{(1)}\) from the initial batch size \(b_{k}^{(0)}\) as : \(b_{k}^{(1)}=b_{k}^{(0)}\bar{t}/t_{k}\).
This policy essentially combines model-based and conventional black-box PID controllers. Instead of using and tuning an arbitrary proportionality constant like in most PID controllers, we use the (estimated) throughput.
**Initial mini-batch sizes.** The dynamic mini-batching approach works with any initial batch size. By default, the initial batch sizes are allocated based on the throughput-based open-loop variable batching approach described in the previous subsection. In that case, any error in the throughput approximation (based on the CPU/GPU FLOPs) is corrected by the control mechanism.
While a good starting point is desirable, it is not necessary. The dynamic batching approach permits _any_ initial batch size allocation, with the caveat that the farther the initial batch sizes are from the ideal (i.e., throughput-proportional) ones, the more batch adjustment steps are required to reach the equilibrium steady-state batch sizes. For example, Figure 4(a) shows the progress of the batch adjustment on three heterogeneous workers when all the workers are assigned the same (sub-optimal) initial batch size. We can see that the mini-batch sizes on the different workers converge to their stable throughput-proportional values after only two batch adjustments. Thus, the dynamic batching technique is useful in situations where a priori throughput estimates are not known.
#### III-C1 Control stability
The dynamic batch size adjustment can be done at the end of every iteration. However, it is neither prudent nor necessary to do so. Changing the batch size on workers is _not_ a zero-cost operation, because it involves terminating and restarting the training. Furthermore, due to the stochastic nature of training, iteration times on workers will never converge to the exact average, and there will always be some error which the proportional control mechanism will try to chase. This is illustrated in Figure 4(b), which shows the mini-batch sizes "oscillating" due to the dynamic batching adjustments.
To prevent these oscillations and reduce the overhead of batch adjustments, we use three main techniques: dead-banding, exponential smoothing iteration times, and lower-upper bounds on batch sizes. We describe these approaches below:
**Dead-banding.** After every iteration, we compute the new batch sizes using the proportional-control technique as described so far. We use a dead-band for our controller: batch
Fig. 4: Dynamic batch size adjustments.
sizes are updated only if the change is substantial. We compute the difference between \(b^{i+1}-b^{i}\) and do not update if this is smaller than threshold, \(\Delta_{min}(b)\). If the change in the batch sizes on all workers is less than \(\Delta_{min}(b)\), then no batch readjustment is made. The threshold can be chosen based on how sensitive we want the adjustment to be, and it also depends on the performance overhead of readjusting the batch sizes. For instance, current ML frameworks such as TensorFlow do not support graceful dynamic adjustment of batch sizes and require terminating and restarting the entire training process, in which case a larger threshold is preferable. Based on the TensorFlow overheads, we use a dead-band threshold of 0.05: meaning that the new batch sizes on all workers must be atleast change by 5%.
**Exponential Smoothing.** With dead-banding, we only need to make batch adjustments at the start of the training process and whenever the underlying resource availability of the workers changes due to resource over-commitment or preemption. To improve the controller stability and avoid spurious readjustments, we compute the error (deviation of iteration time from the cluster average) on multiple iterations. Specifically, the error is computed using an EWMA (Exponentially Weighted Moving Average) across all the iterations since the previous batch readjustment. This provides us with the "Integrator" component in the controller, and particularly useful to prevent outliers.
With the dead-banding, we don't update batches on every iteration, and the moving average is computed in the interval with no batch size updates. Assume that last batch update happened on iteration \(j\), and the current iteration is \(i\). We then compute the average of worker \(k\): \(\mu(k,i,j)\) = EWMA(\(t_{k}^{i},t_{k}^{i-1},...t_{k}^{j}\)). The smoothed iteration times (\(\mu\)) are used in Equation 4 to compute the error and the batch size update.
**Batch size bounds.** Finally, we enforce lower and upper bounds on mini-batch sizes on all workers. These bounds prevent extreme batch sizes in cases of extreme heterogeneity, and ensure that the total throughput does not drop because of variable batching. Extremely small batches cannot use all the hardware parallelism and yield low throughput. Similarly, large batches may exhaust memory resources and also result in lower throughput. This is illustrated in Figure 5, which shows the throughput increasing with the batch size, until a sharp decline due to memory exhaustion in the GPU, and a gradual decline for CPU workers.
We thus allow users to specify estimates of lower and upper bounds (\(b_{\text{min}},b_{\text{max}}\)) of the batch sizes for all the workers. As the training progresses and we readjust batch sizes, we get more data points for the throughput curve. If we observe a drop in worker throughput after increasing its batch size from \(b_{0}\) to \(b\), then we update its \(b_{\text{max}}=b_{0}\). This ensures that future batch readjustments will not result in a drop in throughput.
**Putting it all together.** We can integrate all the control stability techniques into the proportional controller. Assume that the latest iteration is \(i\), and the last batch-update was made in iteration \(j\). The pseudo-code for our dynamic batching can be expressed as:
1. Compute exponential moving average iteration times \(\mu(k,i,j)\) for all workers \(k\).
2. Use \(\mu(k,i,j)\) in Eqn 4, 5 to compute \(\Delta(b_{k})\) and \(b_{k}^{i+1}\).
3. Enforce batch size bounds: \(b_{k,min}\leq b_{k}\leq b_{k,max}\)
4. Apply deadbanding check. If \(\max_{k}\Delta(b_{k})/b_{k}>\Delta_{min}(b)\), update all batch sizes. Otherwise do nothing.
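A consolidated Python sketch of this controller is given below; the EWMA weight `alpha` is an illustrative assumption, the 5% dead-band follows the threshold stated earlier, and the iteration-time histories are assumed to run since the last batch update.

```python
import numpy as np

def adjust_batches(batch_sizes, iter_time_hist, b_min, b_max,
                   alpha=0.5, deadband=0.05):
    # 1. EWMA of each worker's iteration times since the last update.
    mu = []
    for times in iter_time_hist:
        m = times[0]
        for t in times[1:]:
            m = alpha * t + (1 - alpha) * m
        mu.append(m)
    t_bar = float(np.mean(mu))

    # 2. Proportional step: delta_k = -X_k * (mu_k - t_bar) with X_k = b_k / mu_k,
    #    which simplifies to b_k * t_bar / mu_k.
    new_sizes = [b * t_bar / m for b, m in zip(batch_sizes, mu)]

    # 3. Enforce per-worker batch-size bounds.
    new_sizes = [min(max(b, lo), hi) for b, lo, hi in zip(new_sizes, b_min, b_max)]

    # 4. Dead-band: only adopt the new sizes (and restart training) if some
    #    worker's batch size changes by more than 5%.
    if max(abs(n - b) / b for n, b in zip(new_sizes, batch_sizes)) > deadband:
        return [int(round(b)) for b in new_sizes]
    return batch_sizes  # change too small; keep the current allocation
```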
## IV Experimental Evaluation
We conduct all our evaluation using our modified TensorFlow implementation that monitors differences in iteration times and dynamically adjusts per-worker batch sizes. We use the following standard well-known training workloads:
* **ResNet-50:** TensorFlow's ResNet benchmark [20], trained on the standard CIFAR-10 dataset. We use a momentum optimizer with a learning rate schedule of [0.1, 0.01, 0.001, 0.0002].
* **MNIST CNN [21]:** with Adam [22] and learning rate of 0.0001.
* **Linear Regression:** To show our system effectively sustains heavy as well as comparatively lighter workloads, we perform Linear Regression (LR) on Harvard's bar crawl dataset [23].
**Experimental environment and setup.** We use the parameter server distribution strategy for all model training. We appropriately scale the number of parameter servers to ensure that they are not the bottleneck. All TensorFlow processes (master, parameter servers, and workers) are deployed inside Docker containers for ease of management and fine-grained resource accounting and control. We conduct all our empirical evaluation on a local cluster as well as on Google Cloud Platform. The local cluster's CPU servers have 48-core Intel Xeon Platinum 2.10GHz CPUs and 256 GB of RAM. The GPU is Nvidia Tesla P100-PCIe-16GB.
### _CPU Training_
In this subsection, we focus on static heterogeneity when the cluster is composed of VMs/containers of different sizes. We are primarily interested in determining the impact of heterogeneity, and not parallel scaling. Therefore, we evaluate on clusters with different heterogeneity levels but the same total resource capacity. For instance, we compare a cluster configuration with two workers with (4, 16) CPUs, vs. two workers with (8, 12) CPUs. For CPU-only clusters, we define the heterogeneity level as: \(\text{H-level}=\max\text{number of cores}/\min\text{number of cores}\).
Fig. 5: Training throughput (img/sec) increases with batch size, then declines because of hitting resource (memory) limits on the workers, especially on GPUs where the memory limit is strict.
**Local cluster.** We first present the training performance across different heterogeneity levels on our local cluster with three CPU workers. The total number of CPU cores across the three workers is 39, and so a H-level of 2 would yield a (9, 12, 18) CPU cores configuration. The total training time to reach a desired level of model accuracy across the three different workloads is shown in Figure 6. Compared to vanilla TensorFlow's uniform batching, our variable batching approach can significantly reduce the training time. In general, the variable batching does better compared to the uniform batching at higher heterogeneity levels, because it is able to mitigate the stragglers. For computationally intensive ResNet, our variable batching improves training times by \(2\times\) at H-level of 2, and \(2.4\times\) at the highest H-level of 10. The high heterogeneity levels result in very small workers (e.g., H-level 10 is a (2,17,20) configuration). The small workers end up being stragglers even with variable batching's load balancing, because we are not able to use any parallelism inside these small workers, yet still face the same communication and model synchronization overhead.
The MNIST CNN also sees a performance improvement of \(2\times\)--\(4\times\). Finally, the Linear Regression workload is the least computationally expensive, and sees the least benefit (\(\sim 15\%\)) from the load-balancing that variable batching provides, because it is communication and synchronization bound.
Importantly, our variable batching can ameliorate the heterogeneity-induced slowdown, and can "flatten the curve". At a high H-level of 6, ResNet training time only increases by \(2\times\) compared to the homogeneous setup (Figure 6). Similarly, MNIST training time increases by \(4\times\), and Linear Regression by only \(5\%\).
**Result:**_Variable batching can mitigate stragglers in BSP and can reduce training time by \(4\times\) for high heterogeneity levels. Our technique is particularly effective in scenarios that are computation and not communication bound._
### _GPU Training_
For GPU training, we first consider an extreme heterogeneity case where the cluster comprises of both CPU and GPU workers. Specifically, we use a single GPU worker (Tesla P100) and CPU worker (48-core Intel Xeon). We compare the performance of uniform, variable, and dynamic batching in Figure (a)a.
Recall that variable batch allocation is an open-loop approach that assigns batch sizes based on the hardware FLOPs performance and not actual throughput. Compared to uniform batching, we are able to reduce the training time by more than \(4\times\) for the computationally intensive ResNet workload. For MNIST, the cluster is underutilized, since workload is not computationally bound, and we see a more modest 20% improvement in training time with our approach.
The performance of the Xeon Platinum CPUs used in our local cluster experiments is far closer to GPU performance than that of most cloud CPUs. For instance, the ratio of the FLOPs and of the batch sizes between the GPU and the CPU was \(0.813:0.187\), and thus the GPU worker is "only" \(4.3\times\) faster.
Interestingly, dynamic batching improves performance by about 3% compared to static variable batching for the MNIST CNN, and has a negligible effect for ResNet. This intriguing result stems from the tradeoffs of dynamic batching. For a computationally intensive workload like ResNet, hardware FLOPs approximate throughput well, so there is little opportunity for dynamic readjustments. The kill-restart approach also poses a small performance overhead. These two factors "cancel out", and in most cases, static variable batching is "good enough".
**Result:**_Variable batching allows efficient use of mixed GPU-CPU clusters, and can reduce the training time by up to \(4\times\)._
We also examine training performance on a cloud cluster with two different types of GPUs. Specifically, we run two VMs with Tesla T4 and two VMs with Tesla P4 GPUs. The training time of ResNet (BSP) was 90 minutes with uniform batching, and only 20 minutes with variable batching--a \(4.5\times\) improvement.
Fig. 6: With BSP synchronization, variable batching can reduce the total training time to accuracy by up to \(4\times\).
Fig. 7: GPU training.
## V Related Work
**Heterogeneous Training.** The closest work is [3], which develops synchronization techniques (DynSGD and ConSGD) for mitigating the effects of staleness and stragglers by explicitly accounting for staleness using a vector-clock technique. However, much like other work in this area [4], the cluster heterogeneity they consider is only a result of stochastic performance variations (random worker slowdowns). Instead, we focus on _systemic_ and severe heterogeneity due to vastly different resource sizes of workers. Our fundamental idea of variable mini-batch sizes is agnostic to the synchronization technique and can also be integrated with ConSGD to alleviate the random slowdowns caused by performance interference.
Heterogeneity in training is being recognized as an important missing feature and many approaches are being developed. [24] uses a gradient coding scheme to tolerate stragglers due to static heterogeneity in a BSP setup. Our variable batching technique is applicable in existing parameter server based architectures and does not require gradient coding. Heterogeneity for decentralized training is explored in Hop [4], which uses a bounded staleness approach and bound the iteration-gap. The technique is shown to be effective in case of random worker slowdowns. Its effectiveness at high static heterogeneity levels is less clear, since the large iteration gaps may pose fundamental synchronization challenges in the decentralized setting.
Resource allocation for training is also an active area of work, and is challenging due to our incomplete first-principles understanding of SGD scaling, and profiling-driven empirical models are typically used. [25] shows how to do cluster resource allocation and scheduling for ML training jobs by developing and using an empirical performance model to determine number of workers and parameter servers to use. Similarly, Cynthia [26] uses an analytical performance model for cost efficient cloud resource provisioning. In contrast, our approach can directly start training without the need for apriori modeling. Our design goal was to design a generally usable mechanism that is plug-in compatible with different resource allocation approaches, training algorithms, and treats ML models as "black boxes". Integrated systems and training algorithm co-design, like in Orpheus [27] that improves consistency via periodic centralized synchronization, is an alternative approach.
**Model synchronization** impacts training performance, especially in cloud environments with higher stochasticity in server performance and network latencies. This has motivated many synchronization techniques such as stale synchronous parallel [28] and others [29, 30, 31, 32, 33]. The performance tradeoffs of synchronization techniques in dynamic cloud environments is studied in [34]. Although asynchronous approaches [1] seem promising in heterogeneous environments, gradient staleness is still a pernicious problem [35, 18, 36, 19].
**Batch size** in distributed training is one of the most crucial hyper-parameters that affects the training performance as well as the model convergence. Understanding these tradeoffs is a key problem in machine learning [37, 38, 39, 18]. Due to the duality between learning rates and global batch sizes [40], adjusting the _global_ batch size is a known technique to regulate the errors in SGD training [41]. Adabatch [42] and [43] describe a "batch size schedule" analogous to a learning rate schedule. This is distinct from our dynamic mini-batch adjustment, and the dynamic global batch schedules can easily be incorporated into our approach. Finally, the theoretical soundness of variable mini-batch sizes can be found in [17]. They also propose a new synchronization technique where gradient updates are "pulled" from workers periodically, irrespective of their mini-batch processing, resulting in different sized worker updates.
**Acknowledgments.** This work was partially supported by the Google Cloud research credits program.
|
2302.00868 | Speech Enhancement for Virtual Meetings on Cellular Networks | We study speech enhancement using deep learning (DL) for virtual meetings on
cellular devices, where transmitted speech has background noise and
transmission loss that affects speech quality. Since the Deep Noise Suppression
(DNS) Challenge dataset does not contain practical disturbance, we collect a
transmitted DNS (t-DNS) dataset using Zoom Meetings over T-Mobile network. We
select two baseline models: Demucs and FullSubNet. The Demucs is an end-to-end
model that takes time-domain inputs and outputs time-domain denoised speech,
and the FullSubNet takes time-frequency-domain inputs and outputs the energy
ratio of the target speech in the inputs. The goal of this project is to
enhance the speech transmitted over the cellular networks using deep learning
models. | Hojeong Lee, Minseon Gwak, Kawon Lee, Minjeong Kim, Joseph Konan, Ojas Bhargave | 2023-02-02T04:35:48Z | http://arxiv.org/abs/2302.00868v2 | # Speech Enhancement for Virtual Meetings on Cellular Networks
###### Abstract
We study speech enhancement using deep learning (DL) for virtual meetings on cellular devices, where transmitted speech has background noise and transmission loss that affects speech quality. Since the Deep Noise Suppression (DNS) Challenge dataset of _Interspeech 2020_ does not contain practical disturbance, we collect a transmitted DNS (t-DNS) dataset using Zoom Meetings over T-Mobile network. We select two baseline models: Demucs and FullSubNet. The Demucs is an end-to-end model that takes time-domain inputs and outputs time-domain denoised speech, and the FullSubNet takes time-frequency-domain inputs and outputs the energy ratio of the target speech in the inputs.
The goal of this project is to enhance the speech transmitted over the cellular networks using deep learning models.
## 1 Introduction
Speech enhancement (SE) has been widely studied for various edge devices and as preprocessing steps for various automatic systems [18]. In particular, as remote work using virtual meetings with cellular devices becomes more common, SE for the mobile meeting applications is essential.
The classical SE was driven by signal processing methods, such as Wiener filtering and spectral subtraction [20; 21]. However, recent studies have revealed the efficiency of data-driven methods, including deep learning [14; 15; 22; 16; 23].
The Deep Noise Suppression (DNS) Challenge dataset of _Interspeech_ 2020 has been released for data-driven SE research [6]. Recent DL-based SE studies have been conducted with the DNS dataset [16; 19; 17; 8]. However, the DNS dataset does not reflect the effect of transmission loss in the real-world network communication process.
In this project, we newly collect a transmitted DNS (t-DNS) dataset through the process shown in Figure 1. The t-DNS data set contains data traversed by T-mobile network. We aims to propose deep learning models that enhance the speech in the t-DNS dataset scoring better than _'auto'_ mode of Zoom's built-in background noise suppression model in terms of perceptual metrics and acoustic
Figure 1: Data acquisition process of t-DNS
metrics. With two baseline models, Demucs [7] and FullSubNet [8], we introduce an auxiliary loss in terms of acoustic metrics known as the extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS) [9] to make the eGeMAPS features well-preserved in the denoised speech. To the best of our knowledge, we are the first to propose the SE dataset and model for virtual meetings over cellular networks in the real world. When all data processing work is completed, it is expected that the t-DNS dataset and model will be published online and used for future SE studies.
## 2 Literature Review
Recent studies on SE have contributed to enhancing the perceptual quality of denoised speech [2; 4; 5].
The methods to obtain great perceptual quality are divided into metric-based learning and feature-based learning. The metric-based learning aims to train a model that outputs denoised speech that results in good evaluation when a certain perceptual metric is calculated with target speech. [2] used an auxiliary loss of Short Time Objective Intelligibility (STOI).
The feature-based learning aims to train a model that outputs denoised speech, which has similar features to the clean or target speech. This can be achieved by designing a loss function that captures the divergence between the target and denoised speech with respect to features of interest. [4] proposed a phone-fortified perceptual loss to use the phonetic information in speech in training models. [5] proposed an auxiliary eGeMAPS loss to prevent the output speech from being distorted compared to the target speech in regard to eGeMAPS features.
## 3 Model Description
We select two baseline models: Demucs [7] and FullSubNet [8]. The main difference is that the Demucs is an end-to-end model while the FullSubNet is a separate learning model. The Demucs takes a raw time-domain waveform input and outputs denoised speech, which is also in the time domain. By contrast, the FullSubNet takes a time-frequency-domain input and outputs values to compose the final denoised speech, which requires pre- and post-processing models for inputs and outputs.
### Demucs
In the Demucs [7], noisy speech \(\mathbf{x}\in\mathbb{R}^{T}\) is considered as the sum of the clean speech \(\mathbf{s}\in\mathbb{R}^{T}\) and noise \(\mathbf{n}\in\mathbb{R}^{T}\) as follows:
\[\mathbf{x}=\mathbf{s}+\mathbf{n}. \tag{1}\]
The Demucs model \(f\) is trained so that \(f(\mathbf{x})=\hat{\mathbf{s}}\approx\mathbf{s}\). The architecture of the SE model consists of a multi-layer convolutional encoder-decoder network with a sequence modeling LSTM network, which transforms the latent output of the encoder into a nonlinear transformation. The model \(f\) is trained with two types of loss functions: time-domain and time-frequency-domain losses. The time-domain loss \(L_{\text{time}}\) is the L1 loss between the clean and denoised output speech of \(f\), i.e.,
\[L_{\text{time}}=\frac{1}{T}||\mathbf{s}-\hat{\mathbf{s}}||_{1}. \tag{2}\]
The time-frequency-domain loss \(L_{\text{T-F}}\) consists of the spectral convergence loss \(L_{\text{sc}}\) and magnitude loss \(L_{\text{mag}}\), i.e., \(L_{\text{T-F}}=L_{\text{sc}}+L_{\text{mag}}\), where
\[L_{\text{sc}} =\frac{|||S|-|\hat{S}|||_{F}}{|||S|||_{F}}, \tag{3}\] \[L_{\text{mag}} =\frac{1}{T}||\log|S|-\log|\hat{S}|||_{1}, \tag{4}\]
with \(S\) and \(\hat{S}\) are the short-time Fourier transform (STFT) of \(\mathbf{s}\) and \(\hat{\mathbf{s}}\), respectively. Moreover, multiple time-frequency-domain losses can be used with respect to different STFT resolution for the number of fast Fourier transform bins, hop sizes, and lastly window lengths.
The end-to-end property of the Demucs is beneficial in transfer learning and in that less domain knowledge is required to use the Demucs. The performance of the causal/noncausal Demucs with
proper data augmentation skills, such as reverbing with two sources and partial dereverberation, reached state-of-the-art models in both objective and subjective measures. Also, the Demucs enhanced automatic speech recognition systems without retraining on noisy conditions.
### FullSubNet
The FullSubNet [8] is a fusion model of SE models that independently utilize fullband and subband information on short-time Fourier transform (STFT) of speech data. Fullband models take the full band, up to the Nyquist frequency, of the STFT data and capture the global cross-band spectral characteristics of input speech. By contrast, subband models take data in a partial frequency band and model local spectral patterns with fewer model parameters than fullband models.
As in the time domain, the STFT of noisy speech can also be represented with the STFT of the clean speech and noise, as follows:
\[X=S+N, \tag{5}\]
where \(X\), \(S\), and \(N\) are the STFT of \(\mathbf{x}\), \(\mathbf{s}\), and \(\mathbf{n}\), respectively. Let \(A(t,f)\) denote the \((t,f)\)th component of an STFT matrix \(A\in\mathbb{C}^{T\times F}\), where \(T\) is the number of frames and \(F\) is the number of frequency bins of STFT. In the FullSubNet architecture, the fullband LSTM model \(g_{\text{fullband}}\) takes \(\mathbf{X}_{t}=[|X(t,0)|,|X(t,1)|,\cdots,|X(t,F-1)|]^{T}\in\mathbb{R}^{F}\) as an input and extracts the fullband feature. The subband LSTM model \(g_{\text{subband}}\) takes an augmented input, which is the concatenation of the fullband output and subband spectra, i.e., \([|X(t,f-N)|,\cdots,|X(t,f)|,\cdots,|X(t,f+N)|,g_{\text{fullband}}(X)]^{T}\in \mathbb{R}^{2N+2}\), and predicts the complex ideal ratio mask (cIRM), \(M(t,f)\in\mathbb{C}\), which measures the energy ratio of the target speech to the entire noisy input speech for each time-frequency bin, i.e., \(S(t,f)=M(t,f)*X(t,f)\). The real and imaginary parts of the cIRM, \((M_{r},M_{i})\), for \(X(t,f)=X_{r}+iX_{i}\) and \(S(t,f)=S_{r}+iS_{i}\) is defined as follows [1]:
\[M_{r} =K\tanh(\frac{C}{2}\cdot\frac{X_{r}S_{r}+X_{i}S_{i}}{X_{r}^{2}+X_{ i}^{2}}), \tag{6}\] \[M_{i} =K\tanh(\frac{C}{2}\cdot\frac{X_{r}S_{i}-X_{i}S_{r}}{X_{r}^{2}+X_ {i}^{2}}), \tag{7}\]
where \(K\) and \(C\) are hyperparameters. The ground truth cIRM, \(M\), can be calculated from the clean and noisy speech pair, and the final denoised speech can be constructed from the output cIRM values and input noisy speech. Thus, the FullSubNet model \(g\) is trained so that \(g(X)=\hat{M}\approx M\). The \(g\) is trained with \(L_{\text{cIRM}}\), which measures the mean squared error between the true and estimated cIRMs. It is shown in [8] that the FullSubNet outperforms state-of-the-art models on the DNS dataset, and the information obtained in the fullband and subband models is complementary.
## 4 Dataset
The t-DNS dataset will be created based on the DNS Challenge dataset [6]. The DNS Challenge dataset aims to provide an extensive and representative dataset to train the speech enhancement models. It contains 500 hours of clean speech from 2,150 speakers and a noise data set with at least 500 clips for 150 audio classes. Also, it contains test data with and without reverberation, and we will focus on the test data without the reverberation. Noisy speech is generated by synthesizing clean and noise speech data. The synthesized noisy speech is then sent across a virtual microphone, Zoom Meetings, T-mobile network and finally to cellular devices, as shown in Figure 1. In the Zoom Meetings, a low-level built-in noise suppression model will be used to minimize the impact of the speech enhancement with using it. The data sent to each cellular device is collected by the computer through the audio interface.
## 5 Evaluation Metric
This section introduces the metrics for estimating the performance of our SE model. We explain target metrics utilizable in our project. All three metrics are classified as relative metrics, which require a reference signal to compare a given signal.
* **Frequency weighted Segmental Signal to Noise Ratio (fwSegSNR)** Time-domain and frequency-weighted measurements, Signal to Noise Ratio (SNR) and fwSegSNR are both based on a clean signal \(X\) enhanced signal \(\hat{X}\). This is given a different weight for each frequency. \(W(j,m)\) is the weight on the frequency band of \(j\)th, and \(K\) is the number of bands. \(M\) is the total number of frames in the signal. \(X(j,m)\) is critical critical band magnitude of clean signal at \(m\)th frame, \(j\)th frequency frequency band. \[\mathrm{fwSegSNR}=\frac{10}{M}\sum_{m=0}^{M-1}\frac{\sum_{j=1}^{K}W(j,m)\log_{ 10}\frac{X(j,m)^{2}}{(X(j,m)-X(j,m))^{2}}}{\sum_{j=1}^{K}W(j,m)}\] (8)
* **Perceptual Evaluation of Speech Quality (PESQ)** PESQ performs well in a wide range of codecs and network conditions. The core part consists of aggregating the disturbance to measure the audible error in three steps each by using \(p\) norm as Equation (9); frame-by-frame disturbance, split second disturbance, and speech length averaged disturbance. The notation \(N\) in Equation (9) indicates the total number of data in each normal-calculating part. PESQ returns a mean opinion score (MOS) from 0 to 5, with higher scores indicating better quality. Usually, PESQ indicates WB-PESQ, a wide band PESQ, and NB-PESQ indicates a narrow band PESQ. WB-PESQ, which has the benefit of transferring higher data rates, reads the input signal with input filter of 2 while NB-PESQ, which has the benefit of better sensitivity and range, does it as 1. \[L_{p}=\left(\frac{1}{N}\sum_{m=1}^{N}\text{disturbance}[m]^{p}\right)^{\frac{1}{ p}}\] (9)
* **Short-Time Objective Intelligibility (STOI)** STOI is a function to calculate the linear correlation coefficient of clean speech and denoised speech data. In Equation (10), \(X\) indicates a decomposed clean speech, and \(Y\) is a decomposed noisy speech after DFT-based 1/3 octave band decomposition. In Equation (10), \(d\) means the correlation coefficient of \(X\) and \(Y\) corresponds to each frame \(m\) and one-third octave band \(j\). In Equation (11), This \(d\) is averaged as a single scalar value indicating the voice intelligibility, where \(M\) represents the total number of frames and \(J\) represents the number of one-third octave bands. \[d_{j}(m)=\frac{\sum_{n}\bigg{(}X_{j}(n)-\frac{1}{N}\sum_{l}X_{j}(l)\bigg{)} \bigg{(}Y_{j^{\prime}}(n)-\frac{1}{N}\sum_{l}Y_{j^{\prime}}(l)\bigg{)}}{\sqrt {\sum_{n}\bigg{(}X_{j}(n)-\frac{1}{N}\sum_{l}X_{j}(l)\bigg{)}^{2}\sum_{n} \bigg{(}Y_{j^{\prime}}(n)-\frac{1}{N}Y_{j^{\prime}}(l)\bigg{)}^{2}}}\] (10) \[d=\frac{1}{JM}\sum_{j,m}d_{j}(m)\] (11)
## 6 Loss Function
### Temporal Acoustic Parameter Estimator
As a training boost, we fine-tune the two baseline models with temporal acoustic parameter (TAP) loss. This aims to minimize the temporal divergence between clean and enhanced acoustic parameters. Its availability in both time domain and time-frequency domain enables us to use it to both Demucs and FullSubNet models.
For a given signal \(\mathbf{y}\), let \(\mathbf{A_{y}}\in\mathbf{R}^{T\times 25}\) indicate the 25 temporal acoustic parameters in T discrete time frames, and \(A_{\mathbf{y}}(t,p)\) indicate it by each parameter \(p\) and discrete time frame \(t\). Then, TAP parameter gives an estimate of \(\mathbf{A_{y}}\) as \(\hat{\mathbf{A_{y}}}\) as in Equation (12).
\[\hat{\mathbf{A_{y}}}=\mathcal{TAP}(\mathbf{y}) \tag{12}\]
TAP estimator is obtained from a pretrained recurrent neural network which minimizes the mean absolute error defined as Equation (13).
\[\mathrm{MAE}\left(\mathbf{A_{y}},\hat{\mathbf{A}_{y}}\right)=\frac{1}{TP}\sum_{t =0}^{T-1}\sum_{p=0}^{P-1}|A_{\mathbf{y}}(t,p)-A_{\hat{\mathbf{y}}}(t,p)|\in \mathbb{R} \tag{13}\]
Using TAP estimators makes end-to-end learning possible by overcoming the non-differentiable properties of acoustic parameters.
### Temporal Acoustic Parameter Loss
Temporal acoustic parameter loss, \(\mathcal{L}_{\mathrm{TAP}}\), minimizes divergence between each TAP estimators of the clean and enhanced speech. The mathematical term is expressed in in Equation (14). \(\sigma(\mathbf{\omega})\) indicates the smoothed energy weights that emulates human hearing with bounded scales. Our loss function is a combination of the L1 loss and the acoustic loss. We control the weight of the acoustic loss by a parameter \(\alpha\).
\[\mathcal{L}_{\mathrm{TAP}}(\mathbf{s},\hat{\mathbf{s}})=\mathrm{MAE}( \mathcal{TA}(\mathbf{s})\odot\sigma(\mathbf{\omega}),\mathcal{TAP}(\hat{\mathbf{ s}})\odot\sigma(\mathbf{\omega})) \tag{14}\]
## 7 Experiments
### Metric evaluation
Table 1 summarizes the results of 150 noise data in the DNS 2020 dataset after speech enhancement. We inserted the raw waveform form into the processes of Demucs and FullSubNet without additional training. Both methods show high speech enhancement performance. However, in the case of FullSubNet, the performance is better than that of Demucs in PESQ metrics, and in the rest of the metrics, the performance of Demucs is better.
### Acoustic improvement
In addition to speech-level metric evaluation of denoised speech, the acoustic parameters-improving abilities of the SE models were analyzed. We used 25 acoustic parameters defined in the eGeMAPS. The acoustic parameters include frequency-related parameters, energy or amplitude-related parameters, spectral balance parameters, and temporal parameters. We denote the \(i\)th acoustic parameter vector of speech \(\mathbf{u}\) as \(A_{\mathbf{u}}^{(i)}\in\mathbb{R}^{T_{\mathbf{a}}}\) for \(i=1,2,\cdots,25\), where \(T_{\mathbf{u}}\) is the total number of time frames of \(\mathbf{u}\). To consider all denoised speech of an SE model \(m\), let \(\mathbf{A}_{m}^{(i)}\) be the augmented acoustic parameter vector such that \(\mathbf{A}_{m}^{(i)}=\left[(A_{\mathbf{u}}^{(i)})^{T}\right]_{\mathbf{u}\in \mathcal{S}_{m}}^{T}\in\mathbb{R}^{T}\), where \(\mathcal{S}_{m}\) is the set of all denoised speech of \(m\) and \(T=\sum_{\mathbf{u}\in\mathcal{S}_{m}}T_{\mathbf{u}}\). For better interpretation, augmented acoustic parameter vectors were standardized with some specific mean \(\mu_{i}\) and standard deviation \(\sigma_{i}\) values obtained in a large speech dataset for each acoustic parameter \(i\), as follows:
\[\mathbf{A}_{m}^{(i)}=\frac{\mathbf{A}_{m}^{(i)}-\mu_{i}\mathbf{1}_{T}}{\sigma _{i}}, \tag{15}\]
where \(\mathbf{1}_{T}\) is \(T\)-dimensional all-ones vector.
To see the acoustic improvement of \(m\), We first evaluate the mean absolute error (MAE) of the acoustic parameter of denoised speech to the acoustic parameters of the corresponding clean speech
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & WB-PESQ & STOI(\%) & fwSNRseg(dB) \\ \hline Noisy & 1.582 & 91.51 & 12.62 \\ \hline Demucs & 2.647 & 96.52 & 17.17 \\ FullSubNet & 2.888 & 96.41 & 16.96 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation on enhancement in denoised speech compared to noisy speech
for every \(i\)th parameter, as follows:
\[\text{MAE}_{m}^{(i)}=\frac{1}{T}\sum_{t=1}^{T}\left|\mathbf{A}_{\text{clean}}^{(i )}(t)-\mathbf{A}_{m}^{(i)}(t)\right|, \tag{16}\]
where \(\mathbf{A}_{m}^{(i)}(t)\) is the \(t\)th component of \(\mathbf{A}_{m}^{(i)}\) and \(\mathbf{A}_{\text{clean}}^{(i)}\) denotes the augmented acoustic parameter vector of clean speech. We then evaluated the acoustic improvement \(I_{m}^{(i)}\) of an SE model \(m\) for the \(i\)th acoustic parameter, as follows:
\[I_{m}^{(i)}=\frac{\text{MAE}_{\text{noisy}}^{(i)}-\text{MAE}_{m}^{(i)}}{\text {MAE}_{\text{noisy}}^{(i)}}\times 100, \tag{17}\]
where \(\text{MAE}_{\text{noisy}}^{(i)}\) denotes the MAE for noisy speech. The acoustic improvement in Demucs and FullSubNet, i.e., \(I_{\text{Demucs}}\) and \(I_{\text{FullSubNet}}\), is shown in Fig. 2, where the \(y\)-axis represents the 25 acoustic parameters in the eGeMAPS. Moreover, the improvement with respect to the statistics for each acoustic parameter is also evaluated in Fig. 3. The SE models improved almost all acoustic parameters, as shown in Figs. 2 and 3. Some acoustic parameters, such as'spectralFlux_sma3' and 'Loudness_sma3', however, are degraded by the FullSubNet, which requires further analysis of the denoised speech of the FullSubNet.
Figure 2: Improvement in the acoustic parameters of the denoised speech of two SE models
## 8 Results
### Perceptual Evaluation
Table 2 shows the evaluation of each model in three metrics: fwSNRseg, PESQ and STOI. _Noisy_ is the raw noisy data before entering zoom. _Noisy Relay_ indicates the speech transmitted through zoom with 'low' mode of built-in background noise suppression. _Industrial Denoising_ indicates the speech transmitted through zoom with 'auto' mode of built-in background noise suppression. For each Demucs and FullSubNet, the three different models are used. _Baseline_ model is same as the provided model from the paper. _Fine-tuned_ model is further trained model with the training data. As
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & fwSNRseg(dB) & PESQ & STOI(\%) \\ \hline Clean & - & - & - \\ Noisy & 12.629 & 1.582 & 91.52 \\ Noisy Relay (Low) & 4.804 & 1.549 & 79.76 \\ \hline Industrial Denoising (Auto) & 5.636 & 1.701 & 81.06 \\ \hline Demucs (Baseline) & 5.611 & 1.375 & 76.51 \\ Demucs (Fine-tuned) & 6.772 & 1.397 & 80.18 \\ Demucs (Ours) & 8.959 & 1.557 & 84.52 \\ \hline FullSubNet (Baseline) & 5.712 & 1.511 & 78.2 \\ FullSubNet (Fine-tuned) & 6.546 & 1.496 & 80.27 \\ FullSubNet (Ours) & 8.897 & 1.631 & 84.01 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Metrics of speech enhancement quality
Figure 3: Improvement in the statistics of acoustic parameters of the denoised speech of two SE models
the higher metrics means the better speech, there exists a degradation due to the transmission loss. The metrics of fine-tuned Demucs are better than those of the built-in low noise suppression model. However, the best Demucs model is worse than the auto mode in terms of PESQ for now. This is because the hyperparameter tuning is currently in progress. When the hyperparameter working is done, the metrics will be get much better. The results from FullSubNet show similar trends to those from Demucs.
### Acoustic Evaluation
The improvement of acoustic metrics is measured as how well the input noisy speech is processed into enhanced speech. The left portion of Figure 4 is the improvement of each Zoom's Low and Auto modes over untransmitted noisy speech. The right portion is about the improvement of auto mode over the low mode which shows that the auto mode is more powerful than the low mode. Even when using the Zoom's built-in noise suppression, noise added to the speech due to transmission on cellular networks degrades its speech in almost all aspects of eGeMAPS. In Figure 5, y-axis is for 25 acoustic parameters. The green and red bars represent the improvement of baseline and our models, repectively, compared to the Zoom's auto denoising mode. The blue bar represents how much our model is better than the baseline. Our model showed better improvements in most of the acoustic parameters.
## 9 Future Works
As the rest of the dataset is being processed, we only can investigate the dataset transmitted through T-mobile network. Once processing is done on the other 3 networks, we will compare the data from each of the 4 network provider and use SE to make the worst one the best. Also, we are considering to analyze the acoustic characteristics of t-DNS. Then, we can optimize the acoustic parameters using characteristics that will make the noisy speech even better than the enhanced speech in this project.
## 10 Conclusion
The main contribution of our work is that we provide the t-DNS dataset which reflects the effect of transmission loss in the real-world cellular network communication. Also, we applied temporal acoustic loss function to fine tune the two baseline models, Demucs and FullSubNet. Our model beats the baseline models and the industrial denoised model, showing the effect of training on t-DNS dataset and temporal acoustic loss function.
## 11 Division of work
The project work was evenly distributed, and all team members participated in report writing and regular meetings throughout the semester.
* **Hojeong Lee**: Results analysis, metric evaluation
* **Mineon Gwak**: FullSubNet implementation and experiments
* **Kawon Lee**: Demucs implementation and experiments, presentation
* **Minjeong Kim**: Results analysis, metric evaluation
## 12 Github repository
[https://github.com/Mineon-Gwak/Speech-enhancement-zoom-phone](https://github.com/Mineon-Gwak/Speech-enhancement-zoom-phone)
|
2306.14332 | Failed supernovae as a natural explanation for the binary black hole
mass distribution | The more gravitational wave sources are detected, the better the mass
distribution of binary black holes (BBHs) becomes known. This stellar graveyard
shows several features, including an apparent mass gap which makes the
distribution bimodal. The observed chirp mass distribution, in turn, appears to
be trimodal. We aim to investigate to which extend we can explain the observed
mass distribution with stellar evolution, specifically with the hypothesis that
the mass gap is caused by the difference between successful and failed
supernovae (SNe). We pose a hypothetical remnant function, based on literature
of stellar evolution simulations, which relates initial mass to remnant mass,
includes a black hole island and produces a bimodal remnant distribution.
Moreover, we look at observed type II SN rates in an attempt to detect the
effect of failed SNe. Finally, using a simplified estimation of binary
evolution, we determine the remnant distribution resulting from our remnant
function and compare it to observation. We find that failed SNe lower type II
SN rates by approximately 25%, but the inferred rate from SN surveys is not
accurate enough to confirm this. Furthermore, our estimation based on the
remnant function produces a mass distribution that matches the general shape of
the observed distributions of individual as well as chirp masses. Based on our
research, we conclude that the failed SNe mechanism and the presence of the
black hole island are a natural hypothesis for explaining the individual BBH
mass distribution and chirp mass distribution. However, for a more firm
conclusion more detailed simulations are needed. | Paul Disberg, Gijs Nelemans | 2023-06-25T20:05:02Z | http://arxiv.org/abs/2306.14332v1 | # Failed supernovae as a natural explanation for the binary black hole mass distribution
###### Abstract
Context:The more gravitational wave sources are detected, the better the mass distribution of binary black holes (BBHs) becomes known. This "stellar graveyard" shows several features, including an apparent mass gap which makes the distribution bimodal. The observed chirp mass distribution, in turn, appears to be trimodal.
Aims:We aim to investigate to which extend we can explain the observed mass distribution with stellar evolution, specifically with the hypothesis that the mass gap is caused by the difference between successful and failed supernovae (SNe).
Methods:We pose a hypothetical remnant function, based on literature of stellar evolution simulations, which relates initial mass to remnant mass, includes a "black hole island" and produces a bimodal remnant distribution. Moreover, we look at observed type II SN rates in an attempt to detect the effect of failed SNe. Finally, using a simplified estimation of binary evolution, we determine the remnant distribution resulting from our remnant function and compare it to observation.
Results:We find that failed SNe lower type II SN rates by approximately 25%, but the inferred rate from SN surveys is not accurate enough to confirm this. Furthermore, our estimation based on the remnant function produces a mass distribution that matches the general shape of the observed distributions of individual as well as chirp masses.
Conclusions:Based on our research, we conclude that the failed SNe mechanism and the presence of the black hole island are a natural hypothesis for explaining the individual BBH mass distribution and chirp mass distribution. However, for a more firm conclusion more detailed simulations are needed.
## 1 Introduction
The more gravitational wave (GW) sources are detected, the better the distribution of the masses of the detected remnants becomes known (e.g. Abbott et al. 2021c). These remnants are often black holes (BHs) in a binary system, which merge with each other after their binary evolution, through the emission of GWs (e.g. Abbott et al. 2016). This evolution includes the collapse into BHs, potentially accompanied by supernovae (SNe) of both stars, which means the SN mechanism can be important in determining the final remnant mass. We are interested in the distribution of these final masses.
The top panel of fig. 1 shows the distribution of the individual masses of the detected GW sources, from GWTC 1, 2 and 3 (Abbott et al. 2021a). The binary black holes (BBHs) seem to show a bimodal distribution, with a gap between \(14M_{\odot}\) and \(22M_{\odot}\), and an additional peak at \(35M_{\odot}\), although this peak seems to disappear when taking into account the uncertainty of the data. The bottom panel shows the distribution of the corresponding chirp masses of the black hole binaries (BHBs), represented by asymmetric Gaussians (as described in appendix A). This distribution appears to be trimodal (Abbott et al. 2021b), with one mode below \(10M_{\odot}\), one between \(10M_{\odot}\) and \(20M_{\odot}\), and one above \(20M_{\odot}\). Here, there is a gap as well: at \(11M_{\odot}\).
One hypothetical explanation for this is given by Broadhurst et al. (2018, 2022), who theorize that the gap, specifically the gap in the individual mass distribution, is real and caused by gravitational lensing, because of which the distance to some of the GW sources is underestimated and therefore the chirp mass overestimated. The gap is then located between the non-lensed and lensed sources. In order to produce a distribution similar to the top panel of fig. 1, Broadhurst et al. (2018, 2022) pose a merger rate which has high values at high redshift and low values at low redshift, since this produces the desired number of lensed and unlensed sources. We estimate, however, that the merger rate value they pose at high redshift implies a merger fraction which is approximately 50 times higher than the value Mandel & Farmer (2022) estimate through their Drake-like approach. Even if we make optimistic assumptions about some of the factors in this Drake-like equation, we still find a factor of 14 unaccounted for (as shown in appendix B). Although this factor is not large enough to completely dismiss the lensing hypothesis of Broadhurst et al. (2018, 2022), we find it sufficient to deem their hypothesis improbable.
We aim to explore whether the supernova (SN) mechanism could provide an alternative, more natural explanation for the observed gap. Literature suggests that the shock-wave in a core-collapse SN (CCSN) can be stalled and not cause a successful explosion (Mazurek 1982). The star can then collapse in its totality and form a stellar remnant in a direct collapse, or "failed", SN. Failed SNe are difficult to detect, since they do not cause a bright explosion but instead make a massive star seemingly disappear (Kochanek et al. 2008). Because of these difficulties, the
properties of failed SNe are not well-known, even though there are several potential candidates (e.g. Gerke et al. 2015; Adams et al. 2017, 2017, 2018; Basinger et al. 2021; Neustadt et al. 2021). In addition to the search for failed SNe, others have also looked at the implications for remnant mass distributions, e.g. Kochanek (2014, 2014) who attempts to estimate the low-mass BH distribution based on the failed SN mechanism. Overall, we state that the difference between failed SNe and successful ones could potentially create a gap in the mass distribution: some sources have a successful SN and lose mass before forming a remnant and other sources have a failed SN, conserve mass and form a more massive remnant. This means that the parameter which is important for the remnant mass distribution is the mass limit above which SNe fail.
In this work we investigate the SN mechanism in close binaries and how it affects the BBH mass distribution (when we mention SNe we refer to CCSNe, unless specified otherwise). We start by investigating this SN mechanism in section 2, wherein we describe the transitions between successful and failed SNe through an initial-final mass relation based on the works of Schneider et al. (2021) and Marchant et al. (2019) (section 2.1). We also attempt to compare this relation to observed SN rates (section 2.2). Then, in section 3, we show that this model naturally produces a bimodal mass distribution and a trimodal chirp mass distribution, similar to fig. 1. While our paper was under review, Schneider et al. (2023) published a paper with a similar approach. In the discussion (section 4) we will make a comparison. Finally, in section 5, we will summarize our conclusions.
## 2 Failed supernovae
### Remnant function
We are interested in the differences between successful and failed SNe, and the relation between zero-age main-sequence (ZAMS) mass and the masses of the remnants they form. SN explodability simulations indicate that there may not be only one mass range for successful SNe and one range for failed SNe (Ertl et al. 2016; Muller et al. 2016; Kresse et al. 2021). Instead, they find that there may be a failed SN range somewhere between \(20M_{\odot}\) and \(24M_{\odot}\), above which stars can have a successful SN again up until about \(27M_{\odot}\), where the failed SNe take over again, supported by other studies of pre-SN compactness (e.g. Sukhbold & Woosley 2014; Sukhbold et al. 2016). We combine this with the results of Schneider et al. (2021), who perform a simulation and investigate stellar evolution and remnant masses. They also find that there are two ranges for failed SNe: one small range for lower masses, which they call the "BH island", and a larger failed SN range for higher masses. They note that the two points of transition from successful to failed SNe coincide with the masses at which the stellar core change from convective to radiative carbon and neon burning, respectively. This could explain why there is a BH island in the first place. Schneider et al. (2021) also note that their relation between ZAMS mass and remnant mass gives rise to a bimodal mass distribution.
We construct a possible remnant function, starting by describing the BH island, based on Schneider et al. (2021), both for their models which concern case A/B mass transfer as well as their case C models. They describe the BH island as a function which produces remnants between approximately \(8M_{\odot}\) and \(10M_{\odot}\). In order to reproduce the observed BBH distribution, however, we change this to a linear function which produces remnants between \(8M_{\odot}\) and \(14M_{\odot}\). We give the exact defini
Figure 1: GW data from Abbott et al. (2021). The top panel shows the mass distribution of the individual BBHs (represented by a histogram of the mean values, with a bin-width of \(2M_{\odot}\)). We can also represent the datapoints as asymmetric Gaussians, as described in appendix A. The sum of these Gaussians, \(\sum_{i}dN_{i}/dm\), is given as well (solid line), together with a cumulative distribution function (CDF, dotted line on a different axis). The bottom panel shows the chirp mass distribution (grey lines) and is made to resemble the distribution from Abbott et al. (2021). In order to do this, we use asymmetric Gaussians as well, which take into account the 90% credible interval. We also show an adjustable-width kernel density estimation (AWKDE, dashed line), from Shimazaki & Shinomoto (2010); Shimazaki et al. (2018). The Gaussians are plotted on a logarithmic axis, while the sum and the AWKDE use a linear axis. Similarly to Abbott et al. (2021), we omit GW190814 from our analysis.
tion of our remnant function, including these mass ranges, in appendix C. For the other failed SNe, we use the fits given by Schneider et al. (2021), but because these fits start at a remnant mass of about \(16M_{\odot}\), we shift them upwards to \(22M_{\odot}\) instead. We motivate this by general uncertainty in the models plus the fact that rather than complete collapse or complete ejection, there could be partial fallback, so that some stars which are in the successful SN domain will in fact be able to form BHs. Finally, at the highest masses, we include the effect of pair-instability SNe (PISN) that leave behind no stellar remnant (Fraley, 1968) and pulsational pair-instability supernova (PPISN) (Woosley et al., 2007; Woosley, 2017), at slightly lower masses that can form the transition between the remnants of failed SNe and the remnant-less PISNe. For this, we make a polynomial fit to the PPISN results from Marchant et al. (2019). In order to create one coherent remnant function, we increase the slope of the case A/B function, connecting it to this PPISN fit, and also shift the PPISN fit for case C to lower ZAMS masses, connecting it to the case C function. The first adjustment does not influence our results significantly and the second adjustment is justified because we expect case C stars to have a more massive helium core at the end of their evolution than case A/B stars have. This is due to the fact that case C stars are able to grow more massive cores, meaning we expect case C stars to have a lower threshold for PPISNe. We show our complete remnant functions, one for case A/B and one for case C, in fig. 2. Here, we neglect influences of aspects such as metallicity, which we discuss in section 4. Nevertheless, this remnant function suffices for our purpose: showing that the bimodal BBH distribution is to be expected based on stellar evolution (which we do in section 3).
### Supernova rates
Before we turn to the mass distribution resulting from the remnant function, we try to detect failed SNe by investigating how they reduce SN rates. After all, since failed SNe do not produce the bright signal successful SNe do, an increase in amount of failed SNe would reduce the total SN rate. In order to make a rough estimate of the effect of failed SNe on SN rates, we assume one mass range of stars which have a successful SN, from \(m_{l}\) to \(m_{u}\). Above \(m_{u}\), stars are too heavy to explode and have a failed SN, meaning they do not contribute to the SN rate. This model differs from our remnant function, since here we assume only one mass range of failed SNe, but for our purposes this suffices. In this estimation, we neglect the influences of binary interaction, effectively limiting our scope to type II SNe.
We start by describing the SN rates. Using the initial mass function (IMF), \(N(m)dm=\kappa M\phi(m)dm\) where \(\kappa^{-1}=\int m\phi(m)dm\) and \(\phi(m)\) the Chabrier (2003) IMF shape, we express the total mass \(M\) in terms of the star formation rate (SFR) \(\psi(t)\). This comes down to the total mass of the stars of mass \(m\) which are created a lifetime \(\tau\) ago and equals \(M=\psi(t-\tau(m))dt\). Therefore, the SN rate is simply the amount of stars within a certain mass range which reach the end of their stellar lifetime at time \(t\) and is defined as:
\[R_{\rm SN}(t,m_{l},m_{u})=\kappa\int\frac{M}{dt}\phi(m)dm=\kappa\int_{m_{l}}^ {m_{u}}\psi(t-\tau)\phi(n)dm\quad, \tag{1}\]
where we use \(\tau=\tau(m)=10^{10}\mathrm{yr}\,(m/M_{\odot})^{-2.5}\). Since the precise shape of the SFR is often not determined accurately, it is difficult to determine \(R_{\rm SN}\) using this equation. Because of this, we look at the ratio between the SN rate and the SFR. The integrand of this ratio has a factor \(\psi(t-\tau)/\psi(t)\), which is why we make the approximation \(\psi(t-\tau)/\psi(t)\approx 1\). This approximation holds for constant SFRs and decreases in accuracy for SFRs which evolve on a short timescale. Also, we set \(m_{l}=8M_{\odot}\), which is non-controversial (e.g. Kochanek et al., 2008; Kochanek, 2014). The ratio of SN rate to SFR becomes, then:
\[\frac{R_{\rm SN}(t,8M_{\odot},m_{u})}{\psi(t)}\approx\frac{\int_{8M_{\odot}}^ {m_{u}}\phi(m)dm}{\int m\phi(m)dm}\equiv\Lambda(m_{u})\quad, \tag{2}\]
where we used the definition of \(\kappa\) and introduce \(\Lambda(m_{u})\) as shorthand for this function. The \(\kappa\) integral is taken from \(m_{min}=10^{-1}M_{\odot}\) up to \(m_{\rm max}=10^{2}M_{\odot}\), which means the value which
Figure 2: Remnant masses (\(m_{\rm RX}\)), as function of ZAMS mass (\(m_{\rm ZAMS}\)). The blue curve shows the remnant function for case A and case B mass transfer, based on the simulations by Schneider et al. (2021) and Marchant et al. (2019). The Marchant et al. (2019) results are shown as well (black dots). The triangle shows the transition point between the part of our function which is based on Schneider et al. (2021) and the part which is based on Marchant et al. (2019). The red curve shows the corresponding remnant function for case C mass transfer, where the part after the transition point is the PPISN fit translated to lower masses by \(42M_{\odot}\). The dashed lines show \(m_{\rm RX}/m_{\rm ZAMS}=1,\,0.65\) and \(0.35\), since the latter two are used as approximation in estimating the amount of mass transfer in our estimation. The shaded area shows the values of the remnant mass gap. Furthermore, we note that this remnant function is limited to BHs, for our purposes we are not interested in the neutron star (NS) masses. The remnant functions are explicitly given in appendix C.
does not take failed SNe into account is simply \(\Lambda(m_{\rm max})\).
In order to determine \(m_{u}\), we consider the SN simulations of Ertl et al. (2016), Muller et al. (2016) and Kresse et al. (2021), together with our remnant function based on Schneider et al. (2021). These simulations use different models of neutrino engines, which represent the collapsed core, and determine the mass dependent explodability per model. Muller et al. (2016) find a mass dependent probability distribution, where approximately \(m<20.5M_{\odot}\) and \(23.5M_{\odot}<m<27M_{\odot}\) have a high probability for a successful SN. The simulations of Ertl et al. (2016) and Kresse et al. (2021) find a similar distribution, with approximately \(m<21.5M_{\odot}\) and \(25M_{\odot}<m<27.5M_{\odot}\) which go SN. These are in good agreement with Schneider et al. (2021), who find that their case C results (as shown in fig. 2) are similar to the results for single stars. Keeping in mind the IMF, we make a crude approximation and estimate, based on these simulations, that \(m_{u}\approx 22.5M_{\odot}\). This is comparable to the estimation of \(20M_{\odot}\) which Mashian & Loeb (2017) use, for instance.
The value of \(m_{u}\) gives \(\Lambda(22.5M_{\odot})=9.06\cdot 10^{-3}M_{\odot}^{-1}\). In contrast, if we neglect the influence of failed SNe, this value becomes \(\Lambda(m_{\rm max})=1.18\cdot 10^{-2}M_{\odot}^{-1}\). This means we have found a failed SNe correction of \(1-\Lambda(22.5)/\Lambda(m_{\rm max})=0.232\approx 25\%\).
We now turn to SN surveys and use the works of Graur et al. (2015, 2017) and Botticella et al. (2017) in order to detect the predicted effect of failed SNe. As shown in the top panel of fig. 3: these data consist of the type II SN rate per unit mass (\(R_{\rm SNM}\)) versus the specific SFR (sSFR), i.e. the SFR per unit mass. The figure includes two fits, shaped as \(R_{\rm SNM}=a\cdot\rm sSFR^{b}\), where the first one includes all the data-points and the second one neglects two outliers. The fitted values of \(b\), \(0.68\) and \(1.04\), confirm that our assumption \(\psi(t-\tau)/\psi(t)\approx 1\) (which means \(R_{\rm SN}\propto\psi\) and \(b=1\)) is appropriate. If we assume \(b\approx 1\), the two points which are neglected for the second fit indeed appear to be outliers. The consequence of this assumption is that \(a=R_{\rm SNM}/\rm sSFR=\Lambda\). Our estimate \(\Lambda(22.5M_{\odot})=9.06\cdot 10^{-3}M_{\odot}^{-1}\) differs significantly from the fitted values of \(a\): \(4.0\cdot 10^{-6}M_{\odot}^{-1}\) and \(1.9\cdot 10^{-2}M_{\odot}^{-1}\). If we set \(b=1\), the fitted values of \(a\) become (\(7.2\pm 1.7\)) \(\cdot 10^{-3}M_{\odot}^{-1}\) and (\(7.3\pm 0.9\)) \(\cdot 10^{-3}M_{\odot}^{-1}\) respectively, which is consistent with \(\Lambda\). The bottom panel of fig. 3 shows the distribution of the individual values of \(R_{\rm SNM}/\rm sSFR\), including a kernel density estimation. The distribution shows an outlier at \(3\cdot 10^{-2}M_{\odot}^{-1}\), which corresponds to the outlier at \(1.5\cdot 10^{-12}\rm yr^{-1}\) in the top panel which was neglected in the second fit. Also, a peak is situated at approximately \(0.8\cdot 10^{-2}M_{\odot}^{-1}\), which is approximately the value of the fitted \(a\) with \(b=1\).
The question arises whether uncertainty in other aspects of our model can account for a similar deviation from the standard \(\Lambda(m_{\rm max})\). This deviation can, for instance, be accomplished by choosing \(m_{l}\) to be \(9.72M_{\odot}\), which seems high compared to the literature value of \(8M_{\odot}\), although not unreasonably. Furthermore, uncertainties in the IMF could also account for such a deviation. Not only does Chabrier (2003) give different parameter values for binary systems (which would cause a deviating value of \(\Lambda\)), but there is also uncertainty in the high-mass slope of the IMF. Parravano et al. (2018) state that this slope has a value of \(\alpha=2.35^{+0.35}_{-0.15}\). A value of \(2.45\) is enough to account for the 25% deviation, which is well within this range. This means that even if \(\Lambda(22.5M_{\odot})\) fits the data better than \(\Lambda(m_{\rm max})\), which the data may suggest, it would be difficult to attribute this to failed SNe. Because of this, we cannot conclude that we find the effects of failed SNe in the SN survey data and therefore cannot constrain the value of \(m_{u}\) based on this analysis. Despite not being able to confirm the failed SNe observationally, we are interested in how the remnant function from section 2.1 shapes the BBH mass distribution.
## 3 Binary black hole mass distribution
We now want to show that our remnant function indeed produces a BBH distribution similar to observation. In order to do this, we make a simple estimation of binary evolution and determine the resulting remnant mass distribution. In our estimation, we start with approximately \(1.2\cdot 10^{6}\) binaries, in which the primary masses are distributed according to the Chabrier (2003) IMF and the secondary masses are distributed uniformly. This means that we define the primary star here to have the highest _initial_ mass.
In our model, we also include mass transfer from the primary to the secondary in a Roche-lobe overflow (RLO) phase, and neglect mass transfer from the secondary to the primary. We state that the secondary accretes a fraction \(\eta\) of the mass expelled from the primary during the RLO phase. The precise value of \(\eta\) is uncertain (Dorozsmai & Toonen, 2022), so we will simply set \(\eta=0.5\), since it does not affect the general shape of the remnant distribution. In order to determine the amount of mass which is
Figure 3: Data from SN surveys (Graur et al., 2015, 2017; Botticella et al., 2017). The top panel shows the type II SN rates per unit mass (\(R_{\rm SNM}\)) versus the SFR per unit mass (\(S\)SFR, together with a fit for all the data-points (dashed line) and a fit which excludes the two outliers at \(\rm sSFR\approx 1.5\cdot 10^{-12}\rm yr^{-1}\) and \(1.5\cdot 10^{-9}\rm yr^{-1}\) (dotted line). The fits use the least-squares method on a first degree polynomial in logarithmic space and the error bars are one standard deviation. These data are corrected for the Chabrier (2003) IMF, since the original data follows the Salpeter (1955) IMF. For a constant SFR, \(L=\kappa\psi\int\nu\ell\rm{\it L}(m)dm\), where \(L(m)\propto m^{3.5}\) is the luminosity of a star with mass \(m\). However, since \(\tau(m)\propto m\), it cancels out with \(\kappa\) and makes the SFR independent of the IMF. However, the SN rate is proportional to \(\kappa\), so it is multiplied by a factor \(\kappa_{\rm SNM}/\rm sSFR\approx 1.38\), meaning that this correction also multiplies \(R_{\rm SNM}/\rm sSFR\) by a factor of \(1.38\). The bottom panel shows a histogram of the \(R_{\rm SNM}/\rm sSFR\) ratio (with a bin-width of \(2\cdot 10^{-3}M_{\odot}^{-1}\)), and a kernel density estimation (KDE).
expelled from the primary, we cannot simply take the difference between ZAMS mass and remnant mass, since in successful SNe there is mass loss outside of the RLO phase. We therefore approximate the expelled mass of the primary in the RLO phase by \(0.35\cdot m_{\rm ZAMS}\) for case A/B and \(0.65\cdot m_{\rm ZAMS}\) for case C, since these curves approximate the failed SN curves, as shown in fig. 2. The secondary, then, accretes a fraction \(\eta\) of this expelled mass and is rejuvenated, which means that we use this new mass as if it were the ZAMS mass. Besides \(\eta\) we also define the parameter \(\zeta\), which equals the fraction of binaries which follow the case C curve. Since \(\zeta\) also does not influence the general shape of the distribution significantly, we set \(\zeta=0.5\) as well.
The estimated mass distribution is shown in fig. 4. The left panels show the exact results from our estimation and the right panels show the results with an added Gaussian uncertainty, as described in appendix D. Unsurprisingly, the remnant mass distribution is indeed bimodal: it shows one peak at lower masses, caused by the BH island, and one peak at higher masses. The exact results of our estimation, without the Gaussian uncertainty, also show a PPISN peak around \(44M_{\odot}\), caused by the flat PPISN distribution in fig. 2. The general shape of the estimated distribution is similar to the observed distribution, although it is difficult to compare the two in detail because the relative heights of the peaks are influenced by multiple aspects. Not only is there a bias towards high-mass mergers in observation, but the parameters \(\eta\) and \(\zeta\) also influence the heights. The effects of these parameters, including the fact that varying them does not influence our conclusions about the general distribution, is shown in appendix E.
The resulting chirp mass distribution also shows interesting features. Since the individual mass distribution is bimodal, with one peak at either side of the mass gap at \(m_{G}=17M_{\odot}\), the chirp mass distribution shows four possible configurations, with the primary and secondary mass at the same and opposite sides of the mass gap. These configurations can clearly be found in the chirp mass distribution: there is one peak centered just below \(10M_{\odot}\) for binaries with both BHs below the mass gap, one peak centered around \(28M_{\odot}\) for binaries with both BHs above the mass gap, and also two peaks centered around \(15M_{\odot}\), representing a mixed population with BHs at either side of the gap. The latter mostly consists of binaries where the primary is above the gap and the secondary below, although, depending on \(\eta\), a small population of binaries where only the secondary is above the gap can be found as well. These features can be linked to the observed chirp mass distribution, because the observed distribution (fig. 1) shows a trimodal distribution very similar to our results: one peak just below \(10M_{\odot}\), one around \(28M_{\odot}\), a small group of binaries around \(15M_{\odot}\) and a gap just above \(10M_{\odot}\). We also predict a gap just below \(20M_{\odot}\), which is somewhat visible in our AWKDE but even more visible in the estimation by Abbott et al. (2021b). Our results are not identical to the observed chirp mass distribution, especially above \(20M_{\odot}\) they seem to differ, but the general shapes of the distributions do seem to agree.
In a comparison between our simple estimation and the GW data, we examine the primary versus secondary mass distribution. Fig. 5 shows this distribution, together with the posterior distributions of the GW observations. Both the simulated and the observed distributions clearly show the BH island and the other failed SNe. Also, there seem to be some observed mergers which can be linked to the mixed population. There are, however, also differences between the two distributions. For exam
Figure 4: BBH mass distribution resulting from our estimation, where we simulate approximately \(N=1.2\cdot 10^{6}\) binaries and set \(\eta=0.5\) and \(\zeta=0.5\). The latter means that half of the binaries follow the case A/B remnant function, and the other half follow the case C remnant curve (as shown in fig. 2). The top row shows the resulting mass distribution of the individual primary (red) and secondary (grey) masses, in a stacked histogram with a bin-width of \(1M_{\odot}\). The bottom row shows the chirp mass distribution in a similar histogram, where we distinguish between four possible configurations of the primary and secondary mass with respect to the central value of the mass gap (\(m_{G}\)): either both BHs are below the gap (dark blue), the secondary is below the gap while the primary is above (blue) or vice versa (beige), or both are above the gap (light blue). The left column shows the exact results of our estimation, while the right column adds a Gaussian uncertainty to the results in order to make the simulated distribution look more similar to the observed one (as shown in fig. 1). We base the Gaussian uncertainty on the average standard deviations of the GW data, as described in appendix D.
ple, there is a local maximum at \(m_{P^{\prime}}\approx m_{S^{\prime}}\approx 34M_{\odot}\), while the corresponding maximum in our distribution is situated around \(m_{P^{\prime}}\approx m_{S^{\prime}}\approx 28M_{\odot}\). Also, the observed distribution has relatively more BHs in the BH island, even though there is an observational bias towards high-mass mergers. However, we are interested in the general shape of the distribution and not normalisation and details. After all, consideration of partial fallback as well as the parameters \(\eta\) and \(\zeta\) influence the relative amount of BHs in the BH island. Another interesting feature of fig. 5 is the fact that the observed mass gap seems to be situated at lower masses than the simulated mass gap: while we set the simulated mass gap at \(m_{P^{\prime}}\approx m_{S^{\prime}}\approx 17M_{\odot}\), the observed gap appears to be around \(m_{P^{\prime}}\approx m_{S^{\prime}}\approx 14M_{\odot}\). This could mean that we did not have to shift the results of Schneider et al. (2021) upwards in our remnant function at all. The difference can be explained by the fact that applying the Gaussian uncertainty to the individual masses results in a two-dimensional distribution which is more symmetric than the observed posteriors. Finally, it is interesting to compare fig. 5 with the rate densities determined by Abbott et al. (2021). Where we find three populations: a BH island, other failed SNe and a mixed population, they find a similar distribution but for NS-NS, BH-BH and BH-NS mergers, respectively.
## 4 Discussion
We have proposed failed SNe, and in particular the difference between failed and successful SNe, as a natural explanation for the shape of the BBH mass distribution. In order to do this, we posed a remnant function (section 2.1), investigated type II SN rate data (section 2.2), and examined the BBH distribution caused by this function (section 3). Our model is rather simplistic, however, and we do not claim that our results describe the actual binary mass distribution in detail. We are therefore careful not to draw overly strong conclusions when comparing our results from fig. 4 and fig. 5 to the data shown in fig. 1. Nevertheless, the simplicity of our model does indicate that our conclusions are a robust feature of BBH distributions caused by remnant functions similar to ours.
Furthermore, we argue that our remnant function is a more natural hypothetical explanation for the BBH mass distribution than the gravitational lensing hypothesis mentioned in the introduction, for two reasons. Firstly, although they are difficult to compare, we estimate the factor of 50 difference between the Broadhurst et al. (2018, 2022) merger rate and an optimistic BBH fraction estimate, discussed in appendix B, to be greater than the difference between our remnant function and the results of Schneider et al. (2021) and Marchant et al. (2019). Secondly, the chirp mass distribution is difficult to explain using the lensing hypothesis. After all, the two stars in the binary are either both lensed or both non-lensed. This means that there should be no binaries where the primary is above the gap and the secondary below. The lensing hypothesis therefore has trouble explaining the trimodal chirp mass distribution, while our remnant function reproduces it.
As mentioned in the introduction, Schneider et al. (2023) published a paper which uses a similar method and reaches comparable conclusions while our paper was in review. They use simulations equivalent to those of Schneider et al. (2021), showing a BH island, which result in a bimodal distribution of the individual BBH masses. Their BBH distribution looks similar to the one we find, except for the PPISN peak, since they do not incorporate PPISNe. Moreover, they argue that the bimodal mass distribution causes a trimodal chirp mass distribution with the same argument we use: the mixing between BHs from the BH island and the higher-mass BHs. They also note that fallback after the SN can extend the BH island to lower masses, which is the same reason we extended our BH island in fig. 2 to include lower masses. Furthermore, according to Schneider et al. (2023), metallicity mainly affects the case A/B curve in fig. 2, but because of the metallicity-dependent wind mass loss they state that an increase in metallicity can shift the resulting mass distribution to lower masses. Overall, we find that their results are in good agreement with our findings, which strengthens our argument.
## 5 Conclusions
After our analysis of the BBH mass distribution and chirp mass distribution (section 3) resulting from our remnant function (section 2), we conclude the following:
* The mass distribution of BBHs appears to show a gap for approximately \(14M_{\odot}<m<22M_{\odot}\) (fig. 1). Gravitational lensing as an explanation for this gap (Broadhurst et al.
Figure 5: Primary versus secondary mass distributions, for the results of our estimation (left) and the GW data (right), both in two-dimensional histograms with a bin-width of \(1M_{\odot}\). The left panel shows the upper right panel from fig. 4. We use the notation \(P^{\prime}\) and \(S^{\prime}\) here to denote the stars with the largest and smallest final masses, respectively, in contrast to \(P\) and \(S\) from fig. 4, which concern the _initial_ masses. The right panel shows the cosmologically reweighted posterior distributions of the GW data (LIGO Scientific Collaboration and Virgo Collaboration 2021; LIGO Scientific Collaboration and Virgo Collaboration and KAGRA Collaboration 2021). Since the posteriors are not normalized, we rescaled them to \(\int_{M_{\odot}}^{100M_{\odot}}\frac{dP^{\mathrm{GW}}}{dm}\,dm\approx 1.4\cdot 10^{5}\). Then, after adding all the posteriors, we normalized the resulting distribution in between \(5M_{\odot}\) and \(65M_{\odot}\). We omit GW200322 here, since no valid posterior for this source has been given. The grey lines are lines of equal chirp mass, with intervals of \(5M_{\odot}\).
2018, 2022) initially seems to assume an unreasonably high merger rate for \(z>2\) (fig. 2). We find that this merger rate implies a BBH fraction which is approximately 50 times larger than the one described by Mandel & Farmer (2022). This is improbably large, but not large enough to completely dismiss this possibility.
* We investigate failed SNe as a more natural explanation for the gap. Firstly, we approximate the results of the simulations by Schneider et al. (2021) and Marchant et al. (2019) with a comprehensive remnant function which describes the relation between ZAMS mass and remnant mass, for case A/B and case C binaries (fig. 2). This function includes a BH island, whose existence is supported by SN explodability simulations and which causes the bimodality in the BBH mass distribution.
* We investigate whether we can confirm the mass range for failed SNe observationally. We assume a single mass range for successful SNe, which turn into failed SNe above a certain mass limit. We take an optimistic value of \(22.5M_{\odot}\) for this mass limit and find that this implies a reduction in \(R_{\rm SN}\)/SFR of approximately 25%, since failed SNe are not detected and included in the type II SN rate. However, even though this reduction is compatible with SN survey data to some degree, the data are quite uncertain and small changes in other model parameters could also account for a similar reduction in SN rate. We therefore cannot state that the failed SN mass range is confirmed by the SN survey data.
* Using this remnant function, a bimodal BBH distribution can be estimated which looks similar to observations. Our simplistic estimation also produces a trimodal chirp mass distribution, which corresponds to observations. The primary versus secondary mass distribution resulting from our estimation corresponds to some degree with the posterior samples of the observed GW sources. This distribution clearly shows a mass gap, and a comparison of estimation and observation indicates that our remnant function does not need to deviate from literature values to the degree that it does, strengthening our conclusions.
Based on our results, we conclude that failed SNe, and in particular the relation between ZAMS mass and remnant mass which includes a BH island, can provide a natural explanation for the apparent bimodality in the observed BBH mass distribution. We therefore state that, when trying to explain the general shape of the observed BBH distribution, a consideration of stellar evolution proves fruitful.
Future research could expand or improve multiple aspects of our work. Reproducing the observed BBH distribution in detail, for instance, would be an interesting topic of research. Not only could the remnant function perhaps stay closer to literature values and still produce the desired mass gap, as implied by fig. 5, but other aspects such as a consideration of partial fallback, PPISNe and the values of \(\eta\) and \(\zeta\) also influence the relative heights of the different peaks. It would be an interesting endeavour to create a model which includes these aspects in such a way that it reproduces the observed distribution in more detail. An interesting question would be whether the chirp mass distribution corresponding to such a detailed model also resembles observations in detail. Finally, more (and more precise) SN survey and GW data would improve the observational verification, which could help in identifying the precise effects of failed SNe.
###### Acknowledgements.
This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal and Spain. The construction and operation of KAGRA are funded by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) and the Japan Society for the Promotion of Science (JSPS) in Japan, the National Research Foundation (NRF) and the Ministry of Science and ICT (MSIT) in Korea, and Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan. This research is supported by the Netherlands Organisation for Scientific Research (NWO). We thank Onno Pols for useful discussions and the anonymous referees for comments which helped to improve this paper.
|
2303.11494 | FlexVDW: A machine learning approach to account for protein flexibility
in ligand docking | Most widely used ligand docking methods assume a rigid protein structure.
This leads to problems when the structure of the target protein deforms upon
ligand binding. In particular, the ligand's true binding pose is often scored
very unfavorably due to apparent clashes between ligand and protein atoms,
which lead to extremely high values of the calculated van der Waals energy
term. Traditionally, this problem has been addressed by explicitly searching
for receptor conformations to account for the flexibility of the receptor in
ligand binding. Here we present a deep learning model trained to take receptor
flexibility into account implicitly when predicting van der Waals energy. We
show that incorporating this machine-learned energy term into a
state-of-the-art physics-based scoring function improves small molecule ligand
pose prediction results in cases with substantial protein deformation, without
degrading performance in cases with minimal protein deformation. This work
demonstrates the feasibility of learning effects of protein flexibility on
ligand binding without explicitly modeling changes in protein structure. | Patricia Suriana, Joseph M. Paggi, Ron O. Dror | 2023-03-20T23:19:05Z | http://arxiv.org/abs/2303.11494v1 | # FlexVDW: A machine learning approach to account for protein flexibility in ligand docking
###### Abstract
Most widely used ligand docking methods assume a rigid protein structure. This leads to problems when the structure of the target protein deforms upon ligand binding. In particular, the ligand's true binding pose is often scored very unfavorably due to apparent clashes between ligand and protein atoms, which lead to extremely high values of the calculated van der Waals energy term. Traditionally, this problem has been addressed by explicitly searching for receptor conformations to account for the flexibility of the receptor in ligand binding. Here we present a deep learning model trained to take receptor flexibility into account implicitly when predicting van der Waals energy. We show that incorporating this machine-learned energy term into a state-of-the-art physics-based scoring function improves small molecule ligand pose prediction results in cases with substantial protein deformation, without degrading performance in cases with minimal protein deformation. This work demonstrates the feasibility of learning effects of protein flexibility on ligand binding without explicitly modeling changes in protein structure.
## 1 Introduction
A critical problem in rational drug discovery is prediction of the position, orientation, and conformation of a ligand (e.g., a drug candidate) when bound to a target protein--i.e., the ligand's "binding pose." Protein-ligand docking methods, which are used to predict ligand binding poses, are key tools in drug discovery and molecular modeling applications (Kitchen et al., 2004; Ferreira et al., 2015).
The most widely used protein-ligand docking techniques assume a rigid protein (i.e., the positions of all protein atoms are fixed), which is often referred to as "rigid docking" (Verdonk et al., 2003; Friesner et al., 2004; Allen et al., 2015; Forli et al., 2016). Although this assumption of a rigid protein often works, rigid docking often fails to produce a near-native ligand pose (i.e., one that is close to the experimentally determined, or native, pose) when the shape of the protein's binding pocket must change for the ligand to bind. In such cases, atoms of the ligand in its native pose typically overlap ("clash") with atoms in the protein structure used for docking (Figure 1). Atoms that overlap experience extremely strong van der Waals repulsion. Rigid ligand docking methods thus predict that such poses will be extremely unfavorable energetically and generally rank them lower than any pose without such clashes--even when the clashes could have been easily resolved by minor changes in the structure of the protein's binding pocket. Such cases occur frequently in drug discovery, particularly when one is investigating novel ligands that differ substantially from ligands present in experimentally determined protein structures.
A variety of flexible protein docking techniques attempt to solve this problem by allowing the protein's binding pocket to deform during docking (Jones et al., 1997; Lemmon & Meiler, 2012; Miller et al., 2021). This approach is very computationally intensive, however, and has sometimes proven less accurate than rigid docking (Ravindranath et al., 2015; Bender et al., 2021). Likewise, ensemble docking techniques in which each ligand is docked to multiple protein structures have met
with mixed success, as selecting the protein structures and determining their relative favorability has proven difficult (Totrov and Abagyan, 2008; Novoa et al., 2010; Amaro et al., 2018; Evangelista Falcon et al., 2019; Korb et al., 2012).
In this work, we explore an alternative approach: rigid docking with a scoring function that has been adapted, through machine learning, to implicitly account for protein flexibility. In particular, we use an end-to-end machine learning approach to design a predictor of protein-ligand van der Waals (VDW) interaction energies. Given a single protein structure, our predictor is trained to recognize which types of deformations the protein's binding pocket can easily undergo, and to distinguish those from less favorable deformations. We name our machine-learned predictor of VDW interaction energies FlexVDW.
We show that incorporating FlexVDW into an industry-standard docking package (Glide) improves ligand binding pose prediction results in cases where ligand binding requires significant protein deformation, without compromising performance in cases with minimal protein deformation. Our work demonstrates the feasibility of learning effects of protein flexibility on ligand binding without explicitly modeling changes in protein structure.
## 2 Related Works
In general, protein-ligand docking involves two challenges (Elokely and Doerksen, 2013; Guedes et al., 2014): (1) designing a sampling algorithm to generate a large number of candidate ligand poses given a query ligand (i.e., the ligand of interest) and a target protein structure, at least one of which should be close to the experimentally determined pose, and (2) designing a scoring function that ranks these candidate poses to select the best ones (i.e., those predicted to be closest to the native pose) as the final output. In this paper, we focus on the scoring challenge. Protein-ligand docking scoring functions can be loosely categorized into two classes: (1) physics-based scoring functions and (2) machine-learning scoring functions.
Physics-based scoring functions (Friesner et al., 2004; Verdonk et al., 2003; Trott and Olson, 2010; Coleman et al., 2013; Allen et al., 2015) characterize the binding of protein-ligand complexes based on a set of weighted scoring terms, which correspond to different physical effects such as VDW interactions, electrostatic interactions, hydrophobic interactions, and hydrogen bonds. The weights of these terms are typically determined by fitting experimental data using linear regression. It should be emphasized that the terms were carefully engineered to capture effects known to be important in determining ligand binding energy and represent decades of work.
Figure 1: The native binding pose of a ligand often clashes with the experimentally determined structure of its target protein when that structure has a different ligand bound. Panel A shows the structure of the protein \(\beta\)-secretase (BACE-1) — a major drug target — bound to a ligand known as “compound 5” (PDB entry 5IEI; Jordan et al., 2016). The ligand (orange spheres, with each sphere representing one atom) packs favorably against two amino acids in the protein binding pocket (gray spheres) without any clashes (i.e., ligand atoms do not overlap with protein atoms). Panel B shows the same ligand (compound 5) in exactly the same geometry, but superimposed on a structure of BACE-1 that was determined in the presence of a different ligand (PDB entry 3CKP; Park et al., 2008). Here, the same two amino acids (gray spheres) assume different positions and therefore clash (overlap) substantially with the ligand atoms (orange spheres).
Machine learning (ML) scoring functions (Khamis et al., 2015) allow for a more general functional form. Progress has been made in these areas, including end-to-end learning without hand-crafted features using deep learning methods (Shen et al., 2020; Ragoza et al., 2017; Morrone et al., 2020; McNutt et al., 2021). Nevertheless, physics-based scoring functions such as Glide (Friesner et al., 2004) or DOCK (Coleman et al., 2013; Allen et al., 2015) have proven to be more generalizable to different drug target families, and especially to new drug targets not present in the training set, than ML-based functions and remain most widely used in drug discovery (Bender et al., 2021).
## 3 Methods
### Incorporating implicit protein flexibility into the scoring function for ligand docking
Our goal in this work is to demonstrate the feasibility of creating a VDW interaction energy predictor that implicitly accounts for protein flexibility. We therefore develop a neural network, FlexVDW, that predicts VDW interaction energy. To demonstrate its effectiveness, we integrate FlexVDW into Glide (Friesner et al., 2004), which is among the most widely used protein-ligand docking packages in the pharmaceutical industry. In particular, we replace the VDW term in Glide's physics-based scoring function with FlexVDW. Although we chose Glide here, in principle our approach can be integrated with any existing physics-based scoring function, after refitting to the particular scoring function of interest.
When training our neural network (but not when using it to predict ligand docking poses), we take advantage of the fact that, for certain proteins, multiple experimentally determined structures are available, with a different ligand bound to the same protein in each structure. Adopting terminology from structural biology, we refer to each of these ligand-bound structures as a "holo" structure. The set of holo structures for a given protein captures multiple shapes the protein's binding pocket can adopt and thus provides information about the binding pocket's flexibility.
More concretely, the input to our ML model is a single protein structure to be used for docking a ligand, where the protein structure was determined in the presence of a different ligand, or with no ligand present at all. Our training labels, on the other hand, are generated by taking into account all available holo structures (see Figure S2). Importantly, our model can be used to predict ligand binding to proteins different from those used in training, including proteins for which only a single structure is available. Indeed, when evaluating the performance of our model (Section 4.1), we consider only proteins that were not used in training. For many of these proteins, only a single structure is available.
To assign a label to each training input, we first use Glide to calculate the VDW score (i.e., VDW interaction energy) for the candidate pose superimposed on each available holo structure for the given protein. We then determine the minimum value across these scores -- that is, the most favorable score. We use this minimum value as the label (see Figure S2).
Formally, we define the minimum VDW score as
\[VDW^{\prime}(L)=\min_{p_{i}\in\{p_{1},\dots,p_{N}\}}VDW(L,p_{i}) \tag{1}\]
where \(VDW(L,p_{i})\) is the Glide VDW score of a candidate ligand pose \(L\) with respect to a target protein structure \(p_{i}\).
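As a concrete illustration, the label of equation 1 reduces to a one-line reduction over precomputed Glide scores. The sketch below is a minimal Python version under the assumption that the per-structure scores \(VDW(L,p_{i})\) are already available as an array; the cap of 100 applied to training labels (see the Training section) is included for completeness.

```python
import numpy as np

def vdw_prime(glide_vdw_scores, cap=100.0):
    # Training label VDW'(L): the minimum (most favorable) Glide VDW score
    # of candidate pose L across the holo structures p_1..p_N, capped to
    # avoid loss explosion during training.
    return min(float(np.min(glide_vdw_scores)), cap)
```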
When testing our model -- and when deploying it for drug discovery and biology applications -- we are given only a single structure of the target protein. Because of how the model is trained (on different proteins), however, it effectively predicts what the most favorable VDW score of that pose would be if multiple structures of the target protein were available. In other words, our model implicitly predicts flexibility of a protein's binding pocket given only a single structure of the protein.
### Datasets
Our training, validation, and test datasets consist of sets of poses of ligands docked to protein structures. The protein structures and small molecule ligands used to generate our ligand pose
datasets were obtained from the PDBBind 2019 refined dataset (Liu et al., 2015), a collection of protein-ligand complex structures with high resolution. The protein-ligand complex structures are categorized based on the protein (i.e., holo structures of a target protein are grouped together), and those proteins that have at least two holo structures are selected. See Figure S3 for distribution of the number of holo structures per unique protein used to generate the labels and ligand docking poses in the training and validation sets. In addition, we also included the benchmark set from Paggi et al. (2021) in our test dataset to ensure good coverage of major drug target protein families: GPCRs, kinases, ion channels and nuclear receptors (Santos et al., 2017). To ensure no data leakage, we split the proteins for training, validation, and testing such that no protein in the test dataset had more than 30% sequence identity with any protein in training or validation datasets. There are 228, 85 and 73 unique proteins in the training, validation and test datasets, respectively.
Next, candidate ligand poses for training and validation are generated using Glide SP (Friesner et al., 2004) with default parameters and then overlaid with a randomly selected holo structure of the same protein to generate poses with and without clashes with the receptor. For each query ligand, a maximum of five protein structures were randomly selected for docking, and 25 poses were randomly selected from each docking result. We follow the procedures described in Paggi et al. (2021) for preparing protein-ligand complex structures and ligands for docking.
Unlike in training/validation, in testing we are given only a single structure of the protein target on which to dock the query ligand. We can only use this one protein structure to generate candidate binding poses for the ligand. Therefore, in addition to (1) generating poses with Glide SP and normal (default) VDW parameters (VDW radius scaling of 1.0/0.8 for receptor/ligand), we ran (2) Glide SP with softened VDW parameters (VDW radius scaling of 0.6/0.5 for receptor/ligand) with extended sampling to generate candidate ligand poses with collisions with the target protein. For each scheme, we set the maximum number of candidate poses to 300 for each protein-ligand pair (referred to here as a "cross-docking pair"). Additionally, we also included a native pose of each query ligand in the candidate pose set, refined with an energy minimization protocol, since otherwise only about 80% of the cross-docking pairs have any near-native poses among the set of candidate poses generated by the two schemes above (see Figure S4). On average, we generate roughly 500 candidate ligand binding poses in total for each cross-docking pair.
For each protein-ligand pair in the test set, we randomly select one protein structure for docking. We ensure that this structure was determined experimentally in the presence of a ligand substantially different from the (docked) query ligand -- in particular, that the two ligands have a Tanimoto coefficient of less than 0.4, where the Tanimoto coefficient is computed by comparing the Extended-Connectivity Fingerprints (ECFPs) of the two ligands. This results in 615 cross-docking pairs, which are further divided into two cases: (1) "difficult" cross-docking pairs, defined as those for which the native ligand binding pose, after energy minimization in the docking structure, still exhibits severe clashes with protein atoms (specifically, when the ratio of the distance between two atoms and the sum of their VDW radii is \(\leq 0.75\)), or where the ligand pose drifts significantly during energy minimization such that it exhibits an RMSD > 2.0 Å relative to the original (experimentally determined) ligand pose; (2) "other" cross-docking pairs, defined as the remaining ones. In the "difficult" cases, we expect significant deformation of the protein upon ligand binding, while we expect less protein deformation in the "other" cases.
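The ligand-similarity filter above can be sketched with RDKit. The fingerprint settings below (radius-2 Morgan fingerprints, i.e. ECFP4, with 2048 bits) are assumptions for illustration, since the exact parameters are not specified here.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def substantially_different(smiles_a, smiles_b, cutoff=0.4):
    # Two ligands count as "substantially different" if the Tanimoto
    # coefficient of their ECFP (Morgan) fingerprints is below the cutoff.
    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
           for s in (smiles_a, smiles_b)]
    return DataStructs.TanimotoSimilarity(fps[0], fps[1]) < cutoff
```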
### Architecture
The input to our ML model is a candidate pose for a ligand and a single protein structure to be used for docking (see Figure S2). We also provide our model with the corresponding Glide VDW score. Although we utilize all available holo structures of a target protein to create our training labels (i.e., to calculate VDW', as described in equation 1), we do not use these other structures in any way to make the prediction. This reflects the situation in practice, where often only one structure is available to dock the ligand of interest.
Our architecture has two main components: (1) the embedding unit (see Figure 2: green block) and (2) the pairwise unit (see Figure 2: blue block). The embedding unit learns an embedding of a protein-ligand pose structure, which is then passed to the pairwise unit. At the core of the embedding unit are 3D equivariant convolution layers (ENN Layers 1 and 2) that operate on a 3D atomic point cloud. This point representation in 3D space allows us to accurately represent the relative
positioning of atoms in the protein-ligand complex, which is important for capturing the interactions between protein atoms and ligand atoms. Each ENN layer consists of the sequential application of self-interaction, point convolution, point normalization, self-interaction, and nonlinearity (Eismann et al., 2020). Each atom/point in 3D is associated with a feature vector. At input, the model takes as features the basic element type of the atom (C, O, N, P, S, polar H, or halogen (F/Cl/Br)) encoded as a one-hot vector, the secondary structure (if applicable), the partial charge of each atom, and a Boolean flag indicating whether the atom belongs to the ligand or the protein. The point-wise feature vectors are updated through the ENN layers by aggregating local information of the nearest 50 neighboring points.
To regularize our networks, we downsample the protein from all atoms to the \(\alpha\) carbon (CA) of each amino acid residue in the last ENN layer (ENN Layer 2) of the embedding unit and apply the same learned function to each protein CA-ligand atom pair (i.e., the pairwise unit) to mimic the pairwise form of physical VDW interactions. More concretely, for each protein CA-ligand atom pair, their embeddings from the previous embedding unit are concatenated as input to the pairwise unit, a series of dense neural network layers, to compute their pairwise "interaction" features. These pairwise interaction features are averaged over all pairs (see Figure 2: Mean Pooling) and passed through the final dense neural network layer (see Figure 2: Final Dense Layer) to obtain a single scalar prediction. Inspired by Wang et al. (2019) and Husa et al. (2020), which use a prior energy for learning molecular dynamics force fields, we use additional information from the Glide VDW score and pass it as input to the min() function in the last layer along with the output from the Final Dense Layer in order to make the final prediction.
For details on the architecture and the hyperparameters used for each component, see Supplement S1 and Figure S1.
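To make the pairwise unit concrete, the PyTorch sketch below mirrors the description above: one shared dense network applied to every concatenated (protein CA, ligand atom) embedding pair, mean pooling over all pairs, a final dense layer, and a min() with the Glide VDW score. All layer widths are illustrative assumptions rather than the values used in FlexVDW.

```python
import torch
import torch.nn as nn

class PairwiseUnit(nn.Module):
    # Sketch of the pairwise unit; d_emb and d_hidden are placeholder sizes.
    def __init__(self, d_emb=32, d_hidden=64):
        super().__init__()
        self.pair_mlp = nn.Sequential(
            nn.Linear(2 * d_emb, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU())
        self.final = nn.Linear(d_hidden, 1)

    def forward(self, ca_emb, lig_emb, glide_vdw):
        # ca_emb: (N_CA, d_emb) CA embeddings from the embedding unit;
        # lig_emb: (N_lig, d_emb) ligand-atom embeddings; glide_vdw: 0-dim tensor.
        n_ca, n_lig = ca_emb.shape[0], lig_emb.shape[0]
        pairs = torch.cat(
            [ca_emb[:, None, :].expand(n_ca, n_lig, -1),
             lig_emb[None, :, :].expand(n_ca, n_lig, -1)], dim=-1)
        feats = self.pair_mlp(pairs).mean(dim=(0, 1))      # mean pooling over pairs
        return torch.minimum(self.final(feats), glide_vdw)  # min() with Glide VDW
```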
### Training
We formulate the training as a regression task aimed at predicting VDW', the minimum of the candidate ligand's VDW score over several available holo structures of the protein. The MSE loss between the actual and predicted values of VDW' is used as a loss function. To prevent loss explosion during training, the training label is capped at 100; otherwise, it could occasionally be on the order of a million or more. We train with the Adam optimizer in PyTorch (Paszke et al., 2019) with a learning rate of 0.00005 and a batch size of 4 for 10 epochs and monitor the loss on the validation set at every epoch. In the first 5 epochs, the input Glide VDW score is ignored, in order to prevent the model from overfitting to the Glide VDW score instead of learning about protein flexibility. In the next 5 epochs, the Glide VDW score is added. The weights of the network that performs best on the validation set are then used to evaluate the predictions on the test set. We train the models on one NVIDIA GeForce RTX 3090 GPU for around 20 hours.
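A condensed sketch of this two-phase schedule follows. The description above does not specify how the Glide VDW input is ignored in the first phase; here it is simply replaced by a large constant so that the final min() is inactive, and the loop processes one pose per step. Both choices, like the loader interface, are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def train(model, loader, epochs=10, lr=5e-5):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        use_glide = epoch >= 5          # Glide VDW input enabled after epoch 5
        for ca_emb, lig_emb, glide_vdw, label in loader:
            g = glide_vdw if use_glide else torch.full_like(glide_vdw, 1e6)
            pred = model(ca_emb, lig_emb, g)
            # Labels are capped at 100 to prevent loss explosion.
            loss = F.mse_loss(pred.squeeze(), label.clamp(max=100.0))
            opt.zero_grad(); loss.backward(); opt.step()
```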
## 4 Results
### Evaluation of cross-docking results on test set
To evaluate the strength of our machine-learned scoring function, FlexVDW, in terms of docking accuracy, we evaluate the top-N near-native hit rate, which is defined as the fraction of cross-docking cases for which a near-native pose is included in the first N poses when the poses are ranked by the docking score. Here, we consider a pose to be near-native if its root mean square deviation (RMSD) from the experimentally determined pose is less than or equal to 2.0 Å (a threshold commonly used in practice; Kontoyianni et al., 2004; Cole et al., 2005).
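In code, the metric can be computed as in the following sketch; `scores_per_pair` and `rmsds_per_pair` are assumed per-pair lists of docking scores and pose RMSDs, with lower (more negative) docking scores taken as better.

```python
import numpy as np

def top_n_hit_rate(scores_per_pair, rmsds_per_pair, n=1, rmsd_cut=2.0):
    # Fraction of cross-docking pairs whose n best-scoring poses contain
    # at least one near-native pose (RMSD <= 2.0 A from the native pose).
    hits = 0
    for scores, rmsds in zip(scores_per_pair, rmsds_per_pair):
        best_n = np.argsort(scores)[:n]          # lower docking score = better
        hits += bool(np.any(np.asarray(rmsds)[best_n] <= rmsd_cut))
    return hits / len(scores_per_pair)
```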
The evaluation is performed on the candidate ligand poses generated for the 615 cross-docking pairs in the test dataset (see Section 3.2). During testing, only a single protein structure is provided to our ML model. We compare the performance of the Glide scoring function with its original VDW term and with that term replaced by FlexVDW. As can be seen in Figure 3, incorporation of FlexVDW into Glide improves performance in "difficult" cross-docking cases (middle panel), where significant deformation of the protein is typically required upon ligand binding. At the same time, FlexVDW achieves performance similar to that of Glide's original VDW term for the "other" cross-docking cases where less protein deformation is typically required (right panel).
In addition, as a baseline, we evaluate the accuracy of a scoring function in which we simply remove the VDW term while keeping the other terms of the Glide scoring function. As we can see in Figure 3, although eliminating the VDW term leads to a better top-N near-native hit rate for "difficult" cases compared to FlexVDW, overall performance deteriorates (especially for the top-1 near-native hit rate), which shows the importance of including a VDW term in the docking score. As we allow more ligand poses with severe collisions with protein backbones ("garbage poses") in the candidate pose set, the performance of FlexVDW decreases, but the performance of the docking score without a VDW term decreases even more, showing that our approach is able to generalize to some extent even if we never train the model with "garbage" poses, and further highlighting the importance of the VDW term in the docking score (see Figure S5).
### Comparison of Glide and FlexVDW predicted scores and top-1 poses
Next, we compare the FlexVDW and Glide VDW scores for the native ligand poses when superimposed on structures of the target protein determined with other ligands bound. In Figure 4A-C, the native ligand poses clash with the docking structures. Glide assigns very high (unfavorable) VDW scores, preventing it from predicting these poses. Indeed, in these cases, Glide's top-ranked (top-1) ligand pose predictions differ substantially from the native pose (see Figure S6A-C). In contrast, our machine-learned predictor, FlexVDW, handles these cases better. In two of the three cases, it ranks near-native poses first (top-1) (see Figure S6A and C). In the third case (Figure S6B), even though FlexVDW predicts a negative VDW score for the native ligand pose (see Figure 4B), the near-native poses are eventually rejected due to the high electrostatic repulsion energy, and thus FlexVDW fails to
Figure 2: Schematic of the architecture of FlexVDW network. The output dimensions of the individual layer are indicated in parentheses. At input, the model takes in a single protein structure and a single candidate ligand pose (orange block) to predict the VDW interaction energy of the ligand pose with respect to the protein structure. The model featurizes the input into basic element type of the atom (C, O, N, P, S, polar H, and F/Cl/Br), secondary structure (if applicable), partial charge and a protein/ligand Boolean flag for each atom. FlexVDW consists of two main components: (1) embedding unit (green block) and (2) pairwise unit (blue block). At the core of the embedding unit are 3D equivariant convolution layers (ENN Layers 1 and 2; light green blocks) that operate on the atomic point cloud to learn the embedding of protein/ligand atoms, which is then used to predict the VDW score of the ligand pose. To regularize our networks, we downsample the protein from all atoms (\(P_{all}\)) to \(\alpha\)-carbons (\(P_{CA}\)) at the last layer of the embedding unit (ENN Layer 2), and apply the same learned function to each protein CA–ligand atom pair (i.e., the pairwise unit) to mimic the pairwise form of physical VDW interactions. In addition, we calculate the Glide VDW score and pass it as input to the min() function in the last layer to make the final prediction. For details on the architecture and the hyperparameters used for each component of the architecture, see Supplement S1 and Figure S1.
select the near-native pose as the top-1 pose. When there is no clash between the ligand pose and the protein structure used for docking, FlexVDW is comparable to Glide in ranking near-native ligand poses highly (see Figure 4D and S6D).
## 5 Discussion
We have demonstrated the feasibility of learning a scoring function that accounts for protein flexibility in ligand docking without explicitly modeling changes in protein structure. Given a protein structure and a candidate ligand pose, FlexVDW predicts the VDW value of this ligand pose, taking into account the flexibility of the protein.
To evaluate the strength of our machine-learned scoring function in terms of docking accuracy, we evaluate the top-N near-native hit rate of cross-docking protein-ligand pairs. To ensure the generalizability of our methods to different protein families, we select our test cases to cover the major drug target protein families, including GPCRs, kinases, ion channels, nuclear receptors, and others. We show that incorporating this machine-learned VDW term into Glide, a state-of-the-art physics-based scoring function, improves docking accuracy in cases with substantial protein deformation upon ligand binding, without degrading performance in cases with minimal protein deformation upon ligand binding. Our approach could be integrated with any existing physics-based scoring function, not limited to Glide, with refitting to the particular physics-based scoring function of interest.
There are several limitations to our approach. First, we formulate our learning task in terms of predicting the global VDW score. Reformulating the learning task in terms of predicting VDW interaction energies between individual pairs of atoms could potentially provide a better signal for which parts of the protein are flexible upon ligand binding. Additionally, we consider only VDW interactions and ignore the electrostatic interaction. In some cases, a near-native ligand pose is eliminated not only due to a high VDW energy, but also due to a high electrostatic repulsion energy. Future work is necessary to address these issues.
Second, because we assign training labels using the minimum VDW score across multiple holo structures as a proxy for protein flexibility, the extent to which our model can learn about protein flexibility is limited by the diversity of available holo structures. This could potentially be improved by including snapshots from molecular dynamics simulations as additional protein structures when determining the training labels.
Note that when using our model to predict ligand binding poses, we use only a single structure of the target protein -- because often only a single structure of a given protein is available. Indeed,
Figure 3: Percentage of cases for which a near-native pose is included in the top N poses sorted by docking score (higher is better). Our approach significantly improves over Glide performance in “difficult” cross-docking cases where significant deformation of the protein is expected upon ligand binding, while maintaining performance in “other” cross-docking cases where minimal deformation of the protein is expected. Although the absence of a VDW term in the docking score leads to better performance in “difficult” cases, it worsens overall performance (especially for the top-ranked pose), showing the importance of including a VDW term in the docking score.
when evaluating the performance of our model, we use only a single structure for each protein, and the proteins used for evaluation are all substantially different from those used to train the model.
In summary, our work is a step toward incorporating implicit protein flexibility into ligand docking, which will improve the accuracy of ligand binding pose prediction.
### Funding Information
PS was supported by a Graduate Research Fellowship from the US National Science Foundation (NSF). JMP was supported by a Stanford Graduate Fellowship.
|
2307.15305 | Bursty Star Formation Naturally Explains the Abundance of Bright
Galaxies at Cosmic Dawn | Recent discoveries of a significant population of bright galaxies at cosmic
dawn $\left(z \gtrsim 10\right)$ have enabled critical tests of cosmological
galaxy formation models. In particular, the bright end of the galaxy UV
luminosity function (UVLF) appears higher than predicted by many models. Using
approximately 25,000 galaxy snapshots at $8 \leq z \leq 12$ in a suite of
FIRE-2 cosmological "zoom-in'' simulations from the Feedback in Realistic
Environments (FIRE) project, we show that the observed abundance of UV-bright
galaxies at cosmic dawn is reproduced in these simulations with a multi-channel
implementation of standard stellar feedback processes, without any fine-tuning.
Notably, we find no need to invoke previously suggested modifications such as a
non-standard cosmology, a top-heavy stellar initial mass function, or a
strongly enhanced star formation efficiency. We contrast the UVLFs predicted by
bursty star formation in these original simulations to those derived from star
formation histories (SFHs) smoothed over prescribed timescales (e.g., 100 Myr).
The comparison demonstrates that the strongly time-variable SFHs predicted by
the FIRE simulations play a key role in correctly reproducing the observed,
bright-end UVLFs at cosmic dawn: the bursty SFHs induce order-or-magnitude
changes in the abundance of UV-bright ($M_\mathrm{UV} \lesssim -20$) galaxies
at $z \gtrsim 10$. The predicted bright-end UVLFs are consistent with both the
spectroscopically confirmed population and the photometrically selected
candidates. We also find good agreement between the predicted and
observationally inferred integrated UV luminosity densities, which evolve more
weakly with redshift in FIRE than suggested by some other models. | Guochao Sun, Claude-André Faucher-Giguère, Christopher C. Hayward, Xuejian Shen, Andrew Wetzel, Rachel K. Cochrane | 2023-07-28T04:52:07Z | http://arxiv.org/abs/2307.15305v2 | # Bursty Star Formation Naturally Explains the Abundance of Bright Galaxies at Cosmic Dawn
###### Abstract
Recent discoveries of a significant population of bright galaxies at cosmic dawn (\(z\gtrsim 10\)) have enabled critical tests of cosmological galaxy formation models. In particular, the bright end of the galaxy UV luminosity function (UVLF) appears higher than predicted by many models. Using approximately 25,000 galaxy snapshots at \(8\leq z\leq 12\) in a suite of FIRE-2 cosmological "zoom-in" simulations from the Feedback in Realistic Environments (FIRE) project, we show that the observed abundance of UV-bright galaxies at cosmic dawn is reproduced in these simulations with a multi-channel implementation of standard stellar feedback processes, without any fine-tuning. Notably, we find no need to invoke previously suggested modifications such as a non-standard cosmology, a top-heavy stellar initial mass function, or a strongly enhanced star formation efficiency. We contrast the UVLFs predicted by bursty star formation in these original simulations to those derived from star formation histories (SFHs) smoothed over prescribed timescales (e.g., 100 Myr). The comparison demonstrates that the strongly time-variable SFHs predicted by the FIRE simulations play a key role in correctly reproducing the observed, bright-end UVLFs at cosmic dawn: the bursty SFHs induce order-of-magnitude changes in the abundance of UV-bright (\(M_{\rm UV}\lesssim-20\)) galaxies at \(z\gtrsim 10\). The predicted bright-end UVLFs are consistent with both the spectroscopically confirmed population and the photometrically selected candidates. We also find good agreement between the predicted and observationally inferred integrated UV luminosity densities, which evolve more weakly with redshift in FIRE than suggested by some other models.
galaxies: formation - galaxies: evolution - galaxies: star formation - galaxies: high-redshift
## 1 Introduction
For the first time, the _James Webb Space Telescope (JWST)_ has unlocked the door to a population-level analysis of galaxies well into the era of cosmic dawn (for a review of key high-redshift science themes of _JWST_, see Robertson, 2022). Following its discovery of an unexpectedly high abundance of UV-bright, massive galaxy candidates at redshift \(z\gtrsim 10\)(e.g., Finkelstein et al., 2022; Naidu et al., 2022; Donnan et al., 2023; Harikane et al., 2023; Yan et al., 2023), there is a long list of intriguing questions to be answered about how to interpret these observations. What is the true nature (redshift, mass, metallicity, age, etc.) of these bright galaxies? If they are truly massive galaxies at cosmic dawn, what makes it possible for them to have formed so early? Are these observations in significant tension with the standard \(\Lambda\)CDM cosmological model? Observational and theoretical investigations into these questions are being actively pursued in a large body of recent literature from different perspectives, including
the purity of high-\(z\) candidates (Naidu et al., 2022; Arrabal Haro et al., 2023; Curtis-Lake et al., 2023; Furlanetto & Mirocha, 2023; Zavala et al., 2023), the physics of star formation in high-\(z\) galaxies (Dekel et al., 2023; Mirocha & Furlanetto, 2023; Robertson et al., 2023; Qin et al., 2023; Sipple & Lidz, 2023; Trinca et al., 2023), the implications of high-\(z\) observations for the cosmological model (Boylan-Kolchin, 2023; Hassan et al., 2023; Melia, 2023), and so forth.
While spectroscopic follow-up studies for many of the galaxy candidates are still ongoing, conservative lower limits on the bright end of UV luminosity function (UVLF) and the integrated UV luminosity density at \(z\gtrsim 10\) derived from the existing, spectroscopically confirmed samples have already suggested milder redshift evolution towards \(z>10\) than expected by many theoretical models (e.g., Harikane et al., 2023). Such a higher-than-expected abundance of bright galaxies based on secure redshifts is consistent with earlier studies based on photometrically selected samples, thus calling for a re-examination of the theoretical landscape of galaxy formation at cosmic dawn1. Several physical mechanisms have been considered to explain a high abundance of bright galaxies at high redshifts. For example, a higher star formation efficiency (SFE) resulting from less efficient feedback regulation could boost the UV-bright galaxy abundance by forming more stars per unit baryon (Dekel et al., 2023; Harikane et al., 2023), whereas a more top-heavy initial mass function (IMF) of the stellar population could similarly lead to more bright galaxies by creating more UV photons per unit stellar mass formed (Inayoshi et al., 2022; Yung et al., 2023). A conspiracy between the redshift evolution of dust attenuation and the abundance of massive halos at high \(z\) could also potentially allow the bright-end UVLF and UV luminosity density to evolve relatively mildly (Ferrara et al., 2023; Mirocha & Furlanetto, 2023), although such a coincidence would not by itself explain the correct absolute abundance of bright galaxies. A number of studies have also examined the possibility that the high abundance of early massive galaxies implies physics beyond the standard \(\Lambda\)CDM cosmology, such as a modified primordial power spectrum (Hirano & Yoshida, 2023; Padmanabhan & Loeb, 2023; Parashari & Laha, 2023; though see Sabti et al., 2023), primordial non-Gaussianity (Biagetti et al., 2023), or alternative dark matter models (Bird et al., 2023; Dayal & Giri, 2023; Gong et al., 2023).
Footnote 1: Some recent studies found that galaxies with properties similar to observed ones could be reproduced in simulations (e.g., Keller et al., 2023; McCaffrey et al., 2023). However, these studies did not directly model the UVLF and compare it with available JWST measurements.
Another promising avenue to elevate the abundance of bright galaxies is the strong time variability ("burstiness") of star formation. In recent years, several different galaxy formation simulations have predicted that the star formation rate (SFR) is highly time-variable in low-mass galaxies (e.g., Hopkins et al., 2014; Dominguez et al., 2015; Muratov et al., 2015; Sparre et al., 2017; Pallottini & Ferrara, 2023). The prediction of bursty star formation appears generic to codes that resolve the clustering of supernovae in the interstellar medium (Hu et al., 2023). The simulations predict that bursty star formation is especially common in low-mass galaxies, likely due to the shallow potential wells which allow clumpy, cold inflows and outflows to drive repeated inflow-star formation-outflow cycles (Stern et al., 2021; Gurvich et al., 2023; Byrne et al., 2023; Hopkins et al., 2023). Since low-mass galaxies dominate at high redshift, we expect the implications of bursty star formation on the UVLF to be particularly important in this regime (e.g., Furlanetto & Mirocha, 2022). Indeed, evidence for an increased level of bursty star formation has emerged from recent _JWST_ observations of cosmic dawn galaxies (e.g., Dressler et al., 2023; Endsley et al., 2023; Looser et al., 2023a,b). As pointed out in recent theoretical studies (Mason et al., 2023; Mirocha & Furlanetto, 2023; Shen et al., 2023; Munoz et al., 2023), an increased level of UV variability sourced by bursty star formation can give rise to more UV-bright galaxies due to the Eddington bias, which flattens the bright end of the UVLF. In this case, the observed UVLFs at \(z\gtrsim 10\) could potentially be explained by bursty star formation combined with "normal" SFE and production efficiency of UV photons. While bursty star formation can in principle enhance the abundance of bright galaxies, it remains to be shown whether the enhancement is sufficient to reproduce the observed bright-end of the UVLF in a self-consistent galaxy formation model, such as those provided by hydrodynamic simulations.
In this Letter, we use a suite of cosmological "zoom-in" simulations from the Feedback in Realistic Environments (FIRE) project2 to investigate the effects of bursty star formation on the UVLF at \(8\leq z\leq 12\). In these simulations, the SFR variability arises self-consistently from the modeling of standard stellar feedback processes. It is noteworthy that these simulations -- generated before the launch of _JWST_ -- were in particular not in any way tuned to match recent
observations. Moreover, the simulations use exactly the same FIRE-2 code (Hopkins et al., 2018) that has been used to evolve large sets of simulated galaxies all the way to \(z=0\) and demonstrated to produce broadly realistic galaxy properties down to the present time (e.g. Wetzel et al., 2023, and references therein). This is in contrast with many other simulations of cosmic dawn galaxies, in which the simulations are stopped at high redshift and for which we therefore do not know how the feedback model performs at lower redshifts. We show that the FIRE-2 simulations produce an excellent match to the UVLF recently measured by _JWST_ during cosmic dawn, and that the time variability of star formation plays an important role in explaining the observations at the bright end. These results constitute an important test of the feedback model and highlight the importance of considering the variability of star formation when modeling high-\(z\) observations.
Throughout the Letter, we adopt a flat \(\Lambda\)CDM cosmology consistent with Planck Collaboration et al. (2020), and all magnitudes are quoted in the AB system (Oke and Gunn, 1983).
## 2 Simulations and Analysis Methods
### The Simulations
In this Letter, we analyze the same set of simulations as recently studied by Sun et al. (2023), which is a subset of the _High-Redshift_ suite (Ma et al., 2018, 2019, 2019) of the FIRE-2 cosmological zoom-in simulations (Hopkins et al., 2018). The FIRE-2 simulations use the GIZMO code with its meshless-finite mass (MFM) hydro solver (Hopkins, 2015), and include multiple channels of stellar feedback to regulate star formation. Star formation occurs in dense molecular gas (\(n_{\rm H}>1000\,{\rm cm}^{-3}\)) that is self-gravitating and self-shielding. The stellar feedback mechanisms implemented include: (1) energy, momentum, mass, and metal injection from core-collapse and Type Ia supernovae and winds from OB and AGB stars, (2) photoionization and photoelectric heating, and (3) radiation pressure. A redshift-dependent but homogeneous ionizing background is also included following Faucher-Giguere et al. (2009).3 The baryonic (dark matter) mass resolution of the set of simulations considered in this work is \(m_{\rm b}=7\times 10^{3}\,M_{\odot}\) (\(m_{\rm DM}=4\times 10^{4}\,M_{\odot}\)), except for the simulations z5m11a and z5m11b, which have \(m_{\rm b}\approx 1\times 10^{3}\,M_{\odot}\) (\(m_{\rm DM}=5\times 10^{3}\,M_{\odot}\)). The gravitational softenings are fixed in physical units to \(\epsilon_{\rm DM}=42\,{\rm pc}\) for the dark matter and \(\epsilon_{\rm star}=2.1\,{\rm pc}\) for stars. The gravitational softenings are adaptive for gas, with a minimum of \(\epsilon_{\rm b}=0.42\,{\rm pc}\). This is, again, with the exception of z5m11a and z5m11b (see Figure 4 for a list of simulation IDs considered in this work), which have \(\epsilon_{\rm DM}=21\,{\rm pc}\), \(\epsilon_{\rm star}=1.4\,{\rm pc}\), and \(\epsilon_{\rm b}=0.28\,{\rm pc}\).
Footnote 3: The version of the ionizing background used in these simulations reionizes the universe at \(z_{\rm reion}\approx 10\), which is earlier than the mid-point of reionization of \(z_{\rm reion}\approx 8\) favored by more recent observational constraints (e.g., Planck Collaboration et al., 2020; Faucher-Giguère, 2020). However, our main results focus on the bright end of the UVLF, which arises from relatively massive halos, whereas the suppression of galaxy formation due to heating by the ionizing background primarily affects low-mass halos (\(M_{\rm h}\lesssim 10^{9}\,M_{\odot}\); e.g. Gnedin, 2000; Noh and McQuinn, 2014). Moreover, an earlier reionization redshift implies that in the present simulations, galaxy formation is suppressed starting earlier in the small halos, so adopting a more up-to-date reionization model would (if anything) enhance the predicted UV luminosity density. Similar arguments apply to other IGM heating processes.
Part of the _High-Redshift_ suite of simulations was presented and analyzed in detail by Ma et al. (2018a,b) for the predicted properties of the simulated galaxy population at \(5\leq z\leq 12\), including sizes, morphologies, scaling relations, and number statistics measured by the stellar mass and luminosity functions. In this follow-up analysis of Ma et al. (2018) motivated by recent _JWST_ observations of the abundance of galaxies at \(z\gtrsim 10\), we follow closely the methodology adopted in Ma et al. (2018) for fair comparisons, but the sample size of high-\(z\), massive galaxies has been substantially increased to better determine the bright-end behavior of galaxy UVLFs at cosmic dawn. Below, we will only briefly summarize the key information about the sample of simulated galaxies pertinent to the analysis presented here. We refer interested readers to the aforementioned papers for further details about the FIRE-2 simulations and the _High-Redshift_ suite.
For a robust analysis of UVLFs at their bright end, we build a maximum possible sample size of massive galaxies by making use of all the zoom-in simulations available at each redshift above the ending redshift \(z_{\rm end}\). In each zoom-in region, we consider all the well-resolved halos4 that host a _central_ galaxy, rather than the one hosting just the most massive, primary galaxy (typically near the center of the zoom-in region). Following Ma et al.
(2018b), we define galaxies based on catalogs of halos identified with the Amiga Halo Finder (AHF; Knollmann and Knebe, 2009). The radius \(R_{\rm max}\) at which the halo rotation curve reaches maximum is used to define a galaxy by incorporating star particles within \(R_{\rm max}/3\) and excluding the contamination from subhalos outside \(R_{\rm max}/5\). We restrict the scope of our UVLF analysis to halos with mass \(M_{\rm h}>10^{7.5}\,M_{\odot}\) in snapshots at \(8\leq z\leq 12\) because most of the recent UVLF measurements at \(z<8\) with _JWST_ can be well explained by previous theoretical predictions and a sufficiently constraining sample of spectroscopically-confirmed galaxies is not available at \(z>12\)(Harikane et al., 2023). In Appendix A, we illustrate how the halo/galaxy sample is constructed with (snapshots of) the 26 individual zoom-in simulations, which build up a total sample of \(\approx 25,000\) galaxy snapshots over \(8\leq z\leq 12\). For all simulations, snapshots are saved at a cadence of every 10-20 Myr.
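One plausible reading of this galaxy definition, as a NumPy sketch with assumed array shapes (star-particle positions and subhalo centers/radii taken from the AHF catalog, all in physical units), is as follows.

```python
import numpy as np

def galaxy_star_mask(star_pos, halo_center, r_max, sub_centers, sub_radii):
    # Keep star particles within R_max/3 of the halo center, and mask out
    # particles belonging to subhalos whose centers lie outside R_max/5.
    r = np.linalg.norm(star_pos - halo_center, axis=1)
    mask = r < r_max / 3.0
    for center, radius in zip(sub_centers, sub_radii):
        if np.linalg.norm(center - halo_center) > r_max / 5.0:
            mask &= np.linalg.norm(star_pos - center, axis=1) > radius
    return mask
```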
### Processing of the Simulations
We process the simulated galaxy sample in order to arrive at their 1600 Å UV magnitudes \(M_{\rm UV}\), following Sun et al. (2023). Templates of binary, single-stellar-population (SSP) spectra from BPASS v2.1 (Eldridge et al., 2017) are interpolated and applied to star particles according to their stellar age and metallicity, assuming a Kroupa IMF (Kroupa, 2001). Including nebular (continuum) emission can in principle augment both the UV emissivity and variability (Byler et al., 2017), although we opt to ignore it here as nebular emission is not expected to strongly affect the measurement of \(M_{\rm UV}\), especially when compared with effects of SFR variations. Two notable differences from Sun et al. (2023) exist, though, for the treatment of (1) the connection between \(M_{\rm UV}\) and the SFH and (2) dust attenuation, on which we elaborate below.
#### 2.2.1 Bursty vs Smoothed Star Formation Histories
At cosmic dawn, an increased SFR variability can strongly modulate the observed number statistics of galaxies. To assess the impact of bursty star formation on the \(M_{\rm UV}\)-\(M_{\rm h}\) relation and thus the UVLF, we consider two contrasting scenarios to model \(M_{\rm UV}\).
The baseline scenario, which we refer to as "bursty", assumes that the SFH of each galaxy in our sample is exactly as predicted by the simulations and thus \(M_{\rm UV}\) can be derived by summing up the spectral emissivities of all star particles of the galaxy at a given snapshot according to their age and metallicity, as in Sun et al. (2023). This is the approach most faithful to the SFHs predicted by the simulations. In this approach, \(M_{\rm UV}\) naturally inherits the burstiness predicted by the simulations -- as the SFR varies, the UV 1600 A luminosity of the galaxy also fluctuates accordingly because most of FUV continuum emission is sourced by the massive, short-lived stars formed. As a result, a bursty SFH imprints significant stochasticity in \(M_{\rm UV}\) at a fixed stellar or halo mass.
In the contrasting scenario, which we refer to as "smoothed", we artificially reduce the impact of bursty SFH on the evaluation of \(M_{\rm UV}\) by redistributing the ages of star particles (while retaining their metallicities). Specifically, we first define a smoothing kernel of duration \(\tau_{\rm SF}\) Myr and bin star particles using their star formation times into time bins of width \(\tau_{\rm SF}\). We then redistribute the ages of the star particles in individual bins such that the stellar mass forms at a nearly constant rate by enforcing evenly-distributed star formation times within each bin. This redistribution of stellar ages effectively smooths the SFH and reduces to the "bursty" case for a sufficiently small \(\tau_{\rm SF}\). Notably, unlike some previous work where effects of varying the UV variability on UVLFs are studied assuming a fixed mean/median \(L_{\rm UV}\)-\(M_{\rm h}\) relation (e.g., Mirocha and Furlanetto, 2023; Shen et al., 2023), our method by its nature conserves the total amount of cosmic star formation such that the two scenarios differ only in terms of the short-timescale SFR variability and its impact on the UV emissivity.
#### 2.2.2 Dust Attenuation
Observations have shown compelling evidence of early chemical enrichment and the production of non-negligible dust in galaxies at \(z\gtrsim 7\)(Tamura et al., 2019; Fudamoto et al., 2021; Witstok et al., 2023). A reasonable treatment of dust attenuation is therefore needed for our predictions of the UVLF at cosmic dawn, especially at the bright end because massive (intrinsically UV-bright) galaxies generally contain more dust.
To estimate the effect of dust attenuation on \(M_{\rm UV}\), we employ an empirical model motivated by an up-to-date measurement of the \(\beta_{\rm UV}\)-\(M_{\rm UV}\) (color-magnitude) relation at \(z>8\) by Cullen et al. (2023) using a combination of _JWST_ and ground-based observations5. We combine the best-fit relation \(\beta_{\rm UV}=-0.17M_{\rm UV}+5.40\) with the attenuation-UV slope relation, \(A_{\rm UV}=0.48(\beta_{\rm UV}+2.62)\), determined from \(z\approx 5.5\) galaxies observed in the ALPINE survey (Fudamoto et al., 2020; see also Reddy et al., 2018). While an extrapolation in redshift is involved, this best-fit relation from ALMA observations
represents a state-of-the-art empirical baseline for estimating dust attenuation properties at cosmic dawn, which should suffice for the purpose of this work. We neglect the scatter around these mean relations given its small impact on \(M_{\rm UV}\) and caution that results with dust attenuation included that follow should be taken as rough estimates only. The validity of these simplistic treatments can be tested with simulations with detailed dust radiative transfer (Cochrane et al., 2019, 2022; Ma et al., 2019; Vogelsberger et al., 2020; Shen et al., 2022) and multi-wavelength observations (Akins et al., 2023; Bakx et al., 2023), which are left for future work. We note, though, that at \(z>10\) the difference between UVLFs with and without dust attenuation is predicted to be very small in our model (see Figure 2), such that uncertainties in the treatment of dust should not affect our results significantly.
### Estimating the UVLF from Zoom-in Simulations
Using UV magnitudes derived for the sample of simulated galaxies binned into redshift bins of width \(\Delta z=\pm 0.5\), we calculate the UVLF through a convolution with the halo mass function (HMF) following the "HMF-weighting" method introduced by Ma et al. (2018). This method has been verified to provide robust estimates of the UVLF from galaxy samples drawn from zoom-in simulations, so we only summarize briefly here. First, in narrow halo mass and redshift bins, we count the number of simulated halos \(N_{\rm S}\) from the sample and compute the expected number of halos \(N_{\rm E}\), which scales with the HMF, \({\rm d}n/{\rm d}\log M_{\rm h}\), calculated using the hmf code (Murray et al., 2013) for the fitting function from Behroozi et al. (2013). A common weight \(w=N_{\rm E}/N_{\rm S}\) is assigned to all the halos in the same bin, such that a summation of halo weights in a given mass bin yields the expected number of halos in the universe. These weights are then applied to sample galaxies binned in \(M_{\rm UV}\) to obtain the UVLF, which is essentially a convolution between the HMF and \(M_{\rm UV}\)-\(M_{\rm h}\) relation including the full, \(M_{\rm h}\)-dependent distribution (see Section 2.4 of Ma et al. 2018). Finally, we stress that, compared with Ma et al. (2018) where only a subset of the _High-Redshift_ suite was analyzed, we substantially increase the number of samples of massive halos/bright galaxies in this work (a factor 8 increase of halos with \(M_{\rm h}>10^{10}\,M_{\odot}\) at \(z=10\)) by considering the full _High-Redshift_ suite as in Ma et al. (2019), thereby extending the magnitude down to which the UVLF at \(z>10\) can be reliably determined to \(M_{\rm UV}<-20\), overlapping with the bright-end UVLF probed by _JWST_.
## 3 Results
### The \(M_{\rm UV}\)-\(M_{\rm h}\) Relation and the UVLF
Following the methods outlined in Sections 2.2 and 2.3, we first use our samples of simulated galaxies to quantify the \(M_{\rm UV}\)-\(M_{\rm h}\) relation in different redshift regimes, assuming either "bursty" or "smoothed" SFH. A comparison of the \(M_{\rm UV}\)-\(M_{\rm h}\) relations at \(z=8\), 10, and 12 from our simulations is shown in the top row of Figure 1. Overall, galaxies become more UV-bright at higher \(M_{\rm h}\) and, at a given \(M_{\rm h}\), \(M_{\rm UV}\) decreases modestly with increasing redshift as a result of more rapid halo growth at higher redshift. A significant scatter in \(M_{\rm UV}\) around the median relation that gradually increases towards lower masses exists, which is a sign of increasing star formation burstiness at low masses, given the proportionality between \(L_{\rm UV}\) and the SFR. At a fixed \(M_{\rm h}\), we find a modest trend for the scatter in \(M_{\rm UV}\) to decrease with decreasing redshift that continues to \(z<8\) (not shown). This tentative evidence for the redshift evolution of the UV variability might be testable using comparisons of SFR indicators sensitive to different star formation timescales or high-precision measurements of the halo-galaxy connection with galaxy clustering (see Section 4). From the comparison between the "bursty" and "smoothed" cases shown by the 5-95th percentiles (especially in the top middle panel where three "smoothed" cases with varying \(\tau_{\rm SF}\) are shown), it can be seen that evaluating \(M_{\rm UV}\) from a smoothed SFH leads to a shallower \(M_{\rm UV}\)-\(M_{\rm h}\) relation with a reduced scatter in \(M_{\rm UV}\) at higher masses, which effectively suppresses the population of UV-bright galaxies at a given \(M_{\rm h}\).
In the bottom row of Figure 1, we show the UVLF at \(z=8\)-12 implied by the \(M_{\rm UV}\)-\(M_{\rm h}\) relation. From the comparisons against recent observational constraints and between the two SFH cases, several key results are immediately apparent. First, in the fiducial, "bursty" SFH scenario, the predicted UVLFs agree remarkably well with the observational constraints available. In particular, our \(z\gtrsim 10\) predictions lie safely above the firm lower bounds set by the dust-uncorrected, spectroscopically-confirmed samples recently compiled by Harikane et al. (2023), and they are also broadly consistent with the variety of measurements based on photometrically selected candidates (see the caption for
details)6. Unlike some other theoretical predictions (e.g., Mason et al., 2023; Yung et al., 2023), for which a clear tension with the spec-\(z\) lower bounds exists without modifications, our bursty-case predictions do not require any additional tuning of UV variability or production efficiency to match observations. Despite uncertainties associated with the treatment of dust, this good agreement implies that the UVLFs observed by _JWST_ at \(z\gtrsim 10\) are consistent with generally "normal" SFE and UV production efficiency as predicted by the FIRE-2 simulations. As demonstrated in Ma et al. (2018), the relation between \(M_{*}\) and \(M_{\rm h}\) in these simulations is broadly consistent with extrapolations from lower \(z\) where empirical analyses show that the SFE is strongly suppressed by stellar feedback in low-mass halos (Behroozi et al., 2013; Tacchella et al., 2018).
Footnote 6: We have verified by bootstrapping 1000 times the simulated galaxy samples that the statistical uncertainty on the UVLF, especially at the bright end, is small enough that it does not affect the bright-end comparisons of interest to this study. In the brightest bin, the \(1\sigma\) statistical uncertainties in \(\log\phi\) estimated from bootstrapping are approximately \(0.15\,\)dex, \(0.15\,\)dex, and \(0.3\,\)dex at \(z=8\), \(10\), and \(12\), respectively.
Second, in the contrasting, "smoothed" SFH scenario, a clear deficit of UV-bright galaxies is seen as a result of suppressed up-scattering in \(M_{\rm UV}\) of low-mass halos when the SFR is averaged over a long timescale \(\tau_{\rm SF}\). The underestimated abundance of UV-bright galaxies reveals the important role played by the burstiness of star formation in determining the number statistics of galaxies at cosmic dawn. As also shown by the comparison of different \(\tau_{\rm SF}\) values at \(z=10\), smoothed SFHs with \(\tau_{\rm SF}\gtrsim 100\,\)Myr result in bright-end UVLFs that are too steep compared with observations, especially the photometrically selected samples, for which the bright-end UVLF can be underpredicted at \(>2\
Figure 1: _Top:_ UV magnitude–halo mass relations at \(z=8\)–\(12\). Data for individual galaxies are denoted by the grey dots (no smoothing applied to the SFH). The thick solid curves indicate the range of the 5th and 95th percentiles in the “bursty” and “smoothed” cases, from which the suppression of bright galaxy number counts due to smoothing is apparent. _Bottom:_ UVLFs at \(z=8\)–\(12\) derived from the convolution between the UV magnitude–halo mass relation and the HMF. Dust-free predictions are shown as solid for both “bursty” and “smoothed” cases, whereas the dust-attenuated scenario is shown as dashed for only the “bursty” case (Section 2.2.2) for visual clarity. Constraints from observations are shown by the data points in black for the spectroscopically-confirmed-only samples (Harikane et al., 2023) and in grey for data sets involving photometric candidates (Oesch et al., 2018; Bowler et al., 2020; Rojas-Ruiz et al., 2020; Bouwens et al., 2021, 2023; Finkelstein et al., 2022; Leethochawalit et al., 2022; Castellano et al., 2023; Donnan et al., 2023; Harikane et al., 2023; Pérez-González et al., 2023). Cases with larger and smaller smoothing timescale \(\tau_{\rm SF}\) values than the fiducial one (\(100\,\)Myr) are shown at \(z=10\) to illustrate the impact of SFH smoothing on the UVLF.
(Castellano et al., 2023; Donnan et al., 2023). At \(z=12\), predictions of the smoothed SFH are in tension with even the most conservative lower limits derived from only the spectroscopically confirmed samples (Harikane et al., 2023). It is therefore clear that the UVLF serves as a useful probe of the burstiness in the SFH, as been noted in e.g., Furlanetto & Mirocha (2022) and Shen et al. (2023), although in practice it can be challenging to extract the burstiness information from only the UVLF measurements (see Section 4). The overall shallower \(M_{\rm UV}\)-\(M_{\rm h}\) relation when the SFH is smoothed also leads to slightly steeper slope at the faint end, although the effect is much smaller than the suppression at the bright end.
The binned UVLFs without dust attenuation extracted from our simulations at \(z=8\), 10, and 12 are summarized in Table 1. As has been demonstrated in Figure 1, dust attenuation only modestly affects the UVLF at the very bright end, reducing \(\phi\) (in the brightest bin) by approximately 0.4, 0.25, and 0.01 dex at \(z=8\), 10, and 12, respectively. The binning scheme is chosen such that the brightest \(M_{\rm UV}\) bin contains more than ten simulated galaxies for robust statistics. Meanwhile, we fit the dust-free UVLF at \(8\leq z\leq 12\) assuming a universal double-power law (DPL) in \(M_{\rm UV}\),
\[\Phi(M_{\rm UV})=\frac{0.4(\ln 10)\,10^{\phi_{*}}}{10^{0.4(\alpha+1)(M_{ \rm UV}^{\rm UV}-M_{\rm UV})}+10^{0.4(\beta+1)(M_{\rm UV}^{\rm *}-M_{\rm UV})}}. \tag{1}\]
We specify the redshift-dependent DPL parameters \(\phi_{*}\), \(M_{\rm UV}^{*}\), \(\alpha\), and \(\beta\) in the form of a single power law as \(\phi_{*}(z)=\phi_{*,0}[(1+z)/10]^{\phi_{*,1}}\), \(M_{\rm UV}^{*}(z)=M_{\rm UV}^{*,0}[(1+z)/10]^{M_{\rm UV}^{*}}\), \(\alpha_{*}(z)=\alpha_{*,0}[(1+z)/10]^{\alpha_{*,1}}\), and \(\beta_{*}(z)=\beta_{*,0}[(1+z)/10]^{\beta_{*,1}}\), where the best-fit parameters are found to be \(\phi_{*,0}=-2.01\), \(\phi_{*,1}=0.68\), \(M_{\rm UV}^{*,0}=-17.26\), \(M_{\rm UV}^{*,1}=-0.08\), \(\alpha_{*,0}=-0.31\), \(\alpha_{*,1}=-0.93\), \(\beta_{*,0}=0.68\), and \(\alpha_{*,1}=0.93\).
Figure 2 shows a comparison between the binned and best-fit UVLFs predicted by our simulations and other theoretical predictions in the literature based on cosmological hydrodynamical simulations (Ocvirk et al., 2020; Vijayan et al., 2021; Dawoodbhoy et al., 2023; Kannan et al., 2023; Wilkins et al., 2023). Overall, our predicted UVLFs show a weaker redshift evolution beyond \(z=8\) compared with the predictions from the MillenniumTNG (Kannan et al., 2023) and CoDa II (Ocvirk et al., 2020; Dawoodbhoy et al., 2023) simulations, which results in a higher abundance of bright (\(M_{\rm UV}\lesssim-20\)) galaxies at \(z\gtrsim 10\). Our bright-end predictions are generally comparable to those from the FLARES simulations (Vijayan et al., 2021; Wilkins et al., 2023) in both normalization and slope, despite the vastly different nature of the simulations and methods to evaluate the UVLF. It is noteworthy, though, that the FIRE-2 simulations analyzed in this work have significantly higher resolution (\(m_{\rm b}\approx 7\times 10^{3}\,M_{\odot}\) in FIRE-2 vs. \(m_{\rm b}\approx 2\times 10^{6}\,M_{\odot}\) in FLARES), which allows us to predict the UVLFs at \(8\leq z\leq 12\) down to \(M_{\rm UV}\sim-10\) vs. the FLARES
\begin{table}
\begin{tabular}{r r r r r r} \hline \hline \(M_{\rm UV}\) & \(\log\phi\) & \(M_{\rm UV}\) & \(\log\phi\) & \(M_{\rm UV}\) & \(\log\phi\) \\ \hline \multicolumn{2}{c}{\(z=8\)} & \multicolumn{2}{c}{\(z=10\)} & \multicolumn{2}{c}{\(z=12\)} \\ \cline{2-6} \multicolumn{1}{c}{\(-10.5\)} & \(-0.085\) & \(-10.25\) & \(-0.207\) & \(-9.75\) & \(-0.234\) \\ \(-12.5\) & \(-0.570\) & \(-12.25\) & \(-0.677\) & \(-11.75\) & \(-0.971\) \\ \(-14.5\) & \(-1.206\) & \(-14.25\) & \(-1.242\) & \(-13.75\) & \(-1.576\) \\ \(-16.5\) & \(-1.926\) & \(-16.25\) & \(-2.124\) & \(-15.75\) & \(-2.200\) \\ \(-18.5\) & \(-2.815\) & \(-18.25\) & \(-3.072\) & \(-17.75\) & \(-3.282\) \\ \(-20.5\) & \(-3.872\) & \(-20.25\) & \(-4.344\) & \(-19.75\) & \(-4.500\) \\ \(-22.5\) & \(-5.158\) & \(-22.25\) & \(-5.902\) & & \\ \hline \multicolumn{2}{l}{**Notes.**} & \multicolumn{1}{c}{\(\phi\) values are quoted in units of mag\({}^{-1}\) Mpc\({}^{-3}\). See Equation (1) for analytic fits to the UVLF over \(8<z<12\). For reference, in the two brightest bins, \(\phi\) is extracted from a sample of (39, 17), (39, 13), (93, 17) galaxies at \(z=8\), 10, and 12, respectively. & \\ \hline \end{tabular}
\end{table}
Table 1: Dust-free UVLFs at \(z=8\), 10, and 12 from the simulated galaxies.
Figure 2: Dust-free UVLFs at \(z=8\), 10, and 12 predicted by the FIRE-2 simulations and from the literature. The binned and the best-fit, double-power law UVLFs are denoted by the crosses and solid curves, as specified in Table 1 and Equation (1), respectively. Several example dust-free predictions from other cosmological hydrodynamical simulations, including MillenniumTNG (dashed, Kannan et al., 2023), FLARES (dotted, Vijayan et al., 2021; Wilkins et al., 2023), and CoDa II (dotted and only at \(z=8\) and 10, Ocvirk et al., 2020; Dawoodbhoy et al., 2023) are also plotted for comparison.
predictions down to \(M_{\rm UV}\sim-18\). We have also verified that UVLFs in this work and from Ma et al. (2018, 2019) are in good agreement in the overlapping regime.
### UV Luminosity Density
By integrating the predicted UVLFs, we can derive the UV luminosity density, \(\rho_{\rm UV}\), as a function of time, which traces the cosmic star formation rate density (SFRD). Since at \(z\gtrsim 10\) only the brightest end (\(M_{\rm UV}\ll M_{\rm UV,*}\)) of the UVLF has been probed, we follow Harikane et al. (2023) to compare the UV luminosity density contributed by galaxies brighter than \(M_{\rm UV}=-18\), namely \(\rho_{\rm UV,bright}=\rho_{\rm UV}(M_{\rm UV}<-18)\), which corresponds to the contribution from halos with \(M_{\rm h}\gtrsim 10^{10}\,M_{\odot}\) at \(z=10\). The unconstrained contribution by fainter, lower-mass galaxies is highly sensitive to the faint-end slope of the UVLF and might even outweigh \(\rho_{\rm UV,bright}\)(Sun & Furlanetto, 2016), but the comparison restricted to \(M_{\rm UV}<-18\) galaxies still serves as a useful test of the overall abundance of bright, massive galaxies and their SFE at cosmic dawn7.
Footnote 7: Results from this work, Harikane et al. (2018, 2023), and Bouwens et al. (2023) are integrated down to \(M_{\rm UV,lim}=-18\), whereas the rest are down to \(M_{\rm UV,lim}=-17\). Figure 3 thus shows conservatively that our simulations without smoothing predict enough total UV emission compared with observations, regardless of the modest difference in \(M_{\rm UV,lim}\).
Figure 3 shows a comparison of the cumulative UV luminosity density between the dust-attenuated predictions from our simulations and a compilation of constraints from observations and theoretical forecasts in the literature. Throughout, dust-attenuated predictions from models/simulations (curves) are compared with observations (data points), which are dust-uncorrected. Over \(8\leq z\leq 12\), dust-attenuated luminosity densities predicted by our simulations without smoothing the SFH are fully consistent with observations of both photometric galaxy candidates and spectroscopically-confirmed galaxies that provide firm lower limits. Due to the integrated nature of \(\rho_{\rm UV}\), the "smoothed" case appears more consistent with the spec-\(z\)-only lower limits here than at the bright end of the UVLF as shown in Figure 2. In both cases with dust attenuation, a power-law evolution of \(\rho_{\rm UV}\propto(1+z)^{-0.3}\) over \(8\leq z\leq 12\) is implied, which appears more gradual compared with the predictions by some previously proposed semi-analytic/semi-empirical models, such as in Mason et al. (2015) and Harikane et al. (2018).
## 4 Discussion and Conclusions
We have demonstrated that the FIRE-2 simulations with a multi-channel implementation of standard stellar feedback processes can reproduce well the observed abundance of UV-bright galaxies at \(z\gtrsim 10\), including both the photometrically selected candidates and the spectroscopically confirmed sources recently discovered by _JWST_. We further showed that the bursty SFH predicted to be common in galaxies at cosmic dawn is important for explaining the bright-end of the UVLF. With burstiness included, the simulations demonstrate that a boosted UV emissivity due to, e.g., an enhanced SFE, a top-heavy IMF, AGN contributions, or Population III stars (see e.g., Harikane et al., 2023, 2023), is not necessary to explain the bright-end UVLF at \(z\gtrsim 10\). (This is of course not to say that none of these other effects could be present in the real universe, so it certainly remains interesting to investigate these other possibilities!) Compared to semi-analytic/empirical models (Mason et al., 2023; Mirocha & Furlanetto, 2023; Shen et al., 2023; Yung et al., 2023), our predictions based on the FIRE-2 simulations avoid ad hoc fine-tuning of the \(M_{\rm UV}\)-\(M_{\rm h}\) relation to match observations.
Though not shown explicitly in this Letter, we have verified that the stellar mass-halo mass (SMHM) relation, as a measure of the time-integrated, galaxy-scale SFE, \(f_{\star}\equiv M_{\star}/(f_{\rm b}M_{\rm h})\) (where \(f_{\rm b}=\Omega_{\rm b}/\Omega_{\rm m}\) is the cos
Figure 3: The cumulative UV luminosity density \(\rho_{\rm UV}(<M_{\rm UV,lim})\) integrated down to \(M_{\rm UV,lim}\simeq-18\) with dust attenuation included (see Section 2.2.2). At \(z\gtrsim 10\), some theoretical models (e.g., Mason et al., 2015; Harikane et al., 2018) underestimate \(\rho_{\rm UV}\) compared with observational constraints based on photometric galaxy candidates (e.g., Bouwens et al., 2023; Donnan et al., 2023; McLeod et al., 2023; Pérez-González et al., 2023) and/or spectroscopically-confirmed galaxies as firm lower limits (Harikane et al., 2023). Predictions from our “bursty” case are broadly consistent with both photometric and spectroscopic samples and show a slightly weaker redshift evolution \(\rho_{\rm UV}\propto(1+z)^{-0.3}\) over \(8\leq z\leq 12\).
mic baryon fraction), barely evolves over \(5\leq z\leq 12\) in our simulations. This is consistent with the previous results presented in Ma et al. (2018) based on a subset of the full _High-Redshift_ suite considered in this work (see their Figure 4). These results indicate that \(f_{\star}\) changes from approximately \(10^{-3.3}\) to \(10^{-1.5}\) as \(M_{\rm h}\) increases from \(10^{8}\,M_{\odot}\) to \(10^{11}\,M_{\odot}\) following a simple power law of slope \(\sim 0.6\) in log-log space. Thus, even though star formation is bursty, the galaxy-scale SFE is not strongly enhanced in these simulations relative to, e.g., an extrapolation of the SMHM relation empirically determined at lower redshift (Behroozi et al., 2019). In particular, our simulations do not appear to realize the "feedback-free starburst" scenario predicted by Dekel et al. (2023) using analytic arguments, which would result in \(f_{\star}\) values up to order-unity.8
Footnote 8: While the FIRE-2 simulations assume a local, _instantaneous_ SFE of 100% per free-fall time, this only applies in dense, self-gravitating gas (see the methods in Hopkins et al., 2018). On galaxy and molecular cloud scales, stellar feedback generally regulates the SFE to much lower values (e.g., Grudic et al., 2018; Orr et al., 2018; Gurvich et al., 2020).
We note that Pallottini and Ferrara (2023) also recently used a set of cosmological zoom-in simulations (SERRA; Pallottini et al., 2022) to investigate some implications of stochastic star formation in early galaxies for the abundance of \(z\gtrsim 10\) galaxies observed by _JWST_. By characterizing the distribution of time-dependent variations in the SFR of individual galaxies, they concluded that the predicted SFR variability cannot account for the required boost suggested by some recent literature to match the observed UVLF at \(z\gtrsim 10\)(Mirocha and Furlanetto, 2023; Shen et al., 2023). However, Pallottini and Ferrara (2023) did not self-consistently derive the UVLF from their simulations. Since other physical factors such as the SFE also impact the UVLF, in addition to burstiness (Mirocha and Furlanetto, 2023; Munoz et al., 2023), in order to unambiguously gauge the importance of bursty star formation it is desirable to perform a self-consistent, end-to-end study of the UVLF as we do in this work.
Looking ahead, a detailed characterization of the SFR variability on different timescales will shed light on the physical processes at play in the build-up of galaxies at early times, as has been demonstrated in recent work using periodogram (Pallottini and Ferrara, 2023) or more generally power spectral density (PSD) analysis (Iyer et al., 2020; Tacchella et al., 2020). Moreover, various implications of bursty star formation should be explicitly considered when interpreting observations of high\(-z\) galaxies. For example, Sun et al. (2023) showed that SFR variability introduces important selection effects in rest UV-selected samples. Since most galaxies at cosmic dawn may form stars in a highly bursty manner, the impact of burstiness on galaxy number statistics also raises questions about how to reliably constrain cosmology with high-\(z\) galaxy observations (Sabti et al., 2023). At the same time, it is of great interest to investigate how to observationally characterize the time variability of star formation and its mass and redshift dependence, e.g. using SFR indicators sensitive to different timescales (Sparre et al., 2017; Flores Velazquez et al., 2021; Sun et al., 2023) or the spatial clustering of galaxies (Munoz et al., 2023). Quantifying the effects of bursty star formation on statistics such as galaxy clustering is a critical stepping stone towards the usage of high-\(z\) galaxies as robust cosmological probes.
The authors thank the anonymous reviewer for comments that helped improve this Letter, as well as Pratik Gandhi, Yuichi Harikane, and Julian Munoz for helpful discussion. GS was supported by a CIERA Postdoctoral Fellowship. CAFG was supported by NSF through grants AST-2108230 and CAREER award AST-1652522; by NASA through grants 17-ATP17-0067 and 21-ATP21-0036; by STScI through grant HST-GO-16730.016-A; and by CXO through grant TM2-23005X. The Flatiron Institute is supported by the Simons Foundation. AW received support from: NSF via CAREER award AST-2045928 and grant AST-2107772; NASA ATP grant 80NSSC20K0513; and HST grants AR-15809, GO-15902, GO-16273 from STScI. The simulations used in this Letter were run on XSEDE computational resources (allocations TG-AST120025, TG-AST130039, TG-AST140023, and TG-AST140064). Additional analysis was done using the Quest computing cluster at Northwestern University. BPASS (Eldridge et al., 2017), GizmoAnalysis (Wetzel et al., 2016; Wetzel and Garrison-Kimmel, 2020), hmf(Murray et al., 2013)
## Appendix A Forming the halo/galaxy sample
Throughout, we analyze snapshots of a galaxy in a \(\Delta z=0.5\) bin multiple times per the cadence at which snapshots are stored (every 10-20 Myr). While the same galaxy from neighbouring snapshots are not strictly independent as far as \(M_{\rm UV}\) is considered, this method is useful because the highly time-variable SFR limits the temporal correlation between consecutive snapshots. It yields a large statistical sample appropriate for UVLF analysis (see Figure 4) and the sampling cadence does not bias the results, as have been shown by analyses that randomly exclude approximately half of the samples (Ma et al., 2018). At \(z=8\), 10, and 12, the UVLF is evaluated from a sample of approximately 12,000, 9,000, and 4,000 galaxies, respectively. Summing over the three redshift bins, this yields \(\approx 25,000\) galaxy samples in total.
|
2308.14846 | Trust in Construction AI-Powered Collaborative Robots: A Qualitative
Empirical Analysis | Construction technology researchers and forward-thinking companies are
experimenting with collaborative robots (aka cobots), powered by artificial
intelligence (AI), to explore various automation scenarios as part of the
digital transformation of the industry. Intelligent cobots are expected to be
the dominant type of robots in the future of work in construction. However, the
black-box nature of AI-powered cobots and unknown technical and psychological
aspects of introducing them to job sites are precursors to trust challenges. By
analyzing the results of semi-structured interviews with construction
practitioners using grounded theory, this paper investigates the
characteristics of trustworthy AI-powered cobots in construction. The study
found that while the key trust factors identified in a systematic literature
review -- conducted previously by the authors -- resonated with the field
experts and end users, other factors such as financial considerations and the
uncertainty associated with change were also significant barriers against
trusting AI-powered cobots in construction. | Newsha Emaminejad, Reza Akhavian, Ph. D | 2023-08-28T19:07:14Z | http://arxiv.org/abs/2308.14846v1 | # Trust in Construction AI-Powered Collaborative Robots: A Qualitative Empirical Analysis
###### Abstract
Construction technology researchers and forward-thinking companies are experimenting with collaborative robots (aka cobots), powered by artificial intelligence (AI), to explore various automation scenarios as part of the digital transformation of the industry. Intelligent cobots are expected to be the dominant type of robots in the future of work in construction. However, the black-box nature of AI-powered cobots and unknown technical and psychological aspects of introducing them to job sites are precursors to trust challenges. By analyzing the results of semi-structured interviews with construction practitioners using grounded theory, this paper investigates the characteristics of trustworthy AI-powered cobots in construction. The study found that while the key trust factors identified in a systematic literature review -conducted previously by the authors- resonated with the field experts and end users, other factors such as financial considerations and the uncertainty associated with change were also significant barriers against trusting AI-powered cobots in construction.
## Introduction
The construction industry continues to adopt technologies that help address its grand challenges such as poor safety and productivity records and shortage of skilled labor. Collaborative robots (aka cobots), is a prime example of such technologies, and is increasingly becoming a major component of this evolution [1]. Cobots can revolutionize the construction industry by making tedious, repetitive, and physically demanding tasks safer, more efficient, and with higher cost effectiveness [14]. They are equipped with advanced sensors and safety features that allow them to perform tasks with precision and avoid accidents. Cobots, are now being used in a wide range of construction tasks, from bricklaying to welding, 3D printing, heavy lifting, manual material handling, and inspection [15]. By augmenting human workers, cobots help reduce fatigue and increase productivity, while also freeing up workers to focus on more complex tasks [11]. The use of cobots in construction also helps improve project timelines and reduce overall costs, making them an attractive investment for companies looking to stay competitive in the ever-evolving construction industry [16]. However, despite the many benefits of cobots, there are also major challenges that need to be addressed before they can be fully integrated into the construction jobsites. One of the most important of these is building trust between construction workers and cobots [17]. Trust is a complex concept that can be defined as the belief in the reliability and integrity of someone or something [18]. In
the context of construction, trust between workers and cobots is important for ensuring that these technologies are used effectively and safely (Emaminejad, Maria North, and Akhavian 2021).
## Research Background
Recent studies have focused on the acceptance and trust of collaborative robots (cobots) in industrial workplaces. These studies have examined factors such as socio-technical systems, interpretability, predictability, transparency, reliability, framing, and human factors. One study proposed a conceptual model that combined the Unified Theory of Acceptance and Use of Technology (UTAUT) and Socio-Technical Systems theory (STS) to understand critical factors influencing the acceptance of cobots and drive perceived work performance improvement at the organizational level (Prassida and Asfari 2022). Another study explored the effectiveness of different light- and motion-based cobot signals in various collaborative mini-games. The studies recommend design improvements for cobots, including programming and interface designs, educational technologies, and careful selection of information to counteract negative effects of failures (Mara et al. 2021). In a study by Michaelis et al. (2020), interviews with manufacturing experts revealed that design improvements for cobots, including programming and interface designs, and educational technologies are required to support collaborative use (Michaelis et al. 2020). Another study by Kluy and Roesler et al. (2021) investigated the influence of transparency and reliability on perception of and trust towards cobots (Kluy and Roesler 2021). Kopp et al. (2022) explored how framing and perceived cooperativeness affect anthropomorphization and human-robot trust in inexperienced factory workers. Lambrechts et al. (2021) emphasized the need for phased implementation and the leadership role in cobot success by highlighting the importance of linking human factors to the future of work and focuses on reskilling and upskilling logistics professionals in response to robotization. The authors conducted a literature review on trust in cobots within construction projects and identified four trust dimensions: transparency and interoperability, reliability and safety, performance and robustness, and privacy and security. Trust across these dimensions is crucial for workers to embrace and collaborate effectively with cobots. Lack of trust can impede cobot adoption and implementation in construction. Trust is essential for collaboration, successful implementation, and worker comfort with cobots (Emaminejad and Akhavian 2022). The existing literature primarily focuses on cobots in manufacturing or other industries, with limited research on how these cobots are perceived in the construction industry. The current study aims to fill this gap by obtaining insights from practitioners in the AEC industry.
## Methodology
The previous literature exploration by the authors has resulted in identifying the technical and psychological factors that increase trust in cobots at both the organizational and individual levels. The authors then formulated the constructs of a theoretical model called Construction Robotics Adoption Drivers (ConRAD) by interviewing construction practitioners, with the goal of confirming or refuting previous findings from the literature review and gaining practical insights. The scope of the interviews was also covered other aspects such as the needs and challenges in training and upskilling construction practitioners for digital transformation that includes AI-powered cobots. The grounded theory was utilized as a research tool for systematic analysis of data to develop a theory that is grounded in the participants' experiences and perspectives (Soliman and Kan 2004). Figure 1 shows how grounded theory was applied in this study (Di Gregorio 2003).
The research team conducted in-depth interviews (approved by the San Diego State University Institutional Review Board (IRB)) with 11 construction professionals who met specific criteria, including having experience working in the AEC industry, working in various company types and construction industry sectors, having experience working with technology in the AEC, and conducting research on topics related to the implementation of intelligent cobots/robots in the AEC. The sample included a diverse group of participants including two Presidents/CEOs, an Executive Vice President, a Senior Director, a Robotics Lead, three VDC Managers, and three Superintendents from five different construction companies in the US. The interviews were conducted individually and online via Zoom, using a pre-approved protocol that included a description of the process, verbal consent, and a set of 8 semi-structured questions (Table 1). However, free discussion was allowed to unfold in a more natural conversation with unscripted questions being added in order to gain a clearer understanding of emerging concepts. The interviewees were shown a video of collaborative robots in practice, and a definitions table was
\begin{table}
\begin{tabular}{|p{227.6pt}|} \hline
1. What is your opinion about using robots in construction? \\ \hline
2. [after watching a short video involving three different types of cobots in construction settings] \\ Based on what you just watched, what is your opinion about using intelligent cobots in construction projects? \\ \hline
3. In your opinion, what are the challenges limiting the adoption of intelligent cobots in construction? \\ \hline
4. What makes you (not) trust an intelligent cobot? what aspect(s) (e.g., technical, management, financial, psychological, etc.) have the most impact on your opinion about trusting them? \\ \hline
5. In your opinion, is trusting intelligent cobots a top-down approach or bottom-up in construction projects and company organizational structure? Meaning that should it first be trusted by workers than the managers or the other way around? \\ \hline
6. To prepare project teams to work with intelligent cobots, what types of training (e.g., technical, soft skills, social) would you like to see for: either of these groups: \\ A. Field personnel (e.g., workers, foremen, superintendents) \\ B. Office personnel (e.g., project engineers, project managers, project executives) \\ \hline
7. A. Do you think that intelligent robots are poised to replace workers, so workers are assigned to higher-level roles? If so, in what capacity? \\ \hline
7. B. Do you want to see intelligent robots will replace workers, so workers are assigned to higher level roles? If so, in what capacity? \\ \hline
8. We will soon start a nationwide survey on this topic. Is there anything you are curious about and would like to gauge the opinion of the construction industry? \\ \hline \end{tabular}
\end{table}
Table 1: Interview questions.
Figure 1: Schematic overview of qualitative data analysis using grounded theory
provided to ensure a clear understanding of terms used during the interview. They took place between May and August 2022 and resulted in a total of 11.5 hours of audio-recorded conversations, which were transcribed word-for-word for accuracy. NVivo 12 software was used to code the data collected during the interviews into themes and categories and theories were continuously revised and refined to ensure that they remained grounded in the participants' experiences and perspectives.
## Results and Discussion
The main concepts and themes emerging from the interviewees' perceptions are summarized in Figure 2, which will be discussed in more detail in the following sections.
**General knowledge and opinion**. To begin collecting information through interviews in this research, it is important to assess the interviewee's level of general knowledge about robotics role, and impression of working alongside cobots in construction projects. Inadequate familiarity and basic understandings of a technology can pose a significant obstacle in building trust and gaining acceptance of that technology. Most of the interviewees had positive opinions about the potential of robotics in construction and there is a general consensus that robots have the potential to revolutionize the construction industry by increasing safety and productivity and facilitating collaboration between different parties. One of the interviewees referred to the use of cobots as a great equalizer between automation and human supervision for ensuring consistent product quality. Many of the interviewees confirmed an original hypothesis of this research that the robots can be used to perform repetitive or hazardous tasks, such as lifting heavy items, reducing the risk of back and soft tissue injuries, and freeing up human workers to do more complex work. However, the level of optimism regarding the current state of robotics in construction varied among the interviewees. Some interviewees have not seen any robots in-use and believed that they are only used for presentations and showcases. There was a sense that the construction industry is still in the early stages of exploring this technology, and there will be further advancements in the future to expect. Some interviewees expressed concerns about the effectiveness of robots in the construction industry. It was noted that cobots may not be suitable for all tasks, as not all jobs are repetitive, and they may require significant time and monetary invest also emphasized that robots should be able to adapt to changing conditions on construction sites and be able to learn from their experiences. Moreover, regarding the types of projects, some of them believed that large projects like bridge construction may be able to absorb the cost and benefits from using cobots, while smaller projects may struggle to justify the expense. Two superintendents raised concerns about safety, mentioning that they have not encountered robots at their workplace and view the idea as
Figure 2: Main trust barriers based on experts’ perceptions.
being in the testing phase. There were also concerns about the impact of robotics on human workers and their job security, which alludes the need for providing appropriate training and education to adapt to the changing nature of the construction industry. However, some other interviewees mentioned that robots can free up personnel to do more mentally intensive tasks that require creativity and complex decision-making, and that AI can be used to streamline workflows.
**Potential challenges for adoption**. One of the primary challenges for the adoption of intelligent cobots in the construction industry is the cost of manufacturing robots. In addition, standardization is essential to the implementation of robots in construction, as there are many engineering firms, and each project is configured differently. Different manufacturers have different designs and interfaces for their machines, which makes it difficult to develop standard training for workers to use them. The lack of understanding and education regarding the technology is another significant challenge. Construction companies are not aware of the benefits of using robots and new technologies. The implementation of cobots requires education and information campaigns to raise awareness and convince construction companies to use robots. The lack of a clear business case for using cobots is also an obstacle to the adoption of them in the construction industry. Companies need to see a clear return on investment before they invest in the technology. The complexity of the construction site and the difficulty of programming robots to adapt to the constantly changing environment, which is very challenging even for humans are also major challenges. Construction sites are often dynamic and chaotic, and there are many variables that can affect the ongoing work. Cobots need to be programmed to handle unpredictability, which requires sophisticated technology and algorithms. Moreover, potential closed-minded mentality of workers and the fear of losing their jobs to the machines are also big challenges. People tend to have a closed mindset about the benefits of a new technology and prefer to do things the old-fashioned way. People who come from a demographic that primarily depends on physical labor to make a living have a greater fear of losing their jobs to machines. They'll eventually get comfortable with the machines, but the biggest challenge lies in getting them to embrace the change.
**Trust barriers**. Many interviewees mentioned the importance of cobot's ability to perform tasks efficiently, without errors, and within a reasonable amount of time. Demonstrating cobot's ability to adapt to different environments, and process information can also help build trust. The interviewees also emphasized the need for proof of concept and testing to identify practical use-cases for the cobot and to find its limitations. Close monitoring and quality control can help mitigate safety concerns, and the use of case studies and sales pitches to demonstrate successful implementations of the technology in other organizations can help build trust. Moreover, several interviewees raised concerns about the "fear of the unknown" and the lack of understanding of how the technology works. Interviewees emphasized the importance of transparency and building trust with clients and stakeholders. They suggested that manufacturers must pay attention to quality control to avoid errors that could result in recalls or retraining. They also pointed out that multiple masons on a job site can provide more quality control than a single automated system and how the robot manufacturer justifies its ability to control quality should be clarified. The technical skills of the individual paired with the cobot are also essential. The aforementioned points align with the existing literature and validate the fact that trust between humans and robots is heavily influenced by factors such as reliability, performance, transparency, and interpretability. Some interviewees also highlighted the importance of the size and appearance of the cobot, suggesting that smaller cobots may be less intimidating and more approachable than larger humanoid robots. Interviewees
also emphasized the importance of safety and control, with several suggesting the need for manual human interfaces to override the cobot when necessary. Having control over the cobot is essential to gain workers' trust. A notable observation was that only three interviewees expressed worries about privacy and security when they were asked, specifically with the use of cameras in cobots, and suggested that manufacturers should implement appropriate measures to mitigate these concerns. They acknowledged that these concerns may be more prevalent among companies involved in larger, public construction projects.
**Trust initiation approach**. There was no clear consensus among participants on whether trusting intelligent cobots is a top-down or bottom-up approach in construction projects and company organizational structure. Some interviewees suggested that a top-down approach is essential since it is the management who makes decisions about investing in technology and providing training and support to workers. Others suggested a bottom-up approach is necessary since the workers are the ones who will use the technology and must feel comfortable and familiar with it. They need to be trained and allowed to ask questions and provide feedback. However, a majority of the interviewees suggested the need to a blend of the top-down and bottom-up approaches to build trust in intelligent cobots. They believed that managers should educate the workers about the benefits of cobots, involve them in the decision-making process, provide training, and allow them to provide feedback to improve the technology's performance. At the same time, top management should investigate the technology, research its benefits, and set expectations for the rest of the organization to embrace it. Some interviewees, on the other hand, suggested that depending on the situation, building trust in intelligent cobots may require a collaborative approach, where the company invests in technology, provides training, and communicates its benefits to workers who should show willingness to embrace the technology and provide feedback to improve it. Ideally, trust in cobots will only be achieved when both the management and workers are convinced of its benefits and potential to enhance efficiency and safety.
**Training**. Analyzing the responses regarding the types of training required to prepare project teams to work with intelligent cobots, several common themes emerged but overall, the responses suggest that there is no one-size-fits-all solution to preparing project teams to work with intelligent cobots. First, there was a consensus among the interviewees that technical training is crucial when working with robots, and safety should be a top priority. This includes training on how to use and operate the robots, as well as how to use them effectively to improve productivity. Interviewees also suggested that training should be tailored to the needs of the organization, and management needs to be involved in the process to ensure that everyone is on board with the change. Second, soft skills training may be necessary to ensure that personnel are open to change and willing to adapt to new technology. Interviewees suggested that this could include training in communication, teamwork, and adaptability. Third, interviewees suggested that it is important to communicate the benefits of the technology in a way that is easily understandable to different groups, such as the field and office personnel both. This could involve creating specialized roles focused on using the new technology and moving from job site to job site and ensuring that communication channels are established between different levels of personnel. Fourth, it is important to have a supportive organization culture where office personnel support the field personnel in adopting to the new teamwork schemes. Change agents could be identified among superintendents who can demonstrate to their colleagues the benefits of the technology, and education, training, and experience should be in balance for the best results. Fifth, the interviewees
suggested that training should be a continuous process, and it is important to have hands-on training in an environment similar to the construction site to build trust.
**Job security**. It appears that the prevailing view from interviewees is that while intelligent cobots are set to replace workers in specific roles, they will not completely supplant the labor force. Rather, they will transform job characteristics and necessitate different skill sets for job execution. They suggest that the integration of cobots will change the nature of jobs, allowing workers to focus on more skilled and higher-paying tasks. This shift will lead to more efficient and effective workflows that will allow companies to expand and create more jobs. Several interviewees believed that the use of cobots in the workforce is a natural progression given the trajectory of technology. The key challenge for the construction industry will be to find the right balance between the use of cobots and employing humans to perform tasks that require creativity and complex decision-making, and to ensure that workers are comfortable and confident in working alongside cobots and supervise them. The majority of them believed that cobots will not replace human workers, but instead augment their work which will help with the labor shortage. They suggested that industries will require workers with skills such as programming the robots, and new interdisciplinary jobs will be created as a result. Some believe that workers will be needed to perform certain tasks that cobots are not flexible enough to handle, while others suggest that workers will focus on developing skills in areas where machines are not efficient, such as creativity and emotional intelligence. They mentioned that workers' roles will shift to quality control and fine-tuning of the work done by cobots, as well as maintenance and repair of cobots. They also believed that workers with high-level expertise, such as welders and masons, working with technology developers, are vital to improve the cobots performance.
## Conclusion
Through interviews with 11 experts in construction industry, this study tries to understand factors that influence trust in cobots and adoption barriers in the construction industry. This understanding is critical for successful implementation and adoption of cobots and new technologies that are trusted and accepted by workers. The overall consensus was that the use of cobots has the potential to revolutionize the construction industry by increasing safety, productivity, and collaboration between different parties. However, there were some significant hurdles that need to be overcome, including the high cost of manufacturing robots, lack of standardization, complexity of the construction site, and fear of job loss. Additionally, building trust between humans and robots requires a multi-faceted approach that addresses a wide range of concerns, from technical functionality to ethical implications. Therefore, it is necessary to have a thoughtful and well-planned approach that includes education and awareness-raising campaigns to convince construction companies to use cobots and make a clear business case for investment. Technical training, soft skills training, effective communication, and a supportive company culture are also necessary for this approach. Furthermore, finding the right balance between the use of technology and the employment of human workers is essential.
|
2306.02550 | The Way to Quench: Galaxy evolution in Abell 2142 | We show how the star formation activity of galaxies is progressively
inhibited from the outer region to the center of the massive cluster A2142.
From an extended spectroscopic redshift survey of 2239 galaxies covering a
circular area of radius $\sim 11$~Mpc from the cluster center, we extract a
sample of 333 galaxies with known stellar mass, star formation rate, and
spectral index $D_n4000$. We use the Blooming Tree algorithm to identify the
substructures of the cluster and separate the galaxy sample into substructure
galaxies, halo galaxies and outskirt galaxies. The substructure and halo
galaxies are cluster members, whereas the outskirt galaxies are only weakly
gravitationally bound to the cluster. For the cluster members, the star
formation rate per stellar mass decreases with decreasing distance $R$ from the
cluster center. Similarly, the spectral index $D_n4000$ increases with $R$,
indicating an increasingly average age of the stellar population in galaxies
closer to the cluster center. In addition, star formation in substructure
galaxies is generally more active than in halo galaxies and less active than in
outskirt galaxies, proving that substructures tend to slow down the transition
between field galaxies and cluster galaxies. We finally show that most actively
star forming galaxies are within the cluster infall region, whereas most
galaxies in the central region are quiescent. | Cheng-Gong Qu, Heng Yu, Antonaldo Diaferio, Jubee Sohn, DengQi Liu | 2023-06-05T02:53:53Z | http://arxiv.org/abs/2306.02550v2 | # The Way to Quench: Galaxy Evolution in A2142
###### Abstract
We show how the star formation activity of galaxies is progressively inhibited from the outer region to the center of the massive cluster A2142. From an extended spectroscopic redshift survey of 2239 galaxies covering a circular area of radius \(\sim 11\) Mpc from the cluster center, we extract a sample of 333 galaxies with known stellar mass, star formation rate, and spectral index \(D_{n}4000\). We use the Blooming Tree algorithm to identify the substructures of the cluster and separate the galaxy sample into substructure galaxies, halo galaxies and outskirt galaxies. The substructure and halo galaxies are cluster members, whereas the outskirt galaxies are only weakly gravitationally bound to the cluster. For the cluster members, the star formation rate per stellar mass decreases with decreasing distance \(R\) from the cluster center. Similarly, the spectral index \(D_{n}4000\) increases with \(R\), indicating an increasing average age of the stellar population in galaxies closer to the cluster center. In addition, star formation in substructure galaxies is generally more active than in halo galaxies and less active than in outskirt galaxies, proving that substructures tend to slow down the transition between field galaxies and cluster galaxies. We finally show that most actively star forming galaxies are within the cluster infall region, whereas most galaxies in the central region are quiescent.
methods: data analysis - astronomical databases: catalogs - galaxies: structure - galaxies: star formation - galaxies: evolution - galaxies: interactions
## 1 Introduction
According to the current model of the formation of cosmic structures, clusters of galaxies form by gravitational instability from perturbations in the initial matter density field. Small groups of galaxies flow along the filaments of the cosmic web and contribute to the formation and evolution of galaxy clusters. In hierarchical scenarios, increasingly massive clusters form, on average, at increasingly later times (Neto et al., 2007; Boylan-Kolchin et al., 2009).
Spectrophotometric properties of galaxies are correlated with the density of the galaxy environment. For example, galaxies in the local universe show two distinct distributions in the color-magnitude diagram: a red sequence, mostly due to early-type galaxies, and a blue cloud, mostly due to star-forming late-type galaxies (Strateva et al., 2001; Blanton et al., 2003). Galaxies on the red sequence are generally located in the dense central regions of galaxy clusters, whereas blue-cloud galaxies populate less dense environments (Dressler, 1980; Postman & Geller, 1984; Balogh et al., 2004; Rawle et al., 2013; Crone Odekon et al., 2018; Mishra et al., 2019). In the current model of galaxy formation, late-type galaxies might evolve into early-type galaxies through various processes, including galaxy merging, tidal stripping, and ram pressure stripping (Bekki, 1999; Taylor & Babul, 2004). While falling from the outskirts to the center of a massive galaxy cluster, galaxies are likely to be affected by these types of interaction. Although the shock of the hot intracluster medium (ICM) acting on the cold gas of the galaxy can sometimes increase the star formation activity (Safarzadeh & Loeb, 2019), this ram pressure stripping mostly removes cold gas from the galaxy and inhibits star formation (Gunn & Gott, 1972; Balogh et al., 2000; Jablonka et al., 2013; Peng et al., 2015; Deshev et al., 2020; Taylor et al., 2020). The timescale associated with this starvation mechanism is \(\sim 4\) Gyr (Peng et al., 2015), generally longer than the timescale for ram pressure stripping of \(\sim 0.5-4\) Gyr.
In a cluster, the local galaxy density is correlated with the distance from the cluster center, and the fraction of late-type galaxies increases with clustercentric distance, as happens, for example, in the Perseus cluster (Meusinger et al., 2020). Similarly, in A2029, the spectral index \(D_{n}4000\) of the cluster galaxies, an indicator of the age of their stellar population, decreases with clustercentric distance, suggesting younger stellar populations in the outer galaxies, as expected for late-type galaxies (Sohn et al., 2019).
Most rich clusters exhibit some amount of substructure in the galaxy distribution (Geller & Beers, 1982; Wen & Han, 2013). Since galaxies in substructures have relative velocities comparable to the velocity dispersion of stars in galaxies, the probability of galaxy mergers increases (Girardi et al., 2015; Zarattini et al., 2016). Indeed, many early-type galaxies are in substructures (Einasto et al., 2014), suggesting that galaxy mergers might have already taken place before the galaxies were completely accreted by the cluster. Because of the diversity of environments within a galaxy cluster and its outer region, investigating the properties of cluster galaxies provides crucial information on galaxy evolution.
A2142 is a massive galaxy cluster at redshift \(z\sim 0.0898\). It is located at the center of a supercluster connected to the large-scale filamentary structure (Einasto et al., 2020). The cold fronts of A2142 observed in X-rays are probably the result of a sloshing cool core in the central region (Markevitch et al., 2001; Tittley & Henriksen, 2005; Owers et al., 2011). A galaxy group that is undergoing ram pressure stripping is also observed near the radius \(R_{500}\) (Eckert et al., 2014). The outskirts of the cluster are dominated by star-forming blue galaxies, unlike the inner region (Einasto et al., 2018). Although the dense environment of the central region of the cluster has an impact on the evolution of galaxies, many galaxies are within high-density substructures flowing toward the cluster along filaments that surround it. The relation between galaxy properties, clustercentric distance, and local environment is thus complicated by the presence of substructures.
The caustic method based on a hierarchical clustering algorithm (Yu et al., 2015) can be used to identify the substructures of clusters. The algorithm was successfully applied to A85 (Yu et al., 2016) and A2142 (Liu et al., 2018). The Blooming Tree algorithm is an updated version of the algorithm (Yu et al., 2018). Here, we plan to apply the Blooming Tree algorithm to identify the substructures of A2142 and constrain the relation between galaxy properties and local environment.
This paper is organized as follows. In Section 2, we present our data. In Section 3, we separate our sample into three subsamples according to their membership of the cluster, substructures, or outer region. We distinguish star-forming galaxies from quiescent galaxies, and discuss the relation between their physical properties and their environment. We summarize our results in Section 4. Throughout this paper, we adopt a Wilkinson Microwave Anisotropy Probe standard cosmological model with \(\Omega_{m}\) = 0.272, \(\Omega_{\Lambda}\) = 0.728, and \(H_{0}=70.4\,km\,s^{-1}\,Mpc^{-1}\)(Komatsu et al., 2011). All the errors we mention are \(1\sigma\).
## 2 Observational Data
Liu et al. (2018) compiled a spectroscopic redshift survey of 2239 galaxies in the field of view of A2142. Hereafter, we call these 2239 galaxies the \(z\)-available galaxies. This catalog covers a circle of radius 0.\({}^{\circ}\)56 from the cluster center, whose celestial coordinates are R.A. = 239.\({}^{\circ}\)5833 and decl. = 27.\({}^{\circ}\)2334. This angular radius corresponds to a radius of 10.8 Mpc at the cluster redshift \(z=0.09\). Figure 1 shows the redshift distribution of the galaxies around this redshift. For the analysis of the structure of A2142, we consider the 1186 galaxies with redshift in the range [0.06,0.12]. Hereafter, we call these 1186 galaxies the \(z\)-slice galaxies. The redshift distribution of the \(z\)-slice galaxies is shown by the gray bars in Fig. 1. The red solid line is the Gaussian fit to this distribution obtained after 3\(\sigma\) clipping.
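To make the selection concrete, a minimal sketch of the redshift-slice cut and the \(3\sigma\)-clipped Gaussian fit of Fig. 1 is given below; the input file name and array layout are hypothetical stand-ins for the actual catalog.

```python
import numpy as np
from scipy.stats import norm

def gaussian_fit_3sigma(z, n_iter=10):
    """Fit a Gaussian to a redshift sample, iteratively discarding 3-sigma outliers."""
    sample = np.asarray(z, dtype=float)
    for _ in range(n_iter):
        mu, sigma = norm.fit(sample)          # maximum-likelihood mean and std
        keep = np.abs(sample - mu) < 3.0 * sigma
        if keep.all():
            break
        sample = sample[keep]
    return mu, sigma

# 'a2142_redshifts.txt' is a hypothetical file holding the 2239 z-available redshifts
z_all = np.loadtxt("a2142_redshifts.txt")
z_slice = z_all[(z_all >= 0.06) & (z_all <= 0.12)]   # the z-slice galaxies
mu, sigma = gaussian_fit_3sigma(z_slice)
print(f"cluster redshift ~{mu:.4f}, dispersion ~{sigma:.4f}")
```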
The star formation rates (SFRs) and the stellar masses, \(M_{\star}\), of the \(z\)-slice galaxies are collected from the GALEX-SDSS-WISE Legacy Catalog (GSWLC, Salim et al., 2018). This catalog is based on photometric data in multiple bands, including UV data taken by the Galaxy Evolution Explorer (GALEX, Martin et al., 2005) and optical data taken by the Sloan Digital Sky Survey (SDSS, Abazajian et al., 2009).
We also consider the spectral index \(D_{n}4000\), the ratio of the average flux densities in the narrow continuum bands 3850–3950 Å and 4000–4100 Å (Balogh et al., 1999). This spectral index correlates with the age of the stellar population that contributes most of the electromagnetic emission in the optical band (Bruzual et al., 1983; Poggianti & Barbaro, 1997). The \(D_{n}4000\) values are queried from the database of SDSS (Hopkins et al., 2003).
There are 333 galaxies, out of the 1186 \(z\)-slice galaxies, with SFRs, stellar mass \(M_{\star}\), and \(D_{n}4000\) available. Hereafter, we call these 333 galaxies the data-available galaxies. The spectroscopic completeness of the 2239 \(z\)-available galaxies as a function of the Petrosian \(r\)-band magnitude is shown by the blue line in the top panel of Fig. 2. The decrease in spectroscopic completeness at magnitudes fainter than 18 mag is caused by the Petrosian \(r\)-band magnitude limit \(m_{r,Petro,0}<17.77\) of the spectroscopic galaxy sample of SDSS (Balogh et al., 1999). The red dashed line indicates the ratio between the 333 data-available galaxies and the 1186 \(z\)-slice galaxies.
The bottom panel of Fig. 2 shows the spatial distribution of the ratio between the number of data-available galaxies and the number of \(z\)-slice galaxies. We limit the computation of this ratio to the galaxies with \(m_{r,Petro,0}<17.77\). We have 303 data-available galaxies and 319 \(z\)-slice galaxies brighter than this magnitude limit. The two-dimensional map shown in Fig. 2 has \(10\times 10\) pixels for a squared field of view of \(1.^{\circ}12\) on a side. The overall ratio is \(303/319=0.95\) with a standard deviation \(0.286\). In the panel, the pixels outside the red circle contain no data.
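A minimal sketch of how such a pixelized completeness map can be built is shown below; the galaxy positions are synthetic stand-ins for the real catalog, and the flat-sky gridding is an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical coordinates of z-slice galaxies brighter than m_r = 17.77;
# 'has_data' flags the subset with SFR, M_star, and Dn4000 available
ra = 239.5833 + rng.uniform(-0.56, 0.56, 319)
dec = 27.2334 + rng.uniform(-0.56, 0.56, 319)
has_data = rng.random(319) < 0.95

def ratio_map(ra, dec, has_data, center, half_size=0.56, npix=10):
    """Pixelized ratio of data-available to z-slice galaxy counts."""
    xedges = np.linspace(center[0] - half_size, center[0] + half_size, npix + 1)
    yedges = np.linspace(center[1] - half_size, center[1] + half_size, npix + 1)
    total, _, _ = np.histogram2d(ra, dec, bins=[xedges, yedges])
    avail, _, _ = np.histogram2d(ra[has_data], dec[has_data], bins=[xedges, yedges])
    with np.errstate(invalid="ignore"):
        return avail / total        # NaN marks pixels that contain no galaxy

ratio = ratio_map(ra, dec, has_data, center=(239.5833, 27.2334))
```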
## 3 Data Analysis
In Sect. 3.1 we use the Blooming Tree algorithm to split our galaxy sample into three subsamples: halo, substructure, and outskirt galaxies. In Sect. 3.2 we distinguish star-forming galaxies from quiescent galaxies; Sect. 3.3 discusses the relation between the star formation rate per unit mass (the specific SFR, or sSFR) and the spectral index \(D_{n}4000\); Sect. 3.4 focuses on the radial distribution of sSFR and \(D_{n}4000\); and Sect. 3.5 discusses the galaxy distribution in the \(R\)-\(v\) diagram.
### Halo, substructure, and outskirt galaxies
The Blooming Tree algorithm is a method for identifying substructure based on the hierarchical clustering algorithm (Yu et al., 2018). It arranges all the galaxies in the field of view into a tree, or dendrogram. The arrangement is based on the pairwise projected binding energy, which is estimated from the location of the galaxies on the sky and from their redshift (Diaferio, 1999). By adopting a proper density contrast parameter \(\Delta\eta\), we can trim the tree into distinct structures: \(\Delta\eta\) is the difference between two values of \(\eta\), the former associated with the structure and the latter associated with the surrounding background structure; \(\eta\) combines the line-of-sight velocity dispersion, the size, and the number of galaxies in the structure; increasing values of \(\Delta\eta\) identify increasingly dense structures (see Yu et al., 2018, for further details).
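The sketch below illustrates the tree construction and trimming in spirit only: a random symmetric matrix stands in for the pairwise projected binding energy of Diaferio (1999), and a plain linkage-distance threshold replaces the \(\Delta\eta\) density-contrast criterion that the actual Blooming Tree algorithm evaluates on the tree itself.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
n = 50
# hypothetical symmetric pairwise "binding energy" matrix: more negative
# entries correspond to more strongly bound galaxy pairs
energy = -rng.random((n, n))
energy = 0.5 * (energy + energy.T)

# shift to non-negative dissimilarities so strongly bound pairs are "close"
dist = energy - energy.min()
np.fill_diagonal(dist, 0.0)

tree = linkage(squareform(dist, checks=False), method="single")
# a plain distance threshold stands in for the Delta-eta criterion
labels = fcluster(tree, t=0.3, criterion="distance")   # structure label per galaxy
```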
We apply the Blooming Tree algorithm to the sample of 1186 \(z\)-slice galaxies. By setting \(\Delta\eta=5\), we identify 684 cluster galaxies. By increasing the density contrast to \(\Delta\eta=25\), we identify denser structures: we find 16 structures with more than five member galaxies. The basic properties of these 16 structures are listed in Table 1: \(n_{\rm g}\) is the number of member galaxies, \(n_{\rm d}\) is the number of member galaxies with known SFR, \(M_{\star}\), and \(D_{n}\)4000, \(z_{\rm sub}\) is the average redshift of the structure, and \(v_{\rm disp}\) is the velocity dispersion of the structure. All the 480 members of the 16 structures, which are named sub1 to sub16, belong to the 684 cluster galaxies identified with the contrast parameter \(\Delta\eta=5\). We associate the remaining 204 of these 684 galaxies with a diffuse component indicated by grp0 in Table 1.
Figure 1: Redshift distribution of the galaxies in the field of view of A2142. The hollow bars show the distribution of the \(z\)-available galaxies. The gray bars show the distribution of the \(z\)-slice galaxies, the 1186 galaxies in our sample whose redshift is in the range \([0.06,0.12]\). The vertical dashed lines indicate this redshift range. The red solid line is the Gaussian fit to the distribution of the \(z\)-slice galaxies after 3\(\sigma\) clipping.
Figure 2: Top panel: the spectroscopic completeness of the \(z\)-available galaxies as a function of the Petrosian \(r\)-band magnitude (blue line). The red dashed line shows the ratio between the number of data-available galaxies and the number of \(z\)-slice galaxies. Bottom panel: the distribution of the ratio between the number of data-available galaxies and the number of \(z\)-slice galaxies on the sky. In this panel only galaxies with \(m_{r,Petro,0}<17.77\) are considered. The red circle shows a circle of radius \(0.^{\circ}56\) around the cluster center indicated by the red cross.
The distribution of the \(z\)-slice galaxies on the sky is shown in Fig. 3, where the 480 members of the structures are represented as colored squares; the 204 cluster members associated with grp0 are represented as open triangles; and the remaining 502 \(z\)-slice galaxies, which belong neither to the cluster nor to any structure, are represented by black dots.
The galaxies are plotted on top of the map of Petrosian \(r\)-band luminosity density. The density map is computed from the 1186 \(z\)-slice galaxies by assuming that the \(r\)-band luminosity \(L_{R}\) of each galaxy is smoothed with a 2D Gaussian window of \(2^{\prime}\) width (see Wen & Han 2013 for details). Figure 3 shows that the distribution of the luminosity density is generally consistent with the distribution of the structures, as expected.
The structures from sub2 to sub16 are distinct components that we identify as cluster substructures. The structure sub1 (orange squares in Fig. 3) is located at the cluster center and we identify this structure with the cluster core. We define halo galaxies to be the galaxies in the core and in the structure grp0, substructure galaxies to be members of the structures from sub2 to sub16, and outskirt galaxies to be the \(z\)-slice galaxies that are neither halo nor substructure galaxies.
The 333 data-available galaxies, with known SFR, \(M_{\star}\), and \(D_{n}\)4000, separate into 109 halo galaxies (grp0 and sub1), 80 substructure galaxies (sub2 to sub16), and 144 outskirt galaxies. In the following analysis and discussion we focus on these three galaxy subsamples.
### Star-forming and quiescent galaxies
Cluster galaxies at low redshift are generally distributed into two distinct groups in the plane of stellar mass versus star formation rate, \(M_{\star}-\)SFR (Noeske et al., 2007; Peng et al., 2015): the two groups distinguish the star-forming (SF) galaxies, with smaller values of the spectral index \(D_{n}4000\), and the quiescent galaxies, with larger values of \(D_{n}4000\). The 333 data-available galaxies in our sample show this bimodal distribution, with most galaxies being massive and quiescent (Fig. 4). It is worth noting that only star-forming galaxies appear at \(M_{\star}<10^{10.4}M_{\odot}\). To preserve the completeness of our sample, we do not remove them; our later results remain the same without these less massive star-forming galaxies.
We consider the sSFR, defined as the SFR per unit stellar mass \(M_{\star}\). We separate the SF from the quiescent galaxies according to the threshold sSFR = \(10^{-11}\) yr\({}^{-1}\)(McGee et al., 2011; Wetzel et al., 2012). The 333 data-available galaxies separate into 93 SF galaxies and 240 quiescent galaxies. Table 2 lists how these galaxies are distributed into halo, substructure, and outskirt galaxy samples. The fraction of SF galaxies steadily increases from the halo sample to the outskirt sample. This trend suggests that star formation is progressively quenched from the outskirt galaxies to the substructure and the halo galaxies. The larger fraction of SF galaxies in the substructures than in the halo sample is also consistent with the scenario where the substructure galaxies entered the cluster more recently than the halo galaxies, and star formation in substructure galaxies is less inhibited than in halo galaxies.
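A minimal sketch of this classification and of the per-sample SF fractions is given below; the SFR and stellar mass arrays are synthetic stand-ins for the measured values.

```python
import numpy as np

def sf_fraction(sfr, mstar, threshold=1e-11):
    """Flag star-forming galaxies, with sSFR = SFR / M_star in units of 1/yr."""
    sf = (sfr / mstar) > threshold
    return sf, sf.mean()

rng = np.random.default_rng(3)
# hypothetical stand-ins for the halo, substructure, and outskirt samples
samples = {name: (10 ** rng.uniform(-1.5, 1.0, n), 10 ** rng.uniform(9.5, 11.5, n))
           for name, n in [("halo", 109), ("substructure", 80), ("outskirt", 144)]}
for name, (sfr, mstar) in samples.items():
    sf, frac = sf_fraction(sfr, mstar)
    print(f"{name}: {sf.sum()} SF / {len(sf)} ({100 * frac:.1f}%)")
```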
There are two SF galaxies with SFR larger than \(10~{}M_{\odot}\)yr\({}^{-1}\). We label these galaxies S1 and S2. S1, with \(\log[{\rm SFR}/({\rm M}_{\odot}{\rm yr}^{-1})]=1.866\), is the brightest galaxy of the substructure sub14; S2, with \(\log[{\rm SFR}/({\rm M}_{\odot}{\rm yr}^{-1})]=1.156\), is the brightest galaxy of the substructure sub3. Sub14 is a substructure falling toward the center of the cluster at high speed, as shown by Eckert et al. (2014, 2017) and Liu et al. (2018). Ram pressure stripping might enhance the star formation rate of S1 (Roberts et al., 2021). The image of S1 is also disturbed, suggesting an ongoing galaxy merger (Liu et al., 2018). In contrast, the large SFR of S2 might derive from its nature as a grand-design spiral galaxy, whose face-on image appears undisturbed.
The decreasing fraction of quiescent galaxies from the halo to the outskirt sample is also apparent in the decreasing fraction of galaxies lying on the red-sequence relation in the color-magnitude diagram (Fig. 5). The mean colors of the quiescent galaxies are comparable in the three samples: 0.99 \(\pm\) 0.07, 0.95 \(\pm\) 0.06, and 0.96 \(\pm\) 0.06 for the halo, substructure, and outskirt samples, respectively.
\begin{table}
\begin{tabular}{|l|c c c c|}
\hline \hline
GroupID & \(n_{\rm g}\) & \(n_{\rm d}\) & \(z_{\rm sub}\) & \(v_{\rm disp}\,({\rm km\,s^{-1}})\) \\ \hline
cluster & 684 & 189 & 0.0898 & 912\(\pm\)11 \\ \hline
grp0 & 204 & 66 & 0.0895 & 1059\(\pm\)10 \\
sub1 & 178 & 43 & 0.0902 & 786\(\pm\)10 \\
sub2 & 81 & 26 & 0.0884 & 477\(\pm\)10 \\
sub3 & 41 & 10 & 0.0929 & 464\(\pm\)10 \\
sub4 & 26 & 7 & 0.0870 & 303\(\pm\)10 \\
sub5 & 22 & 3 & 0.0916 & 403\(\pm\)11 \\
sub6 & 18 & 5 & 0.0888 & 266\(\pm\)10 \\
sub7 & 17 & 8 & 0.0895 & 356\(\pm\)7 \\
sub8 & 17 & 4 & 0.0960 & 318\(\pm\)14 \\
sub9 & 12 & 1 & 0.0858 & 353\(\pm\)15 \\
sub10 & 12 & 1 & 0.0892 & 283\(\pm\)11 \\
sub11 & 12 & 2 & 0.0870 & 351\(\pm\)6 \\
sub12 & 11 & 4 & 0.0897 & 334\(\pm\)6 \\
sub13 & 10 & 3 & 0.0907 & 219\(\pm\)12 \\
sub14 & 9 & 2 & 0.0946 & 304\(\pm\)10 \\
sub15 & 7 & 4 & 0.0906 & 202\(\pm\)8 \\
sub16 & 7 & 0 & 0.0887 & 347\(\pm\)6 \\ \hline
\end{tabular}
\end{table}
Table 1: Properties of the galaxy structures. \(n_{\rm g}\) is the number of member galaxies, and \(n_{\rm d}\) is the number of member galaxies with known SFR, \(M_{\star}\), and \(D_{n}4000\). \(z_{\rm sub}\) is the average redshift of the structure and \(v_{\rm disp}\) is its velocity dispersion.
### The sSFR\(-D_{n}4000\) relation
Figure 6 shows the distributions of the spectral index \(D_{n}4000\) and the sSFR for the galaxies in our three samples. This figure also shows the correlation between these two quantities. For the halo and substructure galaxies, the distributions peak at small sSFR and large \(D_{n}4000\), whereas for the outskirt galaxies the distributions appear somewhat flat. This different behavior indicates a correlation between the environment and the galaxy properties.
Our galaxy sample indeed confirms the expected anticorrelation between sSFR and \(D_{n}4000\)(Kauffmann et al., 2004): \(D_{n}4000\) increases with the age of the stellar population (Balogh et al., 1999) and is thus expected to increase with decreasing sSFR if sSFR decreases with increasing age of the stellar population (Duarte Puertas et al., 2022).
We separately fit the sSFR\(-D_{n}4000\) relation for the SF galaxies and the quiescent galaxies, and find \(D_{n}4000=-0.39\log(\mathrm{sSFR}/\mathrm{yr}^{-1})-2.24\) and \(D_{n}4000=-0.06\log(\mathrm{sSFR}/\mathrm{yr}^{-1})+1.17\) for the two samples, respectively. The relation is steeper for the sample of SF galaxies than for the quiescent galaxies, suggesting that the star formation activity gradually decreases in increasingly dense environments.
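The sketch below shows how such separate linear fits can be performed; the data are synthetic, generated around the two fitted loci quoted above.

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical stand-ins for the measured quantities of the 333 galaxies
log_ssfr = rng.uniform(-13.0, -9.0, 333)
dn4000 = np.where(log_ssfr > -11,
                  -0.39 * log_ssfr - 2.24,   # SF locus quoted in the text
                  -0.06 * log_ssfr + 1.17)   # quiescent locus quoted in the text
dn4000 += rng.normal(0.0, 0.05, 333)

sf = log_ssfr > -11                          # sSFR threshold of Sect. 3.2
a_sf, b_sf = np.polyfit(log_ssfr[sf], dn4000[sf], deg=1)
a_q, b_q = np.polyfit(log_ssfr[~sf], dn4000[~sf], deg=1)
print(f"SF:        Dn4000 = {a_sf:.2f} log(sSFR) + {b_sf:.2f}")
print(f"Quiescent: Dn4000 = {a_q:.2f} log(sSFR) + {b_q:.2f}")
```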
### The radial distribution of sSFR and \(D_{n}4000\)
The properties of cluster galaxies are closely related to the local galaxy density, which, in turn, generally depends on the clustercentric distance (Odekon et al., 2018; Coccato et al., 2020; Meusinger et al., 2020).
\begin{table}
\begin{tabular}{l l l l}
\hline \hline
Galaxy Sample & \(n_{\rm d}\) & SF & Quiescent \\ \hline
Total & 333 & 93 (27.9\%) & 240 (72.1\%) \\
Halo & 109 & 10 (9.2\%) & 99 (90.8\%) \\
Substructure & 80 & 20 (25.0\%) & 60 (75.0\%) \\
Outskirt & 144 & 63 (43.8\%) & 81 (56.3\%) \\ \hline
\end{tabular}
\end{table}
Table 2: Star-forming and quiescent data-available galaxies. \(n_{\rm d}\) is the number of galaxies with known SFR, \(M_{\star}\), and \(D_{n}4000\).
Figure 3: Distribution on the sky of the 1186 \(z\)-slice galaxies superimposed on the distribution of their Petrosian \(r\)-band luminosity. The galaxy sample separates into 382 halo galaxies and 302 substructure galaxies, totalling to 684 cluster members, and 502 outskirt galaxies. The open triangles show the members of grp0, the orange squares show the members of the core sub1, the other colored squares show the members of the structures from sub2 to sub16, and the black dots show the outskirt galaxies. The two black dashed circles have radius \(R_{500}=1.408\) Mpc and \(R_{200}=2.160\) Mpc at the cluster redshift \(z=0.09\)(Tchernin et al., 2016).
Figure 7 shows the dependence of the sSFR on the projected radius \(R\) from the cluster center for the entire galaxy sample (top panel) and for the three galaxy samples separately (bottom panel). The data are divided into 10 equally spaced radial bins; the median value of each bin is shown, and the rms values are indicated by the shading. Despite the large scatter, our entire sample shows that sSFR increases with \(R\). The substructure galaxies are mainly responsible for this trend. Indeed, the outskirt galaxies show a flat relation, with larger sSFRs than substructure galaxies, on average, and the halo galaxies show a slightly decreasing relation.
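A minimal sketch of the radial binning is given below, assuming synthetic radii and sSFR values in place of the measured sample.

```python
import numpy as np
from scipy.stats import binned_statistic

rng = np.random.default_rng(5)
# hypothetical radii (Mpc) and log sSFR values standing in for the 333 galaxies
R = rng.uniform(0.0, 11.0, 333)
log_ssfr = -12.0 + 0.08 * R + rng.normal(0.0, 0.5, 333)

bins = np.linspace(R.min(), R.max(), 11)     # 10 equally spaced radial bins
med, edges, _ = binned_statistic(R, log_ssfr, statistic="median", bins=bins)
rms, _, _ = binned_statistic(R, log_ssfr, statistic="std", bins=bins)
centers = 0.5 * (edges[1:] + edges[:-1])     # bin centers for plotting
slope, intercept = np.polyfit(R, log_ssfr, deg=1)  # best linear fit, as in Fig. 7
```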
Figure 8 shows the dependence of \(D_{n}4000\) on \(R\)(Balogh et al., 1999). It mirrors the dependence of sSFR on \(R\), because of the correlation between sSFR and \(D_{n}4000\) shown in Fig. 6: for the entire sample, \(D_{n}4000\) decreases with increasing \(R\), similarly to the galaxies in A2029 (Sohn et al., 2019). As in the sSFR-\(R\) relation, this trend is mostly due to the substructure galaxies, whereas halo and outskirt galaxies have shallower relations, with the values of \(D_{n}4000\) of the outskirt galaxies smaller, on average, than those for the other two galaxy samples.
Figures 7 and 8 show that the average values of sSFR and \(D_{n}4000\) of the substructure galaxies in the cluster center are comparable to the values of the halo galaxies. Similarly, at large radii, these quantities of the substructure galaxies are comparable to the values of the outskirt galaxies. This result suggests (1) that the substructure galaxies are sensitive to the environment of their own substructure, and (2) that substructures tend to slow down the transition from field galaxies to cluster galaxies. This scenario is consistent with results of simulations, which suggest that orphan galaxies that have lost their subhalos are more vulnerable to environmental effects than those that still have them (Cora et al., 2018). Such orphan galaxies belong to the diffuse halo galaxies here.
Figure 4: The distribution of the 333 data-available galaxies in the SFR\(-M_{*}\) plane. The red circles, blue squares, and black triangles represent halo, substructure, and outskirt galaxies, respectively. The black dotted line shows the specific SFR, sSFR = \(10^{-11}\) yr\({}^{-1}\), separating SF galaxies from quiescent galaxies. S1 and S2 are the brightest galaxies of substructures sub14 and sub3, respectively. BCG is the brightest galaxy of A2142.
Figure 5: The color–magnitude (\(m_{g}-m_{r}\))–\(m_{r}\) diagram for the halo galaxies (top), substructure galaxies (middle), and outskirt galaxies (bottom). The magenta and cyan dots show the quiescent and SF galaxies, respectively. The dashed line in each panel is the red-sequence fit to the quiescent halo galaxies in the top panel: \(m_{g}-m_{r}\) = \(-0.0314m_{r}\) + \(1.528\).
Figure 6: Top panel: the distribution of sSFR for the halo galaxies (red histogram), the substructure galaxies (blue histogram), and the outskirt galaxies (hollow histogram). Right panel: same as the top panel for \(D_{n}4000\). Bottom left panel: the relation between sSFR and \(D_{n}4000\). The two dotted lines are linear fits to the SF and quiescent galaxies separately. For illustrative purposes, the entire galaxy sample is separated into bins of fixed width on the \(\log(\mathrm{sSFR/yr^{-1}})\) axis: for each of these bins, the shaded areas show the rms values of \(D_{n}4000\) around the mean. BCG, S1, and S2 are the central bright galaxies of A2142, sub14, and sub3, respectively.
Since the sSFR depends strongly on the stellar mass, the mass segregation effect could bias our result. We check the radial distribution of the stellar mass \(\log M_{\star}\) for the three subsamples and find that their median masses are consistent at all radii, as Fig. 9 shows.
### The \(R\)\(-\)\(v\) diagram
We know only three out of the six phase-space coordinates of each galaxy in the field of view of A2142: the two celestial coordinates and the line-of-sight velocity. This limited knowledge prevents us from grasping the full dynamics of the cluster and its structure. Nevertheless, from the distribution of galaxies in the \(R\)\(-\)\(v\) diagram, namely the line-of-sight velocity versus the projected distance from the cluster center, we can infer the global depth of the gravitational potential well of the cluster, or, equivalently, the escape velocity from the cluster (Diaferio and Geller, 1997; Diaferio, 1999; Serra et al., 2011), and identify the galaxies that are members of the cluster (Serra and Diaferio, 2013).
Figure 8: Same as Fig. 7 for the \(D_{n}4000\)\(-\)\(R\) relation.
Figure 7: The dependence of sSFR on the projected clustrocentric radius \(R\). Top panel: the entire galaxy sample with the rms values of sSFR around the mean in each of the 10 evenly spaced radial bins (shaded area). The dots indicate the median values of each bin. The dotted line shows the best linear fit. The two vertical solid lines show the two radii \(R_{500}=1.408\) Mpc and \(R_{200}=2.16\) Mpc. Bottom panel: same as the top panel for the three galaxy subsamples separately: halo (red), substructure (blue), and outskirt galaxies (black/gray).
Figure 9: Same as Fig. 7 for the log\(M_{\star}\)\(-\)\(R\) relation. The solid black line indicates the median value in each bin.
This information can be extracted for a large interval of projected distances from the cluster center, from the central region to radii much larger than the virial radius, in regions where the dynamical equilibrium hypothesis does not hold and where the galaxies surrounding the cluster are falling into it for the first time.
Figure 10 shows the \(R\)\(-\)\(v\) diagram, or projected phase-space (PPS) diagram, of our three galaxy samples. The blue dotted lines show the location of the caustics derived in Sohn et al. (2020). The caustics are related to the escape velocity from the cluster (Diaferio and Geller, 1997; Diaferio, 1999). According to Serra and Diaferio (2013), the sample of galaxies within the caustics contains (\(95\pm 3\))% of the real members and is contaminated by \(\sim 8\)% of interlopers within \(3R_{200}\). The caustic technique thus represents a valid procedure to identify cluster members in real data.
Alternatively, Oman and Hudson (2016) adopt an approximate relation to identify cluster members. Their relation is based on dark matter-only simulations. By defining as interlopers those satellite dark matter halos with distance, in real space, \(r_{3d}>2.5r_{vir}\), with \(r_{vir}\) the cluster virial radius, they find that, in the \(R\)\(-\)\(v\) diagram, the line \(v/\sigma_{3d}=-(4/3)R/r_{vir}+2\) roughly separates the region of the \(R\)\(-\)\(v\) diagram dominated by the cluster members from the region dominated by interlopers. The black solid lines in Fig. 10 show the line of Oman and Hudson (2016), where we set \(\sigma_{3d}=\sqrt{3}\sigma_{cluster}\) and \(R_{200}/r_{vir}=0.73\). The black solid lines are roughly consistent with the caustic location and thus appear to be a reasonable proxy for the caustics. For the sake of simplicity, we adopt these black solid lines, rather than the caustics, as the cluster boundaries.
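A minimal sketch of this membership cut is given below, using the cluster velocity dispersion of Table 1; the example galaxy is hypothetical.

```python
import numpy as np

def in_cluster_region(R, v_los, sigma_cluster, r200=2.16):
    """Membership proxy of Oman & Hudson (2016) in the R-v diagram:
    |v| / sigma_3d < -(4/3) R / r_vir + 2, with sigma_3d = sqrt(3) * sigma_cluster
    and R200 / r_vir = 0.73, as adopted in the text. R and r200 in Mpc,
    velocities in km/s."""
    r_vir = r200 / 0.73
    sigma_3d = np.sqrt(3.0) * sigma_cluster
    bound = sigma_3d * (-(4.0 / 3.0) * R / r_vir + 2.0)
    return np.abs(v_los) < bound

# hypothetical galaxy at R = 1.5 Mpc with v = 1200 km/s, using the
# cluster velocity dispersion of Table 1 (912 km/s)
print(in_cluster_region(np.array([1.5]), np.array([1200.0]), 912.0))
```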
Figure 10 shows two additional sets of black dashed lines: they have the same slope as the black solid lines and cross the \(v/\sigma_{cluster}=0\) axis at \(R_{200}\) and \(R_{500}\), respectively. We adopt these lines in the \(R\)\(-\)\(v\) diagram as the counterparts of \(R_{200}\) and \(R_{500}\) in real space.
The distribution of our galaxy samples in the \(R\)\(-\)\(v\) diagram is generally consistent with the identification of the cluster members derived in Sect. 3.1: most outskirt galaxies (triangles), which are not expected to be cluster members, lie outside the regions identified by the caustics or the black solid lines, whereas most halo and substructure galaxies, which are expected to be cluster members, are within these regions.
We now investigate the star formation activities of the galaxies in the infall region of the cluster. We define the infall region as the band of the \(R\)\(-\)\(v\) diagram between the black solid line and the black dashed line crossing the point \((R_{200},0)\). We consider the specific SFR, sSFR, and the spectral index \(D_{n}4000\) as a function of the distance \(\Delta d\) of each galaxy from the black solid line: \(\Delta d\) is thus the segment perpendicular to the black solid line joining the galaxy and the black solid line. According to the analysis of the orbits of galaxies falling into clusters in numerical simulations (e.g., Yoon et al., 2017; Arthur et al., 2019), a galaxy that is falling into the cluster traces a trajectory roughly parallel to the black solid line in the \(R\)\(-\)\(v\) diagram; the radial coordinate \(R\) of this trajectory clearly decreases during the galaxy infall. Therefore, larger \(\Delta d\) implies larger initial radial distance of the falling galaxy.
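A sketch of the \(\Delta d\) computation is given below; treating the Mpc axis and the dimensionless velocity axis as commensurate is an assumption of the sketch, so the numerical values depend on the plot scaling actually adopted.

```python
import numpy as np

def delta_d(R, v_norm, r_vir, s=np.sqrt(3.0)):
    """Signed perpendicular distance from the boundary line of the R-v diagram.

    The boundary |v|/sigma_cluster = s * (-(4/3) R / r_vir + 2) is rewritten as
    a*R + |v|/sigma_cluster + c = 0; galaxies inside the cluster region get
    negative Delta_d, so larger Delta_d means a larger initial radial distance."""
    a = s * 4.0 / (3.0 * r_vir)
    c = -2.0 * s
    return (a * R + np.abs(v_norm) + c) / np.hypot(a, 1.0)

# hypothetical galaxy at R = 2.16 Mpc (i.e. R200) with v = 0
print(delta_d(2.16, 0.0, r_vir=2.16 / 0.73))
```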
Figure 11 shows the \(\Delta d\) distribution of the 93 SF galaxies, namely the data-available galaxies with \(\log(\mathrm{sSFR}/\mathrm{yr}^{-1})>-11\). Out of these 93 galaxies, 30 are cluster members: specifically, 20 substructure galaxies and 10 halo galaxies. Out of these 20 and 10 galaxies, 17 and 8, respectively, lie in the infall region, namely in the band between the black solid line and the outer black dashed line in the \(R\)\(-\)\(v\) diagram. Therefore, 83% (25 out of 30) of the cluster members that are actively forming stars are in the infall region. Our sample thus demonstrates, as expected, that the dense intracluster medium within \(R_{200}\) inhibits the star formation activity. The only SF galaxy (LEDA 1801474) within \(R_{500}\) is a halo galaxy. The SDSS image of this galaxy suggests that its star formation activity is triggered by an ongoing merger.
Figure 12 shows the \(\Delta d\) distribution of the 82 galaxies with spectral index \(D_{n}4000<1.6\). We call these galaxies blue galaxies. Their distribution is similar to the distribution of sSFR in Fig. 11. There are 26 (84%) out of 31 blue galaxies that are cluster members, namely either substructure or halo galaxies, in the infall region. There are 17 out of 20 substructure blue galaxies, and 9 out of 11 halo blue galaxies. The only blue galaxy within \(R_{500}\) is a halo galaxy (SDSS J155827.26+271300.3). Its color might be contaminated by a nearby blue object, which is only 4 arcsec away.
Figure 10: The \(R\)\(-\)\(v\) diagram of the 333 data-available galaxies of A2142. The data-available galaxies consist of 80 substructure galaxies (squares), 109 halo galaxies (circles), and 144 outskirt galaxies (triangles). Cyan and magenta symbols show SF and quiescent galaxies, respectively. The symbol size is proportional to the specific SFR. The red square on the left is the BCG. The blue dotted lines show the caustic location. The black solid and dashed lines are described in the text.
Figures 11 and 12 show that the SF and blue galaxies that are cluster members are concentrated in the infall region, namely in the PPS region located between the black dashed line corresponding to \(R_{200}\) and the black solid line. This result indicates that the dense ICM environment substantially inhibits the star formation activity of the galaxies once they enter the region within \(R_{200}\). In addition, the transition from star forming galaxies to quiescent galaxies substantially ends at radii larger than \(R_{500}\).
## 4 Summary
We compiled a catalog of 333 galaxies from a spectroscopic redshift survey of 2239 galaxies in the field of view of the cluster A2142 (Liu et al., 2018). The survey covers a circular area of radius \(\sim 11\) Mpc from the cluster center. Each of the 333 galaxies has measured stellar mass \(M_{\star}\), SFR, and spectral index \(D_{n}\)4000. We use the Blooming Tree algorithm, an algorithm for the identification of cluster substructure (Yu et al., 2018), to separate our sample into three subsamples: the halo, the substructure, and the outskirt galaxies. The halo and the substructure galaxies are cluster members. The outskirt galaxies are still in the outer region of the cluster, but, according to the Blooming Tree algorithm, their gravitational bond to the cluster is weak.
We investigate the relation between the environment and the star formation activity of the galaxies in these three subsamples. Our main conclusions are as follows.
* The specific SFR, sSFR=SFR/\(M_{\star}\), is larger in the outskirt galaxies and smaller in the halo galaxies. In addition, the sSFR increases, on average, with increasing distance from the cluster center; similarly, the spectral index \(D_{n}4000\), which is an indicator of the age of the stellar population, decreases with increasing distance from the cluster center. Both results show that the star formation activity tends to be inhibited in a high-density environment.
* The sSFR of substructure galaxies is intermediate between the sSFR of halo galaxies and that of outskirt galaxies; in addition, the sSFR depends on the environment of the substructure of the galaxy, being smaller, on average, for galaxies in substructures that are close to the cluster center, and larger for galaxies in substructures that are in the outer region of the cluster. The spectral index \(D_{n}4000\) shows the same behavior. This result demonstrates that substructures tend to slow down the transition between field galaxies and cluster galaxies.
* Galaxies that are actively forming stars mostly lie in the cluster infall region, roughly between \(R_{200}\) and the turn-around radius: star formation is progressively inhibited while approaching \(R_{200}\) and substantially quenched within \(R_{200}\).
Our analysis demonstrates the relevance of spectroscopic redshifts for investigating the connection between the physical properties of galaxies and their environment. For this goal, our Blooming Tree algorithm proves efficient at associating the galaxies with the composite structures of a cluster. The application of the Blooming Tree algorithm to data from future extensive spectroscopic surveys, such as Euclid (Euclid Collaboration et al., 2022) or LSST (Ivezic et al., 2019), is thus expected to greatly enhance our comprehension of galaxy evolution in clusters.
Figure 11: Top: relation between sSFR and \(\Delta d\) for the 93 SF galaxies in the data-available galaxy sample. Middle: the distribution of \(\Delta d\) for the full sample of 93 SF galaxies. Bottom: the distribution of \(\Delta d\) for the halo, substructure, and outskirt SF galaxies separately. The two vertical black dotted lines indicate \(R_{200}\) (\(\Delta d=-0.9\)) and \(R_{500}\) (\(\Delta d=-1.2\)), respectively. The vertical black solid line is the boundary line \(\Delta d=0\).
Figure 12: Same as Fig. 11 for the 82 blue galaxies with \(D_{n}4000<1.6\).
## Acknowledgments
We thank the referee sincerely for his/her valuable comments and suggestions in the report. This work was supported by Bureau of International Cooperation, Chinese Academy of Sciences GJHZ1864. A.D. acknowledges partial support from the INFN grant InDark.
|
2305.14662 | Probabilistic wind power forecasting resilient to missing values: an
adaptive quantile regression approach | Probabilistic wind power forecasting approaches have significantly advanced
in recent decades. However, forecasters often assume data completeness and
overlook the challenge of missing values resulting from sensor failures,
network congestion, etc. Traditionally, this issue is addressed during the data
preprocessing procedure using methods such as deletion and imputation.
Nevertheless, these ad-hoc methods pose challenges to probabilistic wind power
forecasting at both parameter estimation and operational forecasting stages. In
this paper, we propose a resilient probabilistic forecasting approach that
smoothly adapts to missingness patterns without requiring preprocessing or
retraining. Specifically, we design an adaptive quantile regression model with
parameters capable of adapting to missing patterns, comprising two modules. The
first is a feature extraction module where weights are kept static and biases
are designed as a function of missingness patterns. The second is a
non-crossing quantile neural network module, ensuring monotonicity of
quantiles, with higher quantiles derived by adding non-negative amounts to
lower quantiles. The proposed approach is applicable to cases under all
missingness mechanisms including missing-not-at-random cases. Case studies
demonstrate that our proposed approach achieves state-of-the-art results in
terms of the continuous ranked probability score, with acceptable computational
cost. | Honglin Wen | 2023-05-24T02:58:32Z | http://arxiv.org/abs/2305.14662v3 | # Probabilistic Wind Power Forecasting with Missing Values via Adaptive Quantile Regression
###### Abstract
Missing values challenge probabilistic wind power forecasting at both the parameter estimation and operational forecasting stages. In this paper, we illustrate that a forecasting function can conveniently be estimated for each missingness pattern, and propose an adaptive quantile regression model whose parameters adapt to missingness patterns. For that, we design a dedicated feature extraction block within the quantile regression model, where the parameters are set as a function of the missingness pattern and only account for observed values. To avoid the quantile-crossing phenomenon, we design a multi-task model that ensures the monotonicity of quantiles, where higher quantiles are derived by adding non-negative increments, modeled by neural networks, to lower quantiles. The proposed approach is distribution-free and applicable to both missing-at-random and missing-not-at-random cases. Case studies demonstrate that the proposed approach achieves the state of the art in terms of the continuous ranked probability score.
Probabilistic forecasting, machine learning, missing values, adaptive quantile regression, quantile-crossing.
## I Introduction
Probabilistic wind power forecasting is deemed as the workhorse to accommodate wind power uncertainty in power system operations and electricity markets. It leverages the information up to the current time and communicates the probability distribution of wind power generation at a future time in the forms of quantiles, prediction intervals, densities, etc [1]. It has been adopted in several power system applications such as energy trading [2] as well as reserve management [3], and has attracted increasing interest from the industries [4].
Usually, probabilistic wind power forecasting models are developed in a data-driven manner via parametric or non-parametric approaches [5]. The parametric approach assumes that wind power generation follows some kind of distribution (for instance, Gaussian) and estimates the shape parameters via machine learning methods, whereas the non-parametric approach is distribution-free. Among the non-parametric approaches, quantile regression (QR) [6] is the most popular one, since it is easy to use and has achieved the state of the art in several forecasting competitions. However, it requires estimating a separate model for each quantile level, which often leads to the embarrassing quantile-crossing phenomenon, i.e., predicted higher quantiles being smaller than lower ones. Recently, a continuous and distribution-free probabilistic forecasting approach has been proposed, which predicts the full distribution at once by transforming a base distribution into the desired one [7] and therefore avoids quantile crossing by construction. Specifically, both the base distribution and the transforms are learned via machine learning.
Although forecasting approaches and products have developed considerably by leveraging cutting-edge techniques [8, 9] as well as data-sharing mechanisms [10, 11, 12], the forecasting community has paid less attention to missing values within datasets. In fact, missing values are inevitable in real-world data; they may be caused by sensor failures and communication errors, for instance. In modern statistical theory, missingness mechanisms are classified into missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR) cases, according to whether the missingness depends on observed or missing values. For example, missingness caused by sensor failures usually belongs to the MCAR mechanism (as it is irrelevant to observed or missing values), whereas missingness caused by wind power curtailment belongs to the MNAR mechanism (as it often occurs at high wind speeds, i.e., the missingness depends on the missing values). Intuitively, missing values challenge the calculation of model predictions. A natural idea is therefore to impute missing values before training and forecasting [13], which is referred to as the "impute, then predict" strategy. It has also been proposed to perform the imputation and forecasting tasks simultaneously based on deep learning techniques [14], as adopted in the DeepAR model [15]. However, it has been suggested that even optimally imputed data lead to biased parameter estimation and prediction, since the uncertainty about the missing features is discarded [16]. Alternatively, one can leverage multiple imputations [17, 18] on the training data and then develop a family of models, but this raises tractability issues. Therefore, developing probabilistic wind power forecasting approaches in the context of missing values remains an open issue.
Compared to inference with missing values, which aims at parameter estimation for probabilistic models or at imputation [19], forecasting/prediction with missing values focuses on the quality of forecasts. The seminal works date back to [20, 21], where the authors investigate time series forecasting in the context of missing values via auto-regressive moving average (ARMA) and auto-regressive integrated moving average (ARIMA) models. They represent ARMA and ARIMA models in state-space form and address the calculation issues arising from missing values by skipping the state update. This is useful at both the model estimation and operational forecasting stages but is restricted to point forecasts and linear models. A robust optimization approach has been proposed in [22], which minimizes the worst-case loss when a proportion of features are missing. Thus, it is applicable to both point and probabilistic forecasting by setting the corresponding loss functions. However, it only addresses the model estimation stage and requires controlling the number of missing features. In [23], a "universal imputation" strategy has been proposed based on the assumption that data are MAR, where missing
values and targets are treated equally, and the focus is on the joint distribution of features and targets. After estimating the joint distribution, probabilistic forecasts can be derived by marginalization with respect to the features. Though it demonstrates better forecasting quality than the commonly used "impute, then predict" strategy, it relies on the fully conditional specification technique, whose training time compromises its practical application. A similar idea can also be found in [24].
In fact, given a missingness pattern of the features, one can obtain the Bayes-optimal estimate of the forecasting function, e.g., the mean function or a quantile function, via the typically used forecasting paradigm [16]. It has been shown that multi-layer perceptrons (MLPs) can be Bayes consistent even in MNAR cases [16]. However, it is intractable to train a submodel for each missingness pattern: for features of dimension \(d\), it may require estimating \(2^{d}\) sub-models in the worst case [25]. Besides, the samples available to train each submodel are considerably reduced, as each missingness pattern is treated separately. Therefore, it is appealing to develop models that adapt to several missingness patterns [26]. In this work, we propose an adaptive quantile regression approach by designing models with adaptive parameters and leveraging multiplication with the missingness indicator [27]. That is, we set the parameters of the quantile regression model as a function of the missingness pattern. The model mainly consists of two modules. One is responsible for feature extraction, taking the original features with missing values as inputs, whereas the other is designed for nonlinear mapping. By using masked weight matrices in the feature extraction module, we are able to calculate model predictions from observations only. Specifically, the bias (a part of the parameters) in this module is set as a function of the missingness pattern. Then, the latent features are fed into the following module, i.e., a group of MLPs, which yield several quantile functions guided by the corresponding pinball losses. To avoid the quantile-crossing phenomenon, we adopt a multi-task framework similar to [28], where higher quantiles are derived by adding non-negative increments, modeled by neural networks, to lower quantiles. We validate the proposed approach based on data from the wind toolkit [29], where values are removed according to designed missingness mechanisms (including MAR and MNAR mechanisms). Case studies demonstrate that the proposed model achieves the state of the art in terms of the continuous ranked probability score (CRPS), especially in MNAR cases.
In a nutshell, we mainly contribute a probabilistic forecasting approach with missing values by designing quantile regression models with adaptive parameters, which is applicable to both MAR and MNAR cases. The paper is organized as follows. Section II describes the preliminaries of probabilistic wind power forecasting and quantile regression. Section III formulates the problem, whereas section IV presents the proposed approach. Section V presents the setups of case studies and Section VI presents the results. Section VII concludes this paper.
**Notations**: we denote random variables as uppercase letters (such as \(Y\)), and their realizations as lowercase letters (such as \(y\)). We denote time as \(t\) and use it as subscripts to represent random variables and realizations at time \(t\), for instance, \(Y_{t}\) and \(y_{t}\). Missing values are denoted as \(\mathtt{NA}\), and the observations blurred with missing values are denoted as \(\tilde{y}_{t}\in\mathbb{R}\cup\mathtt{NA}\). Missingness indicators are the realizations of random variable \(M_{t}\) and denoted as \(m_{t}\in\{0,1\}\), where \(m_{t}=1\) implies \(\tilde{y}_{t}=\mathtt{NA}\) and \(m_{t}=0\) implies \(\tilde{y}_{t}=y_{t}\).
## II Preliminaries
### _Probabilistic Wind Power Forecasting_
Probabilistic wind power forecasting aims at communicating the probability distribution of wind power generation at a future time with lead time \(k\), i.e., \(Y_{t+k}\). It often relies on a model \(\mathcal{M}\) with parameters \(\mathbf{\theta}\), and leverages the information up to the current time \(t\), i.e., \(\mathbf{x}_{t}\). The information \(\mathbf{x}_{t}\) may include weather variables and wind power generation at previous times. In this work, let us assume that \(\mathbf{x}_{t}\) consists of lagged wind power generation values of length \(h\), i.e., \(\mathbf{x}_{t}=[y_{t-h+1},y_{t-h+2},\cdots,y_{t}]^{\top}\). Denote the cumulative distribution function (c.d.f.) of \(Y_{t+k}\) as \(F_{t+k}(y)\). Then, probabilistic wind power forecasting can be described as
\[\hat{F}_{t+k}(y)=F_{t+k}(y|\mathbf{x}_{t};\mathcal{M},\mathbf{\theta}). \tag{1}\]
The model \(\mathcal{M}\) can be set as some kind of distributional model such as logit-normal distribution [8]. Alternatively, it can be set as a group of increasing quantiles. Denote the \(\alpha\)-th quantile of \(Y_{t+k}\) as \(q^{\alpha}_{t+k}\), which is defined as
\[q^{\alpha}_{t+k}=F^{-1}_{t+k}(\alpha). \tag{2}\]
Then the forecast for distribution \(F_{t+k}(y)\) can be also derived as
\[\{\hat{q}^{\alpha_{1}}_{t+k},\hat{q}^{\alpha_{2}}_{t+k},\cdots,\hat{q}^{\alpha _{m}}_{t+k}\},\ \alpha_{1}<\alpha_{2}<\cdots<\alpha_{m},\]
where \(\hat{q}^{\alpha_{i}}_{t+k}\) is the estimated \(\alpha_{i}\)-th quantile. Such quantiles can be derived via quantile regression [6]. Let \(g(\mathbf{x};\mathbf{\theta},\alpha)\) represent a quantile function, such that
\[q^{\alpha}_{t+k}=g(\mathbf{x}_{t};\mathbf{\theta},\alpha). \tag{3}\]
In particular, it can be represented as the inner product of a coefficient vector and latent features learned via machine learning [30]. Denote the coefficient as \(\mathbf{w}\) and the features as \(\phi(\mathbf{x}_{t};\mathbf{\theta}_{\phi})\), i.e., \(\mathbf{\theta}=\{\mathbf{w},\mathbf{\theta}_{\phi}\}\). We describe the function as
\[g(\mathbf{x}_{t};\mathbf{\theta},\alpha)=\mathbf{w}^{\top}\phi(\mathbf{x}_{t};\mathbf{\theta}_{\phi}). \tag{4}\]
The parameter \(\mathbf{\theta}\) can be estimated via machine learning based on historical data \(\{(\mathbf{x}_{t},y_{t+k})|t=1,2,\cdots,n\}\). Concretely, they are estimated by minimizing the pinball loss \(\mathcal{L}\), i.e.,
\[\mathcal{L}=\frac{1}{n}\sum_{t}\ell(y_{t+k},g(\mathbf{x}_{t};\mathbf{\theta},\alpha))\]
where \(\ell(y_{t+k},g(\mathbf{x}_{t};\mathbf{\theta},\alpha))\) is defined as
\[\begin{split}&\ell(y_{t+k},g(\mathbf{x}_{t};\mathbf{\theta},\alpha))=\\ &\max(\alpha(y_{t+k}-g(\mathbf{x}_{t};\mathbf{\theta},\alpha)),(\alpha-1)( y_{t+k}-g(\mathbf{x}_{t};\mathbf{\theta},\alpha))).\end{split} \tag{5}\]
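For reference, a minimal NumPy implementation of the pinball loss of Eq. (5), averaged over a sample, reads as follows.

```python
import numpy as np

def pinball_loss(y, q, alpha):
    """Average pinball loss of Eq. (5) over targets y and quantile predictions q."""
    diff = y - q
    return np.mean(np.maximum(alpha * diff, (alpha - 1.0) * diff))

# sanity check: at alpha = 0.5 the pinball loss is half the mean absolute error
y = np.array([0.2, 0.8, 0.5])
q = np.array([0.3, 0.6, 0.5])
assert np.isclose(pinball_loss(y, q, 0.5), 0.5 * np.mean(np.abs(y - q)))
```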
### _Missingness Mechanism_
In the modern statistical theory [19], missingness mechanisms can be classified into three categories: MCAR, MAR, and MNAR. Intuitively, the observations for both \(\mathbf{x}_{t}\) and \(y_{t+k}\) may contain missing values. Therefore, we write the observations as \(\tilde{\mathbf{x}}_{t}\) and \(\tilde{y}_{t+k}\). We introduce a missingness indicator vector \(\mathbf{m}_{t}\in\{0,1\}^{d}\) for \(\tilde{\mathbf{x}}_{t}\), which is the realization of a random variable \(\mathbf{M}_{t}\), s.t., \(\tilde{x}_{t,i}=x_{t,i}\) when \(m_{t,i}=0\) and \(\tilde{x}_{t,i}=\textsc{NA}\) when \(m_{t,i}=1\). The observed part of \(\mathbf{x}_{t}\) is denoted as \(\mathbf{x}_{o,t}=o(\mathbf{x}_{t},\mathbf{m}_{t})\), whereas the missing part of \(\mathbf{x}_{t}\) is denoted as \(\mathbf{x}_{m,t}=o(\mathbf{x}_{t},1-\mathbf{m}_{t})\).
Taking the multivariable \(\mathbf{x}_{t}\) as an example, the parametric model for the joint distribution of the data sample and its mask is described as
\[f(\mathbf{x}_{t},\mathbf{m}_{t};\xi,\psi)=f(\mathbf{x}_{t};\xi)f(\mathbf{m}_{t}|\mathbf{x}_{t}; \psi), \tag{6}\]
where \(\xi\) and \(\psi\) represent the parameters of distribution for data and mask. The data sample can be split into an observed part \(\mathbf{x}_{o,t}\) and a missing part \(\mathbf{x}_{m,t}\), i.e., \(\mathbf{x}_{t}=(\mathbf{x}_{o,t},\mathbf{x}_{m,t})\). Then the missingness mechanism is referred to as MCAR if
\[f(\mathbf{m}_{t}|\mathbf{x}_{t};\psi)=f(\mathbf{m}_{t};\psi),\]
which means the missingness is independent of the data sample. The missingness mechanism is referred to as MAR if
\[f(\mathbf{m}_{t}|\mathbf{x}_{t};\psi)=f(\mathbf{m}_{t}|\mathbf{x}_{o,t};\psi).\]
That is, the missingness is dependent on \(\mathbf{x}_{o,t}\), yet independent of \(\mathbf{x}_{m,t}\). Similarly, the missingness mechanism is referred to as MNAR if
\[f(\mathbf{m}_{t}|\mathbf{x}_{t};\psi)=f(\mathbf{m}_{t}|\mathbf{x}_{o,t},\mathbf{x}_{m,t};\psi),\]
i.e., the missingness is dependent on both the observed and missing values. In fact, neither the complete data nor the missingness mechanism is available; we only have access to \((\tilde{\mathbf{x}}_{t},\tilde{y}_{t+k})\). In this work, we aim at developing a forecasting approach based only on the observations, without assumptions on the missingness mechanism.
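To make the three mechanisms concrete, the sketch below generates masks under MCAR, MAR, and MNAR assumptions on synthetic features; the self-masking MNAR rule mimics curtailment at high power levels, and all thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcar_mask(x, p=0.2):
    """MCAR: missingness independent of all data values."""
    return rng.random(x.shape) < p

def mar_mask(x, p=0.5):
    """MAR: later features go missing depending only on the observed feature 0."""
    m = np.zeros(x.shape, dtype=bool)
    m[:, 1:] = (x[:, :1] > np.median(x[:, 0])) & (rng.random(x[:, 1:].shape) < p)
    return m

def mnar_mask(x, q=0.8):
    """MNAR (self-masking): large values go missing, mimicking curtailment
    at high wind power levels."""
    return x > np.quantile(x, q)

x = rng.random((1000, 8))           # hypothetical lagged wind power features
for mask in (mcar_mask(x), mar_mask(x), mnar_mask(x)):
    print(mask.mean())              # overall missing rate
```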
## III Methodology
In this section, we illustrate the approach for developing models directly from the observations \((\tilde{\mathbf{x}}_{t},\tilde{y}_{t+k})\). In what follows, we first discuss the challenge that missing values pose to probabilistic wind power forecasting, i.e., the discrete nature of NA. Then, we describe the idea for dealing with this issue, that is, adapting the model parameters to the missingness patterns.
### _Problem Formulation_
As \((\tilde{\mathbf{x}}_{t},\tilde{y}_{t+k})\) makes no contribution to the parameter estimation if \(\tilde{y}_{t+k}\) is missing, we only focus on samples where \(\tilde{y}_{t+k}\) is observed in what follows. Then, the estimate of the parameters is derived via
\[\hat{\mathbf{\theta}}=\arg\min_{\mathbf{\theta}}\frac{1}{n}\sum_{t}\ell(\tilde{y}_{t+ k},g(\tilde{\mathbf{x}}_{t};\mathbf{\theta},\alpha)), \tag{7}\]
though \(\tilde{\mathbf{x}}_{t}\) contains missing values. Unfortunately, most off-the-shelf machine learning methods do not work with the half-discrete nature of \(\mathbb{R}\cup\textsc{NA}\). Alternatively, we are allowed to estimate a quantile function for each missingness pattern by selecting the samples with that missingness pattern, as suggested by [16]. For instance, for a missingness pattern \(\mathcal{P}_{i}\in\{0,1\}^{d}\), we denote the corresponding quantile function as \(g_{\mathcal{P}_{i}}\) with parameters \(\mathbf{\theta}_{i}\). It is expressed as
\[g_{\mathcal{P}_{i}}(\tilde{\mathbf{x}}_{t};\mathbf{\theta}_{i},\alpha)=g(\mathbf{x}_{o,t };\alpha,\mathbf{\theta}_{i},\mathbf{m}_{t}=\mathcal{P}_{i}). \tag{8}\]
Therefore, the quantile function can be constructed as a combination of functions
\[g(\tilde{\mathbf{x}}_{t};\mathbf{\theta},\alpha)=\begin{cases}&g_{\mathcal{P}_{1}}( \tilde{\mathbf{x}}_{t};\mathbf{\theta}_{1},\alpha),\ \mathbf{m}_{t}=\mathcal{P}_{1},\\ &g_{\mathcal{P}_{2}}(\tilde{\mathbf{x}}_{t};\mathbf{\theta}_{2},\alpha),\ \mathbf{m}_{t}= \mathcal{P}_{2},\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\ldots\\ &g_{\mathcal{P}_{2^{d}}}(\tilde{\mathbf{x}}_{t};\mathbf{\theta}_{2^{d}},\alpha),\ \mathbf{m}_{t}= \mathcal{P}_{2^{d}}.\end{cases} \tag{9}\]
Intuitively, it requires estimating \(2^{d}\) sub-models, which scales exponentially with the dimension and is therefore impractical.
Fig. 1: The sketch of the proposed approach, where blank blocks in \(\tilde{\mathbf{x}}_{t}\) indicate missing values.
### _Method_
To address the aforementioned tractability issue, we seek to develop a model whose parameters can adapt to missingness patterns. It is described as
\[g(\tilde{\mathbf{x}}_{t};\mathbf{\theta}(\mathbf{m}_{t}),\alpha).\]
For that, we expect the feature function \(\phi(\cdot)\) can adapt to missingness patterns, i.e.,
\[g(\tilde{\mathbf{x}}_{t};\mathbf{\theta}(\mathbf{m}_{t}),\alpha)=\mathbf{w}^{\top}\phi(\tilde{ \mathbf{x}}_{t};\mathbf{\theta}_{\phi}(\mathbf{m}_{t})). \tag{10}\]
Specifically, we design the feature function \(\phi(\cdot)\) as a composition of \(\phi_{1}(\cdot)\) and \(\phi_{2}(\cdot)\), where \(\phi_{1}(\cdot)\) is a linear function with parameters \(\mathbf{\theta}_{\phi_{1}}\) and \(\phi_{2}(\cdot)\) is a nonlinear function with parameters \(\mathbf{\theta}_{\phi_{2}}\). We have
\[\phi(\tilde{\mathbf{x}}_{t};\mathbf{\theta}_{\phi}(\mathbf{m}_{t}))=\phi_{2}(\phi_{1}( \tilde{\mathbf{x}}_{t};\mathbf{\theta}_{\phi_{1}}(\mathbf{m}_{t})),\mathbf{\theta}_{\phi_{2}}), \tag{11}\]
where \(\mathbf{\theta}_{\phi_{1}}\) is a function of \(\mathbf{m}_{t}\). Let \(\mathbf{z}_{t}\) denote the output of \(\phi_{1}(\tilde{\mathbf{x}}_{t};\mathbf{\theta}_{\phi_{1}}(\mathbf{m}_{t}))\), i.e.,
\[\mathbf{z}_{t}=\phi_{1}(\tilde{\mathbf{x}}_{t};\mathbf{\theta}_{\phi_{1}}(\mathbf{m}_{t})).\]
Obviously, NA will not contribute to \(\mathbf{z}_{t}\). Thus, we define \(\phi_{1}(\cdot)\) as a function that outputs the linear combination of observations in \(\tilde{\mathbf{x}}_{t}\), which is described as
\[\begin{split} z_{t,i}=&\sum_{j:m_{t,j}=0}\mathbf{W}_{ \phi_{1}}(\mathbf{m}_{t})[i,j]\tilde{x}_{t,j}+\mathbf{b}_{\phi_{1}}(\mathbf{m}_{t})[i]\\ =&\mathbf{W}_{\phi_{1}}(\mathbf{m}_{t})[i,:]\mathrm{diag}(1 -\mathbf{m}_{t})\tilde{\mathbf{x}}_{t}+\mathbf{b}_{\phi_{1}}(\mathbf{m}_{t})[i],\end{split} \tag{12}\]
where \(\mathbf{W}_{\phi_{1}}(\mathbf{m}_{t})\) and \(\mathbf{b}_{\phi_{1}}(\mathbf{m}_{t})\) are the parameters of \(\phi_{1}(\cdot)\), i.e., \(\mathbf{\theta}_{\phi_{1}}=\{\mathbf{W}_{\phi_{1}}(\mathbf{m}_{t}),\mathbf{b}_{\phi_{1}}(\bm {m}_{t})\}\), and \(\mathrm{diag}(\cdot)\) returns a square diagonal matrix with the elements of the input vector. Then \(\mathbf{z}_{t}\) is used as the input to \(\phi_{2}(\cdot)\), i.e.,
\[q^{\alpha}_{t+k}=\mathbf{w}^{\top}\phi_{2}(\mathbf{z}_{t};\mathbf{\theta}_{\phi_{2}}).\]
## IV Proposed Approach
With the main idea described in Section III, we now describe the proposed approach in detail. The linear function \(\phi_{1}(\cdot)\) is implemented similarly to the NeuMiss block proposed in [27], whereas the nonlinear function \(\phi_{2}(\cdot)\) is implemented as an MLP. In particular, we place the quantile regression model in a multi-task framework and ensure the monotonicity of quantiles. We sketch the proposed approach in Figure 1; it consists of a feature extraction block followed by several MLPs, each linked to the previous one through successive additions.
### _Feature Extraction_
In the feature extraction block, we replace NA in \(\tilde{\mathbf{x}}_{t}\) with 0, and denote the result as \(\hat{\mathbf{x}}_{t}\); this operates as
\[\hat{\mathbf{x}}_{t}=\tilde{\mathbf{x}}_{t}\odot(1-\mathbf{m}_{t}),\]
where \(\odot\) is an elementwise product operator. Then equation (12) can be also rewritten compactly as
\[\mathbf{z}_{t}=\mathbf{W}_{\phi_{1}}(\mathbf{m}_{t})\hat{\mathbf{x}}_{t}+\mathbf{b}_{\phi_{1}}(\mathbf{m}_{t}). \tag{13}\]
In particular, we use a special case of adaptive regression where \(\mathbf{W}_{\phi_{1}}(\mathbf{m}_{t})\) is static whereas \(\mathbf{b}_{\phi_{1}}(\mathbf{m}_{t})\) is a function of the missingness pattern. Specifically, we set \(\mathbf{b}_{\phi_{1}}(\mathbf{m}_{t})\) as
\[\mathbf{b}_{\phi_{1}}(\mathbf{m}_{t})=\mathbf{b}_{\phi_{1}}\odot\mathbf{m}_{t}. \tag{14}\]
Then equation (13) can be rewritten as
\[\mathbf{z}_{t}=\mathbf{W}_{\phi_{1}}\hat{\mathbf{x}}_{t}+\mathbf{b}_{\phi_{1}}\odot\mathbf{m}_{t}. \tag{15}\]
In this work, we design the feature extraction block similarly to the NeuMiss model proposed in [27], as illustrated in Figure 2. Consider stacking \(l_{\phi_{1}}\) such blocks via skip connections. For the \(l\)-th block, we denote its input as \(\mathbf{h}_{\phi_{1}}^{l-1}\) and its output as \(\mathbf{h}_{\phi_{1}}^{l}\). In particular, \(\mathbf{h}_{\phi_{1}}^{0}=\hat{\mathbf{x}}_{t}\). For each block, we have
\[\mathbf{h}_{\phi_{1}}^{l}=\mathbf{W}_{\phi_{1}}^{l}\mathbf{h}_{\phi_{1}}^{l-1}\odot(1-\mathbf{ m}_{t})+\mathbf{h}_{\phi_{1}}^{0}, \tag{16}\]
where \(\mathbf{W}_{\phi_{1}}^{l}\) is the weight matrix in the \(l\)-th block. The output of the last block is denoted as \(\mathbf{h}_{\phi_{1}}^{l_{\phi_{1}}}\). Then, the latent features \(\mathbf{z}_{t}\) are derived by adding \(\mathbf{h}_{\phi_{1}}^{l_{\phi_{1}}}\) to \(\mathbf{b}_{\phi_{1}}\odot\mathbf{m}_{t}\), i.e.,
\[\mathbf{z}_{t}=\mathbf{h}_{\phi_{1}}^{l_{\phi_{1}}}+\mathbf{b}_{\phi_{1}}\odot\mathbf{m}_{t}, \tag{17}\]
which is further fed into nonlinear functions to yield quantiles.
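To make the data flow concrete, the following NumPy sketch traces one sample through equations (13)-(17): the mask zeroes out NA entries, stacked blocks with skip connections refine the representation, and a pattern-dependent bias is added at the end. All names, dimensions, and weight values are illustrative placeholders, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_blocks = 6, 3                      # feature dimension, number of stacked blocks

x_tilde = rng.uniform(size=d)           # raw features (before masking)
m = np.array([0, 1, 0, 0, 1, 0])        # missingness pattern: 1 = missing (NA)
x_hat = x_tilde * (1 - m)               # NA entries replaced by 0

# One weight matrix per block and a single pattern-adaptive bias, cf. Eqs. (14)-(17)
W = [rng.normal(scale=0.3, size=(d, d)) for _ in range(n_blocks)]
b = rng.normal(scale=0.3, size=d)

h = x_hat.copy()                        # h^0 = x_hat
for l in range(n_blocks):
    # Eq. (16): mask the linear output and add the skip connection to h^0
    h = (W[l] @ h) * (1 - m) + x_hat

z = h + b * m                           # Eq. (17): bias adapts to the missingness pattern
print("latent features z:", z)
```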
### _Non-crossing Quantile Regression Neural Network_
Fig. 2: The structure of the feature extraction block.
Specifically, we define a non-negative nonlinear function \(\phi_{2,\alpha_{i}}\) for each quantile level \(\alpha_{i}\). Accordingly, we denote the corresponding coefficient and parameters as \(\mathbf{w}_{\alpha_{i}}\) and \(\mathbf{\theta}_{\phi_{2},\alpha_{i}}\), respectively. The quantile function \(q_{t+k}^{\alpha_{1}}\) is derived by the composition of \(\phi_{1}(\cdot)\) and \(\phi_{2,\alpha_{1}}(\cdot)\), i.e.,
\[\begin{split} q_{t+k}^{\alpha_{1}}&=g(\tilde{\mathbf{x}}_{t};\mathbf{\theta}_{1},\alpha_{1})\\ &=\mathbf{w}_{\alpha_{1}}^{\top}\phi_{2,\alpha_{1}}(\phi_{1}(\tilde{\mathbf{x}}_{t};\mathbf{\theta}_{\phi_{1}});\mathbf{\theta}_{\phi_{2},\alpha_{1}}). \end{split} \tag{18}\]
As for the quantile level \(\alpha_{i}\)\((i>1)\), we derive the quantile function \(q_{t+k}^{\alpha_{i}}\) by adding \(q_{t+k}^{\alpha_{i-1}}\) with an increment, which is set as the composition of \(\phi_{1}(\cdot)\) and \(\phi_{2,\alpha_{i}}(\cdot)\). It is described as
\[\begin{split} q_{t+k}^{\alpha_{i}}&=g(\tilde{\mathbf{x} }_{t};\mathbf{\theta}_{i},\alpha_{i})\\ &=q_{t+k}^{\alpha_{i-1}}+\mathbf{w}_{\alpha_{i}}^{\top}\phi_{2,\alpha _{i}}(\phi_{1}(\tilde{\mathbf{x}}_{t};\mathbf{\theta}_{\phi_{1}});\mathbf{\theta}_{\phi_{2 },\alpha_{i}}).\end{split} \tag{19}\]
As \(\phi_{2,\alpha_{i}}\) is non-negative, we have
\[q_{t+k}^{\alpha_{i-1}}\leq q_{t+k}^{\alpha_{i}}.\]
Each function \(\phi_{2,\alpha_{i}}\) is implemented with an \(l_{\phi_{2}}\)-layer MLP, where the \(l\)-th layer takes \(\mathbf{h}_{\phi_{2},\alpha_{i}}^{l-1}\) as input and outputs \(\mathbf{h}_{\phi_{2},\alpha_{i}}^{l}\). In particular, \(\mathbf{z}_{t}\) serves as \(\mathbf{h}_{\phi_{2},\alpha_{i}}^{0}\). The \(l\)-th layer operates as
\[\mathbf{h}_{\phi_{2},\alpha_{i}}^{l}=\mathbf{W}_{\phi_{2},\alpha_{i}}^{l}\mathbf{h}_{\phi_ {2},\alpha_{i}}^{l-1}+\mathbf{b}_{\phi_{2},\alpha_{i}}^{l}, \tag{20}\]
where \(\mathbf{W}_{\phi_{2},\alpha_{i}}^{l}\) and \(\mathbf{b}_{\phi_{2},\alpha_{i}}^{l}\) respectively represent the weight and bias of this layer. It is followed by a ReLU function \(\sigma(\cdot)\), which operates as
\[\sigma(\mathbf{h}_{\phi_{2},\alpha_{i}}^{l})=\max(\mathbf{h}_{\phi_{2},\alpha_{i}}^{l},\mathbf{0}), \tag{21}\]
where \(\max\) returns the elementwise maximum of \(\mathbf{h}_{\phi_{2},\alpha_{i}}^{l}\) and \(\mathbf{0}\).
To estimate all the parameters, we minimize the total loss, which is defined as
\[\mathcal{L}=\frac{1}{n\times m}\sum_{t}\sum_{i}\ell(\tilde{y}_{t+k},g(\tilde{ \mathbf{x}}_{t};\mathbf{\theta}_{i},\alpha_{i})). \tag{22}\]
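A minimal forward-pass sketch of the non-crossing construction in equations (18)-(22) follows, assuming random untrained parameters and a one-hidden-layer MLP per quantile level. A softplus transform is used here as one possible way to keep each head's output non-negative (the paper only requires \(\phi_{2,\alpha_{i}}\) to be non-negative); applying it to the first level as well is a simplification that is harmless for capacity-normalized wind power. The pinball (quantile) loss stands in for \(\ell\).

```python
import numpy as np

rng = np.random.default_rng(1)
d, levels = 6, [0.1, 0.5, 0.9]          # latent dim and quantile levels a_1 < ... < a_m
z = rng.normal(size=d)                  # latent features from the extraction block

def mlp(z, W1, b1, w2):
    """One-hidden-layer MLP with ReLU, cf. Eqs. (20)-(21)."""
    h = np.maximum(W1 @ z + b1, 0.0)
    return w2 @ h

params = [(rng.normal(size=(8, d)), rng.normal(size=8), rng.normal(size=8))
          for _ in levels]              # one head per quantile level

q = []
for i, (W1, b1, w2) in enumerate(params):
    inc = np.log1p(np.exp(mlp(z, W1, b1, w2)))   # non-negative increment (softplus)
    q.append(inc if i == 0 else q[-1] + inc)     # Eq. (19): q^{a_i} = q^{a_{i-1}} + increment
print("non-crossing quantiles:", q)

def pinball(y, q_hat, alpha):
    """Quantile (pinball) loss used inside Eq. (22)."""
    e = y - q_hat
    return np.maximum(alpha * e, (alpha - 1) * e)

y = 0.4                                  # placeholder observation
total = np.mean([pinball(y, qi, a) for qi, a in zip(q, levels)])
print("average pinball loss:", total)
```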
## V Case Study
In this section, we describe the data used for case validation, which come from the open-source wind toolkit [29]. Then we introduce the experimental setups, where missingness is simulated based on different mechanisms. After that, we describe the benchmark models and qualification metrics.
### _Data Description_
As we focus on very short-term wind power forecasting, we use a wind power measurement dataset collected from a wind farm located in South Carolina. It is an hourly dataset whose values are normalized by the farm's capacity, and it spans from 2007 to 2013. In each case, we use the first \(70\%\) of the data for training models, the following \(10\%\) for validation, and the last \(20\%\) for genuine forecast verification.
### _Experimental Setups_
In this work, we simulate missingness based on both MAR and MNAR mechanisms. For the MAR mechanism, we consider missingness that spreads sporadically and in blocks, though it is independent of the wind power generation values. As for the MNAR mechanism, we consider a self-masking case, where values greater than a threshold are missing. The three designed cases are described as follows:
#### V-A1 Case 1
In this case, we randomly remove \(20\%\) of data, which spread sporadically over the dataset.
#### V-A2 Case 2
In this case, we remove data in blocks, which are randomly located over the dataset. The length of each block is uniformly distributed between 5 and 30 steps, whereas the number of blocks is fixed as 300.
#### V-A3 Case 3
In this case, we remove data whose values are greater than 0.87.
In each case, we consider lead times of 1, 2, and 3, and use wind power generation values at previous time steps as features. As feature selection is not the focus of this work, we choose the features empirically as 6 lags. Sophisticated feature selection approaches can certainly be used.
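For reproducibility, the three missingness mechanisms can be simulated along the following lines; the stand-in series, seed, and exact sampling details are illustrative assumptions rather than the authors' generation code.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.uniform(size=5000)              # stand-in for a normalized wind power series

# Case 1: 20% of values missing sporadically (MAR)
m1 = rng.uniform(size=y.size) < 0.20

# Case 2: 300 missing blocks with lengths uniform in [5, 30] (MAR)
m2 = np.zeros(y.size, dtype=bool)
for _ in range(300):
    L = rng.integers(5, 31)             # block length in [5, 30]
    s = rng.integers(0, y.size - L)     # block start
    m2[s:s + L] = True

# Case 3: self-masking above the 0.87 threshold (MNAR)
m3 = y > 0.87

for name, m in [("case 1", m1), ("case 2", m2), ("case 3", m3)]:
    print(name, f"missing rate = {m.mean():.3f}")
```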
### _Benchmark Models_
Three types of models are considered as benchmarks: naive models, "impute, then predict" strategy-based models, and "universal imputation" strategy-based models. Besides, we set a quantile regression model trained on complete data as a reference. We describe them as follows:
#### V-C1 Climatology
It is a naive model that estimates the empirical distribution of wind power generation based on historical samples.
#### V-C2 IM-Gaussian
It is an "impute, then predict" strategy-based model. Missing values are imputed via MissForest [31], based on which a parametric probabilistic forecasting model with Gaussian distributional assumption is developed.
#### V-C3 IM-Qr
It is an "impute, then predict" strategy-based model. Missing values are also imputed via MissForest, based on which quantile regression models are developed.
#### V-C4 DeepAR
It is a state-of-the-art "impute, then predict" strategy-based model where imputation and forecasting are performed simultaneously.
#### V-C5 Ui
It is a "universal imputation" strategy-based model proposed in [23] based on a fully conditional specification [18]. For further descriptions, we refer readers to [23].
#### V-C6 R-Qr
It is a quantile regression model trained on the complete dataset, which serves as a reference.
### _Qualification Metrics_
To assess the quality of forecasts, we verify their calibration and sharpness. Concretely, the calibration of predictive densities is assessed with reliability diagrams. The sharpness of forecasts is assessed with the average width of central prediction intervals, which reveals how the predictive densities concentrate information. In addition, we assess the quality of forecasts with a skill score, namely the continuous ranked probability score (CRPS). Given the lead time \(k\), we denote the predictive c.d.f. of wind power generation at time \(t+k\) as \(\hat{F}_{t+k}\), and the real generation value as \(y_{t+k}\). The CRPS is calculated via
\[\mathrm{CRPS}(\hat{F}_{t+k},y_{t+k})=\int_{y}(\hat{F}_{t+k}(y)-\mathcal{I}(y-y_{t +k}))^{2}dy,\]
where \(\mathcal{I}(\cdot)\) is a step function. We report the average CRPS of all test samples for each lead time. For further information on forecast verification, we refer readers to [1].
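As a worked example, the CRPS integral above can be approximated numerically from predictive samples by building an empirical c.d.f. on a grid; the predictive distribution below is a made-up placeholder.

```python
import numpy as np

def crps(samples, y_obs, n_grid=2000):
    """Numerically integrate (F_hat(y) - I(y - y_obs))^2, where F_hat is the
    empirical c.d.f. of the predictive samples and I is the step function."""
    samples = np.sort(samples)
    lo = min(samples[0], y_obs) - 0.1
    hi = max(samples[-1], y_obs) + 0.1
    grid = np.linspace(lo, hi, n_grid)
    F = np.searchsorted(samples, grid, side="right") / samples.size
    H = (grid >= y_obs).astype(float)       # step function I(y - y_obs)
    return np.sum((F - H) ** 2) * (grid[1] - grid[0])

rng = np.random.default_rng(3)
pred = rng.normal(0.5, 0.1, size=1000)      # placeholder predictive samples
print("CRPS:", crps(pred, y_obs=0.55))
```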
## VI Results
The results of the aforementioned three cases are presented in this section, and followed by discussion.
### _Case 1_
The CRPS values of forecasts by the proposed and benchmark models are presented in Table I. Intuitively, the performance of climatology is the worst among all models, as it only communicates forecasts via an unconditional empirical distribution. Unlike common situations where QR often outperforms Gaussian distributional models, the performance of IM-Gaussian and IM-QR is quite close, which suggests that imputation has a complex impact on the training of downstream forecasting models. To our surprise, the performance of DeepAR is the worst among the three "impute, then predict" strategy-based models. In fact, DeepAR imputes missing values via the intermediate results of the recurrent neural network during training, which may have a negative impact on forecasting. The performance of UI and the proposed model is comparable to the reference model. In particular, the UI model slightly outperforms the reference model, which may be due to its robustness to overfitting. The 90% prediction intervals by the proposed model for 144 successive observations when the lead time is 1 are shown in Figure 3.
We present the reliability diagrams and prediction interval widths of forecasts when \(k=1\) in Figure 4. As shown, the reliability diagrams of "impute, then predict" strategy-based models deviate from the ideal case by relatively large distances. The prediction interval widths of the proposed and benchmark models are comparable, all of which are smaller than that of the reference model. The reliability of DeepAR is the worst among all models, whereas its prediction interval widths are also the smallest. The reliability diagrams of UI and the proposed model are close to the ideal case, which reveals that they introduce little bias into the forecasts. Surprisingly, the reliability of the reference model is worse than that of the proposed and UI models, while the reference model yields larger prediction interval widths. This may be caused by overfitting of the reference model.
### _Case 2_
The CRPS values of forecasts by the proposed and benchmark models are presented in Table II. In this case, the differences in CRPS values among all models are smaller than those in case 1. Different from case 1, missingness occurs in blocks in this case, leading to more samples with complete observations. Therefore, missing values have less impact on the quality of forecasts. Among the "impute, then predict" strategy-based models, the performance of DeepAR is still the worst, though the gap between DeepAR and IM-Gaussian/IM-QR is smaller than in case 1. By contrast, the performance of the proposed and UI models is still better than that of the "impute, then predict" strategy-based models, and comparable to the reference model. This suggests that the proposed and UI models are applicable to both sporadic and block missingness.
The reliability diagrams and prediction interval widths of forecasts when \(k=1\) are presented in Figure 5. In this case, the reliability of DeepAR is still the worst among all models. It is seen that the reliability diagram of the proposed model fluctuates around the ideal case, though it remains close to it. This is caused by the monotonicity constraint on the proposed model. Such constraints ensure that higher quantiles are no smaller than lower quantiles, but influence the parameter estimation on the other hand. Further analysis is contained in the following subsection. In general, the performance of the proposed model in reliability and sharpness is quite good.
Fig. 3: 90% prediction intervals by the proposed model for 144 successive observations in case 1 when the lead time is 1.
### _Case 3_
We present the CRPS values of forecasts by the proposed and benchmark models in Table III. Compared to cases 1 and 2, all models achieve better CRPS performance, as high wind power generation values, which tend to incur larger forecast errors, are excluded. Still, the quality of forecasts by the DeepAR model is the worst among all models. The performance of the UI and proposed models is better than that of the "impute, then predict" strategy-based models. In particular, the proposed model achieves the best performance among all models, which reveals that it is applicable to MNAR cases.
Fig. 4: Reliability and sharpness of forecasts by the proposed and benchmark models with \(k=1\) in case 1.
Fig. 5: Reliability and sharpness of forecasts by the proposed and benchmark models with \(k=1\) in case 2.
The reliability diagrams and prediction interval widths are shown in Figure 6. Unlike in cases 1 and 2, the reliability of the UI model deviates from the ideal case to a large extent, which may be due to the fact that fully conditional specification relies on the MAR assumption. In the MNAR case, such an assumption leads to more bias in the parameter estimation. By contrast, the proposed model is more applicable to MNAR cases.
### _Discussion on Non-crossing Constraints_
As described in Section IV.B, we place hard constraints on quantiles to ensure monotonicity, by setting higher quantiles as the sum of lower quantiles and non-negative increments. Certainly, this avoids the embarrassing quantile-crossing phenomenon. However, such constraints also influence parameter estimation. For illustration, we present the reliability diagrams and prediction interval widths of regular quantile regression and the proposed model based on the setting of case 2 in Figure 7. As shown, the regular quantile regression model exhibits a quantile-crossing phenomenon, while the prediction interval widths of the proposed model are wider than those of the regular quantile regression model. In addition, as the proposed model constructs quantiles via addition operations, its quantiles can only be computed sequentially, whereas the regular quantile regression model can yield quantiles in parallel.
### _Training Time_
We present the training time of all models in Table IV, where the time spent on imputation for the "impute, then predict" strategy-based models is not included. It is seen that the proposed model is time-efficient in training. In particular, the training time of the UI model (which is based on fully conditional specification here) increases linearly with the dimension, whereas that of the proposed model scales sublinearly thanks to deep learning techniques.
## VII Conclusion
In this work, we propose an adaptive quantile regression approach for probabilistic wind power forecasting with missing values within the commonly used conditional distribution modeling framework. It is based on deep neural network models, and contains a linear mapping whose bias is adaptive to missingness patterns by design, and a nonlinear mapping that is responsible for yielding quantiles. It is applicable to both missing-at-random and missing-not-at-random cases. In particular, higher quantiles are derived by adding non-negative increments to lower quantiles, which avoids the embarrassing quantile-crossing phenomenon. Case studies demonstrate that the proposed approach achieves state-of-the-art performance in terms of CRPS in both missing-at-random and missing-not-at-random cases, and is time-efficient in training. However, we only consider adaptive biases in this work; more effort can be put into developing adaptive weights in the future. Besides, as the case studies suggest that the hard constraints on quantiles influence parameter estimation while ensuring monotonicity, advanced non-crossing quantile regression methods can be investigated.
|
2310.01967 | Efficient Frontier Management for Collaborative Active SLAM | In autonomous robotics, a critical challenge lies in developing robust
solutions for Active Collaborative SLAM, wherein multiple robots
collaboratively explore and map an unknown environment while intelligently
coordinating their movements and sensor data acquisitions. In this article, we
present an efficient centralized frontier sharing approach that maximizes
exploration by taking into account information gain in the merged map,
distance, and reward computation among frontier candidates and encourages the
spread of agents into the environment. Eventually, our method efficiently
spreads the robots for maximum exploration while keeping SLAM uncertainty low.
Additionally, we also present two coordination approaches, synchronous and
asynchronous to prioritize robot goal assignments by the central server. The
proposed method is implemented in ROS and evaluated through simulation and
experiments on publicly available datasets and similar methods, rendering
promising results. | Muhammad Farhan Ahmed, Matteo Maragliano, Vincent Fremont, Carmine Tommaso Recchiuto, Antonio Sgorbissa | 2023-10-03T11:21:19Z | http://arxiv.org/abs/2310.01967v5 | # Collaborative Active SLAM: Synchronous and Asynchronous Coordination Among Agents*
###### Abstract
In the realm of autonomous robotics, a critical challenge lies in developing robust solutions for Active Collaborative SLAM, wherein multiple robots must collaboratively explore and map an unknown environment while intelligently coordinating their movements and sensor data acquisitions. To this aim, we present two approaches for coordinating a system consisting of multiple robots to perform Active Collaborative SLAM (AC-SLAM) for environmental exploration. Our two coordination approaches, synchronous and asynchronous implement a methodology to prioritize robot goal assignments by the central server. We also present a method to efficiently spread the robots for maximum exploration while keeping SLAM uncertainty low. Both coordination approaches were evaluated through simulation on publicly available datasets, obtaining promising results.
## I Introduction
Autonomous robotics has emerged as a transformative force in the exploration of complex and uncharted environments. From planetary exploration missions to disaster relief operations, the deployment of autonomous robots has demonstrated a revolutionary potential across a diverse range of applications. At the heart of this success lies the robots' ability to autonomously explore an environment while gathering data and constructing detailed maps of the surrounding environment in real-time--a process known as Active Simultaneous Localization and Mapping (A-SLAM).
While considerable progress has been made in this sense, many research works have been recently focused on Active Collaborative SLAM (AC-SLAM), which capitalizes on the power of multiple robots working in collaboration. The potential advantages are manifold, from accelerated mapping in expansive terrains to resilient operation in challenging and dynamic scenarios. However, the utilization of multiple robots in collaborative SLAM is not without its challenges. Coordination, resource allocation, and sensor fusion become critical facets that demand careful consideration. Furthermore, the seamless integration of individual robot efforts into a coherent, unified map poses a non-trivial computational and algorithmic challenge.
For these reasons, we propose here a novel implementation of an AC-SLAM algorithm, which builds on a recently proposed approach for single-agent environment exploration [1]. The algorithm presented in [1] has been extended to a multi-agent domain, where multiple robots collaboratively map an environment. To achieve this aim, both synchronous and asynchronous strategies have been implemented in a centralised approach with a central server, to establish effective communication and coordination of goals among the agents. In the first case, agents await the completion of tasks by all agents before receiving new targets, while in the second implementation robots immediately request new goals from the server once their target has been reached. Additionally, an efficient method was devised to distribute the robots in the environment, considering agent priorities and using reward- and distance-based metrics to optimize goal selection.
The subsequent sections are organized as follows: Section II provides a review of related work, Section III details the steps taken to reach the solution, Section IV shows the experimental results, and Section V presents conclusions and suggestions for future work.
## II Related Work
### _Active SLAM_
In A-SLAM implementations, the robot can actively choose its actions, such as selecting views or locations to investigate, to reduce the uncertainty of its location and map representation. The final goal is to increase the efficiency and accuracy of SLAM by intelligently planning and executing robot operations that produce the most useful data to minimize uncertainty. Hence, A-SLAM turns the problem into an optimization task by including active control in the SLAM procedure. This involves not only estimating the robot pose and map, but also optimizing the robot trajectory and sensor tasks to improve the overall performance. What distinguishes Active SLAM from conventional SLAM methods is the integration of optimal decision-making with SLAM, so as to reduce uncertainty, as reported by [2, 3].
To this aim, the problem is usually modelled with a graph-based approach. Graph-based approaches, like pose graph optimization [4, 5, 6] and bundle adjustment [7, 8], are gaining popularity in Active SLAM due to their capacity to manage large amounts of data and the interaction between robots and the environment. Pose graph optimization treats the environment as a graph, with nodes representing robot positions and edges representing the constraints among the nodes. Bundle adjustment enhances both the 3D points and the sensor/robot positions simultaneously, improving the accuracy of the environment reconstruction. Distributed pose graph optimization and incremental bundle adjustment are adaptations designed for scalability and real-time performance in collaborative active SLAM. Furthermore, factor graphs [5, 9, 10], which are graphical models used to represent a wide variety of problems across robotics, are increasingly used for their ability to handle nonlinear relationships and incorporate prior knowledge.
From a mathematical point of view, graph-based approaches define the environment as \(\mathcal{G}\triangleq(\mathcal{V},\mathcal{E})\)[11], where each vertex \(\mathbf{v}_{i}\in\mathcal{V}\) represents a robot pose, and each edge \(\mathbf{e}_{i}\triangleq(\mathbf{v}_{i},\mathbf{v}_{k})\in\mathcal{E}\) denotes a constraint between two poses. The pose-graph edges are weighted by the covariance matrix \(\mathbf{\Sigma}_{j}\), representing the uncertainty of the estimated poses and landmarks [1]. In \(\mathbf{\Sigma}\), the diagonal entries are variances, whereas the covariances between poses and landmarks lie along the off-diagonal. Larger diagonal values suggest higher uncertainty, while non-zero off-diagonal elements indicate correlations or dependencies between variables. Additional details about graph theory, estimation-over-graph, optimality criteria, and graph connectivity can be found in [10, 12, 13].
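As a toy illustration of how such covariance matrices quantify uncertainty, the snippet below evaluates the D-optimality criterion (used later in Section III) on a small hand-crafted \(\mathbf{\Sigma}\); the matrix values are arbitrary.

```python
import numpy as np

# Toy covariance of an estimated state (poses/landmarks); larger diagonal = more uncertain
Sigma = np.diag([0.04, 0.09, 0.25]) + 0.01 * (np.ones((3, 3)) - np.eye(3))

# One common D-optimality formulation: the n-th root of det(Sigma), i.e. the
# geometric mean of the eigenvalues. Smaller values indicate a tighter estimate.
eigvals = np.linalg.eigvalsh(Sigma)
d_opt = np.exp(np.mean(np.log(eigvals)))
print("eigenvalues:", eigvals)
print("D-opt:", d_opt)
```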
### _Frontiers_
A key component of Active SLAM is frontier detection, which looks for uncharted territory along the boundaries between known and unknown regions, [14]. Frontiers play a pivotal role in augmenting the precision of robot localization within the context of Active SLAM by enabling intelligent exploration and data acquisition strategies, effectively reducing uncertainty and enhancing the map-building and localization processes. In Active SLAM, the process of learning new information is achieved through adaptive data acquisition strategies that guide the robot to explore previously uncharted regions of its environment, acquire informative sensor measurements, and subsequently update its map and localization estimates, thereby progressively enhancing its knowledge and accuracy. Hence, robotic exploration depends heavily on locating and exploring frontiers.
Diverse sensors, including cameras, lidar, and sonar, are used in frontier exploration and are frequently mounted on robotic platforms for various conditions. Using the data from these sensors as input, techniques such as machine learning [15] and Deep Q-Networks (DQN) [16] have been applied. Moreover, [17] combined the DQN architecture with a FastSLAM backend for mapping and obstacle avoidance in uncharted terrain, showcasing the potential of integrated approaches for autonomous navigation.
Once a frontier has been identified, the robot can use path planning algorithms to maximise information gathering while taking into account techniques such as coverage optimisation and travel distance minimization.
### _Collaborative SLAM_
As already mentioned, the A-SLAM problem has been recently extended to the multi-robot domain. When considering multi-robot systems, two primary aspects come into play. Firstly, teams can be either homogeneous, consisting of robots of the same type, or heterogeneous, [18], with various robot types working together. Secondly, the system's architecture can be centralised, decentralised, or distributed, [19]. Centralised control offers precise coordination but is susceptible to delays and single points of failure. On the other hand, decentralised systems distribute control for enhanced robustness and scalability while requiring effective coordination. Distributed systems empower individual robots for autonomous decision-making, providing fault tolerance and adaptability while demanding efficient communication protocols. Sometimes, systems can combine centralised and distributed elements [18], sharing computational tasks among agents while central nodes handle decision-making.
These distinctions guide the design of multi-robot systems tailored to specific applications and requirements. When multi-robot systems are applied to the exploration and mapping of an unknown environment, i.e., AC-SLAM, significant challenges arise. A crucial aspect is distinguishing between "global" and "local" perspectives, [18]. In traditional single-robot SLAM, the local view is the default, with pose and map estimates based on the robot's internal reference frame. In multi-robot scenarios, maintaining consistency between perspectives becomes crucial, requiring robots to describe landmarks using the same coordinate system. Collaborative SLAM aims to establish this global reference frame, enabling robots to collectively perceive and learn from each other's observations.
This challenge can be tackled using both centralised and decentralised configurations. More in detail, in Centralised Collaborative SLAM, a central computer manages information from all agents, centralizing processing and coordination. Some variations optimize information sharing to reduce the central computer's burden, while others incorporate visual cameras for data retrieval. However, Centralised approaches can become unwieldy with a large number of robots, while decentralised approaches allow each robot access to a local view and are suitable for large-scale operations, as demonstrated by [4] and [20]. Decentralised Collaborative SLAM offers scalability and efficient distribution of computational tasks among agents, enhancing privacy and reducing communication bandwidth requirements. Some approaches involve redundancy in data, [21] and [22], where mapping data are shared among robots, while others are designated for computations. Decentralised systems address certain bottlenecks, being suitable for large-scale operations and providing each robot with its local view, [4, 14], but still face challenges like bookkeeping, information double-counting, and synchronization issues. These distinctions highlight the trade-offs between centralised and decentralised approaches, each offering unique advantages and challenges.
Finally, from a technical perspective, Collaborative SLAM consists of two vital components: Front-end processes and Back-end processes, [18]. Front-end processes gather data, produce landmark estimates, and manage loop closures, both intra-robot and inter-robot. Intra-robot loop closures mitigate odometry errors, while inter-robot loop closures
stitch together poses and local maps of different robots, creating a global understanding of the environment. These closures foster cooperation among robots, enhancing map accuracy and localization, [23]. Back-end processes focus on estimating robot and map states using data generated by the front end. It addresses challenges like an initially unknown global reference frame, uncertain starting positions, and the need for consensus among robots. Back-end processes deliver precise estimates, enabling collaborative multi-robot mapping and navigation in complex scenarios. Important front-end works are [7] and [23] for MonoSLAM and Visual Odometry, whereas for back-end implementations the work described in [5] and [24] is worth of mention, presenting the g2o optimization framework and Square Root Information Smoothing (SRIS) respectively.
## III Methodology
While many research works have focused on collaborative strategies for SLAM, or on single-robot Active SLAM, only a few works have dealt with AC-SLAM. However, these approaches present common limitations: they have high computational costs and they do not explicitly implement strategies to speed up map discovery. In this work, we propose an AC-SLAM approach that overcomes these limitations. The approach builds on the implementation presented in [1], where a single robot, controlled by a set of ROS nodes, moves in an environment while autonomously building a map: the agent detects a number of frontier points through the detector nodes, and these points are filtered to form a final list, from which the reward of each available point is computed. More in detail, the selection of one frontier over another is done by calculating a gain that depends on the amount of information that point can provide at the time it is explored. For a set of frontiers, each robot computes a matrix of the form shown in Equation 1 [1], using the _D-opt_ criterion computed on the pose graph.
\[\left[\begin{array}{ccc}\text{Reward}&\text{X}&\text{Y}\\ \hline r_{0}&x_{0}&y_{0}\\ r_{1}&x_{1}&y_{1}\\ \vdots&\vdots&\vdots\\ r_{j}&x_{j}&y_{j}\\ \end{array}\right] \tag{1}\]
In this approach, the chosen frontier is then the one with the highest reward among those in the matrix. The reward is computed through a function that quantifies information gain by evaluating the unexplored cells within a given radius around a frontier point in the map, considering unknown and unoccupied cells, thereby providing a measure of the knowledge gained.
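A minimal sketch of such an information-gain reward on an occupancy grid follows; the grid values, cell conventions, and radius are illustrative assumptions, not the exact implementation of [1].

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 100    # common OccupancyGrid conventions

def info_gain(grid, cx, cy, radius):
    """Count unknown cells within a discretized circle around a frontier point."""
    gain = 0
    r = int(np.ceil(radius))
    for i in range(max(0, cx - r), min(grid.shape[0], cx + r + 1)):
        for j in range(max(0, cy - r), min(grid.shape[1], cy + r + 1)):
            if (i - cx) ** 2 + (j - cy) ** 2 <= radius ** 2 and grid[i, j] == UNKNOWN:
                gain += 1
    return gain

rng = np.random.default_rng(4)
grid = rng.choice([UNKNOWN, FREE, OCCUPIED], p=[0.5, 0.4, 0.1], size=(60, 60))
frontiers = [(10, 10), (30, 45), (50, 20)]
# Rows of the reward matrix in Equation 1: [reward, x, y]
rewards = [(info_gain(grid, x, y, radius=5), x, y) for x, y in frontiers]
print("reward matrix [r, x, y]:", rewards)
```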
We adapted this approach to a multiple-agent architecture by replicating a set of ROS nodes for each robot, as shown in Figure 1, and by adding a central server that receives the list of local frontier points from each robot, computes a global list, and replies with the next target to be reached by the robot. In other words, the server creates a unique list of frontier points to be used by all the agents and also chooses the best goal position for each agent depending on the reward matrix (following the approach of [1]), also using the spread policy that will be described in more detail in Section III-C. The resultant architecture is shown in Figure 3.
In the following, the two methodologies implemented, i.e., synchronous and asynchronous (Section III-A), the management policy of the frontiers (Section III-B), and the spreading policy used to speed up the exploration (Section III-C) are described in detail.
### _Synchronous vs Asynchronous Approach_
The communication between the agents and the server has been implemented with two policies: synchronous and asynchronous.
Fig. 1: This figure shows the _group_ of ROS nodes for each of the agents in the system.
Fig. 3: The figure shows the total architecture of the resultant system.
Fig. 2: The figure shows how the server, composed of three nodes, is connected and communicates with the agents in the system. The _Assigner_ node is the one responsible for the computations.
In the synchronous approach, during the execution of the program, each agent receives the same number of goals. Moreover, each agent waits for all the other robots in the system to reach their goal before starting a new goal procedure. In this case, the central server (Figure 2) has to manage \(n\) different agents at the same time and, during the reward computation, the server is given \(n\) Reward Matrices (Equation 1), one for each robot. A priority among agents has been set so that goal assignment respects this sequence: given two agents \(i\) and \(j\) with \(i<j\), agent \(i\) is assigned a goal before agent \(j\).
In the asynchronous approach, each agent is assigned in sequence as many goals as it can reach, without waiting for the other agents. In this case, the priority is used to choose the winning agent when multiple agents make a request at the same time. Since with this policy an agent with a low priority could be stuck for a long time, a counter keeps track of this prioritization among the robots and, when an agent with a low priority has not been considered for a long time, automatically assigns it the highest priority so that the server will satisfy its request as soon as possible.
### _Frontiers Management_
Each agent identifies a list of frontier points that are merged on the server side. Depending on the extent of the map, the final global list may consist of many points, which can lead to high computational times on the server side. Indeed, the choice of the best target for each robot is performed over the global list of frontier points. For this reason, a strategy to reduce the overall number of frontiers was developed. Also, since we are working with multiple robots, some of the points that are considered frontiers in a local map may be located in a region that is fully mapped in the global map. To solve both of the aforementioned problems, we decided to consider only those points that have a given percentage of unknown cells within a given radius, using a discretized circle and the global _Occupancy Grid_ map. The usage of a discretized circle can lead to the following errors:
* Inclusion error: the discretized circle may include some cells outside the circular boundary. This error leads to false positives, in which some cells are included in the discretized circle but should be excluded.
* Exclusion error: the discretized circle may exclude some cells within the actual circular boundary. This error leads to false negatives, in which some cells are excluded from the discretized circle but should instead be included.
The magnitude of the error depends on the resolution used for the grid: higher resolutions provide a more accurate approximation of the circle and consequently negligible errors. Unfortunately, using this approach to reduce the list of frontier points may not be sufficient to meet the time constraints on the server side. Therefore, we devised an algorithm aimed at further reducing the number of points by adjusting the radius considered above. The algorithm checks whether the global number of points in the list is above a certain threshold: in this case, it recomputes a new list of frontiers after increasing the radius by 0.25 m; conversely, if the number of points is below another fixed threshold, the original list is reprocessed after decreasing the a-priori fixed percentage of unknown cells within the given radius by 10%. This strategy allows us to always have a sufficient number of frontier points in the list, carefully selecting the points that lead to a significant knowledge increase when this number becomes too high.
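The sketch below illustrates this adaptive filtering loop on a random occupancy grid; the thresholds, the iteration guard, and the use of cell units for the radius are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
grid = rng.choice([-1, 0, 100], p=[0.4, 0.5, 0.1], size=(80, 80))  # -1 = unknown
points = [tuple(p) for p in rng.integers(5, 75, size=(200, 2))]

def unknown_frac(p, r):
    """Fraction of unknown cells inside a discretized circle of radius r."""
    x, y = p
    ri = int(np.ceil(r))
    cells = unknown = 0
    for i in range(max(0, x - ri), min(grid.shape[0], x + ri + 1)):
        for j in range(max(0, y - ri), min(grid.shape[1], y + ri + 1)):
            if (i - x) ** 2 + (j - y) ** 2 <= r ** 2:
                cells += 1
                unknown += grid[i, j] == -1
    return unknown / max(cells, 1)

def filter_frontiers(points, radius=3.0, min_frac=0.5, max_points=50, min_points=10):
    kept = list(points)
    for _ in range(20):                         # guard against oscillation in this sketch
        kept = [p for p in points if unknown_frac(p, radius) >= min_frac]
        if len(kept) > max_points:
            radius += 0.25                      # stricter: larger neighbourhood required
        elif len(kept) < min_points:
            min_frac *= 0.9                     # looser: 10% lower percentage threshold
        else:
            break
    return kept

print("kept", len(filter_frontiers(points)), "of", len(points), "frontier points")
```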
### _Spread Policy_
To choose targets that allow the agents to explore the map efficiently, a specific spread policy has been implemented. More in detail, the server keeps track of the already assigned goals for both the asynchronous and the synchronous approach. When a target goal for one agent is selected, the server updates the reward for all other agents by using a subtractive factor, as shown in Equation 2.
\[R_{new}=R_{old}-k \tag{2}\]
and:
\[k=\frac{K}{d^{2}} \tag{3}\]
where:
\[K=\frac{\texttt{max\ reward}}{\texttt{number\ of\ targets\ assigned}} \tag{4}\]
The numerator \(K\) in Equation 4 is set at run time, since it depends on the maximum reward for each agent and on the number of targets already assigned. The denominator \(d\) represents the Euclidean distance between the last chosen goal and the frontier points in the matrix. In other words, when the server assigns a target to robot \(j\), it reprocesses all the reward matrices of the other agents, updating the rewards with a subtractive factor \(k\), which strictly depends on the position of the target assigned to robot \(j\) (\(d\) is the distance between this target position and the frontier points in the matrix).
Since the parameter \(k\) is inversely dependent on the distance, the closer a point is to already chosen goals, the less likely it is to be selected as a next goal, thus achieving the task of spreading the goals in the environment.
Fig. 4: The figure shows two points and the radius around them used to compute the percentage of unknown cells.
The parameter \(K\) has been carefully chosen for several reasons, among which:
* normalizing \(K\) with respect to the size of the rewards in each matrix, which allows for a subtractive factor that is scaled to the Reward Matrix of each single agent;
* taking into account the number of already selected points, effectively distributing the reward "budget" among them. Indeed, by dividing the maximum reward by the total number of selected points, when the number of targets already explored becomes significant, each point will only receive a smaller portion of the total reward, resulting in a more limited effect of the subtractive parameter (a sketch of this update follows the list).
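A compact sketch of the reward update in Equations 2-4 follows; it applies the penalty for every previously assigned goal, and all numeric values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
# Reward matrix of one remaining agent: columns [reward, x, y], cf. Equation 1
M = np.column_stack([rng.uniform(5, 20, 8), rng.uniform(0, 40, (8, 2))])

def apply_spread(M, assigned_goals):
    """Penalize frontiers close to already-assigned goals, cf. Eqs. (2)-(4)."""
    M = M.copy()
    K = M[:, 0].max() / len(assigned_goals)          # Eq. (4)
    for gx, gy in assigned_goals:
        d2 = (M[:, 1] - gx) ** 2 + (M[:, 2] - gy) ** 2
        M[:, 0] -= K / np.maximum(d2, 1e-9)          # Eqs. (2)-(3): R_new = R_old - K/d^2
    return M

goals = [(10.0, 12.0), (30.0, 5.0)]                  # targets already assigned
M_new = apply_spread(M, goals)
best = M_new[np.argmax(M_new[:, 0])]
print("next goal (x, y):", best[1:], "with reward", best[0])
```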
## IV Experimental Results
The two AC-SLAM approaches, i.e., synchronous and asynchronous, have been evaluated and compared with the single-agent A-SLAM by using two metrics that are listed in the following:
* the map percentage discovered in a certain amount of time;
* the quality of the produced map, computed by considering the Structural Similarity Index Measure (SSIM), the Root Mean Square Error (RMSE) computed over the common parts of the maps, and the Alignment Error (AE) under the same configuration as the RMSE.
The code was tested both in simulations and on real robots. Nevertheless, the results shown are the ones obtained in simulation, since they are more representative of the behaviour of the algorithms. We performed 140 simulations, each lasting 15 minutes, in the same map environment (Figure 5), with the configurations shown in Table I. The map has an extension of 2071.98 \(m^{2}\) and is taken from [1]. In the following, all values are averaged over the number of runs.
The main findings can be summarized as follows:
* the AC-SLAM approach efficiently improves map exploration. In all cases, the percentage of the map discovered after 15 minutes was higher in the AC-SLAM scenario than in the single-agent one. Also, increasing the number of robots leads to a higher percentage of the map discovered (Figure 6);
* the asynchronous approach successfully broadens environment coverage. In all 4 scenarios, i.e., 2 or 3 agents, with or without frontier management, the agents controlled asynchronously were able to discover a higher percentage of the map compared to the ones controlled synchronously. However, it is worth noting that when the frontier management strategy is not adopted, the difference between the two approaches is almost negligible, while when the global frontier list is optimized as described in Section III-B, the differences are much larger;
* the importance of adopting a strategy to manage the number of frontier points is even more evident when comparing the results obtained with the same number of agents and the same approach, i.e., synchronous or asynchronous. Indeed, the frontier management approach allows for a drastic reduction of the number of possible target points processed by the server (both in the synchronous, Figure 7, and in the asynchronous methodology, Figure 8), consequently reducing the computational cost. On the other hand, the high computational cost required by the reward processing on the server side makes the adoption of strategies to limit the number of global frontiers necessary. Indeed, when a proper strategy to control the number of frontier points is not adopted, increasing the number of robots exploring the environment and adopting collaborative policies does not lead to a significant improvement in the percentage of the map discovered (i.e., without frontier management, the results obtained with 1, 2, or 3 robots are quite similar).
Implementing asynchronous coordination also introduced robustness compared to the synchronous approach. In synchronous mode, the server waits for all agents to publish their lists before the code can proceed. If a robot fails to publish, for instance due to node crashes caused by overload or desynchronization, the system cannot continue. In contrast, in the asynchronous case, if one robot fails, the system allows the remaining _n-1_ agents to continue, because the crashed robot, not requesting the server, is simply disregarded.
Regarding the visual analysis of the maps, the results on average appear promising (Table III). Notably, increasing the number of robots in the system and merging their maps did not compromise map resolution or introduce additional errors compared to single robot scenarios. In almost all cases, there was even a decrease in RMSE and AE, indicating that increasing the number of robots and merging their maps did not adversely affect program performance. For a demonstration of these findings, a video is available at the following link1.
Footnote 1: YouTube link: [https://youtu.be/MsZqoaEA0gY](https://youtu.be/MsZqoaEA0gY)
## V Conclusions
We propose a novel algorithm for the coordination of multiple robots in a collaborative exploration domain. Two different implementations of the same approach have been described, coordinating the robots synchronously or asynchronously. Finally, a strategy to efficiently manage the global frontiers (to reduce the computational cost) and to spread the robots in the environment has also been proposed. Possible future works could explore strategies to implement the proposed architecture in a decentralised way, thus dividing the computational load among all the agents.
## Acknowledgment
This work was conducted within the framework of the NExT Senior Talent Chair DeepCoSLAM, funded by the French Government through the program "Investments for the Future" administered by the National Agency for Research (ANR-16-IDEX-0007). We also extend our gratitude to the Region Pays de la Loire and Nantes Metropole for their invaluable support in facilitating this research endeavour.
Fig. 8: Boxplots showing the overall number of frontier points processed when exploring the environment with 2 (left) and 3 robots (right) and by using the frontiers management approach (green) or not (blue). Tests have been done controlling robots asynchronously.
Fig. 7: Boxplots showing the overall number of frontier points processed when exploring the environment with 2 (left) and 3 robots (right) and by using the frontiers management approach (green) or not (blue). Tests have been done controlling robots synchronously. |
2304.03037 | Parallel circuit implementation of variational quantum algorithms | We present a method to split quantum circuits of variational quantum
algorithms (VQAs) to allow for parallel training and execution, that maximally
exploits the limited number of qubits in hardware to solve large problem
instances. We apply this specifically to combinatorial optimization problems,
where inherent structures from the problem can be identified, thus directly
informing how to create these parallelized quantum circuits, which we call
slices. We test our method by creating a parallelized version of the Quantum
Approximate Optimization Algorithm, which we call pQAOA, and explain how our
methods apply to other quantum algorithms like the Variational Quantum
Eigensolver and quantum annealing. We show that not only can our method address
larger problems, but that it is also possible to run full VQA models while
training parameters using only one slice. These results show that the loss of
information induced by splitting does not necessarily affect the training of
parameters in quantum circuits for optimization. This implies that
combinatorial optimization problems are encoded with redundant information in
quantum circuits of current VQAs. Therefore, to attain quantum advantage for
combinatorial optimization, future quantum algorithms should be designed to
incorporate information that is free of such redundancies. | Michele Cattelan, Sheir Yarkoni | 2023-04-06T12:52:29Z | http://arxiv.org/abs/2304.03037v1 | # Parallel circuit implementation of variational quantum algorithms
###### Abstract
We present a method to split quantum circuits of variational quantum algorithms (VQAs) to allow for parallel training and execution, that maximally exploits the limited number of qubits in hardware to solve large problem instances. We apply this specifically to combinatorial optimization problems, where inherent structures from the problem can be identified, thus directly informing how to create these parallelized quantum circuits, which we call slices. We test our method by creating a parallelized version of the Quantum Approximate Optimization Algorithm, which we call pQAOA, and explain how our methods apply to other quantum algorithms like the Variational Quantum Eigensolver and quantum annealing. We show that not only can our method address larger problems, but that it is also possible to run full VQA models while training parameters using only one slice. These results show that the loss of information induced by splitting does not necessarily affect the training of parameters in quantum circuits for optimization. This implies that combinatorial optimization problems are encoded with redundant information in quantum circuits of current VQAs. Therefore, to attain quantum advantage for combinatorial optimization, future quantum algorithms should be designed to incorporate information that is free of such redundancies.
## I Introduction
In recent years, the field of quantum computing has attracted growing interest due to its promise of solving problems that are difficult or impossible to solve with classical computers [1, 2].
Unlike classical computers, quantum computers rely on a physical implementation that allows them to exploit properties of quantum mechanics to create algorithms with improved complexity or efficiency. Various theoretical studies demonstrate that such an improvement can be reached and, in some cases, that a classical algorithm with the same performance cannot be developed [1, 2, 3, 4].
The current state of quantum hardware is not advanced enough to produce a computer able to perform computations with an acceptable error rate. This is primarily due to the presence of noise, which causes errors during computation on these machines. Consequently, over the past few years, various algorithms have been developed, known as variational quantum algorithms (VQAs), specifically designed to operate on such imperfect quantum machines [5]. These algorithms fall under the class of hybrid quantum-classical algorithms, where a quantum circuit is implemented as a black-box function optimized using a classical method. Typically, circuits are characterized by a set of continuous real parameters in one- and two-qubit gates whose values are determined by a classical optimizer. The goal is to find a set of parameters such that the output of the parameterized quantum circuit minimizes a given objective function. These algorithms present shallow and small circuits that, due to the reduced number of operations, are in general more resilient against noise [5].
Despite the challenges presented by the current technology, efforts are being made to make quantum computing practical and applicable to industrial problems. One promising application of quantum computing is to solve hard optimization problems and the sub-field devoted to this is called quantum optimization. Several algorithms were developed in the past with the aim of solving industrially relevant optimization problems [6, 7].
The state of the art in quantum optimization algorithms requires the input problem to be encoded in a specific format, the Quadratic Unconstrained Binary Optimization (QUBO) problem. A QUBO involves finding a binary vector \(x\in\mathbb{B}^{n}\) such that the value \(x^{T}Qx\) is minimized, where \(Q\) is a symmetric matrix. This problem is equivalent to the Ising model, another physical model that can be interchanged with QUBOs as the input of a quantum algorithm due to their equivalence. In the Ising model, the variables are spins that can take values of either +1 or -1 and are encoded in a symmetric matrix similar to a QUBO. The equivalence between the two models is given by a change of variables. In various applications of quantum optimization, these models have been demonstrated to be appropriate for representing various combinatorial optimization problems [8].
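As a worked example, the sketch below converts a QUBO to an Ising model using the substitution \(x_i=(1+s_i)/2\) (one common sign convention) and verifies by brute force that the two energy landscapes coincide up to the constant offset.

```python
import numpy as np
from itertools import product

def qubo_to_ising(Q):
    """Map min x^T Q x (x in {0,1}^n) to Ising couplings/fields (s in {-1,+1}^n)
    via the change of variables x_i = (1 + s_i) / 2."""
    Q = (Q + Q.T) / 2                       # symmetrize
    J = np.triu(Q, k=1) / 2                 # pairwise couplings, i < j
    h = Q.sum(axis=1) / 2                   # linear fields
    offset = (Q.sum() + np.trace(Q)) / 4    # constant energy shift
    return J, h, offset

rng = np.random.default_rng(7)
Q = rng.normal(size=(4, 4))
J, h, off = qubo_to_ising(Q)

# Sanity check: both models assign the same energy to corresponding states
for x in product([0, 1], repeat=4):
    x = np.array(x); s = 2 * x - 1
    qubo_e = x @ ((Q + Q.T) / 2) @ x
    ising_e = s @ J @ s + h @ s + off
    assert np.isclose(qubo_e, ising_e)
print("QUBO and Ising energies agree on all 16 states")
```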
One of the algorithms that uses the Ising Hamiltonian to construct the circuit is the Quantum Approximate Optimization Algorithm (QAOA) which is inspired by the adiabatic theorem [9]. Here, alternating layers of parameterized mixing and problem Hamiltonians are used in order to approximate ground states of a given problem Ising Hamiltonian. The parameters of the quantum circuit are then optimized classically in an outer loop with respect to the problem Hamiltonian, and the QAOA circuit acts as a black-box sampler in the inner loop. There are well-known theoretical results for both finite-depth and infinite-depth circuits which show how QAOA can be used to solve some well-known optimization problems [7]. Additional variants of QAOA have been developed in recent years to improve the performance of the algorithm under different optimization conditions (such as error mitigation, feasibility of solutions, etc.) [10, 11, 12, 13]. A more general VQA approach to optimization is the Variational Quantum Eigensolver (VQE). Here, a quantum circuit is constructed given a problem-independent parameterized ansatz. Then, similarly
to QAOA, the parameters of the ansatz are optimized with respect to a given Hamiltonian which represents some quantum system. In literature, VQE has been used to solve the minimum eigenvalue problem, which is equivalent to minimizing an Ising Hamiltonian representation of a combinatorial optimization problem, but can also be used to find ground states of other quantum systems [14].
There are, however, significant limitations to the implementation of VQAs in state-of-the-art quantum hardware (also known as noisy intermediate-scale quantum processors, or NISQ [15]) in the absence of error correction. The most relevant limiting factor is the number of qubits required to construct the circuits. Although minimizing an Ising Hamiltonian is NP-hard [16] and can be used to represent many combinatorial optimization problems of academic and practical interest, this often includes a polynomial overhead in the scaling of resources required to represent such problems [8]. Typically, the limits of computability for hard problems are well beyond those that can be solved with existing quantum hardware, and so only small toy instances of said problems are typically solved with VQAs. Furthermore, due to high error rates and low coherence times, especially at high depths, the effect of noise becomes non-negligible and reduces the performance of VQAs [17]. Lastly, it is important to note that the quality of the results of these VQAs depends on the optimization of the parameters in the quantum circuit by definition. Furthermore, it has been shown that training these parameters optimally is in itself an NP-hard problem [18], and as such, implying that finding optimal VQA parameters is at least as hard as solving the combinatorial optimization problems themselves.
In this work, we attempt to mitigate the limitations of VQAs by presenting a novel method to parallelize any variational quantum algorithm. For the sake of simplicity, we motivate our method by considering quantum optimization algorithms in particular, although our method is generalizable to all variational quantum algorithms. The essence of the method is that we approximate the output state of the tensor product of the unitary matrices composing the quantum circuit by a Cartesian product of output states from smaller quantum circuits. In other words, given a combinatorial optimization problem and a VQA ansatz, we approximate the ground-state distribution of the problem by splitting the ansatz into independent smaller parameterized quantum circuits, each of which is optimized in parallel and guided by a single global objective function. The result of this procedure is a collection of classically separable quantum systems with shallow circuits whose product of vector spaces matches the original optimization search space of the problem.
The paper is structured as follows: in section II we discuss previous results that motivate our method; in section III we demonstrate the derivation of our method explicitly using QAOA as a starting point, and then show how to extend the method to other VQAs; in section IV we present one such constrained combinatorial optimization problem, the vehicle routing problem, that we use as an example because of symmetric properties of its QUBO formulation; in section V we test our parallelized version of QAOA experimentally providing both insights into the physical significance of our parallelization technique as well as benchmark its performance with respect to well-known combinatorial optimization techniques.
## II Related Works
One of the questions of practical relevance that quantum optimization researchers are trying to solve is how to efficiently implement constrained optimization problems on quantum computers. In particular, implementing constraints requires an overhead of resources in terms of qubits and interactions between them. Therefore, the implementation of larger problems becomes impractical, because the number of qubits and the implementable circuit size are insufficient to meet the requirements of such problems. Hence, while encoding a constraint, we have to minimize the number of additional quantum resources required to implement it. Along this direction, possible solutions are presented in [19; 20]. The authors proposed not to implement the constraint as part of the Hamiltonian that defines the circuit, but rather as part of the function used to optimize the hyperparameters. This results in transferring the information regarding the feasibility of the constraints from the quantum simulation to the classical search, showing improvement both in the quality of the solutions and in the overlap with the solution state. Additional work in this direction using VQE employs the concept of _contextual subspaces_ in molecular simulations, where a quantum Hamiltonian is split into two separable Hamiltonians whose sum reconstructs the original Hamiltonian of interest [21]. One of these Hamiltonians is then computed classically, and the second attempts to "correct" the classical approximation using a VQE method. While the authors note that the classical simulation component is still NP-hard, the number of qubits required to implement such a hybrid method was significantly smaller compared to other VQE methods, while still maintaining the chemical accuracy of the model.
Another viable approach to handle the overhead of qubits and gates while implementing constraints is to apply circuit cutting and knitting techniques [22; 20]. Although these methods show that we can simulate larger quantum systems using fewer qubits and achieve improved solution quality in certain scenarios, this outcome involves a trade-off. Indeed, in both cases the authors highlight that there is an exponential overhead either in the number of measurements we have to apply in order to reconstruct the correct wavefunction [20] or in the number of cuts we can apply to the circuit, resulting in an exponential search for a suitable way of cutting the circuit [22].
## III Parallelizing Variational Quantum Algorithms for Optimization
In this section, we propose a method to create a parallelizable algorithm from a VQA. The procedure is inspired by some of the results presented in section II. We can formally define a VQA as a parameterized quantum circuit used to optimize a pseudo-Boolean classical function \(H:\{0,1\}^{n}\rightarrow\mathbb{R}\). The circuit is initialized with a set of parameters and the final state is sampled. The samples are used to evaluate \(H\) and compute its gradient. These results are used by a classical optimizer to update the parameters of the quantum circuit until convergence or a desired result is reached. To evaluate the pseudo-Boolean function \(H\), we consider the quantum states measured in the computational basis as bit-strings. In fact, the quantum subroutine is designed to have outcomes that belong to the same search space as \(H\), so that the space of circuit outcomes matches the domain on which \(H\) is defined.
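To make this loop concrete, the following sketch mimics the VQA feedback cycle in plain Python; the `sample_circuit` routine is a toy stand-in for the quantum subroutine (an independent-qubit sampler invented for illustration), and the objective `H`, the shot count, and the finite-difference optimizer are all our own assumptions rather than part of any specific VQA.

```python
import numpy as np

rng = np.random.default_rng(0)

def H(bits):
    # Toy pseudo-Boolean objective H: {0,1}^n -> R (illustrative only).
    return -bits.sum() + 2.0 * bits[0] * bits[1]

def sample_circuit(theta, shots=200):
    # Stand-in for the parameterized quantum circuit: qubit i is 1 with
    # probability sigmoid(theta_i). A real VQA would sample the ansatz.
    p = 1.0 / (1.0 + np.exp(-theta))
    return (rng.random((shots, theta.size)) < p).astype(int)

def expected_H(theta):
    # Evaluate H on measured bit-strings and average, as in the VQA loop.
    return np.mean([H(s) for s in sample_circuit(theta)])

theta = np.zeros(4)                       # initial circuit parameters
for _ in range(100):                      # classical optimization subroutine
    grad = np.array([(expected_H(theta + 0.1 * e) - expected_H(theta - 0.1 * e)) / 0.2
                     for e in np.eye(theta.size)])   # finite-difference gradient
    theta -= 0.2 * grad                   # parameter update
print("final <H> ~", expected_H(theta))
```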
Due to the scarce resources available in the state of the art of quantum hardware, implementing problems of a large size can often be impractical. Our proposed approach tackles this issue by creating several smaller quantum circuits that are tailored to the properties of the problem and that can be executed on the available resources. This means that, instead of having a one-to-one correspondence between the output of the single circuit and the function to optimize, we introduce a representation of the search space based on products of subspaces.
We now explain the general method to create a parallelization of quantum algorithms by inspecting the problem directly. We call this approach _slicing_. We consider a VQA that is described by a quantum system of \(N\) qubits and quantum hardware with \(n\) qubits available, on which we want to implement the quantum circuit of the VQA. By inspecting the problem, we identify \(k\) different subsystems, called _slices_, of maximum dimension \(n\), whose product matches the original output space of the VQA, which is the search space of our problem. Notice that the outer classical optimization routine is no longer optimizing a black box defined on a \(2^{N}\)-dimensional space, but \(k\) black boxes defined on spaces of dimension at most \(2^{n}\).
Now, let us distinguish two different cases. If \(N>n\), implementing the original circuit requires more qubits than the number available in the hardware, therefore the algorithm can only be implemented in its parallelized version. On the other hand, when \(N\leq n\) we can see that even though the circuit can now be implemented, our method reduces the number of interactions used. Therefore, in both cases, our approach presents a reduction in the number of resources used.
Given the above method, we can, specifically, formalize our quantum circuit as the following function
\[\mathcal{C}:\mathbb{R}^{q}\rightarrow\mathbb{U}(2^{n}),\]
where \(\mathbb{R}^{q}\) is the search space of the parameters and \(\mathbb{U}(2^{n})\) is the space of unitary matrices of dimension \(2^{n}\). The function \(\mathcal{C}\) fulfills the following property:
\[\mathcal{C}(\alpha_{1},\ldots,\alpha_{q})=U_{1}\otimes\cdots\otimes U_{r},\]
where \(U_{1},\ldots,U_{r}\in\bigcup_{j=1}^{n}\mathbb{U}(2^{j})\).
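A minimal numerical check of this tensor-product structure, assuming two random slice unitaries built with a QR decomposition (our own construction, not the paper's ansatz), might look as follows; it verifies that the composed operator is unitary on the full space and that its output state factorizes into the slice outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(dim):
    # Random unitary via QR decomposition (illustration only).
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

U1, U2 = random_unitary(4), random_unitary(2)  # slices on 2 and 1 qubits
C = np.kron(U1, U2)                            # C(alpha) = U1 (x) U2 in U(2^3)
assert np.allclose(C.conj().T @ C, np.eye(8))  # the product is still unitary
psi = C @ np.eye(8)[0]                         # act on |000>
# the output state factorizes into the slice outputs: (U1|00>) (x) (U2|0>)
assert np.allclose(psi, np.kron(U1[:, 0], U2[:, 0]))
print("slice outputs compose to the full 2^3-dimensional state")
```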
### A parallel Quantum Approximate Optimization Algorithm (pQAOA)
We now explain how to inspect a combinatorial optimization problem to create parallel slices for a VQA by taking QAOA as an example. We stress that this procedure can be used for any VQA. Consider the QAOA ansatz for finite depth \(p\):
\[e^{-i\beta_{p}H_{i}}e^{-i\gamma_{p}H_{f}}\cdots e^{-i\beta_{1}H_{i}}e^{-i \gamma_{1}H_{f}}. \tag{1}\]
The QAOA circuit with a generic Hamiltonian \(H_{f}\) is as shown in fig. 1. For simplicity, we start by considering constrained optimization problems, and then, in appendix B, explain how our method generalizes. Constrained combinatorial optimization problems can be represented by the following Hamiltonian:
\[H=H_{\text{obj}}+H_{C}, \tag{2}\]
where \(H_{\text{obj}}\) represents the objective function of the combinatorial optimization problem that we are considering and \(H_{C}\) is the Hamiltonian that encodes the constraints that define the feasible region of the problem. Let \(N\) be the number of qubits in the QAOA circuit and let us consider the Hamiltonian \(H_{\hat{C}}\), defined on all \(N\) qubits, that implements some constraints of the problem1. Further, we assume that by removing \(H_{\hat{C}}\) we create two classically separable Hamiltonians that operate on two registers, which we call \(A\) and \(B\), of length \(n\) and \(m\) respectively (such that \(n+m=N\)), see fig. 2. Note that the circuits created in this way do not fully represent the original problem. Additionally, note that in our method it is sufficient to include in \(H_{\hat{C}}\) only the minimum set of constraints necessary to create these classically separable Hamiltonians. Therefore, we must modify the classical subroutine in order to implement the information missing from \(H_{\hat{C}}\). To solve this, we apply the same argument as in [19]: we can leave \(H_{\hat{C}}\) out of the quantum circuit, implement the quantum circuit according to the Hamiltonian \(H-H_{\hat{C}}\) and execute the classical optimization subroutine by evaluating the "complete" Hamiltonian function \(H\), which can be trivially read as \((H-H_{\hat{C}})+H_{\hat{C}}\). Notice that even though the information about the feasibility of the constraints encoded by \(H_{\hat{C}}\) is now only part of the classical optimizer, we are still seeking solutions that fulfill \(H_{\hat{C}}\) since we are minimizing \(H\).

Figure 1: Level-2 QAOA circuit. The initial state of the circuit is \(|0\rangle^{\otimes_{n}}\). The circuit for larger \(p\) is obtained by sequential repetition of the two layers as described in eq. (1).
This procedure results in two separate quantum circuits of size \(n\) and \(m\) that can be executed independently on separate quantum registers. Note that the two quantum circuits depend on parts of the Hamiltonian \(H-H_{\hat{C}}\) that do not share any terms and, hence, their circuit representations are separable. We call such Hamiltonians _classically separable_ and each of the circuits defined by them a _slice_.
Note that the solutions from register \(A\) are \(n\)-dimensional vectors, whereas the ones from \(B\) are \(m\)-dimensional vectors. Let \(S_{A}\) (\(S_{B}\)) be the multiset2 of samples from register \(A\) (\(B\)). To construct samples over all \(N\) qubits using the \(n\)- and \(m\)-dimensional solutions, we take the product in the following way:
Footnote 2: A multiset is a set where elements can be repeated more than once.
\[|s\rangle\in S=\{|s_{A}\rangle\otimes|s_{B}\rangle:(|s_{A}\rangle,| s_{B}\rangle)\in S_{A}\times S_{B}\}. \tag{3}\]
This mapping from slice samples to full Hamiltonian samples is sufficient to construct a parallel implementation for QAOA. Further note that, because the slices are classically separable quantum circuits, we can measure them independently and therefore the optimization of their respective parameters can also be done independently. Therefore, we can choose whether to parameterize each slice independently or keep the same number of parameters as in the original QAOA ansatz. For the remainder of this discussion, we do not assume either case and simply refer to the parameters of the ansatzes as \(\vec{\gamma}\) and \(\vec{\beta}\); our description holds for both.
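Operationally, the gluing in eq. (3) is just a Cartesian product of bit-strings; a minimal sketch (with invented sample multisets) could read:

```python
from itertools import product

S_A = ["010", "100"]          # n-bit samples from register A (illustrative)
S_B = ["01", "11", "10"]      # m-bit samples from register B (illustrative)
# eq. (3): each pair |s_A> (x) |s_B> gives one candidate N = n + m bit solution
S = [s_a + s_b for s_a, s_b in product(S_A, S_B)]
print(S)   # 6 composed bit-strings over all N qubits
```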
In order to incorporate \(H_{\hat{C}}\) into the optimization of \(\vec{\gamma}\) and \(\vec{\beta}\), we must evaluate it classically. However, note that by removing \(H_{\hat{C}}\) we relax one of the assumptions of QAOA: we are no longer interested in ground states of the individual slices, but rather ground states of the original global Hamiltonian \(H\), which can now be excited states of the slices. Therefore, in order to guide the optimization procedure of \(\vec{\gamma}\) and \(\vec{\beta}\) to the global optimum, we evaluate \(H\) with our composed solutions \(S\). That is, if \(\psi\) is the wavefunction of the original QAOA circuit for \(H\), then we approximate it by minimizing the following expectation value:
\[\langle\psi|H|\psi\rangle\approx\langle S|H|S\rangle, \tag{4}\]
where \(|S\rangle=\frac{1}{\sqrt{|S|}}\sum_{|s\rangle\in S}|s\rangle\). The expectation value minimized in this way is now evaluated by the Hamiltonian \(H\) and, thus, the parameters \(\vec{\gamma}\) and \(\vec{\beta}\) are updated with respect to the original optimization problem. The slices of the pQAOA algorithm now function as independent black boxes used to sample separable regions of the search space.
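Since \(H\) is diagonal in the computational basis, the expectation value in eq. (4) reduces to a sample mean of \(H\) over the composed bit-strings; a sketch of this evaluation, with a made-up quadratic \(H\) standing in for the full problem Hamiltonian, is:

```python
import numpy as np

def H_global(bits):
    # Classical evaluation of the full Hamiltonian on a bit-string
    # (a toy quadratic form; the real H also contains the H_C terms
    # that were left out of the circuit).
    x = np.array([int(b) for b in bits])
    return float(x @ np.triu(np.ones((x.size, x.size)), 1) @ x) - 2.0 * x.sum()

S = ["01001", "10011", "01110"]              # composed samples from the slices
energy = np.mean([H_global(s) for s in S])   # <S|H|S> as a sample mean
print("estimated <H> =", energy)
```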
The whole pQAOA is illustrated in fig. 3 with \(p=1\) as an example (although generalizing this procedure for any \(p\) with additional layers is trivial).
### A parallelization for the variational quantum eigensolver
Figure 2: The level-1 QAOA circuit implementation of eq. (2). Notice that the Hamiltonian \(H-H_{\hat{C}}\), and therefore its exponential, is separable between register \(A\), the first \(n\) qubits, and register \(B\), the last \(m\) qubits.

Figure 3: The pQAOA obtained by using our method on the Hamiltonian eq. (2). The two registers \(A\) and \(B\) are not connected by any gates and can be executed in parallel; the red dashed line stresses this fact. This implies that we can use only \(\max\{m,n\}\) qubits to execute this algorithm. The results are collected from the two slices and glued together. In this way, we obtain states that can be evaluated with \(H\). The parameters are then optimized and the algorithm can proceed with the next iteration. In the figure, we use \(\beta_{1},\beta_{2}\) and \(\gamma_{1},\gamma_{2}\), but to be consistent with the QAOA parameterization one can also use only one \(\beta\) and one \(\gamma\).

The variational quantum eigensolver (VQE) is a hybrid quantum-classical variational quantum algorithm used to solve the minimum eigenvalue problem. For a given initial state and choice of parameterized ansatz, a quantum circuit is defined with the goal of iteratively adapting the ansatz parameters to minimize a target objective function. Specifically, given a unitary representation of the ansatz, \(U(\vec{\theta})\), where \(|\vec{\theta}|\) is the number of parameters in the ansatz, VQE attempts to solve:
\[\operatorname{argmin}_{\vec{\theta}}\bra{\psi(\vec{\theta})}\mathcal{H}\ket{ \psi(\vec{\theta})}, \tag{5}\]
where \(\psi(\vec{\theta})=U(\vec{\theta})\ket{0}\) is the ansatz applied to the initial state (in this case the zero state \(\ket{0}\)), and \(\mathcal{H}\) is a Hamiltonian whose minimum eigenvalue we wish to find (or approximate). Originally proposed in [23], VQE has been used in practice to solve a variety of problems in quantum chemistry and combinatorial optimization [23; 24; 25]. However, similar to other VQAs, VQE suffers from the same limitations of qubit count, circuit depth, and parameter optimization which limit its usefulness in applications. To overcome this, using our parallelization technique, we propose a variant, pVQE, which we outline here.
One of the strengths of VQE is the freedom in the choice of ansatz. The goal is to construct an ansatz that can simultaneously be expressive enough to explore the Hilbert space of the circuit as well as be easily implementable [26]. We note that, in our case of pVQE, we do not require the ansatz to explore the entire search space of the original problem, but only that of each slice. Therefore, we have even more freedom with respect to the original VQE implementation. Consider the hardware-efficient ansatz (HEA), a common choice of ansatz for VQE due to its ease of implementation [27]. For \(L\) layers of the VQE circuit with \(N\) qubits, we have \(\mathcal{O}(NL)\) parameters and \(\mathcal{O}((N-1)L)\) entangling gates. However, for pVQE with \(N=kn\) qubits and \(L\) layers, we have only \(\mathcal{O}(k(n-1)L)\) entangling gates. Furthermore, and most critically, while we have the same total number of parameters \(\mathcal{O}(NL)=\mathcal{O}(knL)\), each slice now occupies a Hilbert space of only \(2^{n}\). That is, the pVQE HEA within each slice with \(\mathcal{O}(nL)\) parameters needs to explore a space of size \(2^{n}\), compared to the original VQE which needs \(\mathcal{O}(knL)\) parameters to explore a space of size \(2^{kn}\). While this example specifically exploits the HEA, it generally holds that the Hilbert space of each pVQE slice is exponentially smaller than that of VQE, which pVQE can exploit more efficiently, thus alleviating the trade-off between expressivity and ease of implementation of the ansatz.
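The resource counts above can be tabulated directly; the sketch below uses our assumed linear-connectivity HEA (one rotation per qubit per layer and a chain of entanglers), so the constants are illustrative rather than taken from the paper.

```python
def hea_resources(num_qubits, layers):
    # Parameter and entangling-gate counts for a linear-connectivity
    # hardware-efficient ansatz: one rotation per qubit per layer and a
    # chain of entanglers per layer (our assumption for illustration).
    return num_qubits * layers, (num_qubits - 1) * layers

N, L, k = 32, 4, 4            # global problem vs k slices of n = N // k qubits
n = N // k
params, ents = hea_resources(N, L)
print(f"VQE : {params} params, {ents} entanglers, search space 2^{N}")
params_s, ents_s = hea_resources(n, L)
print(f"pVQE: {k * params_s} params, {k * ents_s} entanglers, "
      f"{k} slices each searching only 2^{n}")
```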
### A parallelization for quantum annealing
Quantum annealing (QA) is one of the original quantum optimization algorithms designed to solve combinatorial optimization problems by exploiting adiabatic evolution, both in simulation and in programmable quantum hardware [28; 29]. The algorithm works by initializing a quantum system (or simulation) to an easy-to-prepare ground state (initial Hamiltonian \(H_{i}\)), and then evolving the system to represent a different Hamiltonian (final Hamiltonian \(H_{f}\)) to be minimized. The result is a metaheuristic optimization algorithm that can be used to simulate quantum Hamiltonians. In-depth technical works on the physics behind quantum annealing theory and its implementation in hardware can be found in [30]. For the purposes of implementing a parallelized version of QA, we only introduce the components necessary for constructing our algorithm, and encourage the interested reader to review the works cited above for more information.
The most commonly used Hamiltonian for quantum annealing is known as the transverse-field Ising Hamiltonian:
\[\mathcal{H}(s)=A(s)\left[\sum_{i}\sigma_{i}^{x}\right]+B(s)\left[\sum_{i}h_{i }\sigma_{i}^{z}+\sum_{i<j}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z}\right]. \tag{6}\]
Here, \(s=t/\tau\in[0,1]\) is referred to as "normalized time", and \(\tau\) is the duration of evolution, an input parameter to the algorithm. The initial Hamiltonian shown here is \(H_{i}=\sum_{i}\sigma_{i}^{x}\) and the final Hamiltonian \(H_{f}\) is the quantum Ising Hamiltonian with z-spin Pauli operators \(\sigma_{i}^{z},\sigma_{j}^{z}\). The objective of the QA algorithm is to minimize \(H_{f}\), which can be used to represent NP-complete and NP-hard problems and is therefore of practical interest [8]. A review of previous work in applying quantum annealing in practice can be found in [6].
In this paper, we focus on motivating one method of constructing a parallel version of QA by exploiting a parameter known as _annealing offsets_. This specific parameter allows for the advancement or delay of the point in (normalized) time at which each qubit in the quantum annealer starts its evolution from \(H_{i}\) to \(H_{f}\). This shift is denoted by \(\Delta s_{i}\) for qubit \(i\), with \(\Delta s_{i}>0\) being a delay, and \(\Delta s_{i}<0\) being an advancement. Due to the changing eigenspectrum generated by the evolving Hamiltonian, it is known that some qubits in the system experience a slowdown in tunneling dynamics before the termination of the evolution, an effect known as "freeze-out". This is known to affect the performance of QA as it makes it harder for the system to remain in the ground state [31, 32]. The annealing offset parameter can therefore be used to change the freeze-out point on a per-qubit basis, in an attempt to mitigate this effect by synchronizing the freeze-out points. It has been demonstrated in quantum annealing hardware that tuning these parameters can (sometimes significantly) improve the probability of observing the ground state of \(H_{f}\)[33, 34, 35]. In general, however, given that the offsets are continuous parameters with non-convex search space, tuning these parameters optimally is a hard problem in itself.
To implement a parallel QA algorithm (pQA), we use a similar paradigm as for the previous algorithms presented above. We start by constructing the global and local Hamiltonians for our optimization problem. For each slice, we parameterize the annealing offsets for each qubit in the problem independently and tune them within each slice using the global Hamiltonian as the target function.
While this proposal does not reduce the number of offset parameters in the problem (we use the same number of qubits for pQA as in QA), it does have a significant physical effect on the search space. Since each slice is embedded on the quantum annealer independently, we are only attempting to mitigate the freeze-out _within_ each slice. Therefore, the search space is much more confined with respect to the global Hamiltonian.
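As a rough sketch of the pQA tuning loop, the snippet below performs a naive random search over per-qubit offsets for one slice; the `anneal_slice` routine and its offset-dependent bias are entirely invented stand-ins for a hardware annealing call, and the target function is a toy.

```python
import numpy as np

rng = np.random.default_rng(2)

def anneal_slice(offsets, reads=100):
    # Stand-in for an annealing run of one slice with per-qubit offsets
    # Delta s_i; on hardware this would be a call to the annealer with
    # the offsets applied. The bias model here is purely illustrative.
    p = 0.5 + 0.1 * np.tanh(offsets)
    return (rng.random((reads, offsets.size)) < p).astype(int)

def H_global(x):
    return float(-x.sum())                 # toy global target function

best_offsets, best_E = None, np.inf
for _ in range(50):                        # naive random search over offsets
    offsets = rng.uniform(-0.2, 0.2, size=6)   # one offset per qubit in the slice
    E = np.mean([H_global(s) for s in anneal_slice(offsets)])
    if E < best_E:
        best_offsets, best_E = offsets, E
print("best offsets:", np.round(best_offsets, 3), " E =", best_E)
```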
## IV The Vehicle Routing Problem
We examine the Vehicle Routing Problem (VRP), a well-known NP-hard optimization problem, as the testbed for our parallelized VQAs. As described in Sec. III, we can exploit the constraints in the problem formulation in order to directly inform how to build the circuit slices in our quantum implementation. In the VRP, we consider a fleet of vehicles that need to deliver goods or services to a set of customers. The goal is to find the optimal set of routes for the vehicles that will minimize costs associated with the deliveries. The QUBO definition of the VRP is as follows [36; 37; 38]:
\[H=H_{\text{of}}+\sum_{i=1}^{n}\left(1-\sum_{a=0}^{A-1}\sum_{s=0} ^{n}x_{a,i,s}\right)^{2}+\\ +\sum_{a=0}^{A-1}\sum_{s=0}^{n}\left(1-\sum_{i=0}^{n}x_{a,i,s} \right)^{2}, \tag{7}\]
with
\[H_{\text{of}}=\sum_{a=0}^{A-1}\sum_{i,j=0}^{n}\sum_{s=0}^{n}\frac{w_{i,j}}{W} x_{a,i,s}\,x_{a,j,s+1},\]
where the locations, \(i\), are numbered from \(0\) to \(n\), with \(0\) being the depot, i.e. the location where the vehicles start; \(w_{i,j}\) is the cost associated with reaching location \(j\) from location \(i\) and \(W:=\max_{i,j}w_{i,j}\); \(A\) is the number of vehicles; and the index \(s\) represents the discrete step of the process.3
Footnote 3: This is the algebraic description of the QUBO, i.e. we write directly the polynomial \(x^{T}Qx\).
By considering the QUBO formulation of the problem in eq. (7), we can see that only the second addend contains quadratic terms that involve different indices for the vehicle, \(a\). In addition, we stress the fact that this property yields symmetry in the problem that can be exploited to construct the slices. Indeed, we can write \(H\) in the following fashion:
\[H=\sum_{a=0}^{A-1}\left[\sum_{i,j=0}^{n}\sum_{s=0}^{n}\frac{w_{ i,j}}{W}x_{a,i,s}\,x_{a,j,s+1}+\right.\\ +\left.\sum_{s=0}^{n}\left(1-\sum_{i=0}^{n}x_{a,i,s}\right)^{2} \right]+\\ +\sum_{i=1}^{n}\left(1-\sum_{a=0}^{A-1}\sum_{s=0}^{n}x_{a,i,s} \right)^{2},\]
and by considering
\[H_{a}=\sum_{i,j=0}^{n}\sum_{s=0}^{n}\frac{w_{i,j}}{W}x_{a,i,s}\, x_{a,j,s+1}+\sum_{s=0}^{n}\left(1-\sum_{i=0}^{n}x_{a,i,s}\right)^{2},\] \[H_{c}=\sum_{i=1}^{n}\left(1-\sum_{a=0}^{A-1}\sum_{s=0}^{n}x_{a,i,s}\right)^{2},\]
we can summarize \(H\) as:
\[H=H_{c}+\sum_{a=0}^{A-1}H_{a}.\]
One can notice that the Hamiltonians \(H_{a}\) do not share any variables and can be treated separately. Therefore, we identify \(H_{a}\) as the slices of our parallelized circuit and \(H_{c}\) as the part of the Hamiltonian to only simulate classically within the global Hamiltonian, as we did for \(H_{\hat{C}}\) in section III.
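A direct translation of this decomposition into code, assuming the binary variables are stored as a (vehicles × locations × steps) array and truncating the \(s+1\) index at the last step (our reading of the sum bounds), could look like:

```python
import numpy as np

def H_a(x_a, w):
    # Slice Hamiltonian for one vehicle: normalized route cost plus the
    # one-location-per-step penalty. x_a has shape (n_loc, n_steps).
    W = w.max()
    n_loc, steps = x_a.shape
    cost = sum((w[i, j] / W) * x_a[i, s] * x_a[j, s + 1]
               for i in range(n_loc) for j in range(n_loc)
               for s in range(steps - 1))
    penalty = sum((1 - x_a[:, s].sum()) ** 2 for s in range(steps))
    return cost + penalty

def H_c(x):
    # Coupling term kept classical: every location except the depot (i = 0)
    # is visited exactly once across all vehicles. x: (A, n_loc, n_steps).
    return sum((1 - x[:, i, :].sum()) ** 2 for i in range(1, x.shape[1]))

def H_total(x, w):
    return H_c(x) + sum(H_a(x[a], w) for a in range(x.shape[0]))

x = np.zeros((2, 4, 4), dtype=int)        # 2 vehicles, 3 locations + depot
print(H_total(x, np.ones((4, 4))))        # all-zero assignment violates everything
```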
## V Numerical Results
To test our algorithms, we solve 50 randomly generated instances of VRPs with 2 vehicles and 3 locations. The locations of the studied instances are generated with a Gaussian distribution over a discrete grid of \(100\times 100\). The depot is placed at the center of the grid, with coordinates \((0,0)\). The distances between locations are computed with the L2 norm. We use NVIDIA's cuQuantum [39] to simulate our circuits executed on a DGX-1 with Tesla V100 [40].
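A sketch of such an instance generator, with a spread parameter `sigma` that we chose ourselves since the paper does not state one, is given below.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_instance(n_locations=3, grid=100, sigma=25.0):
    # Depot at (0, 0); customer locations drawn from a Gaussian and snapped
    # to the discrete grid; costs are L2 distances between all points.
    pts = [(0.0, 0.0)]
    while len(pts) < n_locations + 1:
        p = np.clip(np.round(rng.normal(0.0, sigma, 2)), -grid // 2, grid // 2)
        pts.append(tuple(p))
    pts = np.array(pts)
    w = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return pts, w

points, w = make_instance()
print(points)
print(np.round(w, 1))   # symmetric cost matrix w_{i,j}
```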
Because of the relatively small sizes of VRP instances studied here, we can calculate the global optima with brute force approaches. To evaluate the efficacy of each algorithm we compute the approximation ratios with respect to the brute-force solution. The results obtained by the quantum algorithms are then compared to the open-source software package OR-Tools by Google [41].
In addition, to analyze the effect of a less expressive circuit on the training of the parameters, we evaluate the optimized parameters of pQAOA with a QAOA ansatz and compare the results with the original QAOA performance.
### Implementation description
We use two different versions of the pQAOA presented in section III.1. Each implementation depends on the choice of the number of angles to train in the circuit. The angles \(\vec{\gamma}\) and \(\vec{\beta}\) can be chosen either to be consistent with QAOA, i.e. each layer is driven by a unique angle, or, since the slices are independent, with different angles per slice. In the following subsections, we describe the implications of this choice.
#### V.1.1 Multi-angle pQAOA
After identifying the slices in the model, each classically separable Hamiltonian is implemented as a separate quantum circuit. The original QAOA has 2 parameters per layer, but now, since there is no connection between the smaller quantum circuits implemented by the slices, one can decide to assign independent parameters per slice. This yields \(2\cdot k\cdot p\) angles to optimize, where \(k\) is the number of slices identified in the model and \(p\) is the number of layers of the QAOA circuit. The implementation and the sampling process of the circuit are shown in section III.1.
#### V.1.2 Single-slice pQAOA
Differently from what we describe in section V.1.1, one can decide not to use different parameters per slice but to keep the same number as for the original QAOA. Notice that, when the slices correspond to identical Hamiltonians, this yields a different implementation of the circuit. This happens when the variables used to represent the optimization problem are based on multiple indices. In those cases, the structure of the polynomial \(x^{T}Qx\) presents identical Hamiltonians that appear as repeated addends in the sums derived from the Cartesian product between the indices. It is worth noting that this is common when discrete variables are implemented employing a binary representation. Indeed, as in the VRP example, one can implement the quantum circuit considering only a single slice. If every slice constructs the same circuit, and the same parameter values are used for each slice, then sampling from the different slices is equivalent to sampling more from one single slice. Therefore, we can implement the circuit of the Hamiltonian that represents the slice and reconstruct the solution of the original Hamiltonian by considering the Cartesian product of the samples with themselves, as sketched below. This means that if the optimization is encoded in a Hamiltonian defined on \(N=kn\) qubits, where \(k\) is the number of identical slices, we can approximate such a Hamiltonian by using only \(n\) qubits. Therefore, we call this version of the algorithm single-slice pQAOA.
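In code, the single-slice reconstruction is simply the Cartesian product of the slice's sample multiset with itself, as in the sketch below (sample strings invented for illustration):

```python
from itertools import product

k = 2                                       # number of identical slices
slice_samples = ["0101", "1001", "0110"]    # samples from the one implemented slice
# reconstruct full N = k * n qubit solutions from a single slice's output
full_samples = ["".join(parts) for parts in product(slice_samples, repeat=k)]
print(len(full_samples), "composed solutions from", len(slice_samples), "samples")
```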
### Result comparison
In fig. 4 we show the main comparison between the algorithms. Even though the number of samples collected is the same, the number of measurements used to produce these samples varies with the algorithm. For QAOA training, we sample the circuit \(10^{p+1}\) times, which is adequate to demonstrate the solution quality trend as \(p\) increases. For pQAOA training, we sample the circuit the same number of times as QAOA and then subsample from this set to have a fixed number of training samples at each iteration. This decision is made to assess the performance of the algorithms while using fewer resources. Therefore, for multi-angle pQAOA, we sample each slice \(10^{p+1}\) times, but we use only \(100\) subsamples per slice, which yields a total number of \(100^{k}\) samples to evaluate the global Hamiltonian \(H\), where \(k\) is the number of slices. In this case, we obtain \(10,000\) samples with the Cartesian product because we have two slices. Furthermore, notice that this approach yields the same number of qubit measurements as QAOA, since the total number of qubits in the slices of multi-angle pQAOA equals that of the QAOA circuit. For single-slice pQAOA, as mentioned in section V.1.2, we can directly sample the unique slice \(10^{p+1}\) times, since the others are identical, and then reconstruct the solutions to evaluate the global Hamiltonian by taking the Cartesian product of the set of samples with itself. This yields an advantage because we do not need to sample from each slice, and, by using a unique slice, we do not use as many qubits as the other algorithms. After the training process, for all algorithms, we sample the circuit to obtain \(10,000\) solution samples: the QAOA circuit \(10,000\) times and, to obtain the same number of composed samples, the two slices of the pQAOAs \(100\) times each. To summarize, for QAOA all collected samples are utilized, while for the pQAOAs the size of the product between the sets of samples from the slices grows exponentially, necessitating the use of subsamples. Since we consider instances with two slices, we keep only \(100\) subsamples out of the \(10^{p+1}\) obtained; the Cartesian product of two sets of size \(100\) has size \(10,000\). Hence, the amount of resources used to train the parameters is the same for every \(p\).
Since the original QAOA has the guarantee to increase the quality of the solutions as the layers of the quantum circuit increase, we execute the algorithms with a different number of layers, \(p=1,\ldots,6\), to compare the behavior of the two parallelized algorithms with the QAOA.
Figure 4 demonstrates that, with the classical optimizer reaching convergence for all instances and despite being given a different amount of training resources, the performance of the pQAOAs is on average worse than that of QAOA. This was expected because of the smaller number of samples used to train the circuit and the loss of information represented in the circuit. Nevertheless, for some specific instances, we find that the parallelized versions sometimes reach better solutions than QAOA. We can further observe that for small \(p\) multi-angle pQAOA performs better than single-slice pQAOA. Nevertheless, the performance of single-slice pQAOA does increase when the circuit becomes deeper, and its results surpass multi-angle pQAOA at \(p=6\). Indeed, multi-angle pQAOA does not show any improvement with larger \(p\) and its performance stays similar independently of the depth of the circuit. We attribute this behavior to the number of parameters being trained: with lower \(p\), a higher number of parameters yields better results, and so multi-angle pQAOA beats its single-slice version; on the other hand, having too many parameters leads to a decrease in the quality of the solutions with larger \(p\). This observation manifests the NP-hardness [18] of training VQA parameters. Moreover, this is due to the number of samples used to train the parameters, since we use a fixed amount of resources to train; therefore, the complete distribution of the outcome states of the circuit is not fully characterized. This issue could be solved either by increasing the number of training samples or by taking into account a state vector representation of the outcome. However, we stress that using a state vector representation is not suitable for real purposes, since we cannot access the final wavefunction of a VQA directly. Indeed, training quantum algorithms using the wavefunction can be done only in classical simulation and does not have a real-world application.
Indeed, even though our quantum circuit can now be executed using fewer quantum resources, the information lost in the process, i.e. the lower expressibility of the model, must be compensated in the classical optimization subroutine. In addition, one can notice that by introducing subsamples, we are also introducing biases in the solutions that we use to optimize the parameters of the pQAOAs. In fact, since we do not know the original distribution of the outcome of the circuit, we cannot subsample and be sure that the original distribution is preserved. To solve this issue we apply a naive rule to select the subsamples. As already mentioned, the global minima of the global Hamiltonian are not always minima of the slices, so we are no longer seeking ground states of the slices. This implies that the parameters must not be trained with the solutions that have a smaller expectation value, but rather with solutions that are feasible for the slice. This is because feasible solutions for the global Hamiltonian are also feasible for the slices. Therefore, we only select subsamples of the slices that correspond to feasible solutions for the slice in the global Hamiltonian, including excited states of the slices. If there are no feasible solutions available, we select solutions with the smallest expectation values.
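A minimal sketch of this selection rule, with `is_feasible` and `energy` as caller-supplied callbacks (their concrete form depends on the problem and is not specified here), might be:

```python
def select_subsamples(samples, is_feasible, energy, m=100):
    # Prefer slice samples that are feasible for the slice's constraints in
    # the global Hamiltonian (even if they are excited states of the slice);
    # fall back to the lowest-energy samples when too few are feasible.
    feasible = [s for s in samples if is_feasible(s)]
    if len(feasible) >= m:
        return feasible[:m]
    rest = sorted((s for s in samples if not is_feasible(s)), key=energy)
    return feasible + rest[:m - len(feasible)]
```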
Despite our choice of subsampling, we notice that this classical post-processing represents the main bottleneck of the method. In fact, the ideal training of the parameters requires the use of all the reconstructed solutions via the Cartesian product. This is, though, not practical since the Cartesian product size scales exponentially with the number of slices. Hence, even though we can decide how many slices we want to create with our approach, we must still consider the additional overhead when training with the solutions derived from the Cartesian product. Therefore, better rules to select the subsamples must be found to reduce the biases introduced and improve performance.
Figure 5: Comparison between QAOA and the results of the QAOA circuit evaluated with the best set of parameters trained using pQAOA. This figure presents the approximation ratio of 50 instances of the VRP with 2 vehicles and 3 locations. The results of QAOA are the ones shown in fig. 4. To generate the solution of QAOA with the parameters of pQAOA, we evaluate a QAOA circuit on the parameters of each slice and pick the best results. We notice that the results are similar for lower depths of the circuit.
Figure 4: Comparison of the approximation ratios of the algorithms executed on 50 instances of Gaussian-distributed VRP with 2 vehicles and 3 locations. To train QAOA we use \(10^{p+1}\) samples per iteration of the classical optimizer, while both pQAOA and single-slice pQAOA are optimized by considering only 100 samples per slice out of the \(10^{p+1}\) samples obtained from the quantum circuit. Therefore the number of samples collected is the same, while the number of training samples remains fixed only for the pQAOAs. One can notice that while pQAOA achieves better results with smaller \(p\), single-slice pQAOA increases the quality of its solutions with increasing \(p\). Furthermore, we stress the fact that these small instances are not trivial to solve: we notice that the classical solver cannot always reach the global optimum of the problem. Nevertheless, all the quantum algorithms perform worse on average.
The presented results can be used to further analyze QAOA. In fig. 5 and fig. 6 we compare the results between QAOA and the same QAOA circuit evaluated with sets of parameters trained by pQAOA. The approximation ratios of the quantum algorithms are comparable. We attribute this similarity to the concentration of parameters phenomenon [42]. As already highlighted, the results obtained by using multi-angle pQAOA have higher quality solutions with small \(p\), while single-slice pQAOA obtains better results for deeper circuits. Furthermore, the quality of the solutions improves by increasing the number of layers implemented in the circuit. The same trend is shown by the results computed by evaluating a QAOA ansatz with trained parameters obtained from multi-angle pQAOA and single-slice pQAOA.
Additionally, it is worth noting that the performance of all the quantum algorithms is worse than the classical results. Moreover, the instances appear not to be trivial since the classical solver does not always reach the optimal solution. Nevertheless, we stress that in the considered examples the size of the instances can be considered small and, therefore, an advantage is not expected; we include the classical results as a reference to standard methods in classical optimization. Lastly, it is important to note that the optimal solutions of all the instances require only one vehicle to leave the depot. In fact, although generating Gaussian-distributed instances is standard when benchmarking VRP instances, with small instances we experience this bias. This is due to the rareness of selecting locations on the grid that are far enough apart to allow both vehicles to leave the depot. Indeed, sampling from the tails of the Gaussian distribution is difficult when the number of samples is small, as is the case for our instances.
## VI Conclusions
In this work, we present a method to create parallelized versions of quantum algorithms informed directly by the optimization problem. Our analyses focused on one specific algorithm, QAOA, but they carry over to all the VQAs presented in this work. We show how to construct parallel quantum circuits to maximally utilize the available number of qubits in NISQ processors. Specifically, we show how to use this method to solve constrained optimization problems that have more variables than the number of qubits available in the QPU. Furthermore, in specific classes of optimization problems, especially constrained optimization problems, each parallel slice obtained by this process creates an identical copy of the same quantum circuit, which we call single-slice pQAOA. We show how to further reduce the need for quantum resources by simulating only one of the identical copies.
We find that for low-depth circuits (specifically for \(p=1,2\)) our parallelization method of multi-angle pQAOA is comparable with QAOA and in some cases even better. In fact, even though the circuit is less expressive than the QAOA one, the larger number of parameters returns better results from the optimization routine. On the other hand, when the depth increases we see that a large number of parameters becomes a bottleneck due to the hardness of finding optimal values. This is underscored by the performance of the single-slice pQAOA, which for larger \(p\) becomes competitive with QAOA. Indeed, while its approximation ratio for small \(p\) does not show any improvement with respect to multi-angle pQAOA, this changes with deeper circuits. Therefore, we can highlight a trade-off between the number of parameters to optimize and the depth of the circuit that represents the model.
In addition, the number of qubit measurements that we use to sample the circuits varies: for QAOA and multi-angle pQAOA we use the same number, while for single-slice pQAOA we need polynomially fewer measurements. This scaling makes single-slice pQAOA a good candidate for making quantum hardware a valuable alternative for solving real-world problems, since it is more practical to implement problems at scale.
It is also worth highlighting that while a more expressive model can yield better results, training the QAOA circuit over the entire model may not be necessary. Specifically, a less expressive quantum circuit with a reduced number of gates can produce parameters that yield solutions comparable to those obtained by training over the original model. Notably, the quantum resources required to obtain this set of parameters are lower than those needed for the original QAOA. The number of qubits and measurements required to compute the parameters is also lower than that of the non-parallelized algorithm. These results have significant implications for the design and optimization of quantum circuits for practical applications.

Figure 6: The figure presents a comparison similar to fig. 5, but now we evaluate the QAOA circuit with the parameters trained by the single-slice pQAOA. We can see that the results match the performance of QAOA for deeper circuits. Furthermore, we stress the fact that the quality of the solutions increases by increasing the depth of the circuit.
Therefore, scaling the problem while reducing, or at least not increasing, the amount of quantum resources and obtaining comparable results makes this method a possible route toward applying quantum algorithms to real-world problems. Furthermore, future research should focus on developing new classical methods for reconstructing the original global Hamiltonian and on tailored rules for properly collecting subsamples to achieve a scalable implementation of VQAs.
## Acknowledgements
MC and SY are funded by the German Ministry for Education and Research (BMB+F) in the project QAI2-Q-KIS under grant 13N15587. Furthermore, the authors would like to thank Andrea Skolik, Anestis Papanikolaou, Gabriele Compostella, Jakob Huhn and Matthew Kiser for valuable discussions and suggestions.
|
2306.03896 | Potential Constraints to Neutrino-Nucleus Interactions Based on Electron
Scattering Data | A thorough understanding of neutrino-nucleus interactions physics is crucial
to achieving precision goals in broader neutrino physics programs. The
complexity of nuclei comprising the detectors and limited understanding of
their weak response constitutes one of the biggest systematic uncertainties in
neutrino experiments - both at intermediate energies affecting the short- and
long-baseline neutrino programs as well as at lower energies affecting coherent
scattering neutrino programs. While electron and neutrino interactions are
different at the primary vertex, many underlying relevant physical processes in
the nucleus are the same in both cases, and electron scattering data collected
with precisely controlled kinematics, large statistics and high precision
allows one to constrain nuclear properties and specific interaction processes.
To this end, electron-nucleus scattering experiments provide vital
complementary information to test, assess and validate different nuclear models
and event generators intended to be used in neutrino experiments. In fact, for
many decades, the study of electron scattering off a nucleus has been used as a
tool to probe the properties of that nucleus and its electromagnetic response.
While previously existing electron scattering data provide important
information, new and proposed measurements are tied closely to what is required
for the neutrino program in terms of expanding kinematic reach, the addition of
relevant nuclei and information on the final states hadronic system. | V. Pandey | 2023-06-06T17:56:36Z | http://arxiv.org/abs/2306.03896v1 | # Potential Constraints to Neutrino - Nucleus Interactions Based on Electron Scattering Data
###### Abstract
A thorough understanding of neutrino-nucleus interactions physics is crucial to achieving precision goals in broader neutrino physics programs. The complexity of nuclei comprising the detectors and limited understanding of their weak response constitutes one of the biggest systematic uncertainties in neutrino experiments - both at intermediate energies affecting the short- and long-baseline neutrino programs as well as at lower energies affecting coherent scattering neutrino programs. While electron and neutrino interactions are different at the primary vertex, many underlying relevant physical processes in the nucleus are the same in both cases, and electron scattering data collected with precisely controlled kinematics, large statistics and high precision allows one to constrain nuclear properties and specific interaction processes. To this end, electron-nucleus scattering experiments provide vital complementary information to test, assess and validate different nuclear models and event generators intended to be used in neutrino experiments. In fact, for many decades, the study of electron scattering off a nucleus has been used as a tool to probe the properties of that nucleus and its electromagnetic response. While previously existing electron scattering data provide important information, new and proposed measurements are tied closely to what is required for the neutrino program in terms of expanding kinematic reach, the addition of relevant nuclei and information on the final states hadronic system.
+
Footnote †: preprint: FERMILAB-CONF-23-015-ND
## I Introduction
The success of current and future neutrino experiments in achieving discovery level precision will greatly depend on the precision with which the fundamental underlying process - neutrino interaction with the target nucleus in the detector - is known [1]. To this end, electron scattering experiments have been playing a crucial role by providing high-precision data as the testbed to assess and validate different nuclear models intended to be used in neutrino experiments [2].
For the accelerator-based neutrino program, such as DUNE, the primary physics goals are determining mass hierarchy and measuring precision oscillation physics including subtle effects of \(\delta_{CP}\)[3]. The main challenges in constraining neutrino-nucleus scattering physics stem from the fact that neutrino energies at these experiments typically range from 100s of MeV to a few GeV where different interaction mechanisms yield comparable contributions to the cross section. One has to constrain an accurate picture of the initial state target nucleus, its response to the electroweak probe that includes several reaction mechanisms resulting into various finals state particles, and final state interactions that further modify the properties of the hadronic system created at the primary interaction vertex.
For the coherent elastic neutrino-nucleus scattering (CEvNS) program at stopped pion sources, such as at ORNL, the main source of uncertainty in evaluating the CEvNS cross section is the underlying nuclear structure, embedded in the weak form factor, of the target nucleus. The recent detection of the CEvNS process by the COHERENT collaboration [4] has opened up a slew of opportunities to probe several Beyond the Standard Model (BSM) scenarios in CEvNS experiments. In order to disentangle new physics signals from the SM expected CEvNS rate, the weak form factor, which primarily depends on the neutron density, has to be known at percent level precision [5; 6].
Most of our current knowledge about the complexity of the nuclear environment - nuclear structure, dynamics, and reaction mechanisms - has been accumulated by studying electron scattering off target nuclei. Electron scattering off the nucleus, governed by quantum electrodynamics, has an advantage over proton or pion scattering off nuclei, which are dominated by strong forces. The electromagnetic interaction is well known within quantum electrodynamics and is weak compared to the hadronic interaction, and hence the interaction between the incident electron and the nucleus can be treated within the Born approximation, i.e. within a single-photon exchange mechanism.
In the last few decades, a wealth of high-precision electron scattering data has been collected, over a variety of nuclei ranging from \({}^{3}\)He to \({}^{208}\)Pb, at several facilities including Bates, Saclay, Frascati, DESY, SLAC, NIKHEF, Jefferson Lab, etc. The ability to vary electron energy and scattering angle, and hence the energy and momentum transferred to the nucleus \((\omega,q)\) - combined with the advancement in high-intensity electron beams, high-performance spectrometers and detectors - resulted in investigating processes ranging from quasi-elastic (QE) to the \(\Delta\) resonance to complete inelastic (resonant, non-resonant, and deep inelastic scattering (DIS)) in significant detail. A number of those datasets were further utilized to separate the longitudinal and transverse response functions through the Rosenbluth separation. Several decades of experimental work has provided a sufficient testbed to assess and validate several theoretical approximations and predictions and hence propelled theoretical progress on nuclear ground state properties, nuclear many-body theories, nuclear correlations, form factors, nucleon-nucleon interactions, etc. A web archive of the accumulated data is maintained at Ref. [7; 8].
Besides being immensely interesting in itself, electron scattering turned out to be of great importance for neutrino programs. The data collected with electron-nucleus scattering have provided the benchmark to test the nuclear models that can be further extended to neutrino-nucleus scattering. The extension of the formalism from electron-nucleus scattering, where only the vector current contributes, to neutrinos requires the addition of the axial current contribution. Despite the fact that (unpolarized) electron scattering provides access to only the vector response, the vector current is conserved between the electromagnetic and weak response through the conserved vector current (CVC) hypothesis.
While previous and existing electron scattering experiments provide important information, new dedicated measurements whose goals tie more closely with the needs of constraining neutrino-nucleus interaction physics in neutrino programs are needed. Dedicated electron scattering experiments with targets and kinematics of interest to neutrino experiments (CEvNS, supernova, and accelerator-based) will be vital in the development of the neutrino-nucleus scattering physics modeling that underpins neutrino programs [2].
The rest of this article is structured as follows. In Sec. II, we briefly describe the neutrino interaction challenges faced by neutrino programs in their key physics goals. We then identify connections between electron- and neutrino-nucleus scattering physics in Sec. III. In Sec. IV, we present a brief summary of the current and planned electron scattering experiments that are input to various neutrino programs. We summarize in Sec. V.
## II The importance of constraining neutrino-nucleus interactions physics
In the accelerator-based neutrino oscillation program, neutrino-nucleus interactions constitute one of the dominant systematic uncertainties. In the event of a neutrino oscillating from \(\nu_{i}\) to \(\nu_{j}\) and for a given observable topology, the observed event rate at the far detector is a convolution of the neutrino flux at the near detector (\(\phi_{\nu_{i}}\)), the probability of oscillation from flavor \(i\) to \(j\), the neutrino-nucleus cross section for neutrino flavor \(j\) (\(\sigma_{\nu_{j}}\)), and the detection efficiency at the far detector (\(\epsilon_{\nu_{j}}\))
\[\mathcal{R}(\nu_{i}\rightarrow\nu_{j})\propto\phi_{\nu_{i}}\otimes P(\nu_{i} \rightarrow\nu_{j})\otimes\sigma_{\nu_{j}}\otimes\epsilon_{\nu_{j}} \tag{1}\]
with oscillation probability, considering simple example of two neutrino flavors, given as:
\[P(\nu_{i}\rightarrow\nu_{j})\simeq\sin^{2}2\theta\sin^{2}\left(\frac{\Delta m ^{2}L}{4E_{\nu}}\right), \tag{2}\]
where \(\theta\) and \(\Delta m^{2}\) are the mixing angle and the squared-mass difference, respectively, \(E_{\nu}\) is the neutrino energy and \(L\) is the oscillation baseline. Typically, the ratio of the oscillated event rate at the far detector to the unoscillated event rate at the near detector does not cancel out the flux and cross section dependence.
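For orientation, eq. (2) in conventional units (\(\Delta m^{2}\) in eV\({}^{2}\), \(L\) in km, \(E_{\nu}\) in GeV) picks up the usual factor of 1.267; a short numerical sketch, with illustrative parameter values of our own choosing, is:

```python
import numpy as np

def p_osc(E_nu_GeV, L_km, sin2_2theta=0.95, dm2_eV2=2.5e-3):
    # Two-flavor oscillation probability, eq. (2); the 1.267 comes from
    # converting Delta m^2 [eV^2] * L [km] / (4 E [GeV]) to natural units.
    return sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_nu_GeV) ** 2

# illustrative numbers only (roughly a DUNE-like baseline and energy)
print(p_osc(E_nu_GeV=2.5, L_km=1300.0))
```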
The systematic challenges in neutrino experiments are manifold. The energy of the interacting neutrino is not known, the kinematics of the interaction in the target nucleus are not known, and the only (partially) known quantity - the topology of the final state particles and their energy - is subject to the detector type, detection thresholds, and the accuracy of particle identification and background reduction processes. An accurate neutrino interaction recipe is essential at almost every step of the analysis. The accuracy of the measurement of the (energy-dependent) neutrino oscillation probability relies strongly on the accuracy with which a Monte-Carlo event generator can describe all neutrino-nucleus interaction types that can produce the observed event topology (which depends both on the initial and final state nuclear effects). As it stands, the lack of a reliable nuclear recipe is a main source of systematic uncertainty and is considered one of the main hurdles in further increasing the obtained precision. For the current long-baseline neutrino experiments, T2K and NOvA, neutrino-nucleus interactions constitute one of the largest uncertainties. In the future long-baseline neutrino experiments, DUNE and HyperK, the statistics will significantly increase and neutrino interaction systematic uncertainties will be dominant.
In the energy regime of the accelerator-based neutrino experiments, 100s of MeV to a few GeV, several mechanisms contribute to the nuclear response: from the excitation of nuclear collective states in the lowest energy part of the spectrum, up to the deep inelastic scattering at the highest energy transfers, encompassing the quasi-elastic region, corresponding to one-nucleon knockout, and the resonance region, corresponding to the excitation of nucleon resonances followed by their decay and subsequent production of pions and other mesons. There is no unified underlying theory to describe neutrino-nucleus interactions for this broad energy range. It truly is a multi-scale, multi-process, many-body non-perturbative problem subject to complex nuclear structure and dynamics that include transitions between different degrees of freedom. One needs a description of initial state target nucleus, its response to the electroweak probe that include several reaction mechanisms resulting into various finals state particles, and final state interactions that further modify the properties of the hadronic system created at the primary interaction vertex.
Similarly, for low-energy (10s of MeV) neutrinos, the uncertainties on inelastic neutrino-nucleus interactions, the detection channel for supernova neutrinos in DUNE and HyperK, are large and often not even quantified [9; 10]. Although theoretical uncertainties, primarily driven by the poorly known neutron density distributions, are relatively small in the CEvNS case, percent level precision might be needed to disentangle new physics signals [5].
## III Connecting electron- and neutrino-nucleus scattering physics
The electron-nucleus scattering process, represented in Figure 1(a), is primarily governed by electromagnetic interactions where (to first order) the interaction is mediated by a (virtual) photon. The neutrino-nucleus scattering process, represented in Figure 1(b), is primarily governed by the weak interaction via the exchange of a \(W^{\pm}\) or \(Z^{0}\) boson for charged and neutral weak processes, respectively.
In the Born approximation [13] the lepton-nucleus differential cross section \(d\sigma\) is proportional to the contraction of the leptonic (\(L_{\mu\nu}\)) and hadronic (\(W^{\mu\nu}\)) tensors
\[d\sigma\propto L^{\mu\nu}W_{\mu\nu} \tag{3}\]
with hadronic tensor written in terms of nuclear current operators, \(J\)
\[W_{\mu\nu}\propto\sum_{f}<\psi_{i}|J_{\mu}^{\dagger}(q)|\psi_{f}><\psi_{f}|J_{ \nu}(q)|\psi_{i}>\delta(E_{0}+\omega-E_{f}) \tag{4}\]
where \(\psi_{i}\) and \(\psi_{f}\) are initial and final state wave functions, and \(\omega\) and \(q\) are energy and momentum transferred to the nucleus.
Contracting the leptonic and hadronic tensor, we obtain a sum involving projections of the current matrix elements. It is convenient to choose these to be transverse and longitudinal with respect to the virtual boson direction. The electron-nucleus scattering cross section becomes
\[d\sigma_{e}\propto V_{L}R_{L}+V_{T}R_{T} \tag{5}\]
and the neutrino-nucleus scattering cross section becomes
\[d\sigma_{\nu}\propto V_{C}R_{C}+V_{L}R_{L}+2V_{CL}R_{CL}+V_{T}R_{T}\pm V_{T^{ \prime}}R_{T^{\prime}} \tag{6}\]
where \(R\) are nuclear responses that are functions of \(\omega\) and \(q\). The subscripts C, L, and T correspond to Coulomb, longitudinal and transverse components. The last term in Eq. 6, the transverse interference term, is positive for neutrino scattering and negative for antineutrino scattering.
Figure 1: Diagrammatic representation of (a) electron-nucleus and (b) neutrino-nucleus scattering process (\(l=e,\mu,\tau\)), where X represents outgoing hadronic final state.
The underlying nuclear physics probed by electrons and neutrinos is intimately connected. The description of the initial nucleus is the same. The weak current carried by neutrinos has a vector and an axial component, while the electromagnetic current carried by electrons is purely vector. However, the vector current is conserved (CVC) between electromagnetic and weak interactions, leaving the axial nuclear response unique to neutrinos (or polarized electrons). The final state interaction effects are the same. Therefore, various aspects of nuclear structure and dynamics influencing the neutrino-nucleus cross section can be studied in electron scattering. Any model or event generator that does not work for electron scattering would likely not work for neutrino scattering. In typical electron scattering experiments the incident beam energy is known with good accuracy, hence the transferred energy, \(\omega\), and momentum, \(q\), can be precisely determined by measuring the outgoing lepton kinematics. The high-precision, high-statistics electron scattering data collected with precisely controlled kinematics allows one to separate different processes.
The tens-of-MeV neutrinos, from stopped pion sources or from core-collapse supernova, primarily interact via two processes: coherent elastic neutrino-nucleus scattering (CEvNS), and inelastic charged and neutral current scattering. Precise determination of Standard Model CEvNS cross section will enable new physics searches in CEvNS experiments, while precise inelastic cross section determination will enable detection of supernova signals in DUNE experiment.
The CEvNS cross section (at tree level) is given as
\[\frac{d\sigma}{dT}(E_{\nu},T) \simeq \frac{G_{F}^{2}}{4\pi}M\left[1-\frac{MT}{2E_{\nu}^{2}}\right]Q_{W} ^{2}F_{W}^{2}(q^{2})\,, \tag{7}\]
where \(G_{F}\) is the Fermi constant, \(M\) the mass of the nucleus, \(E_{\nu}\) and \(T\) the energy of the neutrino and the nuclear recoil energy, respectively. The weak form factor \(F_{W}(q^{2})\) is given as
\[F_{W}(q^{2}) = \frac{1}{Q_{W}}[NF_{n}(q^{2})-(1-4\sin^{2}\theta_{W})ZF_{p}(q^{2})] \tag{8}\]
where \(\theta_{W}\) is the Weinberg mixing angle. In Eq. (8) the quantities \(F_{p}(q^{2})\) and \(F_{n}(q^{2})\) are the proton (\(p\)) and neutron (\(n\)) form factors, respectively. While the proton distributions are relatively well known through elastic electron scattering experiments [11], neutron distributions are much more difficult to constrain. Since \(1-4\sin^{2}\theta_{W}\approx 0\), the weak form factor becomes \(F_{W}(q^{2})\approx F_{n}(q^{2})\). In order to disentangle new physics signals from the SM expected CEvNS rate, the weak form factor, which primarily depends on the neutron density, has to be known at percent level precision.
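As a rough numerical illustration of eq. (7), the sketch below evaluates the tree-level cross section for argon, substituting a Helm parameterization for \(F_{W}(q^{2})\approx F_{n}(q^{2})\); the Helm choice, the value of \(\sin^{2}\theta_{W}\), and the radius and surface-thickness parameters are standard textbook assumptions rather than inputs from this paper.

```python
import numpy as np

GF = 1.1663787e-5     # Fermi constant [GeV^-2]
HBARC = 0.1973        # hbar*c [GeV fm]
SIN2W = 0.2385        # low-energy weak mixing angle (assumed value)

def helm_ff(q_GeV, A, s=0.9):
    # Helm form factor: a common stand-in for F_n(q^2) when no measured
    # weak form factor is available. R and s in fm (assumed parameters).
    q = q_GeV / HBARC
    R0 = np.sqrt(max((1.2 * A ** (1.0 / 3.0)) ** 2 - 5.0 * s ** 2, 0.0))
    x = q * R0
    if x < 1e-6:
        return 1.0
    j1 = np.sin(x) / x ** 2 - np.cos(x) / x   # spherical Bessel j1
    return 3.0 * j1 / x * np.exp(-(q * s) ** 2 / 2.0)

def dsigma_dT(E_nu, T, Z, N, M):
    # Tree-level CEvNS cross section of eq. (7), returned in cm^2/GeV.
    QW = N - (1.0 - 4.0 * SIN2W) * Z          # weak charge (sign conventions vary)
    q = np.sqrt(2.0 * M * T)                  # momentum transfer [GeV]
    FW = helm_ff(q, Z + N)                    # F_W ~ F_n in the approximation above
    val = GF ** 2 / (4.0 * np.pi) * M * (1.0 - M * T / (2.0 * E_nu ** 2)) * QW ** 2 * FW ** 2
    return val * 3.894e-28                    # GeV^-2 -> cm^2

# 30 MeV neutrino on argon-40 at a 20 keV recoil (illustrative numbers)
print(dsigma_dT(E_nu=0.030, T=20e-6, Z=18, N=22, M=37.2))
```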
Recent advancements in Parity Violating Electron Scattering (PVES) experiments, utilizing polarized electron beams, provide relatively model-independent ways of determining weak form factors that can be used as direct input in determining CEvNS cross section. Both processes are described in first-order perturbation theory via the exchange of an electroweak gauge boson between a lepton and a nucleus. While in CEvNS the lepton is a neutrino and a \(Z^{0}\) boson is exchanged, in PVES the lepton is an electron, but measuring the asymmetry allows one to select the interference between the \(\gamma\) and \(Z^{0}\) exchange. As a result, both the CEvNS cross section and the PVES asymmetry depend on the weak form factor \(F_{W}(Q^{2})\), which is mostly determined by the neutron distribution within the nucleus. The parity-violating asymmetry \(A_{pv}\) for elastic electron scattering is the fractional difference in cross section for positive helicity and negative helicity electrons. In Born approximation \(A_{pv}\) is proportional to the weak form factor \(F_{W}(q^{2})\)[14; 15],
\[A_{pv}=\frac{d\sigma/d\Omega_{+}-d\sigma/d\Omega_{-}}{d\sigma/d \Omega_{+}+d\sigma/d\Omega_{-}}=\frac{G_{F}q^{2}|Q_{W}|}{4\pi\alpha\sqrt{2}Z} \frac{F_{W}(q^{2})}{F_{ch}(q^{2})}\,. \tag{9}\]
Here \(F_{ch}(q^{2})\) is the electromagnetic charge form factor, which is typically known from unpolarized electron scattering. Therefore, one can extract \(F_{W}\) from measurements of \(A_{pv}\). Note that Eq. 9 must be corrected for Coulomb distortions [12], though these effects are absent for neutrino scattering.
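Equation (9) translates directly into code. In this sketch the form factors at the measured \(q^{2}\), and the weak charge, are supplied by the caller, and no Coulomb-distortion correction is applied.

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant
G_F = 1.1663787e-5        # Fermi constant [GeV^-2]

def born_apv(q2, Z, Q_W, F_W, F_ch):
    """Born-approximation parity-violating asymmetry of Eq. (9).

    q2 in GeV^2; F_W and F_ch are the weak and charge form factors
    evaluated at q2. Coulomb distortions [12] are not included here.
    """
    return G_F * q2 * abs(Q_W) / (4.0 * math.pi * ALPHA * math.sqrt(2.0) * Z) * F_W / F_ch
```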
The inelastic neutrino-nucleus cross sections in this tens-of-MeV regime are poorly understood. There are very few existing measurements, none with better than 10% uncertainty. As a result, the uncertainties on theoretical calculations of, e.g., neutrino-argon cross sections are essentially unquantified at these energies. Because inelastic neutrino interactions carry large uncertainties, it will be crucial to measure inelastic electron scattering cross sections at energies below 50 MeV and to use those data to calibrate theoretical models of the corresponding neutrino scattering process. Overall, we expect nuclear structure effects to be larger than in CEvNS, presumably at least at the 10% level. To this end, tens-of-MeV electron scattering data will be vital for constraining the neutrino-nucleus interaction at this energy scale.
## IV Current and future electron scattering experiments for neutrino programs
For over five decades, electron scattering experiments at facilities around the world have provided a wealth of information on the complexity of nuclear structure, dynamics, and reaction mechanisms. Decades of experimental work have provided a vital testbed to assess and validate the theoretical approximations and predictions that propelled the theoretical progress built around them. A large set of high-precision electron-nucleus scattering data exists, meant to study various aspects of nuclear physics, covering many nuclei and wide energy ranges corresponding to different reaction mechanisms. While previous and existing electron scattering experiments provide important information, new dedicated measurements whose goals are tied to the needs of neutrino programs are needed. New data can expand the relevant kinematic reach, add relevant nuclei, and provide information on the final-state
\begin{table}
\begin{tabular}{|c|c c c|} \hline
**Collaborations** & **Kinematics** & **Targets** & **Scattering** \\ \hline
**E12-14-012 (JLab)** & \(E_{e}\) = 2.222 GeV & Ar, Ti & \((e,e^{\prime})\) \\ (Data collected: 2017) & \(15.5^{\circ}\leq\theta_{e}\leq 21.5^{\circ}\) & Al, C & \(e,p\) \\ & \(-50.0^{\circ}\leq\theta_{p}\leq-39.0^{\circ}\) & & in the final state \\ \hline
**e4nu/CLAS (JLab)** & \(E_{e}\) = 1, 2, 4, 6 GeV & H, D, He, & \((e,e^{\prime})\) \\ (Data collected: 1999, 2022) & \(\theta_{e}>5^{\circ}\) & C, Ar, \({}^{40}\)Ca, & \(e,p,n,\pi,\gamma\) \\ & & \({}^{48}\)Ca, Fe, Sn & in the final state \\ \hline
**LDMX (SLAC)** & \(E_{e}\) = 4.0, 8.0 GeV & & \((e,e^{\prime})\) \\ (Planned) & \(\theta_{e}<40^{\circ}\) & W, Ti, Al & \(e,p,n,\pi,\gamma\) \\ & & & in the final state \\ \hline
**A1 (MAMI)** & 50 MeV \(\leq E_{e}\leq 1.5\) GeV & H, D, He & \((e,e^{\prime})\) \\ (Data collected: 2020) & \(7^{\circ}\leq\theta_{e}\leq 160^{\circ}\) & C, O, Al & 2 additional \\ (More data planned) & & Ca, Ar, Xe & charged particles \\ \hline
**A1 (eALBA)** & \(E_{e}\) = 500 MeV & C, CH & \((e,e^{\prime})\) \\ (Planned) & - few GeV & Be, Ca & \\ \hline \end{tabular}
\end{table}
Table 1: Current and planned electron scattering experiments. For more details, please see Ref. [2].
hadronic system.
In Table 1, we present a summary of the current and planned electron-scattering experiments. These experiments are primarily motivated by the needs of the accelerator neutrino experiments. They include complementary efforts that cover a broad range of kinematics and carry varied levels of particle identification and other detection capabilities. The work is largely a cross-community effort of nuclear and high-energy physicists. For details of the individual experiments, we refer readers to a recent Snowmass white paper, Ref. [2].
The kinematic coverage of these experiments is presented in Fig. 2, overlaid on the regions expected to contain 68% (light shaded) and 95% (dark shaded) of charged-current interactions of muon neutrinos with argon in the DUNE near detector [16], as estimated using GENIE 3.0.6. The e4nu experiment at JLab [17] employs a broad range of energies and has the potential to study a significant portion of the DUNE phase space. The beam energy of the LDMX experiment at SLAC [18], 4 GeV, closely corresponds to the average neutrino energy in DUNE, and LDMX can perform extensive studies of pion production. The A1 collaboration at MAMI covers a broad range of scattering angles (from 7\({}^{\circ}\) to 160\({}^{\circ}\)) and beam energies (from \(\sim\)50 MeV to 1.5 GeV) and would be able to perform extensive studies of the quasielastic and \(\Delta\) peaks. In these experiments, in general, much attention will be given to measuring
Figure 2: Kinematic coverage of the ongoing and planned electron scattering experiments for electron scattering on targets including argon and titanium presented in the (a) (\(|\mathbf{q}|,\omega\)) and (b) (\(|\mathbf{q}|,W\)) planes. The thin solid, dashed, and dotted lines correspond to the kinematics of quasielastic scattering, \(\Delta\) excitation, and the onset of deep-inelastic scattering at \(W=1.7\) GeV on free nucleons. Figure taken from Ref. [2].
exclusive cross sections.
In Table 2, we present a summary of the current and planned PVES experiments. These experiments provide complementary information for CEvNS experiments by constraining the weak form factor of the nucleus. For tens-of-MeV inelastic neutrino scattering there is currently no ongoing program, though the potential exists for a lower-energy electron beam at MESA in Mainz. Dedicated electron scattering experiments with targets and kinematics of interest to low-energy neutrino experiments will be crucial for achieving the precision goals of low-energy neutrino programs. For more information, we refer readers to a recent Snowmass white paper, Ref. [2].
## V Summary
Neutrino physics has entered a precision era, and exciting neutrino experimental programs at low and high energies can lead to discoveries. The importance of constraining the systematics resulting from neutrino-nucleus interaction physics in key neutrino measurements, in particular at accelerator-based experiments, cannot be overstated. To this end, electron scattering experiments play a key role in constraining the underlying nuclear physics in the nuclear models and event generators intended for use in neutrino experiments.
Electron and neutrino interactions share many similarities in the underlying physical processes, and electron scattering data collected with precisely controlled kinematics, large statistics, and high precision allow one to constrain nuclear properties and specific interaction processes. Electron scattering data provide the necessary testbed to assess and validate the different nuclear models intended for use in neutrino experiments. While previously existing electron scattering data provide important information, new and proposed
\begin{table}
\begin{tabular}{|c|c c c c|} \hline
**Collaborations** & **Target \(q^{2}\) (GeV\({}^{2}\))** & \(A_{pv}\) (ppm) & \(\pm\delta R_{n}\) **(\%)** \\ \hline PREX & \({}^{208}\)Pb & 0.00616 & \(0.550\pm 0.018\) & 1.3 \\ CREX & \({}^{48}\)Ca & 0.0297 & & 0.7 \\ Qweak & \({}^{27}\)Al & 0.0236 & \(2.16\pm 0.19\) & 4 \\ MREX & \({}^{208}\)Pb & 0.0073 & & 0.52 \\ \hline \end{tabular}
\end{table}
Table 2: Parity violating elastic electron scattering experiments. For more details, please see Ref. [2].
measurements whose goals are closely tied to the needs of the neutrino program, in terms of expanded kinematic reach, additional relevant nuclei, and information on the final-state hadronic system, are vital. The NP-HEP cross-community collective efforts are playing a key role in this endeavour.
###### Acknowledgements.
VP is grateful to the organizers of the NuFACT 2022 workshop for the invitation and hospitality. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics.
|
2304.04758 | Expectations over Unspoken Alternatives Predict Pragmatic Inferences | Scalar inferences (SI) are a signature example of how humans interpret
language based on unspoken alternatives. While empirical studies have
demonstrated that human SI rates are highly variable -- both within instances
of a single scale, and across different scales -- there have been few proposals
that quantitatively explain both cross- and within-scale variation.
Furthermore, while it is generally assumed that SIs arise through reasoning
about unspoken alternatives, it remains debated whether humans reason about
alternatives as linguistic forms, or at the level of concepts. Here, we test a
shared mechanism explaining SI rates within and across scales: context-driven
expectations about the unspoken alternatives. Using neural language models to
approximate human predictive distributions, we find that SI rates are captured
by the expectedness of the strong scalemate as an alternative. Crucially,
however, expectedness robustly predicts cross-scale variation only under a
meaning-based view of alternatives. Our results suggest that pragmatic
inferences arise from context-driven expectations over alternatives, and these
expectations operate at the level of concepts. | Jennifer Hu, Roger Levy, Judith Degen, Sebastian Schuster | 2023-04-07T18:12:22Z | http://arxiv.org/abs/2304.04758v1 | # Expectations over Unspoken Alternatives Predict Pragmatic Inferences
###### Abstract
Scalar inferences (SI) are a signature example of how humans interpret language based on unspoken alternatives. While empirical studies have demonstrated that human SI rates are highly variable - both within instances of a single scale, and across different scales - there have been few proposals that quantitatively explain both cross- and within-scale variation. Furthermore, while it is generally assumed that SIs arise through reasoning about unspoken alternatives, it remains debated whether humans reason about alternatives as linguistic forms, or at the level of concepts. Here, we test a shared mechanism explaining SI rates within and across scales: context-driven expectations about the unspoken alternatives. Using neural language models to approximate human predictive distributions, we find that SI rates are captured by the expectedness of the strong scalemate as an alternative. Crucially, however, expectedness robustly predicts cross-scale variation only under a meaning-based view of alternatives. Our results suggest that pragmatic inferences arise from context-driven expectations over alternatives, and these expectations operate at the level of concepts.
+
Footnote †: Code and data can be found at [https://github.com/jennhu/expectations-over-alternatives](https://github.com/jennhu/expectations-over-alternatives).
## 1 Introduction
Much of the richness of linguistic meaning arises from what is left unsaid (e.g., Grice, 1975; Sperber and Wilson, 1986; Horn, 1989). For example, if Alice says "Some of the students passed the exam", Bob can infer that Alice means _not all_ students passed the exam, even though Alice's utterance would still be logically true if all students had passed. One explanation of this inference is that Bob reasons about the unspoken **alternatives** that were available to the speaker. Under the assumptions that (1) speakers generally try to be informative, (2) Alice has full knowledge of the situation, and (3) it would have been relevant and more informative for Alice to say "All of the students passed the exam", Alice's choice to say "some" suggests that she believes the sentence with "all" is false. This inference pattern is more generally known as **scalar inference** (SI), which arises from orderings between linguistic items (scales).
SI has often been treated as a categorical phenomenon: when a speaker utters a weaker (less informative) item on a scale, a listener rules out the meaning of stronger (more informative) items on that scale (e.g., Levinson, 2000). However, empirical studies have demonstrated substantial variability in the rates at which humans draw SIs, both within instances of a single scale (Degen, 2015; Eiteljoerge et al., 2018; Li et al., 2021) and across scales formed by different lexical items (e.g., Doran et al., 2009; Beltrama and Xiang, 2013; van Tiel et al., 2016; Gotzner et al., 2018; Pankratz and van Tiel, 2021; Ronai and Xiang, 2022). For example, consider the following instances of the scale \(\langle\textit{some},\textit{all}\rangle\):
(1) a. I like some country music.
    b. I like some, but not all, country music.

(2) a. It would certainly help them to appreciate some of the things that we have here.
    b. It would certainly help them to appreciate some, but not all, of the things that we have here.
Degen (2015) finds that humans are highly likely to consider (1-a) as conveying a meaning similar to (1-b), but unlikely to consider (2-a) as conveying a meaning similar to (2-b) (Figure 1(a)). Similarly, consider the following instances of the scales \(\langle\textit{possible},\textit{certain}\rangle\) and \(\langle\textit{ugly},\textit{hideous}\rangle\), which both consist of adjectives ordered by entailment:
(3) a. Success is possible.
    b. Success is not certain.

(4) a. The painting is ugly.
    b. The painting is not hideous.
van Tiel et al. (2016) find that humans are highly likely to conclude that (3-a) implies (3-b), but unlikely to conclude that (4-a) implies (4-b) (Figure 1b).
While cross-scale and within-scale variation have typically been studied as distinct empirical phenomena, they both reflect gradedness in listener inferences based on alternatives and context. It therefore seems desirable to explain these empirical findings with a shared account, but there have been few proposals that quantitatively explain both within- and cross-scale variation. For example, cross-scale variation can be explained by intrinsic properties of the scale (e.g., whether the strong scalemate refers to an extreme endpoint; van Tiel et al., 2016), but these factors cannot explain variation within instances of a single scale. On the other hand, many factors explaining within-scale variance are scale-specific (e.g., the partitive "of the" for \(\langle\)_some_, _all_\(\rangle\); Degen, 2015) and may not generalize to new scales.
Here, we investigate a shared account of SI rates within and across scales. Since the alternatives are not explicitly produced (by definition), the listener has uncertainty over which alternatives the speaker could have used - and therefore, which strong scalemates ought to be negated through SI. Building upon constraint-based accounts of human language processing (Degen and Tanenhaus, 2015, 2016), we test the hypothesis that SIs depend on the availability of alternatives, which depend on context-driven expectations maintained by the listener. For example, if a speaker says "The movie was good", the listener might predict that _amazing_ is a more likely alternative than _funny_ to the weak term _good_. An expectation-based view predicts that the listener would thus be more likely to infer that the movie is not amazing (according to the speaker), and less likely to infer that the movie is not funny. However, while Degen and Tanenhaus (2015, 2016) have argued that listeners maintain context-driven expectations over alternatives, these studies have primarily investigated a single scale (\(\langle\)_some_, _all_\(\rangle\)) in small domains, arguing from qualitative patterns and in the absence of a formal theory.
Furthermore, while it is generally assumed that SIs arise based on reasoning about unspoken alternatives, it remains debated whether humans reason about alternatives as linguistic structures (e.g., Katzir, 2007; Fox and Katzir, 2011), or at the level of concepts (e.g., Gazdar, 1979; Buccola et al., 2021). Returning to the earlier example, if the weak scalemate is _good_, listeners may reason about a concept like VeryGood instead of a specific linguistic expression like _amazing_. In this sense, the listener's uncertainty about alternatives might arise from uncertainty about both the scale itself (_Is the speaker implying the plot wasn't amazing, or that the jokes weren't funny?_), as well
Figure 1: (a) Distribution of human scalar inference (SI) ratings (on scale of 1-7) across instances of the \(\langle\)_some_, _all_\(\rangle\) scale (reproduction of Fig. 1, Degen 2015). (b) Average SI rates across scales formed by different lexical items (reproduction of Fig. 2, van Tiel et al. 2016).
as the exact word forms under consideration by the speaker (_Is the speaker implying the movie wasn't amazing, fantastic, or wonderful?_). Despite theoretical debates about the nature of alternatives, however, the role of concept-based alternatives in SI has not been tested in a systematic, quantitative way.
We provide a formalization of an expectation-based account of alternatives and test it on both string-based and concept-based views of alternatives. Instead of empirically estimating human expectations over alternatives (cf. Ronai and Xiang, 2022), we use neural language models as an approximation, which allows us to generate predictions for arbitrary sentences and contexts. We test the account's predictions on human SI rates within the \(\langle\)_some_, _all\(\rangle\)_ scale (Degen, 2015), and across 148 scales from four datasets (van Tiel et al., 2016; Gotzner et al., 2018; Pankratz and van Tiel, 2021; Ronai and Xiang, 2022). We find support for the expectation-based account, and also provide the first evidence that concept-based alternatives may be underlying a wide range of SIs. Our results suggest that pragmatic inferences may arise from context-driven expectations over unspoken alternatives, and these expectations operate at the level of concepts.
## 2 Background
### 2.1 Within-scale variation
Within-scale variation refers to the variation in SI rates across instances of a single scale, such as \(\langle\)_some_, _all\(\rangle\)_. To explore SI variation within the scale \(\langle\)_some_, _all\(\rangle\)_, we use the dataset collected by Degen (2015), which features 1363 naturalistic sentences containing a "some"-NP from the Switchboard corpus of telephone dialogues (Godfrey et al., 1992) (Table 1). For each sentence, SI rates were measured using a sentence-similarity paradigm. On each trial, participants saw two sentence variants: the original sentence containing "some", and a minimally differing sentence where ", but not all," was inserted directly after "some". Participants were asked, "How similar is the statement with'some, but not all' to the statement with'some'?" and indicated responses (similarity judgments) on a seven point Likert scale. If the speaker's originally intended meaning clearly includes an implicature, then making the implicature explicit by inserting ", but not all," should not change the meaning of the sentence, so similarity judgments should be high. Thus, a higher similarity judgment indicates a stronger SI.
Degen (2015) finds substantial variation in SI rates across contexts, challenging the idea that the "some, but not all" inference arises reliably without sensitivity to context (Horn, 1989; Levinson, 2000). She also reports several features that predict SI rates, such as whether "some" occurs with the partitive "of the", or whether the "some"-NP is in subject position. However, these features may be highly specific to the \(\langle\)_some_, _all\(\rangle\)_ scale, and it is unclear whether a more general mechanism may also explain variation within or across other scales.
### 2.2 Cross-scale variation (scalar diversity)
Cross-scale variation refers to the variation in SI rates across scales formed by different lexical items. To explore this, we use SI rates across 148 unique scales from four datasets, summarized in Table 1. Each scale involves a pair of English words (adjectives, adverbs, or verbs) of the form \(\langle\)[WEAK], [STRONG]\(\rangle\), where [WEAK] is less informative than [STRONG] (e.g., \(\langle\)_intelligent_, _brilliant_\(\rangle\)).1 For each dataset, SI rates were measured through a binary choice task. Participants saw a character make a short, unembedded statement consisting of a simple noun phrase subject and a predicate with a weak scalar item (e.g., "John says: This student is intelligent."). Their task was to indicate (_Yes_ or _No_) whether they would conclude that the speaker believes the negation of a strong scalar item (e.g., "Would you conclude from this
\begin{table}
\begin{tabular}{l l r r r r} \hline \hline Dataset & Type of variation & \# participants & \# scales & \# contexts per scale & \# data points per item \\ \hline Degen (2015) & Within-scale & 243 & 1 & 1363 & \(\sim\) 10 \\ \hline Ronai and Xiang (2022) & Cross-scale & 40 & 57 & 1 & 40 \\ Pankratz and van Tiel (2021) & Cross-scale & 1970 & 50 & 1 & \(\sim\) 40 \\ Gotzner et al. (2018) & Cross-scale & 220 & 67 & 1 & 40 \\ van Tiel et al. (2016) & Cross-scale & 28 & 39 & 3 & 10 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Details of human data used in our analyses. An item is a unique (scale, context) combination.
that, according to John, she is not brilliant?"). The SI rate for a scale is the proportion of _Yes_ responses.
This method has revealed large variation in SI rates, ranging from 4% (\(\langle\textit{ugly},\textit{hideous}\rangle\)) to 100% (\(\langle\textit{sometimes},\textit{always}\rangle\)) (van Tiel et al., 2016). van Tiel et al. (2016) test two classes of factors that might predict SI rates: the availability of the strong scalemate given the weak scalemate, and the degree to which scalemates can be distinguished from each other. They find SI rates are predicted by measures of scalemate distinctness (e.g., whether the strong scalemate forms a fixed endpoint on the scale), but not by availability (but see Westera and Boleda, 2020; Ronai and Xiang, 2022). Other studies have proposed additional scale-intrinsic factors (e.g., Gotzner et al., 2018; Sun et al., 2018; Pankratz and van Tiel, 2021). However, structural properties of a scale cannot explain variability in SI rates _within_ a scale, as these properties do not change across contexts.
While others have proposed context-dependent factors - which could, in principle, explain both cross- and within-scale variation - these factors often lack explanatory power in practice. For example, Ronai and Xiang (2021) find that the prominence of the Question Under Discussion (Roberts, 2012) is correlated with SI rates, but only for unbounded scales (i.e., scales where neither scalemate has a fixed, extreme meaning).
## 3 An expectation-based account of SI
Theoretically, it is the set of alternative utterances - utterances that the speaker could have used, but didn't - that drives scalar implicature, and in principle every possible utterance in a language might be an alternative to every other. However, at an algorithmic level (Marr, 1982), it would be intractable for listeners to perform inference over this entire set. Furthermore, the signature pattern of SI would not arise without restrictions on the alternatives: otherwise, "[WEAK], but not [STRONG]" and "[STRONG]" would both be alternatives to "[WEAK]", leading to contradictory inferences without a mechanism for breaking symmetry (Kroch, 1972; Katzir, 2007; Breheny et al., 2018).
To solve this symmetry problem, some approaches restrict alternatives based on structural complexity through grammar-internal mechanisms (e.g., Katzir, 2007; Fox and Katzir, 2011). However, these theories do not capture the uncertainty that listeners maintain, and are difficult to test quantitatively. Here, we test the view that listeners form probabilistic expectations over alternatives, given information from their interaction with the speaker. In the remainder of this section, we first discuss the conceptual predictions of an expectation-based account of SI, and then describe how we operationalize these predictions using neural language models.
Suppose that a listener hears a sentence with a weak scalar term [WEAK] (e.g., "This student is intelligent"). To rule out the meaning of a particular strong scalemate [STRONG] (e.g., the student is not _brilliant_), the listener must have reason to believe that the speaker would have said [STRONG] if they had intended to convey the strong meaning. However, since the alternatives are not explicitly produced, the listener has some degree of uncertainty over what alternatives were considered by the speaker. If it is likely that the speaker would have said [STRONG] to convey the strong meaning, then their choice to say [WEAK] suggests that they did not have grounds to say [STRONG] - and thus, an SI should be more likely to arise.
The key question, then, is how listeners estimate which alternatives are likely to be considered by the speaker. An expectation-based account proposes that listeners integrate contextual and grammatical cues to maintain probabilistic expectations over these alternatives. A scalemate that is more probable (given these cues) should be more likely to enter the scalar inference computation. Thus, this account predicts that the more expected the strong scalemate is as an alternative to the weak scalemate, the higher SI rates should be.
### 3.1 String-based view of alternatives
When an alternative is likely to be a strong scalemate, listeners should be more likely to rule out its meaning, resulting in higher SI rates. Conditioned on the context and the speaker's choice to use [WEAK], the listener must estimate the probability of [WEAK] and [STRONG] being contrasted in a scalar relationship. Since it is difficult to directly estimate this probability, we construct a sentence frame where the probability of [STRONG] - at the level of forms - approximates the probability of [STRONG] being in a scalar relationship with a weak scalemate [WEAK]. This approach allows us to re-frame the problem of estimating listeners'
expectations over strong scalemates into a word prediction problem.
To do this, we use the scalar construction "_X, but not Y_", which in many cases suggests that \(Y\) is a strong scalemate to \(X\) (Hearst, 1992; de Melo and Bansal, 2013; van Miltenburg, 2015; Pankratz and van Tiel, 2021). For a given utterance [CONTEXT][WEAK][CONTEXT] and hypothesized scale \(\langle\)[WEAK], [STRONG]\(\rangle\), we form a sentence that explicitly states the SI:
\[\small\text{[CONTEXT]}\;\underbrace{\texttt{[WEAK], but not [STRONG],}}_{\text{scalar construction}}\;\text{[CONTEXT]} \tag{1}\]
To test how expected [STRONG] is as an alternative to [WEAK], we need to estimate how likely a human would predict [STRONG] to appear in the [STRONG] position in (1).2 Instead of attempting to directly measure these predictions (cf. Ronai and Xiang, 2022, see (3)), we approximate this with neural language models. We measure how unexpected [STRONG] is by computing its surprisal (negative log probability) under a language model, conditioned on the rest of the sentence. Since surprisal measures _un_expectedness, we predict a negative relationship between SI rate and the surprisal of the strong scalemate.
Footnote 2: Another approach would be to measure the expectedness of [STRONG] in the template [CONTEXT] [STRONG] [CONTEXT] – that is, by replacing [WEAK] with [STRONG] in the speaker’s original utterance. This template would instantiate the theory that listeners determine alternatives based on the context. In contrast, the template we use in (1) instantiates the theory that listeners form expectations over alternatives based on the context as well as the speaker’s usage of [WEAK]. We return to this topic in Section 7.1.
This predictor is closely related to the notion of an SI's "relevance" Pankratz and van Tiel (2021). Under usage-based theories of language (e.g., Tomasello, 2003; Bybee and Beckner, 2015), if a weak scalar term is encountered frequently in a scalar relationship with a particular strong term, then the scalar relationship between these items will be enforced. Thus, Pankratz and van Tiel (2021) measure the relevance of an SI by counting corpus frequencies of the scalemates in the string "[WEAK], but not [STRONG]." This is conceptually aligned with our setup, where we might expect higher corpus frequencies to correspond to lower surprisal under a language model. However, our predictor differs from Pankratz and van Tiel's in an important way: they aim to measure the "general relevance" of an SI, which they define as "relevance even in the absence of a situated context." It is unclear how general relevance can explain variation in SI rates within instances of a scale. By using context-conditioned probabilities from a language model, our predictor could account for both the general frequency of "[WEAK], but not [STRONG]" as well as expectations driven by the context in which the scale occurs.
### 3.2 Concept-based view of alternatives
The method described above implicitly treats linguistic forms as the alternatives driving scalar inferences. However, recent proposals have advanced the view that alternatives are not linguistic objects, but instead operate at the level of more general reasoning preferences Buccola et al. (2021). On this view, alternatives are constructed by replacing primitives of the concept expressed by the speaker with primitives of equal or less complexity.
Here, we test a generalization of this concept-based view of alternatives. Suppose, for example, a speaker uses the weak scalar term _big_. On a concept-based view, the listener may infer that the speaker is contrasting _big_ with a concept like VeryBig instead of a particular linguistic expression like _enormous_. However, in the experiments mentioned in Section 2.2, the SI process likely needs to be grounded in linguistic forms before the listener makes a judgment about a particular strong scalemate (in string form). One hypothesis is that upon hearing an expression with a weak scalemate, a stronger conceptual alternative is activated, which in turn probabilistically activates all the strings that could reflect it. Returning to our earlier example, if the conceptual alternative is VeryBig, and _huge_, _massive_, and _enormous_ are string-based realizations of that alternative, they may be assigned a high likelihood. When asked about a specific string-form alternative (e.g., "The elephant is big. Would you conclude that it is not enormous?"), humans may endorse the SI if the probability of conceptually similar linguistic alternatives is sufficiently high, even if the probability of the tested alternative (here, _enormous_) is low.
If SIs involve reasoning about conceptual alternatives, then surprisal values estimated from assumed string-form alternatives may be poor estimates of the true relevant surprisal, as a single concept could be expressed with multiple forms. Therefore, in addition to assessing whether expectedness of specific linguistic forms predicts SI rates (Section 3.1), we also test a second predictor which approximates the expectedness of conceptual alternatives. To do this, we need a set of alternatives \(\mathcal{A}\) that could serve as potential linguistic scalemates. As described in more detail in Sections 4.3 and 5.3, we obtain \(\mathcal{A}\) by taking a fixed set of words with the same part of speech as the weak scalemate, inspired by grammatical theories of alternatives (e.g., Rooth, 1985; Katzir, 2007).3
Footnote 3: We adopt a liberal view of alternatives to avoid under-generation. However, an important open question is how alternatives are determined, which we leave for future work.
Using this alternative set \(\mathcal{A}\), we compute the weighted average surprisal of \(\mathcal{A}\) using weights determined by the conceptual similarity between each alternative and the tested strong scalemate. We use GloVe embeddings Pennington et al. (2014) as an approximation for conceptual representations of scalar items, and cosine similarity between GloVe vectors to approximate conceptual similarity.
For each scale \(\langle\texttt{[WEAK]}\), \(\texttt{[STRONG]}\rangle\), we obtain weights by computing the cosine similarity between the GloVe embeddings for \(\texttt{[STRONG]}\) (\(v_{\texttt{[STRONG]}}\)) and each potential alternative \(a\) (\(v_{a}\)) in the alternative set \(\mathcal{A}\). We compute the weighted average probability over \(\mathcal{A}\) using these weights, and then take the negative log to obtain the weighted average surprisal:
\[-\log\left(\frac{\sum_{a\in\mathcal{A}}P(a)\cdot\text{cossim}(v_{\texttt{[ STRONG]}},v_{a})}{\sum_{a\in\mathcal{A}}\text{cossim}(v_{\texttt{[STRONG]}},v_{a})}\right) \tag{2}\]
If there are many conceptually similar alternatives with low surprisal, then the weighted average surprisal will be low, even if the surprisal of the tested scalemate is high. Therefore, weighted average surprisal forms a proxy for concept-based surprisal, which we compare to string-based surprisal.
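Equation (2) can be sketched in a few lines. In this illustration, the GloVe vectors are assumed to be preloaded into a dictionary, and the language-model probabilities \(P(a)\) over the candidate set are assumed to have been computed already; both steps are outside the snippet.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def weighted_avg_surprisal(strong, alt_probs, glove):
    """Concept-based surprisal of Eq. (2).

    strong    : the tested strong scalemate (a string)
    alt_probs : dict mapping each candidate alternative a in A to its
                LM probability P(a) in the [STRONG] slot
    glove     : dict mapping words to GloVe vectors (assumed preloaded)
    """
    weights = {a: cosine(glove[strong], glove[a]) for a in alt_probs}
    num = sum(alt_probs[a] * w for a, w in weights.items())
    den = sum(weights.values())
    return -np.log(num / den)
```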
## 4 Predicting variation within \(\langle\textit{some},\textit{all}\rangle\)
### 4.1 Human data
To investigate variation within the scale \(\langle\textit{some}\), _all_\(\rangle\), we use human SI strength ratings collected by Degen (2015). These ratings were measured by asking participants to rate the similarity (1-7) between a sentence with "some" and a minimally differing sentence with "some, but not all". See Section 2.1 for details.
### 4.2 Model
Following the experiment conducted by Degen (2015), we construct scalar templates by inserting ", but not all," after the occurrence of "some" in each sentence from the dataset. Since this scalar construction ("some, but not all,") often occurs in the middle of the sentence, we use the bidirectional language model BERT Devlin et al. (2019) to measure model expectations at the position of the strong scalemate. Concretely, we replace "all" with the [MASK] token and measure BERT's probability distribution at that token. All models in our study are accessed via the Huggingface transformers library Wolf et al. (2020).
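As a concrete illustration, the masked probability could be obtained along the following lines with the transformers library. This is a minimal sketch rather than the study's exact pipeline, and the example sentence is constructed in the style of the Degen (2015) items.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def masked_surprisal(sentence_with_mask, target):
    """Surprisal (-log p, nats) of `target` at the [MASK] position."""
    inputs = tok(sentence_with_mask, return_tensors="pt")
    mask_idx = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits
    log_probs = torch.log_softmax(logits[0, mask_idx], dim=-1)
    return -log_probs[tok.convert_tokens_to_ids(target)].item()

# In the style of example (1-b):
print(masked_surprisal("I like some, but not [MASK], country music.", "all"))
```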
### 4.3 Candidate alternatives
For our string-based surprisal predictor (Section 3.1), we are only concerned with the surprisal of the alternative _all_ in the [STRONG] position in (1). However, to compute our concept-based surprisal predictor (Section 3.2), we need a set of candidate alternatives that could potentially serve as the strong scalemates implied by the speaker. Since the alternatives to _some_ are highly constrained by the grammar, we manually constructed a set of English quantifiers that can be used in contrast to _some_: _each_, _every_, _few_, _half_, _much_, _many_, _most_, and _all_.
### 4.4 Results
Figure 2 shows the relationship between our predictors and human SI ratings for Degen's (2015) dataset of variation within \(\langle\textit{some}\), _all_\(\rangle\). We find that both string-based and concept-based surprisal are indeed negatively correlated with human similarity judgments (string-based: Figure 2(a), Pearson \(\rho=-0.400,p<0.0001\); concept-based: Figure 2(b), \(\rho=-0.432,p<0.0001\)).4
Footnote 4: We note that the relationship between surprisal and SI ratings appears highly non-linear in Figure 2(a). We expect this is due to the fact that the scalemate _all_ is highly expected in most contexts, so the surprisal values of _all_ are concentrated near zero. There is a stronger linear relationship between SI ratings and raw probabilities (\(\rho=0.482,p<0.0001\)).
We additionally conducted a multivariate analysis including our two new predictors (string- and concept-based surprisal) among the predictors investigated in Degen's original study. We centered and transformed all variables according to Degen's original analyses. The results are summarized in Table 2. We find that the original predictors
remain statistically significant, and that concept-based surprisal (but not string-based surprisal) is a significant predictor in the full model. This suggests that listeners draw stronger scalar inferences when _all_ - or a conceptually similar alternative - is more expected in a given context.
## 5 Predicting variation across scales
### 5.1 Human data
To investigate variation across scales, we use human SI rates collected by four studies (Ronai and Xiang, 2022; Pankratz and van Tiel, 2021; Gotzner et al., 2018; van Tiel et al., 2016). SI rates were measured by showing participants a sentence with the weak scalemate (e.g., "The student is intelligent"), and asking whether they would endorse the negation of the strong scalemate (e.g., "The student is not brilliant"). See Section 2.2 for details.
### 5.2 Model
We construct scalar templates following the pattern summarized in Table 3. Since in each case the strong scalemate is the final word in the sentence,5 we use an autoregressive language model to measure expectations over potential scalemates in the [STRONG] position. We use the base GPT-2 model (Radford et al., 2019) via Huggingface and obtain model surprisals through the SyntaxGym command-line interface (Gauthier et al., 2020).
Footnote 5: For a small number of verbal scales, the strong scalemate is followed by the pronoun “it” to make the sentence grammatical. We do not expect this to matter for our purposes.
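A transformers-based sketch of this measurement is shown below. The study reports obtaining surprisals through SyntaxGym, so this direct implementation is an assumed equivalent, and summing subtoken surprisals for multi-token scalemates is our assumption about handling BPE segmentation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def final_word_surprisal(prefix, strong):
    """Total surprisal (nats) of `strong` continuing `prefix`."""
    prefix_ids = tok(prefix, return_tensors="pt").input_ids
    strong_ids = tok(" " + strong, return_tensors="pt").input_ids  # GPT-2 BPE needs the space
    ids = torch.cat([prefix_ids, strong_ids], dim=1)
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    n = prefix_ids.shape[1]
    total = 0.0
    for i in range(strong_ids.shape[1]):
        pos = n + i                                   # position of this subtoken
        total -= log_probs[0, pos - 1, ids[0, pos]].item()
    return total

# An item in the style of the cross-scale experiments:
print(final_word_surprisal("This student is intelligent, but not", "brilliant"))
```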
### 5.3 Candidate alternatives
Recall from Section 3.2 that we need a set of potential linguistic alternatives to compute the weighted average surprisal. We take this set of alternatives to be a set of words with the same part of speech (POS) as the weak scalemate and obtain these candidate alternative sets by extracting lists of English adjectives, adverbs, and verbs from WordNet (Miller, 1995). We then used NLTK (Loper and Bird, 2002) to find the words satisfying finer-grained POS tags (JJ for adjectives, RB for adverbs, and VB for verbs), and sorted each POS set according to word frequencies from the OpenSubtitles corpus (Lison and Tiedemann, 2016).6,7 We excluded words in the POS sets that were not in the frequency corpus, resulting in 3204 adjectives, 1953 adverbs, and 226 verbs. We restricted each POS set to its 1000 highest-frequency words, and performed some manual exclusions (e.g., removing "do" and "be" from the verb set, which are unlikely to form scales with any of the tested items and follow different syntactic rules). This finally resulted in our three alternative sets: 1000 adjectives, 960 adverbs, and 224 verbs.8
Footnote 6: [https://github.com/hermitdave/FrequencyWords](https://github.com/hermitdave/FrequencyWords)
Footnote 7: [http://www.opensubtitles.org](http://www.opensubtitles.org)
Footnote 8: Most words in the alternative sets occur with low frequency, but we chose to be liberal when including alternatives to ensure broad coverage over potential scalemates.
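A sketch of this extraction pipeline is given below. The helper is illustrative: the `freq` argument stands in for a word-frequency dictionary built from the OpenSubtitles-derived list (its construction is not shown), and the exact filtering steps are assumptions rather than the study's released code.

```python
import nltk
from nltk.corpus import wordnet as wn

# First run only:
# nltk.download("wordnet"); nltk.download("averaged_perceptron_tagger")

def pos_alternative_set(wn_pos, nltk_tag, freq, top_k=1000):
    """Candidate alternatives: single-word WordNet lemmas of one part of
    speech, filtered by NLTK POS tag and sorted by corpus frequency.

    wn_pos   : WordNet POS constant, e.g. wn.ADJ, wn.ADV, wn.VERB
    nltk_tag : fine-grained tag to require, e.g. "JJ", "RB", "VB"
    freq     : {word: count} frequency dictionary (assumed given)
    """
    lemmas = {l.name() for s in wn.all_synsets(wn_pos) for l in s.lemmas()
              if "_" not in l.name()}                    # drop multiword lemmas
    tagged = {w for w in lemmas if nltk.pos_tag([w])[0][1] == nltk_tag}
    in_corpus = [w for w in tagged if w in freq]         # drop out-of-corpus words
    return sorted(in_corpus, key=freq.get, reverse=True)[:top_k]

# e.g., adjectives = pos_alternative_set(wn.ADJ, "JJ", freq)
```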
### 5.4 Results
#### 5.4.1 String-based analyses
Figure 3(a) shows our results for cross-scale variation, under a string-based view of alternatives. We find that surprisal is a significant predictor only for Ronai and Xiang's dataset (Pearson \(\rho=-0.361,p=0.006\)).9
\begin{table}
\begin{tabular}{l r r} \hline \hline Predictor & \(\beta\) & \(p\) \\ \hline Degen (2015) Predictors & & \\ Partitive & 0.658 & \(<\) 0.0001 \\ Strength & \(-0.470\) & \(<\) 0.0001 \\ Mention & 0.287 & \(<\) 0.0001 \\ Subjecthood & 0.495 & \(<\) 0.0001 \\ Modification & 0.157 & \(<\) 0.01 \\ Log sentence length & 0.189 & \(<\) 0.0001 \\ \hline Our Predictors & & \\ String-based surprisal & **0.008** & **0.960** \\ Concept-based surprisal & **-0.782** & **\(<\) 0.001** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of full regression model, including original predictors from Degen (2015) (see the original study for a detailed description of each of the predictors).
Figure 2: Relationship between human SI strength ratings within \(\langle\)_some_, _all_\(\rangle\) scale (Degen, 2015) and BERT-derived predictors: (a) surprisal of scalemate _all_ in the scalar construction, and (b) weighted average surprisal over the full set of candidate alternatives (Section 4.3). Each point represents a sentence. Shaded region denotes 95% CI.
Model surprisal vs. human completions. For the dataset where we do find a relationship between surprisal and SI rates, we ask whether model surprisals are correlated with human-derived measurements of how "accessible" the strong scalemate is. If model surprisals and human accessibility scores are strongly linked, this would suggest that models and humans are aligned at the level of predictive distributions over alternatives, validating our approach of using language models to approximate human predictions.
To this end, we use data from Ronai and Xiang's Experiment 2, which measured the accessibility of scalemates through a Cloze task. Humans were presented with a short dialogue featuring a sentence with the weak scalemate, as in (3), and then asked to generate a completion of the dialogue in the blank. The "accessibility" of the strong scalemate is taken to be the frequency with which it is generated in this paradigm.
\[\begin{split}\text{Sue:}\quad&\text{The movie is good.}\\ \text{Mary:}\quad&\text{So you mean it's not \_\_\_.}\end{split} \tag{3}\]
We find that model surprisals are negatively correlated with accessibility scores (Figure 4; \(\rho=-0.357,p=0.006\)), suggesting that our method of estimating expectations over alternatives using artificial language models aligns with direct measurements in humans.
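The comparison itself is a standard Pearson correlation; a minimal sketch with placeholder values (the real inputs would be one model surprisal and one Cloze accessibility score per scale):

```python
from scipy.stats import pearsonr

# Placeholder numbers for illustration only.
surprisals = [2.1, 4.8, 1.3, 6.0, 3.5]          # model surprisal per strong scalemate
accessibility = [0.55, 0.20, 0.70, 0.05, 0.30]  # Cloze generation frequency per scale

rho, p = pearsonr(surprisals, accessibility)
print(f"rho = {rho:.3f}, p = {p:.3f}")
```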
#### 5.4.2 Concept-based analyses
Turning to a conceptual view of alternatives, Figure 3(b) shows the relationship between human SI rates and weighted average surprisals (Equation 2). We find a significant negative correlation for all but one of the tested datasets (Ronai and Xiang: \(\rho=-0.400,p=0.002\); Pankratz and van Tiel: \(\rho=-0.342,p=0.015\); Gotzner et al.: \(\rho=-0.415,p=0.0005\); van Tiel et al.: \(\rho=-0.167,p=0.310\)), demonstrating that similarity-weighted surprisal captures more variation than raw surprisal (cf. Figure 3(a); Section 5.4.1).
We additionally included both (centered) string-based and concept-based surprisal as predictors in a multivariate model, summarized in Table 4 (middle columns). As in the within-scale analysis, for three of the four datasets we find that concept-based surprisal is a stronger predictor than string-based surprisal. With that said, we find only a marginal effect of concept-based surprisal in Ronai and Xiang's data, and no effect of either predictor in van Tiel et al.'s data. However, for Ronai and Xiang's data, this does not mean that there is no value in either predictor - rather, the predictors are too closely correlated to definitively favor one over the other. To demonstrate this, for each dataset we performed an analysis of variance (ANOVA) comparing the full model to a null intercept-only model (Table 4, right columns). We find that for all datasets except that of van Tiel et al., the model with both surprisal predictors explains significantly more variance than the null model. In sum, our results suggest that the expectedness of the strong scalemate can capture significant cross-scale SI variation, but these expectations may operate over groups of semantically similar linguistic forms instead of individual strings.
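The two-predictor regression and the nested-model ANOVA described here could be reproduced along the following lines with statsmodels; the data frame below is a synthetic stand-in for the per-scale data, not the study's actual values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
# Synthetic stand-in: one row per scale, with centered predictors.
df = pd.DataFrame({
    "string_surprisal": rng.normal(size=57),
    "concept_surprisal": rng.normal(size=57),
})
df["si_rate"] = 0.5 - 0.1 * df["concept_surprisal"] + rng.normal(0, 0.1, size=57)

full = smf.ols("si_rate ~ string_surprisal + concept_surprisal", data=df).fit()
null = smf.ols("si_rate ~ 1", data=df).fit()
print(full.params)           # beta for each predictor
print(anova_lm(null, full))  # F-test: full model vs. intercept-only model
```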
Qualitative analysis. As a follow-up analysis, we identified cases where GPT-2 assigns low probability to the tested strong scalemate, but high probability to near synonyms. We analyzed the top 5 alternatives from the full alternative set (Section 5.3) that were assigned highest probability as strong scalemates under GPT-2. Figure 5 shows three examples from Ronai and Xiang's dataset. The title of each subplot shows the scalar construction, with the weak scalemate highlighted in teal and the tested strong scalemate underlined in red. The y-axis shows the top 5 candidate scalemates, and the x-axis shows the probability assigned by the model. For the weak scalemate _big_ (left), GPT-2 assigns highest probability to the alternative _huge_, which semantically conveys similar information to the empirically tested alternative _enormous_. We see a similar pattern for weak scalemate _largely_ and alternatives _completely_ and _totally_ (middle), as well as for weak scalemate _hard_ and alternative _impossible_ (right). This is consistent with the hypothesis that surprisal of a specific string may not capture surprisal of the underlying concept.
Taken together, these analyses suggest that
\begin{table}
\begin{tabular}{l|l r r|r r} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{3}{c|}{Full model} & \multicolumn{2}{c}{ANOVA} \\ & Predictor & \(\beta\) & \(p\) & \(F\) & \(p\) \\ \hline \multirow{2}{*}{Ronai and Xiang (2022)} & String-based surprisal & \(-1.538\) & \(0.215\) & \multirow{2}{*}{\(3.247\)} & \multirow{2}{*}{\(0.012\)} \\ & Concept-based surprisal & & & & \\ \hline \multirow{2}{*}{Pankratz and van Tiel (2021)} & String-based surprisal & \(0.460\) & \(0.694\) & \multirow{2}{*}{\(3.198\)} & \multirow{2}{*}{\(0.050\)} \\ & Concept-based surprisal & & & & \\ \hline \multirow{2}{*}{Gotzner et al. (2018)} & String-based surprisal & \(0.384\) & \(0.545\) & \multirow{2}{*}{\(2.751\)} & \multirow{2}{*}{\(0.019\)} \\ & Concept-based surprisal & \(-8.010\) & \(0.0005\) & & \\ \hline \multirow{2}{*}{van Tiel et al. (2016)} & String-based surprisal & \(0.293\) & \(0.858\) & \multirow{2}{*}{\(1.016\)} & \multirow{2}{*}{\(0.422\)} \\ & Concept-based surprisal & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Summary of full regression model (middle columns) and ANOVA comparing full model against intercept-only model (right columns) for each cross-scale variation dataset.
a concept-based view of alternatives is better aligned with human inferences than treating alternatives as specific linguistic forms. Testing additional ways of operationalizing concept-based alternatives is a promising direction for future work.
## 6 Related work
Prior work has evaluated the ability of computational models to capture scalar inferences. For example, the IMPPRES benchmark Jeretic et al. (2020) frames SI as a natural language inference problem: the weak scalar expression (e.g., "Jo ate some of the cake") is the premise, and the negated strong scalar expression (e.g., "Jo didn't eat all of the cake") is the hypothesis. Under this setup, an interpretation consistent with the strictly logical reading would assign a _neutral_ relationship between the premise and hypothesis, whereas a pragmatic reading would assign an _entailment_ relationship. Models are evaluated based on how often they assign the entailment label across items, which treats SIs as a homogeneous phenomenon and does not capture SI variation.
Another line of work has attempted to predict within-scale SI variation through a supervised approach Schuster et al. (2020); Li et al. (2021). This approach takes a sentence with a weak scalar item, and attempts to directly predict the human SI strength through a prediction head on top of a sentence encoder. This differs from our approach in that it requires training directly on the SI-rate-prediction task, whereas we probe the predictive distribution that emerges from language modeling with no task-specific representations. This allows us to compare model probability distributions to the expectations deployed by humans during pragmatic inferences, building upon a literature linking language models to predictive processing (e.g., Frank and Bod, 2011; Smith and Levy, 2013; Wilcox et al., 2020; Merkx and Frank, 2021).
There have also been several studies extracting scalar orderings from corpora or language model representations. For example, de Marneffe et al. (2010) use distributional information from a web corpus to ground the meanings of adjectives for an indirect question answering task. Similarly, Shivade et al. (2015) use scalar constructions like "_X, but not Y_" to identify scales from a corpus of biomedical texts. Others have found that adjectival scale orderings can be derived from static word embeddings Kim and de Marneffe (2013) and contextualized word representations Gari Soler and Apidianaki (2020, 2021).
## 7 Discussion
We tested a shared mechanism explaining variation in SI rates across scales and within \(\langle\)_some_, _all_\(\rangle\), based on the hypothesis that humans maintain context-driven expectations about unspoken alternatives Degen and Tanenhaus (2015, 2016). We operationalized this in two ways using neural language models: the expectedness of a linguistic alternative as a scalemate (string-based surprisal), and the expectedness of a conceptual alternative (weighted average surprisal). We found that for both within-scale and cross-scale variation, expectedness captures human SI rates. Crucially, however, expectedness of the strong scalemate is a robust predictor of cross-scale variation only under a conceptual view of alternatives Buccola et al. (2021). Our results support the idea that the strength of pragmatic inferences depends on the availability of alternatives, which depends on in-context predictability.
One open question is the source of variability across the tested human behavioral datasets - in
Figure 5: Probability assigned by GPT-2 to top 5 candidate strong alternatives (y-axis) for 3 example weak scalar items: _big_, _largely_, and _hard_ (Ronai and Xiang, 2022). The full scalar construction is shown above each subplot, with the original tested strong scalemate underlined in red.
particular, the lack of surprisal effect for van Tiel et al.'s data (Section 5.4). While we cannot be certain about why the results vary, we identified a few differences that might affect data quality across datasets (see Table 1). van Tiel et al.'s study has the smallest number of participants (28), smallest number of ratings per scale (10), and smallest number of scales (39). In addition, their experiments presented multiple sentence contexts per scale, whereas the other experiments only presented one sentence per scale. Other experimental factors, such as participant recruitment and exclusion criteria, may have also contributed to differences in data reliability.
### 7.1 How do listeners restrict the alternatives?
We now return to the issue raised in Footnote 2: what information do listeners use to form expectations about alternatives? To illustrate potential hypotheses, consider the item "The soup is warm/hot" from van Tiel et al.'s experimental materials. In our framework described in Section 3.1, \([\texttt{CONTEXT}]=\) "The soup is", \([\texttt{WEAK}]=\) "warm", and \([\texttt{STRONG}]=\) "hot". One hypothesis is that listeners form expectations over relevant scalar expressions given \([\texttt{CONTEXT}]\) alone. On this view, expectations over strong scalemates could be measured by computing the probability of \([\texttt{STRONG}]\) in the template \([\texttt{CONTEXT}][\texttt{STRONG}]\); i.e., "The soup is \([\texttt{STRONG}]\)". In contrast, in this paper we test expectations of \([\texttt{STRONG}]\) in the template "The soup is warm, but not \([\texttt{STRONG}]\)", which instantiates an alternate theoretical position: that listeners use not only the context, but also \([\texttt{WEAK}]\) as information for forming expectations over alternatives.
We adopt this view for several reasons. First, it could be the case that the context does not provide enough information for the listener to narrow down alternatives. Returning to the running example, "The soup is" could be followed by many continuations, some potentially relating to the taste or size of the soup in addition to its temperature. Taking the weak scalar term "warm" into account allows the listener to restrict the relevant alternatives to a smaller, more tractable set, which presents an algorithmic solution to the computationally challenging inference problem. However, the under-informativity of the context may be a problem unique to the simple stimuli used in the behavioral experiments. It is plausible that listeners could sufficiently restrict alternative sets given more naturalistic contexts, which likely provide more cues to the Question Under Discussion Roberts (2012).
In addition, there could be cues from \([\texttt{WEAK}]\) that provide information about likely alternatives, independent of the context. For example, listeners might prefer strong scalemates that match \([\texttt{WEAK}]\) in register or formality, or in shared phonological features. This motivates why we chose template (1) to measure expectations over alternatives, instead of \([\texttt{CONTEXT}][\texttt{STRONG}]\). However, the extent to which listeners tune their predictions based on \([\texttt{WEAK}]\) above and beyond the context remains an open empirical question.
### 7.2 From alternatives to inference
Conceptually, computing an SI involves two steps: (1) determining the suitable alternatives, and (2) ruling out the meaning of alternatives to arrive at a strengthened interpretation of the weak scalar term. Our results primarily shed light on the first step, providing evidence that expectations play a role in determining alternatives, and that alternatives are likely based on meanings in addition to linguistic forms.
When considering the higher-level reasoning process, many factors beyond alternatives play a causal role in SI. One view is that humans use alternatives in a cooperative reasoning process, such as that formalized by the Rational Speech Act framework (RSA; Frank and Goodman, 2012; Goodman and Frank, 2016). In an RSA model, a pragmatic listener \(L_{1}(m\mid u)\) uses a speaker's utterance \(u\) to update their prior beliefs \(P(m)\) over which meaning \(m\) the speaker is trying to convey. The listener does this by computing the likelihood of a pragmatic speaker \(S_{1}\) producing \(u\) given each potential meaning. The pragmatic speaker \(S_{1}\) produces \(u\) in proportion to its utility \(U\) for conveying \(m\), relative to the utility of the alternative utterances in the set of alternatives \(\mathcal{A}\):
\[L_{1}(m\mid u)=\frac{S_{1}(u\mid m)P(m)}{\sum_{m^{\prime}}S_{1}(u \mid m^{\prime})P(m^{\prime})} \tag{4}\] \[S_{1}(u\mid m)=\frac{U(u,m)}{\sum_{u^{\prime}\in\mathcal{A}}U(u^ {\prime},m)} \tag{5}\]
Our findings appear compatible with RSA: listeners reason about a speaker that normalizes over alternatives. However, it remains an open question how variable expectations over alternatives should be operationalized in an RSA model. One option,
as recently proposed by Zhang et al. (2023), is that the pragmatic speaker is conditioned on the alternative set \(\mathcal{A}\). The pragmatic listener has beliefs over different sets of \(\mathcal{A}\) and marginalizes over these beliefs when drawing an inference:
\[L_{1}(m\mid u)=\sum_{\mathcal{A}}P(\mathcal{A})\frac{S_{1}(u\mid m,\mathcal{A})P (m)}{\sum_{m^{\prime}}S_{1}(u\mid m^{\prime},\mathcal{A})P(m^{\prime})} \tag{6}\]
Another possibility is that the variable expectations are not inputs to the model, but instead fall out of reasoning about how likely speakers are to use the weaker versus stronger terms, given variable contextual priors over meanings and questions under discussion (see, e.g., Goodman and Lassiter, 2015; Qing et al., 2016). We leave a detailed exploration of such a model to future work.
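For concreteness, here is a minimal numerical sketch of Eqs. (4)-(5) on the classic \(\langle\)_some_, _all_\(\rangle\) example. It assumes the common instantiation in which the utility \(U\) is the literal listener's probability (the text leaves \(U\) unspecified), and a uniform prior over the two meanings.

```python
import numpy as np

def rsa_listener(lexicon, prior, alpha=1.0):
    """Vanilla RSA pragmatic listener, Eqs. (4)-(5).

    lexicon : (n_utterances x n_meanings) 0/1 truth-value matrix over
              the alternative set A
    prior   : P(m) over meanings
    Utility is taken to be the literal listener probability L0(m|u).
    """
    lexicon = np.asarray(lexicon, dtype=float)
    prior = np.asarray(prior, dtype=float)
    l0 = lexicon * prior                    # literal listener (unnormalized)
    l0 /= l0.sum(axis=1, keepdims=True)
    s1 = l0 ** alpha                        # speaker: Eq. (5), utilities normalized over A
    s1 /= s1.sum(axis=0, keepdims=True)
    l1 = s1 * prior                         # pragmatic listener: Eq. (4)
    return l1 / l1.sum(axis=1, keepdims=True)

# Utterances {some, all}; meanings {some-but-not-all, all}.
lexicon = [[1, 1],   # "some" is literally true of both meanings
           [0, 1]]   # "all" is true only of the all-meaning
print(rsa_listener(lexicon, prior=[0.5, 0.5]))
# The row for "some" puts ~0.75 on some-but-not-all: the scalar inference.
```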
The role of priors. Pragmatic inferences are influenced by the prior probabilities of the world states compatible with the weak and strong meanings (Degen et al., 2015; Sikos et al., 2021). For example, consider the scale _\(\langle\)start, finish\(\rangle\)_. If a human were asked "The movie started at 2:30. Would you conclude that the movie did not finish at 2:30?", they would likely answer _Yes_. This _Yes_ response would count as an SI under the experimental paradigm, but does not reflect pragmatic reasoning over scalar alternatives: it is simply implausible for a movie to start and finish at the same time, given our knowledge of the world.10
Footnote 10: This example is due to Lassiter (2022).
These priors have an important connection to our analyses. As outlined in Section 3.1, we approximate the expectedness of a strong scalemate by measuring the expectedness of its linguistic form. This approach can be seen as reflecting an implicit assumption that the more likely a certain meaning is, the more likely it is to be expressed linguistically. This is likely to be wrong in certain cases - for example, if a certain meaning is so likely that it is obvious without being said, then speakers may avoid the effort of explicitly producing the linguistic expression (and thus, the linguistic expression would have low probability). This could potentially be the case for relatively common SIs. For example, a speaker might be able to get away with only saying _some_ and expecting a listener to recover the meaning _some but not all_.
With that said, we believe our estimation method may minimize this issue, as we measure expectations conditioned on an explicit scalar contrast with the weak scalemate (i.e., "[WEAK], but not"). Thus, our approach can be seen as approximating listeners' expectations about upcoming linguistic material, given that the speaker has _already chosen_ to produce a scalar contrast. Nevertheless, a complete account of scalar inferences will need to account for the influence of the prior probabilities over world states, which may explain some of the variance not captured by our expectedness predictors.
### Implications for NLP
While the main role of language models in our analyses was to systematically test a cognitive theory, we believe this work also has implications for NLP evaluation. A growing body of work uses controlled assessments to evaluate the linguistic knowledge of NLP models. Many studies test whether models exhibit a categorical pattern of behavior that reflects a particular linguistic generalization. For example, in syntactic evaluations, a model is successful if it satisfies certain inequality relationships between grammatical and ungrammatical sentences (e.g., Linzen et al., 2016; Futrell et al., 2019; Hu et al., 2020). SI (and other types of implicatures) have largely been treated the same way (see Section 6).
In contrast, we do not evaluate whether language models exhibit a categorical pattern of behavior (_"Do models interpret SIs pragmatically?"_). Instead, based on the empirical evidence for scalar variation, we test whether models capture systematic variability in human inferences (_"Are models sensitive to the factors that modulate human pragmatic inferences?"_). We urge other NLP researchers to consider variability in human behaviors instead of relying on categorical generalizations (see also Pavlick and Kwiatkowski, 2019; Jiang and Marneffe, 2022; Baan et al., 2022; Webson et al., 2023). Through this approach, we can build models that capture the rich variability of human language, and use these models to refine our theories about the human mind.
## Acknowledgments
We thank the anonymous reviewers as well as the action editor, Ehud Reiter, for their insightful feedback. J.H. is supported by an NSF Graduate Research Fellowship (#1745302) and an NSF Doctoral Dissertation Research Improvement Grant
(BCS-2116918). S.S. is supported by the NSF under Grant #2030859 to the Computing Research Association for the CIFellows Project and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 948878). J.H. and R.L. also gratefully acknowledge support from the Simons Center for the Social Brain at MIT. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation nor the Computing Research Association.
|
2301.08964 | Boundedness of the dyadic maximal function on graded Lie groups | Let $1<p\leq \infty$ and let $n\geq 2.$ It was proved independently by C.
Calder\'on, R. Coifman and G. Weiss that the dyadic maximal function
\begin{equation*}
\mathcal{M}^{d\sigma}_Df(x)=\sup_{j\in\mathbb{Z}}\left|\smallint\limits_{\mathbb{S}^{n-1}}f(x-2^jy)d\sigma(y)\right|
\end{equation*} is a bounded operator on $L^p(\mathbb{R}^n)$ where $d\sigma(y)$
is the surface measure on $\mathbb{S}^{n-1}.$ In this paper we prove an
analogue of this result on arbitrary graded Lie groups. More precisely, to any
finite Borel measure $d\sigma$ with compact support on a graded Lie group $G,$
we associate the corresponding dyadic maximal function
$\mathcal{M}_D^{d\sigma}$ using the homogeneous structure of the group. Then,
we prove a criterion in terms of the order (at zero and at infinity) of the
group Fourier transform $\widehat{d\sigma}$ of $d\sigma$ with respect to a
fixed Rockland operator $\mathcal{R}$ on $G$ that assures the boundedness of
$\mathcal{M}_D^{d\sigma}$ on $L^p(G)$ for all $1<p\leq \infty.$ | Duván Cardona, Julio Delgado, Michael Ruzhansky | 2023-01-21T15:36:57Z | http://arxiv.org/abs/2301.08964v2 | # Boundedness of the dyadic maximal function on graded Lie groups
###### Abstract.
Let \(1<p\leq\infty\) and let \(n\geq 2\). It was proved independently by C. Calderón, R. Coifman and G. Weiss that the dyadic maximal function
\[\mathcal{M}_{D}^{d\sigma}f(x)=\sup_{j\in\mathbb{Z}}\left|\int\limits_{\mathbb{ S}^{n-1}}f(x-2^{j}y)d\sigma(y)\right|\]
is a bounded operator on \(L^{p}(\mathbb{R}^{n})\) where \(d\sigma(y)\) is the surface measure on \(\mathbb{S}^{n-1}\). In this paper we prove an analogue of this result on arbitrary graded Lie groups. More precisely, to any finite Borel measure \(d\sigma\) with compact support on a graded Lie group \(G\), we associate the corresponding dyadic maximal function \(\mathcal{M}_{D}^{d\sigma}\) using the homogeneous structure of the group. Then, we prove a criterion in terms of the order (at zero and at infinity) of the group Fourier transform \(\widehat{d\sigma}\) of \(d\sigma\) with respect to a fixed Rockland operator \(\mathcal{R}\) on \(G\) that assures the boundedness of \(\mathcal{M}_{D}^{d\sigma}\) on \(L^{p}(G)\) for all \(1<p\leq\infty\).
Key words and phrases:Dyadic maximal function, nilpotent Lie groups, graded Lie groups, Calderón theorem, Coifman-Weiss theory 2010 Mathematics Subject Classification: 35S30, 42B20; Secondary 42B37, 42B35 The authors are supported by the FWO Odysseus 1 grant G.0H94.18N: Analysis and Partial Differential Equations and by the Methusalem programme of the Ghent University Special Research Fund (BOF) (Grant number 01M01021). J. Delgado is also supported by Vice. Inv. Universidad del Valle Grant CI 71329, MathAmSud and Minciencias-Colombia under the project MATHAMUD21-MATH-03. Michael Ruzhansky is also supported by EPSRC grant EP/R003025/2.
## 1. Introduction
### Outline
For more than fifty years (dating back to the work of Folland and Stein [20] in the 1970s), there has been an extensive program to generalise the techniques from the real-variable Euclidean harmonic analysis to the more general setting of nilpotent Lie groups. This is particularly motivated by applications to degenerate partial differential operators (see e.g. Rothschild and Stein [33]), where the usual methods on the Euclidean space are not completely suitable. Among the fundamental operators of the Euclidean harmonic analysis are the full maximal function and its dyadic counterpart. Contributing to the aforementioned program, the aim of this work is to study the \(L^{p}\)-boundedness of the dyadic maximal function on a nilpotent Lie group \(G.\) Since our criteria require the group to admit left-invariant hypoelliptic partial differential operators, we assume that \(G\) is a graded Lie group, in view of the Helffer and Nourrigat solution of the Rockland conjecture, see [24].
To the best of our knowledge, the study of the \(L^{p}\)-boundedness of the spherical averages started in a satisfactory way with Stein (inspired by the 1976 work of Nagel, Rivière and Wainger [30]). Indeed, consider the full maximal function on \(\mathbb{R}^{n}\)
\[\mathcal{M}_{F}^{d\sigma}f(x)=\sup_{r>0}\left|\int\limits_{\mathbb{S}^{n-1}}f( x-ry)d\sigma(y)\right|, \tag{1.1}\]
where \(\sigma\) is the surface measure on the sphere \(\mathbb{S}^{n-1}.\) A remarkable result due to Stein [42] proved that \(\mathcal{M}_{F}^{d\sigma}\) is bounded from \(L^{p}(\mathbb{R}^{n})\) to itself if and only if \(p>\frac{n}{n-1},\) for all \(n\geq 3.\) Then, the lower dimensional case \(n=2\) was proved by Bourgain in [3]. Additionally, C. Calderón in [7] proved that the dyadic maximal function
\[\mathcal{M}_{D}^{d\sigma}f(x)=\sup_{j\in\mathbb{Z}}\left|\int\limits_{\mathbb{ S}^{n-1}}f(x-2^{j}y)d\sigma(y)\right|, \tag{1.2}\]
can be extended to a bounded operator on \(L^{p}(\mathbb{R}^{n})\), for all \(1<p\leq\infty\), for any \(n\geq 2.\) It was observed by S. Wainger that the \(L^{p}\)-boundedness of the dyadic maximal function was also proved independently by Coifman and Weiss in [10]. Other proofs for the \(L^{p}\)-boundedness of (1.1) when \(n\geq 3\) can be found e.g. in Carbery [8], Cowling and Mauceri [11], Rubio de Francia [34], Duoandikoetxea and Rubio de Francia [15], and for Bourgain's result for \(n=2\) a new proof was given by Mockenhaupt, Seeger, and Sogge [28].
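For concreteness, a small numerical sketch of the dyadic averages in (1.2) for \(n=2\) is given below: the supremum is truncated to a finite range of scales \(j\), the circle average is computed by quadrature with the normalized measure (which differs from the surface measure \(d\sigma\) only by the constant \(2\pi\)), and the test function and grid sizes are arbitrary choices.

```python
import numpy as np

# Toy evaluation of the dyadic maximal function (1.2) for n = 2 at one point:
# average f over the circles x - 2^j S^1, then take the sup over a truncated
# range of scales j (the true supremum runs over all integers j).
def dyadic_maximal(f, x, js=range(-10, 11), n_theta=2048):
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # y on S^1
    averages = []
    for j in js:
        pts = x - (2.0 ** j) * circle          # the points x - 2^j y
        averages.append(abs(np.mean(f(pts[:, 0], pts[:, 1]))))
    return max(averages)

f = lambda u, v: np.exp(-(u ** 2 + v ** 2))    # a smooth test function
print(dyadic_maximal(f, np.array([0.3, -0.2])))
```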
The study of the full maximal function and its dyadic version has been mainly concentrated in the context of the Heisenberg group \(\mathbb{H}_{n}\) and on two-step nilpotent Lie groups. More precisely, by considering the unit sphere \(\mathbb{S}_{\mathbb{H}_{n},K}\) with respect to the Korányi norm \(|(z,t)|=(|z|^{4}+16t^{2})^{\frac{1}{4}}\), there is a unique Radon measure \(d\sigma\) for which the polar decomposition formula holds. Then, Cowling in [12] proved the \(L^{p}\)-boundedness of the corresponding full maximal function \(\mathcal{M}_{F}^{d\sigma}.\) Extensions of Cowling's result have been obtained e.g. by Schmidt in [41] to hypersurfaces with non-vanishing rotational curvature on nilpotent Lie groups and for two-step nilpotent Lie groups by Fischer [18]. The \(L^{p}\)-boundedness for the lacunary maximal function in this setting has been proved recently by Ganguly and Thangavelu [21] on \(\mathbb{H}_{n}\) for all \(n\geq 2.\) The approach in [21] combines the ideas of Cowling [12] with the sparse techniques developed (in the Euclidean setting) by Lacey in [27]. As for the spherical maximal function on the Heisenberg group for the surface measure on the complex
sphere \(\mathbb{S}_{\mathbb{H}_{n},r}:=\{(z,0):|z|=r\}\), we refer the reader to Nevo and Thangavelu [32]. The result in [32] was improved independently by Narayanan and Thangavelu [31] and by Müller and Seeger [29]. Moreover, in [1], Bagchi, Hait, Roncal and Thangavelu have proved an analogue of Calderón's theorem for the associated lacunary spherical maximal function on \(\mathbb{H}_{n}\) with \(n\geq 2\). For \(n=1\) the \(L^{p}\)-boundedness of the corresponding full maximal function (restricted to a class of radial functions) has been proved by Beltran, Guo, Hickman and Seeger in [2].
### The main result
Let \(G\) be a homogeneous Lie group. Let us consider its corresponding family of dilations (see Definition 2.4)
\[D_{r}:G\to G,\ x\mapsto D_{r}(x)\equiv r\cdot x,\,x\in G.\]
Consider the dyadic maximal function
\[\mathcal{M}_{D}^{d\sigma}f(x)=\sup_{j\in\mathbb{Z}}\left|\int\limits_{G}f((2^{j}\cdot y)^{-1}x)d\sigma(y)\right|, \tag{1.3}\]
associated to an arbitrary finite Borel measure \(d\sigma\) with compact support on \(G.\) We require the existence of left-invariant hypoelliptic partial differential operators on the group, and then the group has to be graded (see [19, Page 172] and Definition 2.6). Then, a positive Rockland operator \(\mathcal{R}\) is a group Fourier multiplier by the operator-valued function \(\pi(\mathcal{R})\) (that is, the infinitesimal representation of the operator, defined at any irreducible and unitary representation \(\pi\) of the unitary dual \(\widehat{G}\) of \(G\) as in (2.3)). In order to analyse the \(L^{p}\)-boundedness of the dyadic maximal function (1.3) we will assume that for some \(a>0,\) the measure \(d\sigma\) in (1.3) satisfies the group Fourier transform condition
\[\max_{\pm}\sup_{\pi\in\widehat{G}}\|\pi(\mathcal{R})^{\pm\frac{a}{\nu}} \widehat{d\sigma}(\pi)\|_{\mathrm{op}}<\infty, \tag{1.4}\]
where \(\nu\) is the homogeneous degree of the operator \(\mathcal{R},\) (in the case of the positive Laplacian \(-\Delta,\)\(\nu=2\)). Then, according to the discussion above, (1.4) says that \(\widehat{d\sigma}(\pi)\) has order \(+a\) at infinity and \(-a\) at zero (with respect to the spectrum of any \(\pi(\mathcal{R})\)).
The following criterion is the main theorem of this work.
**Theorem 1.1**.: _Let \(d\sigma\) be a finite Borel measure of compact support on a graded Lie group \(G.\) Let \(\mathcal{R}\) be a positive Rockland operator on \(G\) of homogeneous degree \(\nu>0.\) Assume that for some \(a>0\) the Fourier transform of \(d\sigma\) satisfies the growth estimate_
\[\max_{\pm}\sup_{\pi\in\widehat{G}}\|\pi(\mathcal{R})^{\pm\frac{a}{\nu}} \widehat{d\sigma}(\pi)\|_{\mathrm{op}}<\infty. \tag{1.5}\]
_Then, the dyadic maximal function \(\mathcal{M}_{D}^{d\sigma}:L^{p}(G)\to L^{p}(G)\) can be extended to a bounded operator for all \(1<p\leq\infty.\)_
_Remark 1.2_.: For the \(\ell^{p}\)-boundedness of the discrete dyadic maximal function we refer the reader to Bourgain, Mirek, Stein, and Wróbel [4, 5].
_Remark 1.3_.: Our proof of Theorem 1.1 is inspired by the approach developed by Duoandikoetxea and Rubio De Francia in [15]. In the setting of non-commutative nilpotent Lie groups many difficulties arise. One is that the Fourier transform of distributions is operator valued. Also, the Fourier transform condition (1.5) is motivated by conditions of the same nature arising for homogeneous structures (nonisotropic
structures) on the Euclidean space, namely, the condition of non-vanishing curvature at infinite order for Euclidean hypersurfaces \(\Sigma\subset\mathbb{R}^{n}.\) Such a condition implies a decay estimate for the Fourier transform \(\widehat{d\Sigma}\) of the corresponding surface measure \(d\Sigma,\) see e.g. Seeger, Tao and Wright [45].
_Remark 1.4_.: The criterion given in Theorem 1.1 is new even on the Heisenberg group, and hence also on Heisenberg type groups, stratified groups, etc. Examples of Rockland operators on stratified groups are Hörmander sub-Laplacians and their integer powers, see [19].
_Remark 1.5_.: Let \(d\sigma\) be a finite measure of compact support on \(G=\mathbb{R}^{n}.\) In the case where \(\mathcal{R}=-\Delta_{x}\) is the positive Laplacian, the inequality in (1.5) becomes equivalent to the Fourier transform condition,
\[\forall\xi\neq 0,\,|\widehat{d\sigma}(\xi)|\lesssim\min\{|\xi|^{a},|\xi|^{-a} \},\ a>0, \tag{1.6}\]
to guarantee the \(L^{p}\)-boundedness of the dyadic maximal operator \(\mathcal{M}_{D}^{d\sigma}\) on \(L^{p}(\mathbb{R}^{n}),\) for all \(1<p\leq\infty,\) see e.g. Theorem 6.3.4 in Grafakos [22, Page 455] or Duoandikoetxea and Rubio De Francia [15]. Let \(d\mu\) be the surface measure on the sphere \(\mathbb{S}^{n-1}.\) Coifman and Weiss [10, Page 246] deduced the \(L^{p}\)-boundedness of \(\mathcal{M}_{D}^{d\mu},\)\(1\leq p<\infty,\) using the fact that
\[\forall\xi\neq 0,\,|\widehat{d\mu}(\xi)|\lesssim|\xi|^{-(n-1)/2},\ n\geq 2. \tag{1.7}\]
Indeed, if \(\phi\in C_{0}^{\infty}(\mathbb{R}^{n})\) is such that \(\widehat{\phi}(0)=1,\) for the measure \(d\sigma=d\mu-\widehat{d\mu}(0)\phi,\) Coifman and Weiss proved that the \(L^{p}\)-boundedness of \(\mathcal{M}_{D}^{d\sigma}\) implies the \(L^{p}\)-boundedness of \(\mathcal{M}_{D}^{d\mu}.\) This latter argument due to Coifman and Weiss is an alternative proof to the one given in the classical manuscript [7] due to C. Calderón.
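As a quick numerical sanity check of (1.7) in dimension \(n=2\) (not part of the original argument of [7, 10]): the Fourier transform of the arc-length measure \(d\mu\) on \(\mathbb{S}^{1}\) is radial, \(\widehat{d\mu}(\xi)=2\pi J_{0}(|\xi|)\), and the classical Bessel asymptotics give exactly the decay \(|\xi|^{-1/2}\). The short script below (assuming SciPy is available) verifies both facts.

```python
import numpy as np
from scipy.special import j0

# hat{dmu}(xi) = 2*pi*J0(|xi|) for the arc-length measure on S^1 in R^2;
# (1.7) then reads |hat{dmu}(xi)| <~ |xi|^{-(n-1)/2} = |xi|^{-1/2}.
r = 5.0
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
quad = (2.0 * np.pi) * np.mean(np.exp(-1j * r * np.cos(theta)))
print(quad.real, 2.0 * np.pi * j0(r))              # the two values agree

rs = np.logspace(0, 4, 9)                          # |xi| from 1 to 10^4
print(np.abs(2.0 * np.pi * j0(rs)) * np.sqrt(rs))  # stays bounded (<~ 5.02)
```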
_Remark 1.6_.: Let \(d\sigma\) be a finite Borel measure of compact support on a graded Lie group \(G.\) Let \(\mathcal{R}\) be a positive Rockland operator on \(G\) of homogeneous degree \(\nu>0.\) By the Riesz-representation theorem, we have that \(d\sigma=Kdx\) for some compactly supported function \(K\) on \(G.\) Note that if \(\mathcal{R}^{\pm\frac{a}{\nu}}K\in L^{1}(G),\) (this means that \(K\) belongs to the Sobolev space \(\mathring{L}^{1}_{\pm a}(G),\) that is \(\|K\|_{L^{1}_{\pm a}(G)}:=\|\mathcal{R}^{\pm\frac{a}{\nu}}K\|_{L^{1}(G)}<\infty\)) then
\[\sup_{\pi\in\widehat{G}}\|\pi(\mathcal{R})^{\pm\frac{a}{\nu}}\widehat{d\sigma }(\pi)\|_{\mathrm{op}}=\sup_{\pi\in\widehat{G}}\|\int_{G}\mathcal{R}^{\pm \frac{a}{\nu}}K(g)\pi(g)^{*}dg\|_{\mathrm{op}}\leq\|K\|_{\mathring{L}^{1}_{ \pm a}(G)}<\infty,\]
showing that the class of compactly supported finite Borel measures \(d\sigma=Kdx\) with \(K\in\mathring{L}^{1}_{a}(G)\cap\mathring{L}^{1}_{-a}(G),\) for some \(a>0,\) provides examples of measures satisfying (1.5).
## 2. Fourier analysis on graded groups
For the aspects of the Fourier analysis on nilpotent Lie groups we follow Folland and Stein [20] and the notation is taken from [19]. For the aspects about the theory of Rockland operators on graded Lie groups we follow [19].
### Homogeneous and graded Lie groups
Let \(G\) be a homogeneous Lie group, that is a connected and simply connected Lie group whose Lie algebra \(\mathfrak{g}\) is endowed with a family of dilations \(D_{r,\mathfrak{g}}.\) We define it as follows.
**Definition 2.1**.: A family of dilations \(\mathrm{Dil}(\mathfrak{g}):=\{D_{r,\mathfrak{g}}:\,r>0\}\) on the Lie algebra \(\mathfrak{g}\) is a family of automorphisms on \(\mathfrak{g}\) satisfying the following two compatibility conditions:
1. For every \(r>0\), \(D_{r,\mathfrak{g}}\) is a map of the form \(D_{r,\mathfrak{g}}=\operatorname{Exp}(\ln(r)A)\), for some diagonalisable linear operator \(A\equiv\operatorname{diag}[\nu_{1},\cdots,\nu_{n}]:\mathfrak{g}\to\mathfrak{g}\).
2. \(\forall X,Y\in\mathfrak{g}\), and \(r>0\), \([D_{r,\mathfrak{g}}X,D_{r,\mathfrak{g}}Y]=D_{r,\mathfrak{g}}[X,Y]\).
_Remark 2.2_.: We call the eigenvalues \(\nu_{1},\nu_{2},\cdots,\nu_{n}\) of \(A\) the dilation weights, or simply the weights, of \(G\).
In our analysis the notion of the homogeneous dimension of the group is crucial. We introduce it as follows.
**Definition 2.3**.: The homogeneous dimension of a homogeneous Lie group \(G\) whose dilations are defined via \(D_{r,\mathfrak{g}}=\operatorname{Exp}(\ln(r)A)\), is given by \(Q=\operatorname{Tr}(A)=\nu_{1}+\cdots+\nu_{n}\), where \(\nu_{i}\), \(i=1,2,\cdots,n\), are the eigenvalues of \(A\).
**Definition 2.4** (Dilations on the group).: The family of dilations \(\operatorname{Dil}(\mathfrak{g})\) of the Lie algebra \(\mathfrak{g}\) induces a family of mappings on \(G\) defined via,
\[\operatorname{Dil}(G):=\{D_{r}:=\exp_{G}\circ D_{r,\mathfrak{g}}\circ\exp_{G}^ {-1}:\ r>0\},\]
where \(\exp_{G}:\mathfrak{g}\to G\) is the usual exponential mapping associated to the Lie group \(G\). We refer to the elements of the family \(\operatorname{Dil}(G)\) as dilations on the group.
_Remark 2.5_.: If we use the notation \(r\cdot x=D_{r}(x)\), \(x\in G\), \(r>0\), then the effect of the dilations of the group on the Haar measure \(dx\) on \(G\) is determined by the identity
\[\int\limits_{G}(f\circ D_{r})(x)dx=r^{-Q}\int\limits_{G}f(x)dx.\]
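For instance, on the abelian group \(G=\mathbb{R}^{2}\) with the anisotropic dilations \(r\cdot(x_{1},x_{2})=(rx_{1},r^{2}x_{2})\) one has \(Q=1+2=3\), and the change of variables \(u_{1}=rx_{1}\), \(u_{2}=r^{2}x_{2}\) gives \[\int\limits_{\mathbb{R}^{2}}f(rx_{1},r^{2}x_{2})dx_{1}dx_{2}=r^{-3}\int\limits_{\mathbb{R}^{2}}f(u)du,\] which illustrates the identity above and shows that the homogeneous dimension \(Q=3\) may differ from the topological dimension \(n=2\).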
**Definition 2.6**.: A connected, simply connected Lie group \(G\) is graded if its Lie algebra \(\mathfrak{g}\) may be decomposed as the direct sum of subspaces \(\mathfrak{g}=\mathfrak{g}_{1}\oplus\mathfrak{g}_{2}\oplus\cdots\oplus \mathfrak{g}_{s}\) such that the following bracket conditions are satisfied: \([\mathfrak{g}_{i},\mathfrak{g}_{j}]\subset\mathfrak{g}_{i+j}\), where \(\mathfrak{g}_{i+j}=\{0\}\) if \(i+j\geq s+1\), for some \(s\).
Examples of graded Lie groups are the Heisenberg group \(\mathbb{H}^{n}\) and more generally any stratified group where the Lie algebra \(\mathfrak{g}\) is generated by the first stratum \(\mathfrak{g}_{1}\). Here, \(n\) is the topological dimension of \(G\), \(n=n_{1}+\cdots+n_{s}\), where \(n_{k}=\dim\mathfrak{g}_{k}\). For more examples, see [19].
_Remark 2.7_ (Not every nilpotent Lie group is homogeneous).: A Lie algebra admitting a family of dilations is nilpotent, and hence so is its associated connected, simply connected Lie group. The converse does not hold, i.e., not every nilpotent Lie group is homogeneous, although homogeneous groups exhaust a large class of nilpotent Lie groups; see [19] for details. Indeed, the main class of Lie groups under our consideration is that of graded Lie groups.
### Fourier analysis on nilpotent Lie groups
Let \(G\) be a simply connected nilpotent Lie group. Then the adjoint representation \(\operatorname{ad}:\mathfrak{g}\to\operatorname{End}(\mathfrak{g})\) is nilpotent. Next, we define unitary and irreducible representations.
**Definition 2.8** (Continuous, unitary and irreducible representations of \(G\)).: We say that \(\pi\) is a continuous, unitary and irreducible representation of \(G\), if the following properties are satisfied,
1. \(\pi\in\operatorname{Hom}(G,\operatorname{U}(H_{\pi}))\), for some separable Hilbert space \(H_{\pi}\), i.e. \(\pi(xy)=\pi(x)\pi(y)\) and for the adjoint of \(\pi(x)\), \(\pi(x)^{*}=\pi(x^{-1})\), for every \(x,y\in G.\) This property says that the representation is compatible with the group operation.
2. The map \((x,v)\mapsto\pi(x)v,\) from \(G\times H_{\pi}\) into \(H_{\pi}\) is continuous. This says that the representation is a strongly continuous mapping.
3. For every \(x\in G,\) and \(W_{\pi}\subset H_{\pi},\) if \(\pi(x)W_{\pi}\subset W_{\pi},\) then \(W_{\pi}=H_{\pi}\) or \(W_{\pi}=\{0\}.\) This means that the representation \(\pi\) is irreducible if its only invariant subspaces are \(W=\{0\}\) and \(W=H_{\pi},\) the trivial ones.
**Definition 2.9** (Equivalent representations).: Two unitary representations
\[\pi\in\operatorname{Hom}(G,\operatorname{U}(H_{\pi}))\text{ and }\eta\in \operatorname{Hom}(G,\operatorname{U}(H_{\eta}))\]
are equivalent if there exists a bounded linear mapping \(Z:H_{\pi}\to H_{\eta}\) such that for any \(x\in G,\)\(Z\pi(x)=\eta(x)Z.\) The mapping \(Z\) is called an intertwining operator between \(\pi\) and \(\eta.\) The set of all the intertwining operators between \(\pi\) and \(\eta\) is denoted by \(\operatorname{Hom}(\pi,\eta).\)
**Definition 2.10** (The unitary dual).: The relation \(\sim\) on the set of unitary and irreducible representations \(\operatorname{Rep}(G)\) defined by: \(\pi\sim\eta\)_if and only if \(\pi\) and \(\eta\) are equivalent representations,_ is an equivalence relation. The quotient
\[\widehat{G}:=\operatorname{Rep}(G)/\!\sim\]
is called the unitary dual of \(G.\)
The unitary dual encodes all the Fourier analysis on the group. So, we are going to define the Fourier transform.
**Definition 2.11** (Group Fourier Transform).: The Fourier transform of \(f\in L^{1}(G),\) at \(\pi\in\widehat{G},\) is defined by
\[\widehat{f}(\pi)=\int\limits_{G}f(x)\pi(x)^{*}dx:H_{\pi}\to H_{\pi}.\]
_Remark 2.12_.: The Schwartz space \(\mathscr{S}(G)\) is defined by the smooth functions \(f:G\to\mathbb{C},\) such that via the exponential mapping \(f\circ\exp_{G}:\mathfrak{g}\cong\mathbb{R}^{n}\to\mathbb{C}\) can be identified with functions on the Schwartz class \(\mathscr{S}(\mathbb{R}^{n}).\) Then, the Schwartz space on the dual \(\widehat{G}\) is defined by the image under the Fourier transform of the Schwartz space \(\mathscr{S}(G),\) that is \(\mathscr{F}_{G}:\mathscr{S}(G)\to\mathscr{S}(\widehat{G}):=\mathscr{F}_{G}( \mathscr{S}(G)).\)
_Remark 2.13_ (Fourier Inversion Formula and Plancherel Theorem).: If we identify one representation \(\pi\) with its equivalence class, \([\pi]=\{\pi^{\prime}:\pi\sim\pi^{\prime}\},\) for every \(\pi\in\widehat{G},\) the Kirillov trace character \(\Theta_{\pi}\) defined by
\[[\Theta_{\pi},f]:=\operatorname{Tr}(\widehat{f}(\pi)),\]
is a tempered distribution on \(\mathscr{S}(G).\) The Kirillov character allows one to write the Fourier inversion formula
\[\forall f\in L^{1}(G)\cap L^{2}(G),\ f(x)=\int\limits_{\widehat{G}} \operatorname{Tr}[\pi(x)\widehat{f}(\pi)]d\pi,\ x\in G.\]
The \(L^{2}\)-space on the dual is defined as the completion of \(\mathscr{S}(\widehat{G})\) with respect to the norm
\[\|\sigma\|_{L^{2}(\widehat{G})}:=\left(\int\limits_{\widehat{G}}\|\sigma(\pi)\|_{\rm HS}^{2}d\pi\right)^{\frac{1}{2}},\ \sigma(\pi)=\widehat{f}(\pi)\in\mathscr{S}(\widehat{G}), \tag{2.1}\]
where \(\|\cdot\|_{\mathrm{HS}}\) denotes the Hilbert-Schmidt norm of operators on every representation space. The corresponding inner product on \(L^{2}(\widehat{G})\) is given by
\[(\sigma,\tau)_{L^{2}(\widehat{G})}:=\int\limits_{\widehat{G}}\mathrm{Tr}[\sigma(\pi)\tau(\pi)^{*}]d\pi,\ \sigma,\tau\in L^{2}(\widehat{G}), \tag{2.2}\]
where the notation \(\tau(\pi)^{*}\) indicates the adjoint operator. Then, the Plancherel theorem says that \(\|f\|_{L^{2}(G)}=\|\widehat{f}\|_{L^{2}(\widehat{G})}\) for all \(f\in L^{2}(G).\)
### Homogeneous linear operators and Rockland operators
Homogeneous operators interact with the dilations of the group. We introduce them in the following definition.
**Definition 2.14** (Homogeneous operators).: A continuous linear operator \(T:C^{\infty}(G)\to\mathscr{D}^{\prime}(G)\) is homogeneous of degree \(\nu_{T}\in\mathbb{C}\) if for every \(r>0\) the equality
\[T(f\circ D_{r})=r^{\nu_{T}}(Tf)\circ D_{r}\]
holds for every \(f\in\mathscr{D}(G).\)
Now, we introduce the main class of differential operators in the context of nilpotent Lie groups. The existence of these operators classifies the family of graded Lie groups. We call them Rockland operators.
**Definition 2.15** (Rockland operators).: If for every representation \(\pi\in\widehat{G},\)\(\pi:G\to U(H_{\pi}),\) we denote by \(H_{\pi}^{\infty}\) the set of smooth vectors (also called Gårding vectors), that is, the space of vectors \(v\in H_{\pi}\) such that the function \(x\mapsto\pi(x)v,\)\(x\in G,\) is smooth, a Rockland operator is a left-invariant partial differential operator
\[\mathcal{R}=\sum_{|\alpha|\leq m}a_{\alpha}X^{\alpha}:C^{\infty}(G)\to C^{ \infty}(G)\]
which is homogeneous of positive degree \(\nu=\nu_{\mathcal{R}}\) and such that, for every unitary irreducible non-trivial representation \(\pi\in\widehat{G},\) its symbol \(\pi(\mathcal{R})\) defined via the Fourier inversion formula by
\[\mathcal{R}f(x)=\int\limits_{\widehat{G}}\mathrm{Tr}[\pi(x)\pi(\mathcal{R}) \widehat{f}(\pi)]d\pi,\ x\in G, \tag{2.3}\]
is injective on \(H_{\pi}^{\infty}\); here \(\sigma_{\mathcal{R}}(\pi)=\pi(\mathcal{R})\) coincides with the infinitesimal representation of \(\mathcal{R}\) as an element of the universal enveloping algebra \(\mathfrak{U}(\mathfrak{g}).\)
**Example 2.16**.: Let \(G\) be a graded Lie group of topological dimension \(n.\) We denote by \(\{D_{r}\}_{r>0}\) the natural family of dilations of its Lie algebra \(\mathfrak{g}:=\mathrm{Lie}(G),\) and by \(\nu_{1},\cdots,\nu_{n}\) its weights. We fix a basis \(Y=\{X_{1},\cdots,X_{n}\}\) of \(\mathfrak{g}\) satisfying \(D_{r}X_{j}=r^{\nu_{j}}X_{j},\) for \(1\leq j\leq n,\) and all \(r>0.\) If \(\nu_{\circ}\) is any common multiple of \(\nu_{1},\cdots,\nu_{n},\) the operator
\[\mathcal{R}=\sum_{j=1}^{n}(-1)^{\frac{\nu_{\circ}}{\nu_{j}}}c_{j}X_{j}^{\frac{ 2\nu_{\circ}}{\nu_{j}}},\ c_{j}>0,\]
is a positive Rockland operator of homogeneous degree \(2\nu_{\circ}\) on \(G\) (see Lemma 4.1.8 of [19]).
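To make this example concrete, consider the Heisenberg group \(\mathbb{H}^{1}\), whose Lie algebra has a basis \(X,Y,T\) with \([X,Y]=T\) and weights \(\nu_{1}=\nu_{2}=1\), \(\nu_{3}=2\). Taking the common multiple \(\nu_{\circ}=2\), the formula above produces the positive Rockland operator \[\mathcal{R}=c_{1}X^{4}+c_{2}Y^{4}-c_{3}T^{2},\quad c_{1},c_{2},c_{3}>0,\] which is homogeneous of degree \(2\nu_{\circ}=4\).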
_Remark 2.17_.: It can be shown that a Lie group \(G\) is graded if and only if there exists a differential Rockland operator on \(G.\) Also, in view of the Schwartz kernel theorem, the operator \(\mathcal{R}\) admits a right-convolution kernel \(k_{\mathcal{R}},\) that is for any \(f\in C_{0}^{\infty}(G),\)
\[\mathcal{R}f(x)=\int\limits_{G}f(y)k_{\mathcal{R}}(y^{-1}x)dy,\ x\in G.\]
In terms of the convolution of functions
\[f*g(x):=\int\limits_{G}f(y)g(y^{-1}x)dy,\ f,g\in L^{1}(G), \tag{2.4}\]
and in view of the action of the Fourier transform on convolutions \(\widehat{f*g}=\widehat{g}\widehat{f},\) the Fourier inversion formula shows that \(\forall\pi\in\widehat{G},\)\(\pi(\mathcal{R})=\widehat{k}_{\mathcal{R}}(\pi).\)
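For the reader's convenience, the order reversal \(\widehat{f*g}=\widehat{g}\widehat{f}\) follows from a one-line computation with the convention of Definition 2.11: using the left-invariance of the Haar measure (substituting \(z=y^{-1}x\)) and \(\pi(yz)^{*}=\pi(z)^{*}\pi(y)^{*}\), \[\widehat{f*g}(\pi)=\int\limits_{G}\int\limits_{G}f(y)g(y^{-1}x)\pi(x)^{*}dydx=\int\limits_{G}f(y)\left(\int\limits_{G}g(z)\pi(z)^{*}dz\right)\pi(y)^{*}dy=\widehat{g}(\pi)\widehat{f}(\pi).\]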
Next, we record for our further analysis some aspects of the functional calculus for Rockland operators.
_Remark 2.18_ (Functional calculus for Rockland operators).: If the Rockland operator is formally self-adjoint, then \(\mathcal{R}\) and \(\pi(\mathcal{R})\) admit self-adjoint extensions on \(L^{2}(G)\) and \(H_{\pi},\) respectively. Now if we preserve the same notation for their self-adjoint extensions and we denote by \(E\) and \(E_{\pi}\) their spectral measures, we will denote by
\[\psi(\mathcal{R})=\int\limits_{-\infty}^{\infty}\psi(\lambda)dE(\lambda),\ \ \text{and}\ \ \pi(\psi(\mathcal{R}))\equiv\psi(\pi(\mathcal{R}))=\int\limits_{-\infty}^{ \infty}\psi(\lambda)dE_{\pi}(\lambda), \tag{2.5}\]
the functions defined by the functional calculus. In general, we will reserve the notation \(\{dE_{A}(\lambda)\}_{0\leq\lambda<\infty}\) for the spectral measure associated with a positive and self-adjoint operator \(A\) on a Hilbert space \(H.\)
We now recall a lemma on dilations on the unitary dual \(\widehat{G},\) which will be useful in our analysis of spectral multipliers. For the proof, see Lemma 4.3 of [19].
**Lemma 2.19**.: _For every \(\pi\in\widehat{G}\) let us define_
\[D_{r}(\pi)(x)\equiv(r\cdot\pi)(x):=\pi(r\cdot x)\equiv\pi(D_{r}(x)), \tag{2.6}\]
_for every \(r>0\) and all \(x\in G.\) Then, if \(f\in L^{\infty}(\mathbb{R})\), we have \(f((r\cdot\pi)(\mathcal{R}))=f(r^{\nu}\pi(\mathcal{R})).\)_
_Remark 2.20_.: Note that if \(f_{r}:=r^{-Q}f(r^{-1}\cdot),\) then
\[\widehat{f}_{r}(\pi)=\int\limits_{G}r^{-Q}f(r^{-1}\cdot x)\pi(x)^{*}dx=\int \limits_{G}f(y)\pi(r\cdot y)^{*}dy=\widehat{f}(r\cdot\pi), \tag{2.7}\]
for any \(\pi\in\widehat{G}\) and all \(r>0,\) with \((r\cdot\pi)(y)=\pi(r\cdot y),\)\(y\in G,\) as in (2.6).
The following lemma presents the action of the dilations of the group \(G\) on the kernels of bounded functions of a Rockland operator \(\mathcal{R},\) see [19, Page 179].
**Lemma 2.21**.: _Let \(f\in L^{\infty}(\mathbb{R}_{0}^{+})\) be a bounded Borel function and let \(r>0.\) Then, we have_
\[\forall x\in G,\,f(r^{\nu}\mathcal{R})\delta(x)=r^{-Q}[f(\mathcal{R})\delta]( r^{-1}\cdot x), \tag{2.8}\]
_where \(Q\) is the homogeneous dimension of \(G.\)_
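Lemma 2.21 can be read as the kernel-side counterpart of Lemma 2.19. Indeed, writing \(h:=f(\mathcal{R})\delta\) and \(h_{r}:=r^{-Q}h(r^{-1}\cdot)\), Remark 2.20 and Lemma 2.19 give \[\widehat{h}_{r}(\pi)=\widehat{h}(r\cdot\pi)=f((r\cdot\pi)(\mathcal{R}))=f(r^{\nu}\pi(\mathcal{R}))=\mathscr{F}_{G}[f(r^{\nu}\mathcal{R})\delta](\pi),\] and (2.8) follows by the Fourier inversion formula; this sketch is only a guide, see [19, Page 179] for the complete argument.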
## 3. Boundedness of the dyadic maximal function
In this section we establish the \(L^{p}\)-boundedness of the dyadic maximal function associated to a compactly supported Borel measure on a graded Lie group satisfying some additional Fourier transform conditions according to the hypothesis in Theorem 1.1. First, we start with our main lemma in the next subsection.
### The key lemma
The following Lemma 3.1 is our main tool for the proof of Theorem 1.1. Indeed, it will be used to establish the \(L^{p}\)-boundedness of the square function operator in Lemma 3.2, from which we will deduce the boundedness of the dyadic maximal function (1.3).
**Lemma 3.1**.: _Let \(K\in L^{1}(G)\) be a distribution with compact support such that for some \(a>0\) the group Fourier transform of \(K\) satisfies the growth estimate_
\[\max_{\pm}\sup_{\pi\in\widehat{G}}\|\pi(\mathcal{R})^{\pm\frac{a}{\nu}} \widehat{K}(\pi)\|_{\mathrm{op}}<\infty. \tag{3.1}\]
_For any \(j\in\mathbb{Z},\) let us consider the kernel \(K_{j}(x)=2^{-jQ}K(2^{-j}\cdot x),\)\(x\in G,\) and define \(T\) as follows_
\[Tf(x):=\sum_{j=-\infty}^{\infty}f*K_{j}(x),\,f\in C_{0}^{\infty}(G). \tag{3.2}\]
_Then, \(T:L^{p}(G)\to L^{p}(G)\) admits a bounded extension for all \(1<p<\infty.\)_
Proof.: We start the proof by considering a suitable partition of unity for the spectrum of the Rockland operator \(\mathcal{R}.\) For this, let us take a function \(\Phi\in\mathscr{S}(\mathbb{R})\) such that (see Lemma 3.13 in [26])
\[\forall\lambda\in(0,\infty),\,\sum_{j=-\infty}^{\infty}\Phi(2^{j\nu}\lambda) =1. \tag{3.3}\]
Moreover, we can assume that \(\Phi\) generates a dyadic partition in the sense that \(\mathrm{supp}(\Phi)\subset[1/2^{\nu},2^{\nu}].\) As a consequence we have that
\[\sum_{j=-\infty}^{\infty}\Phi(2^{j\nu}\mathcal{R})=I=\text{ identity operator on }L^{2}(G), \tag{3.4}\]
and the convergence in \(\mathscr{S}^{\prime}(G)\) to the Dirac distribution
\[\sum_{j=-\infty}^{\infty}\Phi(2^{j\nu}\mathcal{R})\delta=\delta. \tag{3.5}\]
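Such a \(\Phi\) can be constructed and checked numerically: in the minimal sketch below (the cutoff \(\chi\) is one concrete choice, not the function of [26]) we take \(\chi\) smooth, equal to \(1\) on \([0,1]\) and supported in \([0,2]\), and set \(\Phi(\lambda)=\chi(\lambda)-\chi(2^{\nu}\lambda)\), so that the sum in (3.3) telescopes to \(1\) and \(\mathrm{supp}(\Phi)\subset[2^{-\nu},2^{\nu}]\).

```python
import numpy as np

# Numerical check of the dyadic partition of unity (3.3):
# Phi(l) = chi(l) - chi(2^nu * l), with chi = 1 on [0,1] and 0 on [2, inf),
# satisfies sum_j Phi(2^{j*nu} * l) = 1 for every l > 0 (telescoping sum).
nu = 2.0  # an illustrative homogeneous degree

def g(s):
    s = np.asarray(s, dtype=float)
    return np.where(s > 0.0, np.exp(-1.0 / np.maximum(s, 1e-300)), 0.0)

def chi(t):
    # smooth cutoff: equals 1 for t <= 1, 0 for t >= 2
    return g(2.0 - t) / (g(2.0 - t) + g(t - 1.0))

def Phi(lam):
    return chi(lam) - chi(2.0 ** nu * lam)

lam = np.logspace(-6, 6, 13)                  # sample of lambda > 0
total = sum(Phi(2.0 ** (j * nu) * lam) for j in range(-40, 41))
print(np.max(np.abs(total - 1.0)))            # ~0 up to rounding: sum is 1
```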
In view of the property in (2.5) for the functional calculus of \(\mathcal{R},\) taking the group Fourier transform of (3.5) in both sides we obtain that
\[\forall\pi\in\widehat{G},\,\sum_{j=-\infty}^{\infty}\mathscr{F}_{G}[\Phi( \mathcal{R})\delta](2^{j}\cdot\pi)=\sum_{j=-\infty}^{\infty}\Phi[(2^{j}\cdot \pi)(\mathcal{R})]=I_{H_{\pi}}, \tag{3.6}\]
where we have used that, for any \(j,\)\((2^{j}\cdot\pi)(\mathcal{R})=2^{j\nu}\pi(\mathcal{R}).\) To simplify the notation, let us define
\[\forall x\in G,\forall j\in\mathbb{Z},\,\Phi_{j}(x):=2^{-jQ}(\Phi(\mathcal{R})\delta)(2^{-j}\cdot x),\ \ \Phi(x):=(\Phi(\mathcal{R})\delta)(x). \tag{3.7}\]
Then, we have
\[\forall\pi\in\widehat{G}\,,\forall j\in\mathbb{Z},\,\widehat{\Phi}_{j}(\pi)= \mathscr{F}_{G}[\Phi(\mathcal{R})\delta](2^{j}\cdot\pi)=\Phi[(2^{j}\cdot\pi)( \mathcal{R})]=\Phi[2^{j\nu}\pi(\mathcal{R})]. \tag{3.8}\]
In view of (3.6), we conclude that
\[\forall\pi\in\widehat{G},\,\sum\limits_{j=-\infty}^{\infty}\widehat{\Phi}_{j} (\pi)=I_{H_{\pi}}, \tag{3.9}\]
and then, in the sense of distributions we have the identity
\[\sum\limits_{j=-\infty}^{\infty}\Phi_{j}=\delta. \tag{3.10}\]
Note also that if \(\{dE_{\pi(\mathcal{R})}(\lambda)\}_{\lambda>0}\) is the spectral measure of the operator \(\pi(\mathcal{R})\), then
\[\forall j\in\mathbb{Z},\,\forall\pi\in\widehat{G},\,\widehat{\Phi}_{j}(\pi)= \Phi[(2^{j}\cdot\pi)(\mathcal{R})]=\Phi[2^{j\nu}\pi(\mathcal{R})]=\int\limits _{0}^{\infty}\Phi(2^{j\nu}\lambda)dE_{\pi(\mathcal{R})}(\lambda). \tag{3.11}\]
These properties of the partition of the unity \(\Phi_{j},\,j\in\mathbb{Z}\), will be used in our further analysis. Indeed, we start using (3.10) to decompose any \(K_{j}\) as follows:
\[K_{j}=K_{j}\ast\delta=\sum\limits_{k=-\infty}^{\infty}K_{j}\ast\Phi_{j+k}. \tag{3.12}\]
Define the linear operator \(\tilde{T}_{k}\) for any \(k\in\mathbb{Z}\) as follows,
\[\forall f\in C_{0}^{\infty}(G),\,\tilde{T}_{k}f:=\sum\limits_{j=-\infty}^{ \infty}f\ast K_{j}\ast\Phi_{j+k}. \tag{3.13}\]
Note that
\[\sum\limits_{k=-\infty}^{\infty}\tilde{T}_{k}f=\sum\limits_{k=-\infty}^{ \infty}\sum\limits_{j=-\infty}^{\infty}f\ast K_{j}\ast\Phi_{j+k}=\sum\limits_ {k=-\infty}^{\infty}\sum\limits_{j=-\infty}^{\infty}f\ast K_{k}\ast\Phi_{k+j }=\sum\limits_{k=-\infty}^{\infty}f\ast K_{k}=:Tf.\]
In view of the last identity we will split our proof in the following steps.
* Step 1. To estimate the norm of the operator \(\tilde{T}_{k}:L^{2}(G)\to L^{2}(G).\) Moreover, we will prove that \[\exists C>0,\,\forall f\in C_{0}^{\infty}(G),\,\|\tilde{T}_{k}f\|_{L^{2}(G)} \leq C2^{-a|k|}\|f\|_{L^{2}(G)}.\] (3.14)
* Step 2. To estimate the norm of the operator \(\tilde{T}_{k}:L^{1}(G)\to L^{1,\infty}(G).\) Moreover, we will prove that \[\exists C>0,\,\forall\lambda>0,\,\forall f\in C_{0}^{\infty}(G),\,|\{x\in G:| \tilde{T}_{k}f(x)|>\lambda\}|\leq\frac{C(1+|k|)}{\lambda}\|f\|_{L^{1}(G)}.\] (3.15)
* Step 3. To use the Marcinkiewicz interpolation theorem to prove that for any \(1<p<2\), \[\exists C_{p}>0,\,\forall f\in C_{0}^{\infty}(G),\,\|\tilde{T}_{k}f\|_{L^{p}(G)}\leq C_{p}2^{-a|k|\theta}(1+|k|)^{1-\theta}\|f\|_{L^{p}(G)},\] (3.16) where \(1/p=\theta/2+(1-\theta)\), that is, \(\theta=2-\frac{2}{p}\in(0,1)\).
* Final Step. The proof of Lemma 3.1 follows if we sum over \(k\in\mathbb{Z}\) both sides of (3.16) in the case when \(1<p<2\) (the summation is spelled out below), and then by a duality argument we obtain the \(L^{p}\)-boundedness of \(T\) for \(2<p<\infty.\)
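To spell out the summation in the final step: since \(a>0\) and \(\theta=2-\frac{2}{p}\in(0,1)\) for \(1<p<2\), the geometric factor in (3.16) dominates the polynomial one, so that \[\sum_{k\in\mathbb{Z}}\|\tilde{T}_{k}\|_{L^{p}\to L^{p}}\lesssim_{p}\sum_{k\in\mathbb{Z}}2^{-a|k|\theta}(1+|k|)^{1-\theta}<\infty,\] and hence \(T=\sum_{k\in\mathbb{Z}}\tilde{T}_{k}\) extends to a bounded operator on \(L^{p}(G)\) for \(1<p<2.\)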
Once Steps 1 and 2 are proved, all the other steps above are clear. It remains, therefore, to prove Steps 1 and 2.
#### 3.1.1. Step 1
Let us prove that
\[\exists C>0,\,\forall f\in C_{0}^{\infty}(G),\,\|\tilde{T}_{k}f\|_{L^{2}(G)} \leq C2^{-a|k|}\|f\|_{L^{2}(G)}. \tag{3.17}\]
From (3.12) we have the identity
\[\forall j\in\mathbb{Z},\,\forall\pi\in\widehat{G},\,\widehat{K}_{j}(\pi)= \sum_{m=-\infty}^{\infty}\widehat{\Phi}_{m+j}(\pi)\widehat{K}_{j}(\pi), \tag{3.18}\]
in the strong topology on \(H_{\pi}.\) Using this fact and the Plancherel theorem we deduce that
\[\|\tilde{T}_{k}f\|_{L^{2}(G)}^{2}=(\tilde{T}_{k}f,\tilde{T}_{k}f)_{L^{2}(G)}=(\widehat{\tilde{T}_{k}f},\widehat{\tilde{T}_{k}f})_{L^{2}(\widehat{G})}, \tag{3.19}\]
and by writing the right-hand side of this identity in integral form we have that
\[\|\tilde{T}_{k}f\|_{L^{2}(G)}^{2} =\sum_{m=-\infty}^{\infty}\sum_{j=-\infty}^{\infty}(\widehat{ \Phi}_{m+k}(\pi)\widehat{K}_{m}(\pi)\widehat{f}(\pi),\widehat{\Phi}_{j+k}(\pi )\widehat{K}_{j}(\pi)\widehat{f}(\pi))_{L^{2}(\widehat{G})}\] \[\leq\sum_{m=-\infty}^{\infty}\sum_{j=-\infty}^{\infty}\int\limits _{\widehat{G}}|\mathrm{Tr}[\widehat{\Phi}_{m+k}(\pi)\widehat{K}_{m}(\pi) \widehat{f}(\pi)(\widehat{\Phi}_{j+k}(\pi)\widehat{K}_{j}(\pi)\widehat{f}(\pi ))^{*}]|d\pi\] \[=\sum_{j=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}\int\limits _{\widehat{G}}|\mathrm{Tr}[\widehat{\Phi}_{m+k}(\pi)\widehat{K}_{m}(\pi) \widehat{f}(\pi)\widehat{f}(\pi)^{*}\widehat{K}_{j}(\pi)^{*}\widehat{\Phi}_{j +k}(\pi)^{*}]|d\pi.\]
Because of (3.8), \(\widehat{\Phi}_{j+k}(\pi)\) is self-adjoint on every representation space, the operators \(\widehat{\Phi}_{m+k}(\pi)\) and \(\widehat{\Phi}_{j+k}(\pi)\) commute with each other and with any other Borel function of \(\pi(\mathcal{R})\), and we can write
\[\|\tilde{T}_{k}f\|_{L^{2}(G)}^{2} \leq\sum_{j=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}\int\limits _{\widehat{G}}|\mathrm{Tr}[\widehat{\Phi}_{m+k}(\pi)\widehat{K}_{m}(\pi) \widehat{f}(\pi)\widehat{f}(\pi)^{*}\widehat{K}_{j}(\pi)^{*}\widehat{\Phi}_{j +k}(\pi)]|d\pi\] \[=\sum_{j=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}\int\limits _{\widehat{G}}|\mathrm{Tr}[\widehat{K}_{j}(\pi)^{*}\widehat{\Phi}_{j+k}(\pi) \widehat{\Phi}_{m+k}(\pi)\widehat{K}_{m}(\pi)\widehat{f}(\pi)\widehat{f}(\pi) ^{*}]|d\pi\] \[=\sum_{j=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}\int\limits _{\widehat{G}}|\mathrm{Tr}[\widehat{K}(2^{j}\cdot\pi)^{*}[(2^{j}\cdot\pi)( \mathcal{R})^{\pm\frac{a}{\nu}}][(2^{j}\cdot\pi)(\mathcal{R})^{\mp\frac{a}{\nu }}]\] \[\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)\widehat{K}_{m}( \pi)\widehat{f}(\pi)\widehat{f}(\pi)^{*}]|d\pi\] \[\leq\sum_{j=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}\int\limits _{\widehat{G}}\|\widehat{K}(2^{j}\cdot\pi)^{*}[(2^{j}\cdot\pi)(\mathcal{R})^ {\pm\frac{a}{\nu}}]\|_{\mathrm{op}}|\mathrm{Tr}[[(2^{j}\cdot\pi)(\mathcal{R}) ^{\mp\frac{a}{\nu}}]\] \[\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)\widehat{K}_{m}( \pi)\widehat{f}(\pi)\widehat{f}(\pi)^{*}]|d\pi\]
\[=\sum_{j=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}\int\limits_{ \widehat{G}}||[[(2^{j}\cdot\pi)(\mathcal{R})^{\pm\frac{a}{\nu}}]\widehat{K}(2^{j} \cdot\pi)]^{*}||_{\mathrm{op}}|\mathrm{Tr}[[(2^{j}\cdot\pi)(\mathcal{R})^{\mp \frac{a}{\nu}}]\] \[\qquad\qquad\qquad\qquad\qquad\widehat{\Phi}_{j+k}(\pi)\widehat{ \Phi}_{m+k}(\pi)\widehat{K}_{m}(\pi)\widehat{f}(\pi)\widehat{f}(\pi)^{*}]|d\pi\] \[=\sum_{j=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}\int\limits_{ \widehat{G}}||[(2^{j}\cdot\pi)(\mathcal{R})^{\pm\frac{a}{\nu}}]\widehat{K}(2^{ j}\cdot\pi)||_{\mathrm{op}}|\mathrm{Tr}[[(2^{j}\cdot\pi)(\mathcal{R})^{\mp\frac{a}{ \nu}}]\] \[\qquad\qquad\qquad\qquad\qquad\qquad\widehat{\Phi}_{j+k}(\pi) \widehat{\Phi}_{m+k}(\pi)\widehat{K}_{m}(\pi)\widehat{f}(\pi)\widehat{f}(\pi) ^{*}]|d\pi\] \[\lesssim\sum_{j=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}\int \limits_{\widehat{G}}|\mathrm{Tr}[[(2^{j}\cdot\pi)(\mathcal{R})^{\mp\frac{a} {\nu}}]\] \[\qquad\qquad\qquad\qquad\qquad\widehat{\Phi}_{j+k}(\pi)\widehat{ \Phi}_{m+k}(\pi)\widehat{K}_{m}(\pi)\widehat{f}(\pi)\widehat{f}(\pi)^{*}]|d\pi\] \[=\sum_{j=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}\int\limits _{\widehat{G}}|\mathrm{Tr}[\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi) [(2^{j}\cdot\pi)(\mathcal{R})^{\mp\frac{a}{\nu}}]\widehat{K}_{m}(\pi)\widehat{ f}(\pi)\widehat{f}(\pi)^{*}]|d\pi\] \[=\sum_{j=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}\int\limits _{\widehat{G}}|\mathrm{Tr}[[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{ \Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{j}\cdot\pi)(\mathcal{R})^{\mp \frac{a}{\nu}}]\widehat{K}_{m}(\pi)]|d\pi,\]
where we have used the kernel condition
\[\sup_{j\in\mathbb{Z},\,\pi\in\widehat{G}}\|[(2^{j}\cdot\pi)(\mathcal{R})^{\mp\frac{a}{\nu}}]\widehat{K}(2^{j}\cdot\pi)\|_{\mathrm{op}}\leq\sup_{\pi\in\widehat{G}}\|\pi(\mathcal{R})^{\mp\frac{a}{\nu}}\widehat{K}(\pi)\|_{\mathrm{op}}<\infty. \tag{3.20}\]
In view of (3.11), the properties of the functional calculus allow us to write
\[\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)=\int\limits_{0}^{\infty} \Phi(2^{(j+k)\nu}\lambda)\Phi(2^{(m+k)\nu}\lambda)dE_{\pi(\mathcal{R})}( \lambda). \tag{3.21}\]
Since the support of \(\Phi\) lies in the interval \([1/2^{\nu},2^{\nu}]\) we have that
\[\forall\ell,\ell^{\prime}\text{ such that }|\ell-\ell^{\prime}|\geq 2,\quad\Phi(2^{\ell\nu}\lambda)\,\Phi(2^{\ell^{\prime}\nu}\lambda)=0.\]
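Indeed, \(\Phi(2^{\ell\nu}\lambda)\neq 0\) forces \(2^{\ell\nu}\lambda\in(2^{-\nu},2^{\nu})\) (by continuity \(\Phi\) vanishes at the endpoints of its support), that is, \(\lambda\in(2^{-(\ell+1)\nu},2^{-(\ell-1)\nu})\), and two such dyadic bands are disjoint as soon as \(|\ell-\ell^{\prime}|\geq 2\).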
In consequence the representation in (3.21) shows that \(\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)\equiv 0_{H_{\pi}}\) if \(|m-j|\geq 2.\) So, we have that
\[\|\tilde{T}_{k}f\|_{L^{2}(G)}^{2}\lesssim\sum_{j=-\infty}^{\infty}\sum_{m=j-1 }^{j+1}\int\limits_{\widehat{G}}|\mathrm{Tr}[[\widehat{f}(\pi)\widehat{f}(\pi )^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{j}\cdot\pi)( \mathcal{R})^{\mp\frac{a}{\nu}}]\widehat{K}_{m}(\pi)]|d\pi. \tag{3.22}\]
In view of (3.20) we have that
\[\mathscr{A}(\pi):=|\mathrm{Tr}[[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{j}\cdot\pi)(\mathcal{R})^{\mp\frac{a}{\nu}}]\widehat{K}_{m}(\pi)]|\] \[=|\mathrm{Tr}[[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{j}\cdot\pi)(\mathcal{R})]^{\mp\frac{a}{\nu}}\widehat{K}(2^{m}\cdot\pi)]|\] \[=|\mathrm{Tr}[[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[2^{j\nu}\times\pi(\mathcal{R})]^{\mp\frac{a}{\nu}}\widehat{K}(2^{m}\cdot\pi)]|\] \[=|\mathrm{Tr}[[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[2^{(j-m)\nu}\times 2^{m\nu}\times\pi(\mathcal{R})]^{\mp\frac{a}{\nu}}\widehat{K}(2^{m}\cdot\pi)]|\] \[=2^{\mp(j-m)a}|\mathrm{Tr}[[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[2^{m\nu}\times\pi(\mathcal{R})]^{\mp\frac{a}{\nu}}\widehat{K}(2^{m}\cdot\pi)]|.\]
Since \(j-m\in\{0,\pm 1\}\), \(2^{\mp(j-m)a}\asymp 1\), and we have
\[\mathscr{A}(\pi):=|\mathrm{Tr}[[\widehat{f}(\pi)\widehat{f}(\pi)^{ *}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{j}\cdot\pi)(\mathcal{R })^{\mp\frac{a}{\nu}}]\widehat{K}_{m}(\pi)]|\] \[=2^{\mp(j-m)a}|\mathrm{Tr}[[\widehat{f}(\pi)\widehat{f}(\pi)^{ *}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[2^{m\nu}\times\pi( \mathcal{R})]^{\mp\frac{a}{\nu}}\widehat{K}(2^{m}\cdot\pi)]|\] \[\asymp|\mathrm{Tr}[[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat {\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[2^{m\nu}\times\pi(\mathcal{R})]^{ \mp\frac{a}{\nu}}\widehat{K}(2^{m}\cdot\pi)]|.\]
Until now we have proved that the term
\[\mathscr{A}(\pi):=|\mathrm{Tr}[[\widehat{f}(\pi)\widehat{f}(\pi)^{*}] \widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{j}\cdot\pi)(\mathcal{R })^{\mp\frac{a}{\nu}}]\widehat{K}_{m}(\pi)]|\]
can be estimated as follows
\[\mathscr{A}(\pi)\asymp|\mathrm{Tr}[[\widehat{f}(\pi)\widehat{f}(\pi)^{*}] \widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[2^{m\nu}\times\pi(\mathcal{ R})]^{\mp\frac{a}{\nu}}\widehat{K}(2^{m}\cdot\pi)]|. \tag{3.23}\]
To estimate the right-hand side of (3.23) let us use (3.20) again as follows
\[\mathscr{A}(\pi) \asymp|\mathrm{Tr}[[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{m}\cdot\pi)(\mathcal{R})]^{\mp\frac{2a}{\nu}}[(2^{m}\cdot\pi)(\mathcal{R})]^{\pm\frac{a}{\nu}}\widehat{K}(2^{m}\cdot\pi)]|\] \[\leq\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{m}\cdot\pi)(\mathcal{R})]^{\mp\frac{2a}{\nu}}|]\,\|[(2^{m}\cdot\pi)(\mathcal{R})]^{\pm\frac{a}{\nu}}\widehat{K}(2^{m}\cdot\pi)\|_{\mathrm{op}}\] \[\lesssim\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{m}\cdot\pi)(\mathcal{R})]^{\mp\frac{2a}{\nu}}|].\]
Now, we have the better estimate (because it only depends on the functional calculus of the symbol \(\pi(\mathcal{R})\) of the Rockland operator \(\mathcal{R}\) and of \(\widehat{f}\))
\[\mathscr{A}(\pi)\lesssim\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{m}\cdot\pi)(\mathcal{R})]^{\mp\frac{2a}{\nu}}|]. \tag{3.24}\]
Note that when composing \(\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)\) with the operator \([(2^{m}\cdot\pi)(\mathcal{R})]^{\pm\frac{2a}{\nu}}\) we remove the zero from the spectrum of the new operator
\[\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{m}\cdot\pi)(\mathcal{R })]^{\pm\frac{2a}{\nu}} \tag{3.25}\]
because of the spectral identity in (3.8) and the properties of the support of \(\Phi.\) From (3.24) we also have the estimate
\[\mathscr{A}(\pi)\lesssim\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}] \widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{m}\cdot\pi)(\mathcal{R })]^{-\frac{2a}{\nu}}|]. \tag{3.26}\]
Now, using the functional calculus for Rockland operators we can estimate (3.26). Indeed, we have
\[\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{m}\cdot\pi)(\mathcal{R})]^{-\frac{2a}{\nu}}|]\] \[=\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[2^{m\nu}\times\pi(\mathcal{R})]^{-\frac{2a}{\nu}}|]\] \[=\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)2^{-2ma}[\pi(\mathcal{R})]^{-\frac{2a}{\nu}}|]\] \[=2^{-2ma}\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[\pi(\mathcal{R})]^{-\frac{2a}{\nu}}|]\] \[\asymp 2^{-2ja}\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[\pi(\mathcal{R})]^{-\frac{2a}{\nu}}|].\]
Summarising we have proved that
\[\mathscr{A}(\pi)\lesssim 2^{-2ja}\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*} ]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[\pi(\mathcal{R})]^{-\frac{2 a}{\nu}}|]. \tag{3.27}\]
Note that, in the same way that (3.24) implies (3.27), by changing \(-a\) to \(+a\) in the argument of this implication we also have that (3.24) implies
\[\mathscr{A}(\pi)\lesssim 2^{2ja}\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}] \widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[\pi(\mathcal{R})]^{\frac{2 a}{\nu}}|]. \tag{3.28}\]
Then, from (3.27) and (3.28) we have the following similar bounds
\[\mathscr{A}(\pi)\lesssim 2^{-2ja}\max\limits_{\pm}\operatorname{Tr}[|[\widehat{f}( \pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[ \pi(\mathcal{R})]^{\pm\frac{2a}{\nu}}|], \tag{3.29}\]
and
\[\mathscr{A}(\pi)\lesssim 2^{2ja}\max\limits_{\pm}\operatorname{Tr}[|[\widehat{f}( \pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[ \pi(\mathcal{R})]^{\pm\frac{2a}{\nu}}|]. \tag{3.30}\]
Note that (3.29) and (3.30) imply the estimate
\[\mathscr{A}(\pi)\lesssim\min\{2^{2ja},2^{-2ja}\}\max\limits_{\pm} \operatorname{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}( \pi)\widehat{\Phi}_{m+k}(\pi)[\pi(\mathcal{R})]^{\pm\frac{2a}{\nu}}|].\]
Since \(\min\{2^{2ja},2^{-2ja}\}=2^{-2|j|a}\), we have deduced the estimate
\[\mathscr{A}(\pi)\lesssim 2^{-2|j|a}\max\limits_{\pm}\operatorname{Tr}[|[ \widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_ {m+k}(\pi)[\pi(\mathcal{R})]^{\pm\frac{2a}{\nu}}|]. \tag{3.31}\]
Now in order to estimate (3.31) we can use the functional calculus for Rockland operators. Indeed, since \(m\asymp j\) note that
\[\operatorname{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\widehat {\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)\pi(\mathcal{R})^{\frac{\pm 2a}{\nu}}|]\] \[=\operatorname{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\left( \int\limits_{0}^{\infty}\Phi(2^{(j+k)\nu}\lambda)\Phi(2^{(m+k)\nu}\lambda)dE_{ \pi(\mathcal{R})}(\lambda)\right)\pi(\mathcal{R})^{\frac{\pm 2a}{\nu}}|]\] \[=\operatorname{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\left( \int\limits_{0}^{\infty}\Phi(2^{(j+k)\nu}\lambda)\Phi(2^{(m+k)\nu}\lambda) \lambda^{\frac{\pm 2a}{\nu}}dE_{\pi(\mathcal{R})}(\lambda)\right)|]\] \[\asymp\operatorname{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}] \left(\int\limits_{\lambda\sim 2^{-(j+k)\nu}}\Phi(2^{(j+k)\nu}\lambda)\Phi(2^{(m+k)\nu} \lambda)\lambda^{\frac{\pm 2a}{\nu}}dE_{\pi(\mathcal{R})}(\lambda)\right)|],\]
where we have used the notation \(\lambda\sim 2^{-(j+k)\nu}\) to indicate that \(\lambda\in[2^{-(j+k+1)\nu},2^{-(j+k-1)\nu}]\). Using the properties of the spectral projections \(E_{\pi(\mathcal{R})}\) we have that
\[\operatorname{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\left( \int\limits_{\lambda\sim 2^{-(j+k)\nu}}\Phi(2^{(j+k)\nu}\lambda)\Phi(2^{(m+k)\nu} \lambda)dE_{\pi(\mathcal{R})}(\lambda)\right)|]\] \[=\operatorname{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]E_{ \pi(\mathcal{R})}[2^{-(j+k+1)\nu},2^{-(j+k-1)\nu}]\left(\int\limits_{\lambda \sim 2^{-(j+k)\nu}}\Phi(2^{(j+k)\nu}\lambda)\Phi(2^{(m+k)\nu}\lambda)dE_{ \pi(\mathcal{R})}(\lambda)\right)|]\] \[\leq\operatorname{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]E_{ \pi(\mathcal{R})}[2^{-(j+k+1)\nu},2^{-(j+k-1)\nu}]|]\] \[\qquad\qquad\times\|\left(\int\limits_{\lambda\sim 2^{-(j+k)\nu}} \Phi(2^{(j+k)\nu}\lambda)\Phi(2^{(m+k)\nu}\lambda)dE_{\pi(\mathcal{R})}( \lambda)\right)\|_{\mathrm{op}}.\]
Let us estimate the last operator norm:
\[\|\left(\int\limits_{\lambda\sim 2^{-(j+k)\nu}}\Phi(2^{(j+k)\nu} \lambda)\Phi(2^{(m+k)\nu}\lambda)\lambda^{\frac{\pm 2a}{\nu}}dE_{\pi( \mathcal{R})}(\lambda)\right)\|_{\mathrm{op}}\] \[\asymp 2^{\mp 2(j+k)a}\|\left(\int\limits_{\lambda\sim 2^{-(j+k)\nu}} \Phi(2^{(j+k)\nu}\lambda)\Phi(2^{(m+k)\nu}\lambda)dE_{\pi(\mathcal{R})}( \lambda)\right)\|_{\mathrm{op}}\] \[\leq 2^{\mp 2(j+k)a}\|\left(\int\limits_{\lambda\geq 0}\Phi(2^{(j+k) \nu}\lambda)\Phi(2^{(m+k)\nu}\lambda)dE_{\pi(\mathcal{R})}(\lambda)\right)\|_ {\mathrm{op}}\] \[=2^{\mp 2(j+k)a}\|\Phi(2^{(j+k)\nu}\pi(\mathcal{R}))\Phi(2^{(m+k) \nu}\pi(\mathcal{R}))\|_{\mathrm{op}}\]
\[\leq 2^{\mp 2(j+k)a}\sup_{\lambda\geq 0}\Phi(2^{(j+k)\nu}\lambda)\,\Phi(2^{(m+k)\nu}\lambda)\leq 2^{\mp 2(j+k)a}\|\Phi\|_{L^{\infty}}^{2}.\]
So, we have proved the inequality
\[\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]\left(\int \limits_{\lambda\sim 2^{-(j+k)\nu}}\Phi(2^{(j+k)\nu}\lambda)\Phi(2^{(m+k)\nu} \lambda)dE_{\pi(\mathcal{R})}(\lambda)\right)|]\] \[\leq 2^{\mp 2(j+k)a}\|\Phi\|_{L^{\infty}}^{2}\mathrm{Tr}[|[ \widehat{f}(\pi)\widehat{f}(\pi)^{*}]E_{\pi(\mathcal{R})}[2^{-(j+k+1)\nu},2^{- (j+k-1)\nu}]|],\]
from where we deduce the estimate
\[\max_{\pm}\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}] \left(\int\limits_{\lambda\sim 2^{-(j+k)\nu}}\Phi(2^{(j+k)\nu}\lambda)\Phi(2^{(m+k) \nu}\lambda)dE_{\pi(\mathcal{R})}(\lambda)\right)|]\] \[\lesssim_{\Phi}2^{-2|j+k|a}\mathrm{Tr}[|[\widehat{f}(\pi)\widehat {f}(\pi)^{*}]E_{\pi(\mathcal{R})}[2^{-(j+k+1)\nu},2^{-(j+k-1)\nu}]|].\]
The previous analysis allows us to estimate (3.31) as follows
\[\mathscr{A}(\pi) \lesssim 2^{-2|j|a}\max_{\pm}\mathrm{Tr}[|[\widehat{f}(\pi)\widehat {f}(\pi)^{*}]\left(\int\limits_{\lambda\sim 2^{-(j+k)\nu}}\Phi(2^{(j+k)\nu} \lambda)\Phi(2^{(m+k)\nu}\lambda)dE_{\pi(\mathcal{R})}(\lambda)\right)|]\] \[\lesssim_{\Phi,K,a}2^{-2|j|a}\times 2^{-2|j+k|a}\mathrm{Tr}[|[ \widehat{f}(\pi)\widehat{f}(\pi)^{*}]E_{\pi(\mathcal{R})}[2^{-(j+k+1)\nu},2^{ -(j+k-1)\nu}]|].\]
Using the triangle inequality \(|j+k|\geq|k|-|j|\) we have the reverse inequality
\[-|j+k|\leq|j|-|k|,\]
and then
\[\mathscr{A}(\pi) \lesssim_{\Phi,K,a}2^{-2|j|a}\times 2^{-2|k|a+2|j|a}\mathrm{Tr}[|[ \widehat{f}(\pi)\widehat{f}(\pi)^{*}]E_{\pi(\mathcal{R})}[2^{-(j+k+1)\nu},2^{ -(j+k-1)\nu}]|]\] \[=2^{-2|k|a}\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]E_ {\pi(\mathcal{R})}[2^{-(j+k+1)\nu},2^{-(j+k-1)\nu}]|].\]
In consequence, coming back to (3.22) we have that
\[\|\tilde{T}_{k}f\|_{L^{2}(G)}^{2} \leq\sum_{j=-\infty}^{\infty}\sum_{m=j-1}^{j+1}\int\limits_{\widehat{G}}\|\widehat{\Phi}_{j+k}(\pi)\widehat{\Phi}_{m+k}(\pi)[(2^{j}\cdot\pi)(\mathcal{R})^{\pm\frac{a}{\nu}}]\widehat{K}_{m}(\pi)\|_{\mathrm{op}}\|\widehat{f}(\pi)\|_{\mathrm{HS}}^{2}\,d\pi\] \[\lesssim\sum_{j=-\infty}^{\infty}\sum_{m=j-1}^{j+1}2^{-2|k|a}\int\limits_{\widehat{G}}\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]E_{\pi(\mathcal{R})}[2^{-(j+k+1)\nu},2^{-(j+k-1)\nu}]|]d\pi\] \[\lesssim\sum_{j=-\infty}^{\infty}2^{-2|k|a}\int\limits_{\widehat{G}}\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}(\pi)^{*}]E_{\pi(\mathcal{R})}[2^{-(j+k+1)\nu},2^{-(j+k-1)\nu}]|]d\pi.\]
Using the fact that the mapping \(t_{j}:=j+\cdot:\mathbb{Z}\to\mathbb{Z}\), \(k\mapsto j+k\), is a bijection on \(\mathbb{Z}\), we have the estimates
\[\|\tilde{T}_{k}f\|_{L^{2}(G)}^{2}\lesssim\sum_{j=-\infty}^{\infty }2^{-2|k|a}\int\limits_{\widehat{G}}\mathrm{Tr}[|[\widehat{f}(\pi)\widehat{f}( \pi)^{*}]E_{\pi(\mathcal{R})}[2^{-(j+1)\nu},2^{-(j-1)\nu}]|]d\pi\] \[\asymp 2^{-2|k|a}\int\limits_{\widehat{G}}\mathrm{Tr}[|[\widehat{f}( \pi)\widehat{f}(\pi)^{*}]E_{\pi(\mathcal{R})}(-\infty,\infty)|]d\pi\] \[=2^{-2|k|a}\int\limits_{\widehat{G}}\mathrm{Tr}[|[\widehat{f}( \pi)\widehat{f}(\pi)^{*}]I_{H_{\pi}}]d\pi\]
\[=2^{-2|k|a}\|\widehat{f}\|_{L^{2}(\widehat{G})}^{2}.\]
Then, using the last inequality and the Plancherel theorem we deduce that the \(L^{2}\)-norm of \(\tilde{T}_{k}f\) satisfies the estimate
\[\|\tilde{T}_{k}f\|_{L^{2}(G)}^{2}\lesssim_{\Phi,K,a}2^{-2|k|a}\|f\|_{L^{2}(G)}^{ 2},\]
which implies
\[\|\tilde{T}_{k}f\|_{L^{2}(G)}\lesssim_{\Phi,K,a}2^{-|k|a}\|f\|_{L^{2}(G)}. \tag{3.32}\]
The proof of (3.14) is complete and we have concluded Step 1 of the proof.
#### 3.1.2. Step 2
For the proof of the _weak (1,1) estimate_ we will use the fundamental theorem for singular integrals due to Coifman and Weiss [9]. We write
\[\forall f\in C_{0}^{\infty}(G),\,\tilde{T}_{k}f:=f*(\tilde{T}_{k}\delta),\,( \tilde{T}_{k}\delta):=\sum_{j=-\infty}^{\infty}K_{j}*\Phi_{j+k}.\]
We shall prove the estimate
\[[\tilde{T}_{k}\delta]_{H}:=\sup_{|y|\leq 1}\int\limits_{|x|>2|y|}|\tilde{T}_{k} \delta(y^{-1}x)-\tilde{T}_{k}\delta(x)|dx\lesssim_{K}(1+|k|), \tag{3.33}\]
and then the constant \((1+|k|)\) on the right-hand side of (3.33) gives the estimate in (3.15). Indeed, note that the estimate in (3.33) says that the kernel \(\tilde{T}_{k}\delta\) of \(\tilde{T}_{k}\) satisfies the Hörmander condition \([\tilde{T}_{k}\delta]_{H}\lesssim_{K}(1+|k|).\) Then, from the fundamental theorem for singular integrals due to Coifman and Weiss [9], one has that the \((L^{1},L^{1,\infty})\)-operator norm of \(\tilde{T}_{k}\) satisfies the estimate
\[\|\tilde{T}_{k}\|_{L^{1}\to L^{1,\infty}}\lesssim\|\tilde{T}_{k}\|_{L^{2}\to L ^{2}}+[\tilde{T}_{k}\delta]_{H}\lesssim 2^{-|k|a}+(1+|k|)\lesssim(1+|k|), \tag{3.34}\]
which proves (3.15).
Note that for any \(y\in G\) with \(|y|\leq 1\), we have
\[\int\limits_{|x|>2|y|}|\tilde{T}_{k}\delta(y^{-1}x)-\tilde{T}_{k }\delta(x)|dx =\int\limits_{|x|>2|y|}\left|\sum_{j=-\infty}^{\infty}(K_{j}* \Phi_{j+k})(y^{-1}x)-(K_{j}*\Phi_{j+k})(x)\right|dx\] \[\leq\sum_{j=-\infty}^{\infty}\int\limits_{|x|>2|y|}\left|(K_{j}* \Phi_{j+k})(y^{-1}x)-(K_{j}*\Phi_{j+k})(x)\right|dx\] \[=\sum_{j=-\infty}^{\infty}I_{j,k},\]
where
\[I_{j,k}=\int\limits_{|x|>2|y|}\left|(K_{j}*\Phi_{j+k})(y^{-1}x)-(K_{j}*\Phi_{ j+k})(x)\right|dx.\]
Note that, by Young's convolution inequality, we have the following immediate estimate
\[I_{j,k} \leq\int\limits_{G}\left|(K_{j}*\Phi_{j+k})(y^{-1}x)-(K_{j}*\Phi_{j+k})(x)\right|dx\leq 2\int\limits_{G}\left|(K_{j}*\Phi_{j+k})(z)\right|dz\] \[\leq 2\|K_{j}\|_{L^{1}(G)}\|\Phi_{j+k}\|_{L^{1}(G)}=2\|K\|_{L^{1}(G)}\|\Phi(\mathcal{R})\delta\|_{L^{1}(G)}.\]
Indeed, since
\[K_{j}:=2^{-jQ}K(2^{-j}\cdot)\text{ and }\Phi_{j+k}:=2^{-(j+k)Q}(\Phi(\mathcal{R}) \delta)(2^{-(j+k)}\cdot),\]
the changes of variables \(z=2^{-j}\cdot x\) and \(z^{\prime}=2^{-(j+k)}\cdot x^{\prime}\) imply the equality
\[\|K_{j}\|_{L^{1}(G)}=\int\limits_{G}|K_{j}(x)|dx=2^{-jQ}\int\limits_{G}|K(2^{- j}\cdot x)|dx=\int\limits_{G}|K(z)|dz=\|K\|_{L^{1}(G)},\]
as well as that
\[\|\Phi_{j+k}\|_{L^{1}(G)}=2^{-(j+k)Q}\int\limits_{G}|(\Phi( \mathcal{R})\delta)(2^{-(j+k)}\cdot x^{\prime})|dx^{\prime}=\int\limits_{G}|( \Phi(\mathcal{R})\delta)(z^{\prime})|dz^{\prime}\] \[=\|(\Phi(\mathcal{R})\delta)\|_{L^{1}(G)}.\]
Then, we have proved that
\[\forall\, 0<|y|\leq 1,\ \forall (j,k),\quad I_{j,k}\lesssim_{\Phi}\|K\|_{L^{1}(G)}. \tag{3.35}\]
Also, we will provide other estimates for \(I_{j,k}\) as follows. First, we use the dilation property
\[\forall r>0,\forall x,y\in G,\,\,r\cdot(xy)=(r\cdot x)(r\cdot y),\,\,\text{ and}\,\,\,\,r\cdot x^{-1}=(r\cdot x)^{-1}. \tag{3.36}\]
Then
\[I_{j,k}=\int\limits_{|x|>2|y|}\left|(K_{j}*\Phi_{j+k})(y^{-1}x)-(K_{j}*\Phi_{j+k})(x)\right|dx\] \[=\int\limits_{|x|>2|y|}\Big|\int\limits_{G}K_{j}(y^{-1}xz^{-1})\Phi_{j+k}(z)dz-\int\limits_{G}K_{j}(xz^{-1})\Phi_{j+k}(z)dz\Big|dx\] \[=\int\limits_{|x|>2|y|}2^{-jQ}\Big|\int\limits_{G}K(2^{-j}\cdot(y^{-1}xz^{-1}))2^{-(j+k)Q}(\Phi(\mathcal{R})\delta)(2^{-(j+k)}\cdot z)dz-\int\limits_{G}K(2^{-j}\cdot(xz^{-1}))2^{-(j+k)Q}(\Phi(\mathcal{R})\delta)(2^{-(j+k)}\cdot z)dz\Big|dx\] \[=\int\limits_{|x|>2|y|}2^{-jQ}\Big|\int\limits_{G}K((2^{-j}\cdot y^{-1})(2^{-j}\cdot x)(2^{-j}\cdot z^{-1}))2^{-(j+k)Q}(\Phi(\mathcal{R})\delta)(2^{-(j+k)}\cdot z)dz-\int\limits_{G}K((2^{-j}\cdot x)(2^{-j}\cdot z^{-1}))2^{-(j+k)Q}(\Phi(\mathcal{R})\delta)(2^{-(j+k)}\cdot z)dz\Big|dx\] \[=\int\limits_{|x|>2|y|}2^{-jQ}\Big|\int\limits_{G}K((2^{-j}\cdot y^{-1})(2^{-j}\cdot x)(2^{-j}\cdot z)^{-1})2^{-(j+k)Q}(\Phi(\mathcal{R})\delta)(2^{-(j+k)}\cdot z)dz-\int\limits_{G}K((2^{-j}\cdot x)(2^{-j}\cdot z)^{-1})2^{-(j+k)Q}(\Phi(\mathcal{R})\delta)(2^{-(j+k)}\cdot z)dz\Big|dx.\]
The change of variables \(w:=2^{-j}\cdot z\) gives the new volume element \(dw=2^{-jQ}dz\), and we have that
\[\begin{split} I_{j,k}&=\int\limits_{|x|>2|y|}2^{-jQ}\Big|\int\limits_{G}K((2^{-j}\cdot y^{-1})(2^{-j}\cdot x)(2^{-j}\cdot z)^{-1})2^{-(j+k)Q}(\Phi(\mathcal{R})\delta)(2^{-j-k}\cdot z)dz\\ &\qquad\qquad-\int\limits_{G}K((2^{-j}\cdot x)(2^{-j}\cdot z)^{-1})2^{-(j+k)Q}(\Phi(\mathcal{R})\delta)(2^{-j-k}\cdot z)dz\Big|dx\\ &=\int\limits_{|x|>2|y|}\Big|\int\limits_{G}K((2^{-j}\cdot y)^{-1}(2^{-j}\cdot x)w^{-1})2^{-kQ}2^{-jQ}(\Phi(\mathcal{R})\delta)(2^{-k}\cdot w)dw\\ &\qquad\qquad-\int\limits_{G}K((2^{-j}\cdot x)w^{-1})2^{-kQ}2^{-jQ}(\Phi(\mathcal{R})\delta)(2^{-k}\cdot w)dw\Big|dx\\ &=\int\limits_{|x|>2|y|}2^{-jQ}\Big|\int\limits_{G}K((2^{-j}\cdot y)^{-1}(2^{-j}\cdot x)w^{-1})2^{-kQ}(\Phi(\mathcal{R})\delta)(2^{-k}\cdot w)dw\\ &\qquad\qquad-\int\limits_{G}K((2^{-j}\cdot x)w^{-1})2^{-kQ}(\Phi(\mathcal{R})\delta)(2^{-k}\cdot w)dw\Big|dx.\end{split}\]
On the other hand, the change of variables \(x^{\prime}=2^{-j}\cdot x\) gives the new volume element \(dx^{\prime}=2^{-jQ}dx\), the zone \(\{x\in G:|x|>2|y|\}\) is mapped to the set
\[\{x^{\prime}\in G:|x^{\prime}|>2^{-j+1}|y|\}\]
allowing us to write the following identities
\(I_{j,k}\)
\[=\int\limits_{|x^{\prime}|>2^{1-j}|y|}2^{-jQ}|\int\limits_{G}K((2 ^{-j}\cdot y)^{-1}x^{\prime}w^{-1})2^{-kQ}(\Phi(\mathcal{R})\delta)(2^{-k}w)dw\] \[-\int\limits_{G}K(x^{\prime}w^{-1})2^{-kQ}(\Phi(\mathcal{R}) \delta)(2^{-k}\cdot w)dw|2^{jQ}dx^{\prime}\] \[=\int\limits_{|x|>2^{1-j}|y|}|\int\limits_{G}K((2^{-j}\cdot y)^{ -1}xw^{-1})\Phi_{k}(w)dw-\int\limits_{G}K(xw^{-1})\Phi_{k}(w)dw|dx\] \[=\int\limits_{|x|>2^{1-j}|y|}|K*\Phi_{k}((2^{-j}\cdot y)^{-1}x)- K*\Phi_{k}(x)|dx\] \[=\int\limits_{|x|>2^{1-j}|y|}|\int\limits_{G}K(z)[\Phi_{k}(z^{-1 }(2^{-j}\cdot y)^{-1}x)-\Phi_{k}(z^{-1}x)]dz|dx\] \[=\int\limits_{|x|>2^{1-j}|y|}2^{-kQ}|\int\limits_{G}K(z)[\Phi((2 ^{-k}\cdot z)^{-1}(2^{-j-k}\cdot y)^{-1}(2^{-k}\cdot x))-\Phi((2^{-k}\cdot z) ^{-1}(2^{-k}\cdot x))]dz|dx\] \[\leq\int\limits_{|x|>2^{1-j}|y|}2^{-kQ}\int\limits_{G}|K(z)||[ \Phi((2^{-k}\cdot z)^{-1}(2^{-j-k}\cdot y)^{-1}(2^{-k}\cdot x))-\Phi((2^{-k} \cdot z)^{-1}(2^{-k}\cdot x))]|dzdx,\]
where we have used the dilation property in (3.36) with \(r=2^{-k}.\)
The change of variables \(x^{\prime}=2^{-k}\cdot x\), with the new volume element \(dx^{\prime}=2^{-kQ}dx\), implies that
\[I_{j,k}\] \[\leq\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}\int\limits_{G}|K(z)| |[\Phi((2^{-k}\cdot z)^{-1}(2^{-j-k}\cdot y)^{-1}(x^{\prime}))-\Phi((2^{-k} \cdot z)^{-1}(x^{\prime}))]|dzdx^{\prime}.\]
Now, let us make the change of variables \(z^{\prime\prime}=2^{-k}\cdot z,\) for which the new volume element is \(dz=2^{kQ}dz^{\prime\prime}.\) Then,
\[I_{j,k}\] \[\leq\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}\int\limits_{G}2^{kQ}| K(2^{k}\cdot z^{\prime\prime})||[\Phi((z^{\prime\prime})^{-1}(2^{-j-k}\cdot y )^{-1}(x^{\prime}))-\Phi((z^{\prime\prime})^{-1}(x^{\prime}))]|dz^{\prime\prime }dx^{\prime}.\]
Now, the mean value theorem (see [19, Page 119]) implies that
\[|[\Phi((z^{\prime\prime})^{-1}(2^{-j-k}\cdot y)^{-1}(x^{\prime})) -\Phi((z^{\prime\prime})^{-1}(x^{\prime}))]|\] \[\lesssim\sum\limits_{\ell=1}^{n}|(2^{-j-k}\cdot y)^{-1}|^{\nu_{ \ell}}\sup\limits_{|z^{\prime}|\leq|(2^{-j-k}\cdot y)^{-1}|}|(X_{z^{\prime}, \ell}(\Phi((z^{\prime\prime})^{-1}z^{\prime}(x^{\prime})))|,\]
where we have written that \(|z^{\prime}|\lesssim|(2^{-j-k}\cdot y)^{-1}|\) to indicate that the inequality
\[|z^{\prime}|\leq c|(2^{-j-k}\cdot y)^{-1}|=c2^{-j-k}|y| \tag{3.37}\]
is valid for some universal constant \(c>1\) as in the mean value theorem of [19, Page 119]. Using that \(\int_{G}2^{kQ}|K(2^{k}\cdot z^{\prime\prime})|dz^{\prime\prime}=\|K\|_{L^{1}(G)}\), we can estimate \(I_{j,k}\) as follows
\[\begin{split} I_{j,k}&\leq 2^{kQ}\int_{G}|K(2^{k}\cdot z^{\prime\prime})|dz^{\prime\prime}\times\sup_{z\in G}\int_{|x^{\prime}|>2^{1-j-k}|y|}\left(\sum_{\ell=1}^{n}2^{-(j+k)\nu_{\ell}}|y|^{\nu_{\ell}}\sup_{|z^{\prime}|\lesssim 2^{-j-k}|y|}|X_{z,\ell}(\Phi(z^{-1}z^{\prime}x^{\prime}))|\right)dx^{\prime}\\ &=\|K\|_{L^{1}(G)}\times\sum_{\ell=1}^{n}2^{-(j+k)\nu_{\ell}}|y|^{\nu_{\ell}}\sup_{z\in G}\int_{|x^{\prime}|>2^{1-j-k}|y|}\sup_{|z^{\prime}|\lesssim 2^{-j-k}|y|}|X_{z,\ell}(\Phi(z^{-1}z^{\prime}x^{\prime}))|dx^{\prime}.\end{split}\]
Using the Sobolev embedding theorem on \(G\) (see Theorem 4.4.25 of [19, page 241]), we can estimate for \(M_{0}>Q/2\),
\[\begin{split}&\sup_{z\in G}\int_{|x^{\prime}|>2^{1-j-k}|y|}\sup_{|z^{\prime}|\lesssim 2^{-j-k}|y|}|X_{z,\ell}(\Phi(z^{-1}z^{\prime}x^{\prime}))|dx^{\prime}\\ &\lesssim\sup_{z\in G}\int_{G}\sup_{z^{\prime}\in G}|X_{z,\ell}(\Phi(z^{-1}z^{\prime}x^{\prime}))|dx^{\prime}\lesssim\sup_{z\in G}\int_{G}\sum_{[\beta]\leq M_{0}}\|X_{z^{\prime}}^{\beta}X_{z,\ell}(\Phi(z^{-1}z^{\prime}x^{\prime}))\|_{L^{2}(G,dz^{\prime})}dx^{\prime}\\ &\lesssim\sup_{z\in G}\sum_{[\beta]\leq M_{0}}\int_{G}\|X_{z^{\prime}}^{\beta}X_{z,\ell}(\Phi(z^{-1}z^{\prime}x^{\prime}))\|_{L^{2}(G,dz^{\prime})}dx^{\prime}\\ &\lesssim\sup_{z\in G}\sum_{[\beta]\leq M_{0}}\left(\int_{G}(1+|x^{\prime}|)^{2M_{0}}\|X_{z^{\prime}}^{\beta}X_{z,\ell}(\Phi(z^{-1}z^{\prime}x^{\prime}))\|_{L^{2}(G,dz^{\prime})}^{2}dx^{\prime}\right)^{\frac{1}{2}}\left(\int_{G}(1+|x^{\prime}|)^{-2M_{0}}dx^{\prime}\right)^{\frac{1}{2}}\\ &\lesssim\sup_{z\in G}\sum_{[\beta]\leq M_{0}}\left(\int_{G}\int_{G}(1+|x^{\prime}|)^{2M_{0}}|X_{z^{\prime}}^{\beta}X_{z,\ell}(\Phi(z^{-1}z^{\prime}x^{\prime}))|^{2}dx^{\prime}dz^{\prime}\right)^{\frac{1}{2}}<\infty,\end{split}\]
where the convergence of the last integral is justified by the Hulanicki theorem, see [19, page 251]. The analysis above shows that for all \(j,k\in\mathbb{Z}\),
\[\forall y\in G:0<|y|\leq 1,\ I_{j,k}\lesssim\sum_{\ell=1}^{n}2^{-(j+k)\nu_{ \ell}}|y|^{\nu_{\ell}}\|K\|_{L^{1}(G)}. \tag{3.38}\]
In terms of \(R>0\) defined by
\[R=\inf\{R^{\prime}>0:\operatorname{supp}(K)\subset B(e,R^{\prime})\}, \tag{3.39}\]
and of \(0<|y|\leq 1\), let us make a more precise estimate of \(I_{j,k}\) in the case where \(2^{-j}|y|\geq 2R.\) This will be used in further analysis. To do this, let us come back to the estimate
\[I_{j,k}\lesssim\sum_{\ell=1}^{n}2^{-(j+k)\nu_{\ell}}|y|^{\nu_{ \ell}}\int_{|x^{\prime}|>2^{1-j-k}|y|}\int_{G}|K(z)|\sup_{|z^{\prime}|\lesssim 2 ^{-j-k}|y|}|X_{z,\ell}(\Phi((2^{-k}\cdot z)^{-1}z^{\prime}x^{\prime})|dx^{\prime }\,dz\] \[=\sum_{\ell=1}^{n}2^{-(j+k)\nu_{\ell}}|y|^{\nu_{\ell}}\mathscr{I} _{j,k,\ell},\]
where
\[\mathscr{I}_{j,k,\ell}=\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}\int\limits_{G}|K(z )|\sup\limits_{|z^{\prime}|\lesssim 2^{-j-k}|y|}|X_{z,\ell}(\Phi((2^{-k}\cdot z)^{-1}z^{ \prime}x^{\prime})|dx^{\prime}\,dz.\]
To estimate this double integral let us split it as follows
\[\begin{split}\mathscr{I}_{j,k,\ell}&=\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}\int\limits_{G}|K(z)|\sup\limits_{|z^{\prime}|\lesssim 2^{-j-k}|y|}|X_{z,\ell}(\Phi((2^{-k}\cdot z)^{-1}z^{\prime}x^{\prime}))|dx^{\prime}\,dz\\ &=\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}\int\limits_{\{z:|x^{\prime}|\geq 2^{-k+2}|z|\}}|K(z)|\sup\limits_{|z^{\prime}|\lesssim 2^{-j-k}|y|}|X_{z,\ell}(\Phi((2^{-k}\cdot z)^{-1}z^{\prime}x^{\prime}))|dx^{\prime}\,dz\\ &\qquad+\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}\int\limits_{\{z:|x^{\prime}|<2^{-k+2}|z|\}}|K(z)|\sup\limits_{|z^{\prime}|\lesssim 2^{-j-k}|y|}|X_{z,\ell}(\Phi((2^{-k}\cdot z)^{-1}z^{\prime}x^{\prime}))|dx^{\prime}\,dz\\ &=\mathscr{I}^{I}_{j,k,\ell}+\mathscr{I}^{II}_{j,k,\ell}.\end{split}\]
Observe that for the integral
\[\mathscr{I}_{j,k,\ell}^{I}=\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}\int\limits_ {\{z:|x^{\prime}|\geq 2^{-k+2}|z|\}}|K(z)|\sup\limits_{|z^{\prime}|\lesssim 2^{-j-k}|y|}|X _{z,\ell}(\Phi((2^{-k}\cdot z)^{-1}z^{\prime}x^{\prime})|dx^{\prime}\,dz,\]
the integral with respect to \(dz\) is computed on the zone \(\{z:|x^{\prime}|\geq 2^{-k+2}|z|\}\), where one has \(-|x^{\prime}|/4\leq-2^{-k}|z|\). On the other hand for the integral with respect to \(dx^{\prime}\), on the region \(\{x^{\prime}:|x^{\prime}|>2^{1-j-k}|y|\}\), one has that \(|x^{\prime}|/2>2^{-j-k}|y|\). In consequence, for some constant \(0<C<c\), where \(c\) is the universal constant in (3.37), and then independent of \(j\) and \(k\), one has that
\[|(2^{-k}\cdot z)^{-1}z^{\prime}x^{\prime}|\geq|x^{\prime}|-|z^{ \prime}|-2^{-k}|z| \geq C(|x^{\prime}|-2^{-j-k}|y|-\frac{1}{4}|x^{\prime}|)\] \[=C(|x^{\prime}|/2-2^{-j-k}|y|+|x^{\prime}|/2-\frac{1}{4}|x^{ \prime}|)\] \[>C(|x^{\prime}|/2-\frac{1}{4}|x^{\prime}|)\] \[=C\frac{|x^{\prime}|}{4}.\]
Then, estimating again \(|X_{z,\ell}(\Phi((2^{-k}\cdot z)^{-1}z^{\prime}x^{\prime})|\leq C_{L}(1+|((2^{ -k}\cdot z)^{-1}z^{\prime}x^{\prime})|)^{-L}\), with \(L\) to be determined later, we have
\[\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}\int\limits_{\{z:|x^{\prime }|\geq 2^{-k+2}|z|\}}|K(z)|\sup\limits_{|z^{\prime}|\lesssim 2^{-j-k}|y|}|X_{z,\ell}( \Phi((2^{-k}\cdot z)^{-1}z^{\prime}x^{\prime})|dx^{\prime}\,dz\] \[\leq\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}\int\limits_{\{z:|x^{ \prime}|\geq 2^{-k+2}|z|\}}|K(z)|\sup\limits_{|z^{\prime}|\lesssim 2^{-j-k}|y|}C_{L}(1+|((2^{ -k}\cdot z)^{-1}z^{\prime}x^{\prime})|)^{-L}dx^{\prime}\,dz\] \[\leq\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}\int\limits_{G}|K(z)| dzC_{L}(1+\frac{1}{4}|x^{\prime}|)^{-L}dx^{\prime}.\]
Then, with \(L=Q+1+2\nu_{\ell}\), we have
\[\begin{split}\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}\int\limits_{G}|K(z)|dz\,C_{L}(1+\frac{1}{4}|x^{\prime}|)^{-L}dx^{\prime}&\lesssim\|K\|_{L^{1}(G)}\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}C_{L}(1+\frac{1}{4}|x^{\prime}|)^{-Q-1}(1+|x^{\prime}|)^{-2\nu_{\ell}}dx^{\prime}\\ &\lesssim\|K\|_{L^{1}(G)}(2^{1-j-k}|y|)^{-2\nu_{\ell}}\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}C_{L}(1+\frac{1}{4}|x^{\prime}|)^{-Q-1}dx^{\prime}\\ &\leq\|K\|_{L^{1}(G)}(2^{1-j-k}|y|)^{-2\nu_{\ell}}\int\limits_{G}C_{L}(1+\frac{1}{4}|x^{\prime}|)^{-Q-1}dx^{\prime}.\end{split}\]
In consequence we have that
\[\mathscr{I}^{I}_{j,k,\ell}\lesssim\|K\|_{L^{1}(G)}(2^{1-j-k}|y|)^{-2\nu_{\ell}}.\]
Now, note that when \(2^{-j}|y|\geq 2R,\) and for \(|x^{\prime}|<2^{-k+2}|z|,\) (and then \(|x^{\prime}|<2^{-k+2}R,\) since \(z\) belongs to the support of \(K\)) we have that
\[\forall 0<|y|\leq 1,\ 2^{-j}|y|\geq 2R,\,\{(x^{\prime},z):|x^{\prime}|>2^{1-j-k} |y|\}\cap\{(x^{\prime},z):|x^{\prime}|<2^{-k+2}|z|\}=\emptyset. \tag{3.40}\]
This implies that when \(0<|y|\leq 1,\ 2^{-j}|y|\geq 2R,\)
\[\mathscr{I}^{II}_{j,k,\ell}=\int\limits_{|x^{\prime}|>2^{1-j-k}|y|}\int\limits_{\{z:|x^{\prime}|<2^{-k+2}|z|\}}|K(z)|\sup_{|z^{\prime}|\lesssim 2^{-j-k}|y|}|X_{z,\ell}(\Phi((2^{-k}\cdot z)^{-1}z^{\prime}x^{\prime}))|dx^{\prime}\,dz=0. \tag{3.41}\]
Summarising, we have proved the following estimates:
* \(\forall 0<|y|\leq 1,\,\forall j,k\in\mathbb{Z}\) \[I_{j,k}\lesssim\sum_{\ell=1}^{n}2^{-(j+k)\nu_{\ell}}|y|^{\nu_{\ell}}\|K\|_{L^{1 }(G)}.\] (3.42) Moreover, in view of (3.35) we have that \(\forall 0<|y|\leq 1,\,\forall j,k\in\mathbb{Z}\) \[I_{j,k}\lesssim\sum_{\ell=1}^{n}\min\{1,\,2^{-(j+k)\nu_{\ell}}|y|^{\nu_{\ell}} \}\|K\|_{L^{1}(G)}.\] (3.43)
* \(\forall 0<|y|\leq 1,\ 2^{-j}|y|\geq 2R,\) \[I_{j,k}\lesssim\sum_{\ell=1}^{n}2^{-(j+k)\nu_{\ell}}|y|^{\nu_{\ell}}(2^{1-j-k} |y|)^{-2\nu_{\ell}}\|K\|_{L^{1}(G)}\lesssim\sum_{\ell=1}^{n}(2^{-(j+k)\nu_{ \ell}}|y|^{\nu_{\ell}})^{-1}\|K\|_{L^{1}(G)}.\] (3.44)
Now, let us use these inequalities to estimate the Hörmander condition
\[\sup_{|y|\leq 1}\int\limits_{|x|>2|y|}|\tilde{T}_{k}\delta(y^{-1}x)-\tilde{T}_{k} \delta(x)|dx\leq\sum_{j\in\mathbb{Z}}I_{j,k}. \tag{3.45}\]
Indeed, we consider the cases where \(2^{-k}>(2R)^{-1}\) and where \(2^{-k}\leq(2R)^{-1}.\)
* Case (i): \(2^{-k}>(2R)^{-1}.\) Observe that when \(\frac{1}{2^{-k}|y|}<2^{-j}\leq 2R/|y|,\) we have that \(1\leq 2^{-j-k}|y|.\) In this situation, the fact that \(\min\{1,2^{-j-k}|y|\}=1,\) and the estimate in (3.43) imply that \[\sum_{j}I_{j,k} \lesssim\sum_{\ell=1}^{n}\|K\|_{L^{1}(G)}\left(\sum_{2^{-j}\leq \frac{1}{2^{-k}|y|}}2^{-(j+k)\nu_{\ell}}|y|^{\nu_{\ell}}+\sum_{\frac{1}{2^{-k}| y|}<2^{-j}\leq\frac{2R}{|y|}}1+\sum_{2^{-j}\geq\frac{2R}{|y|}}(2^{-(j+k)\nu_{ \ell}}|y|^{\nu_{\ell}})^{-1}\right)\] \[\lesssim\sum_{\ell=1}^{n}\|K\|_{L^{1}(G)}(\log(R)+|k|)\lesssim\| K\|_{L^{1}(G)}(\log(R)+|k|)\lesssim_{n,K}\|K\|_{L^{1}(G)}(1+|k|).\]
Indeed, observe that the sums \(\sum_{2^{-j}\leq\frac{1}{2^{-k}|y|}}2^{-(j+k)\nu_{\ell}}|y|^{\nu_{\ell}}\) and \(\sum_{2^{-j}\geq\frac{2R}{|y|}}(2^{-(j+k)\nu_{\ell}}|y|^{\nu_{\ell}})^{-1}\) can be handled as geometric sums in order to get \[\sum_{2^{-j}\leq\frac{1}{2^{-k}|y|}}2^{-(j+k)\nu_{\ell}}|y|^{\nu_{\ell}}=2^{-k \nu_{\ell}}|y|^{\nu_{\ell}}\sum_{2^{-j}\leq\frac{1}{2^{-k}|y|}}2^{-j\nu_{\ell} }\asymp 2^{-k\nu_{\ell}}|y|^{\nu_{\ell}}(2^{-k\nu_{\ell}}|y|^{\nu_{\ell}})^{-1}=1,\] and \[\sum_{2^{-j}\geq\frac{2R}{|y|}}(2^{-(j+k)\nu_{\ell}}|y|^{\nu_{\ell}})^{-1} =(2^{-k\nu_{\ell}}|y|^{\nu_{\ell}})^{-1}\sum_{2^{j}\leq\frac{|y|}{2 R}}2^{j\nu_{\ell}}\leq(2^{-k\nu_{\ell}}|y|^{\nu_{\ell}})^{-1}\sum_{2^{j}\leq \frac{|y|}{2^{k}}}2^{j\nu_{\ell}}\] \[\lesssim(2^{-k\nu_{\ell}}|y|^{\nu_{\ell}})^{-1}2^{-k\nu_{\ell}}|y |^{\nu_{\ell}}=1.\] As for the sum \(\sum_{\frac{1}{2^{-k}|y|}<2^{-j}\leq\frac{2R}{|y|}}1\), let \(j_{0}\in\mathbb{Z}\) and \(\ell_{0}\in\mathbb{N}\), be such that \[2^{-j_{0}}\leq\frac{1}{2^{-k}|y|}\leq 2^{-j_{0}+1},\ 2^{-j_{0}+\ell_{0}}\leq \frac{2R}{|y|}\leq 2^{-j_{0}+\ell_{0}+1}.\] Then \[\sum_{\frac{1}{2^{-k}|y|}<2^{-j}\leq\frac{2R}{|y|}}1\lesssim|\{j\in\{0,1,\cdots,\ell_{0}\}:2^{-j_{0}+j}\in[2^{-j_{0}},2^{-j_{0}+\ell_{0}}]\}|=\ell_{0}+1 \lesssim\ell_{0}.\] Now, to estimate \(\ell_{0}\) in terms of \(R\) and of \(|k|\), note that \(2^{-j_{0}}\asymp\frac{1}{2^{-k}|y|}\), and \(2^{-j_{0}+\ell_{0}}\asymp\frac{2R}{|y|}\). We have that \[2^{-j_{0}-k}\asymp\frac{1}{|y|}\asymp\frac{2^{-j_{0}+\ell_{0}}}{2R}.\] From this estimate we deduce that \(2^{\ell_{0}}\asymp 2^{-k}\times 2R\lesssim 2^{|k|+\log_{2}(R)}\), from where we have that \(\ell_{0}\lesssim\log(R)+|k|\lesssim_{R}(1+|k|)\).
* Case (ii): \(2^{-k}\leq(2R)^{-1}\). Again we divide the sum, but this time in two terms as follows \[\sum_{j}I_{j,k} \lesssim\sum_{\ell=1}^{n}\|K\|_{L^{1}(G)}\left(\sum_{2^{-j}\leq \frac{1}{2^{-k}|y|}}2^{-(j+k)\nu_{\ell}}|y|^{\nu_{\ell}}+\sum_{2^{-j}>\frac{1}{ 2^{-k}|y|}}(2^{-(j+k)\nu_{\ell}}|y|^{\nu_{\ell}})^{-1}\right)\] \[\lesssim\sum_{\ell=1}^{n}\|K\|_{L^{1}(G)}=n\|K\|_{L^{1}(G)}.\] All the estimates above prove that \[\sup_{|y|\leq 1}\int\limits_{|x|>2|y|}|\tilde{T}_{k}\delta(y^{-1}x)-\tilde{T}_ {k}\delta(x)|dx\lesssim_{n,K}\|K\|_{L^{1}(G)}(1+|k|),\] (3.46) as desired. So, for any \(k\in\mathbb{Z}\) we have proved that \(\tilde{T}_{k}:L^{2}(G)\to L^{2}(G)\) and \(\tilde{T}_{k}:L^{1}(G)\to L^{1,\infty}(G)\) are bounded operators with the operator norms satisfying the bounds \[\|\tilde{T}_{k}\|_{\mathscr{B}(L^{2}(G))}\lesssim 2^{-a|k|},\ \|\tilde{T}_{k}\|_{ \mathscr{B}(L^{1}(G),L^{1,\infty}(G))}\lesssim(1+|k|).\] (3.47) By using the Marcinkiewicz interpolation theorem, we have for all \(1<p<2\), the bound \[\|\tilde{T}_{k}\|_{\mathscr{B}(L^{p}(G))}\lesssim 2^{-a|k|\theta_{p}}(1+|k|)^{1-| \theta_{p}|},\] (3.48)
where \(1/p=\theta_{p}/2+(1-\theta_{p})\). Consequently,
\[\|T\|_{\mathscr{B}(L^{p}(G))}\lesssim\sum_{k\in\mathbb{Z}}\|\tilde{T}_{k}\|_{ \mathscr{B}(L^{p}(G))}\lesssim\sum_{k\in\mathbb{Z}}2^{-a|k|\theta_{p}}(1+|k|)^{ 1-|\theta_{p}|}<\infty, \tag{3.49}\]
for all \(1<p<2.\) The boundedness of \(T\) on \(L^{p}(G)\) for all \(2<p<\infty\) now follows from a duality argument.
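As a purely numerical sanity check of the two counting arguments used above (illustrative only; the values of \(R\), \(|y|\), \(\nu_{\ell}\), \(a\) and \(\theta_{p}\) below are arbitrary choices, not part of the proof), the following Python snippet evaluates the dyadic sum of Case (i) and a partial sum of the series in (3.49):

```python
import math

def case_i_sum(k, y=0.5, R=4.0, nu=1.0, jmax=300):
    """Sum over dyadic scales in Case (i): two geometric tails plus a middle
    regime of roughly log(R) + |k| scales that each contribute 1."""
    total = 0.0
    for j in range(-jmax, jmax + 1):
        t = 2.0 ** (-(j + k)) * y                      # t = 2^{-(j+k)} |y|
        if 2.0 ** (-j) <= 1.0 / (2.0 ** (-k) * y):
            total += t ** nu                           # decaying tail, t <= 1
        elif 2.0 ** (-j) >= 2.0 * R / y:
            total += t ** (-nu)                        # decaying tail, t > 1 in Case (i)
        else:
            total += 1.0                               # middle regime
    return total

# Case (i) requires 2^{-k} > (2R)^{-1}, i.e. k < log2(2R) = 3 for R = 4.
for k in [2, 0, -4, -8, -16]:
    print(f"k = {k:3d}:  sum ~ {case_i_sum(k):6.2f}")  # grows like 1 + |k|

# Summability of the interpolated norms in (3.49):
a, theta_p = 0.5, 0.4
series = sum(2.0 ** (-a * abs(k) * theta_p) * (1 + abs(k)) ** (1 - theta_p)
             for k in range(-2000, 2001))
print(f"partial sum of the series in (3.49): {series:.3f}  (finite)")
```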
### The probabilistic argument
Now we present a lemma about the square function associated with the family \(K_{j}\) of kernels from Lemma 3.1.
**Lemma 3.2**.: _Let \(K\) be a distribution as in Lemma 3.1. Then the square function_
\[(\mathscr{G}(K))f(x):=\left(\sum_{j\in\mathbb{Z}}|f*K_{j}(x)|^{2}\right)^{ \frac{1}{2}} \tag{3.50}\]
_is a bounded operator from \(L^{p}(G)\) to \(L^{p}(G)\) for all \(1<p<\infty.\)_
Proof.: Let us give the classical probabilistic argument involving the Rademacher functions. So, given a sequence \(\varepsilon=\{\varepsilon_{j}\}_{j\in\mathbb{Z}},\) where \(\varepsilon_{j}=\pm 1,\) consider the operator
\[T_{\varepsilon}f(x)=\sum_{j\in\mathbb{Z}}\varepsilon_{j}f*K_{j}. \tag{3.51}\]
In view of Lemma 3.1, \(T_{\varepsilon}\) is bounded on \(L^{p}(G)\) for all \(1<p<\infty,\) and
\[\|T_{\varepsilon}\|_{\mathscr{B}(L^{p})}\leq C_{p}, \tag{3.52}\]
where \(C_{p}\) is independent of the choice of the sequence \(\varepsilon.\) Let us consider the orthonormal system on \(L^{2}[0,1]\) determined by the Rademacher functions \(r_{j},\)\(j\in\mathbb{Z}.\) These are functions defined via \(r_{j}(t)=r_{0}(2^{j}t),\) where \(r_{0}\) is defined via
\[r_{0}(s)=-1,\,0\leq s<1/2,\ r_{0}(s)=1,\,1/2\leq s\leq 1. \tag{3.53}\]
The Rademacher functions \(r_{j},\)\(j\in\mathbb{Z},\) satisfy the Khintchine inequality (see e.g. Grafakos [22, Appendix C.1]):
* If \(F=\sum_{j}a_{j}r_{j}\in L^{2}[0,1],\) then there exist positive constants \(A_{p},B_{p}>0\) such that \[A_{p}\|F\|_{L^{p}[0,1]}\leq\left(\sum_{j\in\mathbb{Z}}|a_{j}|^{2}\right)^{ \frac{1}{2}}\leq B_{p}\|F\|_{L^{p}[0,1]}.\] (3.54)
Let \(x\in G.\) If we apply this property with \(F(t)=\sum_{j\in\mathbb{Z}}f*K_{j}(x)r_{j}(t),\) then
\[\left(\sum_{j\in\mathbb{Z}}|f*K_{j}(x)|^{2}\right)^{\frac{p}{2}}\leq B_{p}^{p} \int\limits_{[0,1]}\left|\sum_{j\in\mathbb{Z}}f*K_{j}(x)r_{j}(t)\right|^{p}dt.\]
Now, integrating both sides of this inequality with respect to the Haar measure \(dx,\) and observing that for any \(t\in[0,1]\) the operator \(T_{\{r_{j}(t)\}}\) is of the form \(T_{\varepsilon}\), the desired inequality
\[\|(\mathscr{G}(K))f\|_{L^{p}(G)}\leq C_{p}\|f\|_{L^{p}(G)}, \tag{3.55}\]
now follows from (3.52). The proof of Lemma 3.2 is complete.
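The Khintchine inequality (3.54) is the only probabilistic ingredient of the argument above. As a purely numerical illustration (the coefficients, the exponent \(p\) and the sample size below are arbitrary choices, not part of the proof), one can check with a few lines of Python that the ratio between the \(\ell^{2}\) norm of the coefficients and the \(L^{p}\) norm of the randomized sum stays bounded:

```python
import random
import math

def khintchine_check(coeffs, p=1.5, trials=20000, seed=0):
    """Monte Carlo comparison of (E|sum_j eps_j a_j|^p)^{1/p} with the l2 norm
    of the coefficients, for Rademacher signs eps_j = +/-1."""
    rng = random.Random(seed)
    moment = 0.0
    for _ in range(trials):
        s = sum(a if rng.random() < 0.5 else -a for a in coeffs)
        moment += abs(s) ** p
    lp = (moment / trials) ** (1.0 / p)
    l2 = math.sqrt(sum(a * a for a in coeffs))
    return lp, l2

coeffs = [1.0 / (1 + j) for j in range(30)]
lp, l2 = khintchine_check(coeffs)
print(f"(E|F|^p)^(1/p) = {lp:.3f},  l2 norm = {l2:.3f},  ratio = {l2 / lp:.3f}")
# The ratio stays within fixed constants A_p and B_p independently of the
# coefficients, which is exactly the content of (3.54).
```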
### Proof of the main theorem
Let \(1<p\leq\infty.\) In this subsection, we prove the \(L^{p}(G)\)-boundedness of the dyadic maximal function (1.3) associated with a finite Borel measure \(d\sigma\) with compact support on \(G.\) We assume that for some \(a>0\) the group Fourier transform of \(d\sigma\) satisfies the growth estimate in (1.5).
Proof of Theorem 1.1.: By the Riesz representation theorem we have that \(d\sigma=Kdx,\) where \(K\) is a distribution with compact support on \(G\) that satisfies the group Fourier transform inequalities: \(\sup_{\pi\in\widehat{G}}\|\pi(\mathcal{R})^{\pm\frac{a}{\nu}}\widehat{K}(\pi) \|_{\mathrm{op}}<\infty.\) Note that
\[\mathcal{M}_{D}^{d\sigma}f(x) =\sup_{j\in\mathbb{Z}}\left|\int\limits_{G}f((2^{j}\cdot y)^{-1} x)d\sigma(y)\right|=\sup_{j\in\mathbb{Z}}\left|\int\limits_{G}f((2^{j}\cdot y)^{-1} x)K(y)dy\right|\] \[=\sup_{j\in\mathbb{Z}}\left|\int\limits_{G}f(y^{-1}x)K_{j}(y)dy \right|,\,\forall y\in G,\,K_{j}(y):=2^{-jQ}K(2^{-j}y).\]
In view of Lemma 3.2, we have that
\[\mathcal{M}_{D}^{d\sigma}f(x)=\sup_{j\in\mathbb{Z}}|f*K_{j}(x)|\leq(\mathscr{ G}(K))f(x):=\left(\sum_{j\in\mathbb{Z}}|f*K_{j}(x)|^{2}\right)^{\frac{1}{2}}, \tag{3.56}\]
and in consequence we have proved the boundedness of \(\mathcal{M}_{D}^{d\sigma}\) on \(L^{p}(G)\) for all \(1<p<\infty.\) The boundedness of \(\mathcal{M}_{D}^{d\sigma}\) on \(L^{\infty}(G)\) clearly holds, and in that case the \(L^{\infty}(G)\)-operator norm of \(\mathcal{M}_{D}^{d\sigma}\) is bounded by the total variation of the finite measure \(d\sigma.\) The proof of Theorem 1.1 is complete.
|
2307.05116 | Topological interface states -- a possible path towards a Landau-level
laser in the THz regime | Volkov-Pankratov surface bands arise in smooth topological interfaces, i.e.
interfaces between a topological and a trivial insulator, in addition to the
chiral surface state imposed by the bulk-surface correspondence of topological
materials. These two-dimensional bands become Landau-quantized if a magnetic
field is applied perpendicular to the interface. I show that the energy scales,
which are typically in the 10-100 meV range, can be controlled both by the
perpendicular magnetic field and the interface width. The latter can still be
varied with the help of a magnetic-field component in the interface. The Landau
levels of the different Volkov-Pankratov bands are optically coupled, and their
arrangement may allow one to obtain population inversion by resonant optical
pumping. This could serve as the elementary brick of a multi-level laser based
on Landau levels. Moreover, the photons are absorbed and emitted either
parallel or perpendicular to the magnetic field, respectively in the Voigt and
Faraday geometry, depending on the Volkov-Pankratov bands and Landau levels
involved in the optical transitions. | Mark O. Goerbig | 2023-07-11T08:50:52Z | http://arxiv.org/abs/2307.05116v4 | # Topological interface states - a possible path towards a Landau-level laser in the THz regime
###### Abstract
Volkov-Pankratov surface bands arise in smooth topological interfaces, _i.e._ interfaces between a topological and a trivial insulator, in addition to the chiral surface state imposed by the bulk-surface correspondence of topological materials. These two-dimensional bands become Landau-quantized if a magnetic field is applied perpendicular to the interface. I show that the energy scales, which are typically in the \(10-100\) meV range, can be controlled both by the perpendicular magnetic field and the interface width. The latter can still be varied with the help of a magnetic-field component in the interface. The Landau levels of the different Volkov-Pankratov bands are optically coupled, and their arrangement may allow one to obtain population inversion by resonant optical pumping. This could serve as the elementary brick of a multi-level laser based on Landau levels. Moreover, the photons are absorbed and emitted either parallel or perpendicular to the magnetic field, respectively in the Voigt and Faraday geometry, depending on the Volkov-Pankratov bands and Landau levels involved in the optical transitions.
## I Introduction
Landau levels (LLs), which arise due to the quantization of the electrons' energy in a strong magnetic field, have been regularly proposed as a promising platform for a frequency-tunable laser in the THz regime [1; 2; 3; 4; 5]. Indeed, upon a putative population inversion between the LLs \(n\) and \(n+1\) in parabolic bands, one may expect cyclotron emission with a typical frequency \(\Omega_{n+1,n}=\omega_{c}\) given by the cyclotron frequency \(\omega_{c}=eB/m_{B}\), which is directly controlled by the strength of the magnetic field \(B\) and the band mass \(m_{B}\). In spite of this conceptually appealing proposal, the path to the realization of a working LL laser is barred by serious obstacles that mainly concern population inversion. The latter requires rather long-lived electrons in the excited LL, but their lifetime is strongly reduced by non-radiative recombinations, namely Auger processes that are prominent due to the equidistant LL separation [6] (for a detailed discussion of these processes, see Ref. [7]). In such processes, an electron in the excited LL \(n+1\) can be promoted due to electron-electron interactions to the LL \(n+2\) while the required energy is provided by a simultaneous deexcitation of another electron from \(n+1\) to \(n\). Instead of using one excited electron to emit a photon of frequency \(\omega_{c}\), two electrons in the LL \(n+1\) are thus lost without emission of any photon. Another obstacle, equally related to equidistant LLs, is the reabsorption of cyclotron light due to the transition \((n+1)\to(n+2)\), which is resonant with the \((n+1)\to n\) transition used in the emission of light [7; 8; 9].
Soon after the isolation of graphene, physicists explored this material in cyclotron-emission experiments in the perspective of realizing a LL laser [4; 5; 10]. Due to the linearly dispersing bands of graphene electrons in the vicinity of charge neutrality, the LL spectrum is given by \(E_{n}=\pm\hbar(v/l_{B})\sqrt{2n}\), in terms of the Fermi velocity \(v\simeq 10^{6}\) m/s and the magnetic length \(l_{B}=\sqrt{\hbar/eB}\simeq 26\,\mathrm{nm}/\sqrt{B[\mathrm{T}]}\), _i.e._ the levels are no longer equidistant. While the orders of magnitude with a fundamental gap of \(\hbar\Omega_{1,0}\sim 100\) meV for magnetic fields \(B\sim 10\) T are promising for possible THz applications, Auger processes remain a relevant source of non-radiative recombination processes also in these relativistic systems [11]. For example, while the \(1\to 0\) transition is no longer in resonance with the neighboring \(2\to 1\) transition, it is in resonance with the transition \(4\to 1\) due to the square-root dependence of the LL on the level index \(n\)[7; 12]. Furthermore, it has been shown that the optical phonon responsible for the G band in graphene (at \(\sim 200\) meV) also enhances decay processes that are detrimental to population inversion [13]. The drawback of resonant transitions and enhanced Auger processes can to some extent be healed, _e.g._ in gapless HgTe/CdTe quantum wells, where the low-energy electrons are described in terms of so-called Kane fermions. While their zero-field spectrum is similar to that of massless Dirac fermions, LLs with even indices are absent in the spectrum so that some transitions are absent, such as the above-mentioned transition \(4\to 1\).
An extremely interesting route towards the realization of a LL laser is the use of Dirac materials with a (mass) gap \(\Delta\) that is on the same order of magnitude as the typical LL spacing, _i.e._ in the \(100\) meV range, for systems with a characteristic velocity parameter of \(v\simeq 10^{6}\) m/s. In this case, the LL spectrum is given by
\[E_{\lambda,n}=\lambda\sqrt{\Delta^{2}+2\hbar^{2}v^{2}n/l_{B}^{2}}, \tag{1}\]
where \(\lambda=\pm\) is the band index. Indeed, if \(\Delta\sim\hbar v/l_{B}\), the LL spectrum is neither (approximately) linear in \(n\) and \(B\) as it would be in the limit \(\Delta\gg\hbar v/l_{B}\) nor does
it follow the square-root dependence of graphene in the opposite limit \(\Delta\ll\hbar v/l_{B}\). In this case, the absence of simultaneous resonant transitions suppresses both reabsorption and non-radiative Auger scattering. First encouraging results in this direction have been obtained in gapped HgTe/CdTe quantum wells [9]. Another system in which massive Dirac fermions occur is the interface of a topological and a trivial insulator, in the form of Volkov-Pankratov (VP) states [14; 15; 16]. The bulk-surface correspondence for topological materials indeed enforces the occurrence of a massless chiral state at such an interface, but it has been shown that the interface spectrum is much richer in systems with smooth interfaces, _e.g._ when the gap changes over a certain distance \(\ell\) that characterizes the interface width and that is larger than an intrinsic length \(\lambda_{C}=\hbar v/\Delta\). In smooth interfaces between a topological and a trivial insulator, one finds a whole family of surface states the spectrum of which is indeed given by [16]
\[\epsilon_{m}(\mathbf{q})\simeq\lambda\hbar v\sqrt{\mathbf{q}^{2}+2m/l_{S}^{2}}. \tag{2}\]
Here, \(\mathbf{q}=(q_{x},q_{y})\) is the two-dimensional (2D) wave vector in the interface, \(m\) denotes the index of the surface band, and \(l_{S}=\sqrt{\ell\lambda_{C}}\) is a characteristic length determining the extension of the interface states in the \(z\) direction perpendicular to the interface. Equation (2) is indeed valid as long as the energy of the surface bands at \(\mathbf{q}=0\) is smaller than the bulk gap, \(\sqrt{2m}\hbar v/l_{S}\leq\Delta\). The latter condition is equivalent to requiring that the interface width \(\ell\) be larger than \(m\) times the intrinsic length \(\lambda_{C}\) [16]. The \(m=0\) surface state is precisely the chiral state that survives in the abrupt limit, \(\ell\to 0\), while the VP states (for \(m\neq 0\)) disappear into the continuum of bulk states as soon as \(\ell<\lambda_{C}\). Notice that the formation of VP states is a universal property of topological materials that has been studied not only in topological insulators [16; 17; 18; 19; 20; 21], but also in Weyl semimetals [22; 23], graphene [24], and topological superconductors [25].
Very recently, inter-VP transitions have been measured by means of magneto-optical spectroscopy in Pb\({}_{1-x}\)Sn\({}_{x}\)Se crystals [26], in which the Sn concentration determines whether the system is a trivial or a topological (crystalline) insulator [27; 28; 29]. Moreover, the concentration determines the size of the bulk gap, so that smooth interfaces may be obtained by molecular-beam epitaxy (MBE) in which the Sn concentration is smoothly varied during the growth process, and where the absolute band gap in the topological regime can be designed to be identical to that in the trivial insulator [26]. This allows for great versatility in the fabrication of interfaces of various widths and thus of systems with specially designed fundamental gaps
\[\Delta_{\text{VP}}=\sqrt{2}\hbar v/l_{S}=\sqrt{2}\Delta\sqrt{\frac{\lambda_{C }}{\ell}}=\sqrt{\frac{2\hbar v\Delta}{\ell}} \tag{3}\]
between the \(m=1\) VP and the chiral (\(m=0\)) surface states.
In the present paper, I argue that smooth topological interfaces, such as in the above-mentioned Pb\({}_{1-x}\)Sn\({}_{x}\)Se crystals, may be extremely promising systems for the realization of long-lived population inversion if a magnetic field is applied perpendicular to the interface, quantizing the 2D electronic motion in the interface into LLs. The main reason for this expectation is the fact that VP bands provide us with several families of LLs that can to some extent be brought into close energetic proximity with LLs of the chiral surface band. This would allow for devices similar to three- or four-level lasers, in which population inversion could be achieved more easily than in the usual LL setup. Furthermore, optical pumping and radiative deexcitation can be chosen to happen in different directions via an intelligent choice of the involved transitions. Indeed, while the optical selection rules in the Faraday geometry impose that the emitted or absorbed photons propagate in the direction of the magnetic field for a transition coupling the LLs \(n\) and \(n\pm 1\), it has been shown previously [30] that such transitions must obey an optical selection rule \(m\to m\) for the VP states. The selection rules are inverted in the Voigt geometry, where the emitted or absorbed photon propagates in a direction perpendicular to the magnetic field. The underlying reason for these selection rules and their geometry dependence is an intriguing analogy between the spatially changing gap parameter and LL quantization. Indeed, the spatially varying gap parameter can be viewed as a fake magnetic field that is oriented in the plane of the interface, and the characteristic length \(l_{S}\) plays the role of an effective magnetic length. Via an intelligent choice of the geometry of a cavity hosting the topological material and the involved transitions, one may therefore expect to obtain a strong cyclotron emission in the direction of the interface while pumping the system with photons propagating perpendicular to the interface, or _vice versa_.
## II Volkov-Pankratov states and optical selections in a magnetic field
Let us first review some basic features of VP states and their coupling to light in the presence of a magnetic field along the lines of Ref. [30]. We consider an interface between a trivial insulator in the lower part of the device (\(z<-\ell\)) and a topological one in the upper part (\(z>\ell\)) (see inset of Fig. 1). Due to the magnetic field, the VP bands (2) get quantized into LLs whose spectrum reads (for \(n\neq 0\))
\[E_{\lambda,m,n\neq 0}=\lambda\hbar v\sqrt{\frac{2|m|}{l_{S}^{2}}+\frac{2|n|}{l _{B}^{2}}}, \tag{4}\]
where we have considered the magnetic field to be oriented perpendicular to the interface (in the \(z\)-direction). For notational simplicity, we merge the band index \(\lambda\) from now on with the VP and LL indices so that
\((-m,-n)\) corresponds to the \(n\)-th LL in the \(m\)-th VP band of negative energy (\(\lambda=-\)), whence the modulus of the indices in the spectrum (4) to avoid confusion. Due to the parity anomaly, the above spectrum is only valid for LLs with an index \(n\neq 0\), while the \(n=0\) LLs of the VP bands stick either to the bottom of the positive-energy bands (\(\xi=+\)) or to the top of the negative-energy VP state (\(\xi=-\))
\[E_{m,n=0}=\xi\hbar v\sqrt{\frac{2|m|}{l_{S}^{2}}}, \tag{5}\]
depending on the chirality index \(\xi\). The latter can be changed if we change the order between the topological and the trivial insulator (interface between a topological insulator in the lower part and a trivial one in the upper part), and it can also be altered easily by changing the orientation of the magnetic field. The LL spectrum for the \(m=0\) and the \(m=1\) VP states [both for the conduction (\(m=+1\)) and the valence band (\(m=-1\))] are shown in Fig. 1. Notice that the surface-state-width parameter \(l_{S}\) can be decreased effectively with the help of an _inplane_ magnetic field \(B_{\parallel}\), \(l_{S}(B_{\parallel}=0)^{-4}=1/\ell^{2}\lambda_{C}^{2}\rightarrow l_{S}^{-4}=l _{S}(B_{\parallel}=0)^{-4}+(eB_{\parallel}/\hbar)^{2}\) so that the effective energy separation between the VP states, given by Eq. (3) is increased to [30]
\[\Delta_{\rm VP}=\sqrt{\frac{2\hbar v\Delta}{\ell}}\left(1+\frac{e^{2}v^{2}B_{ \parallel}^{2}\ell^{2}}{\Delta^{2}}\right)^{1/4}. \tag{6}\]
The writing of Eq. (4) unveils the reminiscence of LLs and VP states. Indeed, if we linearize the gap inversion over the smooth interface by a linear function connecting a gap parameter of \(+\Delta\) in the trivial insulator (at \(z<-\ell\)) and \(-\Delta\) in the topological insulator (at \(z>\ell\)), _i.e._\(-\Delta z/\ell\), the system may be mapped to the LL problem of massive Dirac fermions [16]. Within this analogy, the variation of the gap parameter in the \(z\)-direction may be viewed as a vector potential that stems from a "fake" magnetic field oriented in the interface, while the physical magnetic field is oriented in the \(z\)-direction. Notice furthermore that the above description can easily be generalized to a situation where the gap in the topological insulator is not of the same size as that in the trivial one [16], in which case the effective interface width \(l_{S}\) is determined by an average between the two gaps. The analogy between interface width and magnetic field finally yields a physical understanding of the optical selection rules between the levels \((m,n)\) and \((m^{\prime},n^{\prime})\), where henceforth the first index indicates the VP band and the second one the physical LL. In the Faraday geometry, in which the absorbed or emitted photon propagates in the direction of the magnetic field, angular-momentum conservation imposes that the only optically active transitions involve adjacent LL indices, \(n\to n^{\prime}=\pm(n\pm 1)\), regardless of the band index \(\lambda\). This needs to be contrasted to the Voigt geometry, where the photon propagates in the plane perpendicular to the magnetic field and where the LL index remains unchanged \(n\to n^{\prime}=\pm n\). Since the fake magnetic field that yields the VP bands is oriented in the interface, Voigt and Faraday geometry are inverted, and a photon propagating perpendicular to the interface couples VP bands with the same index (\(m\to m^{\prime}=\pm m\)) while a photon with a wave vector in the interface couples adjacent VP bands [\(m\to m^{\prime}=\pm(m\pm 1)\)]. As in the LL problem, the selection rules, which are summarized in the table above, do not depend on the band index. In both cases, VP states and LLs, it is the circular polarization of the photon determines which of the adjacent levels or bands are optically coupled.
## III Three-level scheme
Let us first illustrate schematically the different emission processes in terms of resonant (optical) pumping within a three-level picture to fix some basic ideas. In a first step, we consider the situation depicted in Fig. 2(a) where the LL energy scale \(\sqrt{2}\hbar v/l_{B}\) is slightly larger than the VP gap \(\Delta_{\rm VP}\) given in Eq. (3), _i.e._ the mag
Figure 1: Landau levels for VP surface states. The red levels correspond to the chiral surface state with \(m=0\), while the blue levels correspond to the \(m=\pm 1\) VP states. The sign indicates the band index \(\lambda\) here for notational convenience. As a consequence of the parity anomaly, the \(n=0\) LL is found only in the upper VP band (\(m=+1\)). The inset, adapted from Ref. [26], shows the setup of a smooth interface between a trivial insulator (PbSe) at \(z<-\ell\) and a topological insulator (Pb\({}_{0.76}\)Sn\({}_{0.24}\)Se) at \(z>\ell\). The magnetic field is oriented in the [001] direction perpendicular to the interface.
\begin{table}
\begin{tabular}{c||c|c} geometry & VP states & Landau levels \\ \hline \hline Faraday & \(m\rightarrow\pm m\) & \(\pm n\to n\pm 1\) \\ \hline Voigt & \(m\rightarrow\pm m\pm 1\) & \(n\rightarrow\pm n\) \\ \end{tabular}
\end{table}
Table 1: Optical selection rules in the Faraday (photon propagation perpendicular to the interface) and the Voigt geometry (photon propagation in the interface).
We show below in Sec. V that this situation can be easily achieved _e.g._ in MBE-grown Pb\({}_{1-x}\)Sn\({}_{x}\)Se crystals. In this case, the \(n=1\) LL of the chiral \(m=0\) surface state is slightly above the \(n=0\) level of the upper VP band with an index \(m=1\). In Fig. 2(a), we consider optical pumping in the Faraday geometry, where the light frequency is resonant with the \((m=0,n=0)\rightarrow(m=0,n=1)\) transition. If the target level is only slightly above the lowest LL of the \(m=1\) VP band, \((m=1,n=0)\), one may expect rapid non-radiative decay of the excited electrons to the latter level. These electrons may then decay to the zero-energy level \((m=0,n=0)\) by emitting light of the frequency \(\omega=\sqrt{2}v/l_{S}\) in the Voigt geometry, _i.e._ absorbed and emitted photons, even if they may be almost resonant, propagate in perpendicular directions. While the magnetic field does not then allow one to control the frequency of the transition, which is determined by the interface width \(\ell\), it allows us to bring the levels \((m=1,n=0)\) and \((m=0,n=1)\) into close energetic vicinity and thus to increase the transition rate between the two levels, which is proportional to
\[\Gamma\sim\frac{1/\tau}{1/\tau^{2}+2v^{2}(1/l_{B}-1/l_{S})^{2}}, \tag{7}\]
if we consider Lorentzian level broadening due to a dephasing time \(\tau\) [31]. For a typical value of \(\tau\sim 100\) fs, the level broadening is then on the order of a few meV. Notice, however, that the frequency of the emitted light may to some extent be varied with the help of an inplane magnetic field, according to Eq. (6).
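The role of the resonance condition can be illustrated numerically: the snippet below (with an assumed dephasing time \(\tau=100\) fs and an assumed surface width \(l_{S}=27\) nm) evaluates the Lorentzian factor of Eq. (7) as the magnetic length \(l_{B}\) is tuned through \(l_{S}\).

```python
import math

V_F = 5.0e5            # m/s, Fermi velocity
L_S = 27e-9            # m, effective surface width (assumed, cf. Sec. V)
TAU = 100e-15          # s, assumed dephasing time

def lorentzian_rate(B):
    """Lorentzian factor of Eq. (7), up to a global prefactor."""
    l_B = 26e-9 / math.sqrt(B)
    detuning = math.sqrt(2.0) * V_F * (1.0 / l_B - 1.0 / L_S)   # rad/s
    return (1.0 / TAU) / ((1.0 / TAU) ** 2 + detuning ** 2)

B_res = (26e-9 / L_S) ** 2          # field at which l_B = l_S (~0.93 T here)
for B in [0.5 * B_res, 0.9 * B_res, B_res, 1.1 * B_res, 2.0 * B_res]:
    print(f"B = {B:.2f} T -> rate / rate(B_res) = "
          f"{lorentzian_rate(B) / lorentzian_rate(B_res):.3f}")
# The rate drops as l_B is detuned away from l_S, so the magnetic field can be
# used to maximize the non-radiative feeding of the emitting level.
```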
Similarly, one may use the Voigt geometry for pumping the transition \((m=0,n=0)\rightarrow(m=1,n=0)\). If the latter is now slightly above the \((m=0,n=1)\) level [see Fig. 2(b)], _i.e._ for smaller magnetic fields with \(\sqrt{2}\hbar v/l_{B}<\Delta_{\rm VP}\), the \(n=1\) LL of the chiral surface band may be populated by non-radiative decay processes, and one may expect a population inversion between the \((m=0,n=0)\) and \((m=0,n=1)\) levels, with cyclotron emission at the fundamental frequency \(\omega_{C}=\sqrt{2}v/l_{B}\). As before, the emitted photon then propagates in a direction perpendicular to that of the absorbed photon, but Faraday and Voigt geometries are inverted.
## IV Four-level scheme
We now investigate a possible four-level scheme for population inversion, as shown in Fig. 3. For the sake of the argument, we consider the \(n=0\) LLs of the VP bands now to be situated in the negative-energy branch. As already mentioned, this can easily be achieved by switching the orientation of the magnetic field. Let us choose optical pumping by light in the Voigt geometry that is resonant with the transition \((m=0,n=-1)\rightarrow(m=1,n=1)\). In contrast to the three-level scheme discussed in the previous section, the target level is no longer in close vicinity of the level below it, \((m=0,n=1)\). However, both are optically coupled, and an electron can transit from \((m=1,n=1)\) to \((m=0,n=1)\) by emitting a photon in the Voigt geometry again. While this photon is sacrificed in the present scheme, its emission allows for an enhanced population of the \(n=1\) LL in the chiral surface band. This is particularly interesting since the transition \((m=0,n=1)\rightarrow(m=0,n=0)\) to the central zero-energy level, which we consider to be unpopulated or only sparsely populated, is resonant with the \((m=0,n=0)\rightarrow(m=0,n=-1)\) transition to the original level served in the pumping process. Under strong pumping and thus a strong depletion of the \((m=0,n=-1)\) level, it is therefore possible to emit _two_ photons at the cyclotron frequency.

Figure 2: Sketch of a three-level scheme for stimulated cyclotron emission. Ideally, the Fermi level is considered to be situated above the \((m=0,n=0)\) level so that the zero-energy level is filled. (a) Pumping in the Faraday geometry for \(\sqrt{2}\hbar v/l_{B}>\Delta_{\rm VP}\). The \((m=0,n=1)\) level is populated via pumping (dashed blue arrow) from the zero-energy \((m=0,n=0)\) level, and emission can take place in the Voigt geometry in the transition \((m=1,n=0)\rightarrow(m=0,n=0)\) (dashed red arrow). The level \((m=1,n=0)\) is rapidly populated by the \((n=1,m=0)\) level if the latter is almost resonant with the former, via a rapid non-radiative decay (dashed black arrow). (b) Similar process with pumping in the Voigt geometry for \(\sqrt{2}\hbar v/l_{B}<\Delta_{\rm VP}\). The pumping transition is \((m=0,n=0)\rightarrow(m=1,n=0)\) (dashed blue arrow), while emission takes place in the \((m=0,n=1)\rightarrow(m=0,n=0)\) transition (dashed red arrow), whose upper level is populated via rapid non-radiative decay processes (dashed black arrow) from the \((m=1,n=0)\) level, which is slightly higher in energy. The insets represent the device geometries, with the direction of propagation of the absorbed (dashed blue arrow) and emitted (dashed red arrow) photon, for the two configurations, respectively.
It is noteworthy that the above-mentioned resonant cyclotron transitions are also involved in non-radiative Auger processes. Such Auger processes have been shown to be detrimental to population inversion in GaAs and graphene. Here, however, this is not the case. Indeed, one of the two electrons that take part in the Auger process, where both electrons originally reside in the \((m=0,n=0)\) LL, is kicked back into the \((m=0,n=1)\) LL, thus maintaining the population inversion. While the electron that transits simultaneously to the level \((m=0,n=-1)\) is energetically lost, _i.e._ it does not emit a photon, the first electron emits another photon at the cyclotron frequency before it can take part in another Auger process or radiatively transit to \((m=0,n=-1)\).
## V Possible realization in Pb\({}_{1-x}\)Sn\({}_{x}\)Se crystals
While the above arguments are not restricted to a particular topological insulator, it is useful to discuss the orders of magnitude of probably the best-controlled system in which VP states occur, namely MBE-grown Pb\({}_{1-x}\)Sn\({}_{x}\)Se crystals [27; 28; 29]. As mentioned in the introduction, MBE growth allows one to obtain interfaces with a well-controlled interface width, in which the VP states obey to great accuracy the dispersion (2) [26]. Most saliently, the Sn concentration \(x\) substituting Pb allows one to control the electronic nature of the material. While, for \(x=0\), the system is a trivial band insulator, it becomes a crystalline topological insulator above a critical concentration on the order of \(x_{c}\simeq 0.12\). Moreover, the Sn concentration determines the size of the gap, which is on the order of 90 meV in the trivial insulator at \(x=0\) [28; 29]. The choice \(x=0.24\) allows one to obtain the same magnitude for the gap in the topological insulator (\(2\Delta\sim 90\) meV) [26], but even larger gaps on the order of \(2\Delta\sim 200\) meV may be obtained upon variation of temperature and strain on the crystals [27; 29]. Magneto-optical experiments indicate that the fundamental VP gap (3) scales as [26]
\[\Delta_{\rm VP}\simeq 45\,{\rm meV}/\sqrt{\ell/100\,{\rm nm}}, \tag{8}\]
and samples with interface widths between \(\ell=50\) and 200 nm have been obtained, while the intrinsic length has been estimated to be \(\lambda_{C}\simeq 6\) nm, so that the effective surface width varies between \(l_{S}\sim 17\) nm and \(l_{S}\sim 35\) nm. In order for the magnetic length to be on the same order of magnitude as \(l_{S}\) (the situation considered in the present paper), one would require magnetic fields in the range 0.5...3 T that are easily accessible experimentally.
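The matching condition \(l_{B}\approx l_{S}\) quoted above is simple arithmetic; the following lines reproduce it for the three interface widths (using \(\lambda_{C}\simeq 6\) nm from Ref. [26] and \(l_{B}\simeq 26\,\mathrm{nm}/\sqrt{B[\mathrm{T}]}\)):

```python
import math

lam_C = 6e-9                          # m, intrinsic length, estimated in Ref. [26]
for ell in [50e-9, 100e-9, 200e-9]:   # m, grown interface widths
    l_S = math.sqrt(ell * lam_C)      # effective surface width
    B_match = (26e-9 / l_S) ** 2      # field at which l_B = l_S
    print(f"ell = {ell * 1e9:5.0f} nm -> l_S = {l_S * 1e9:4.1f} nm, "
          f"B_match = {B_match:.2f} T")
# l_S runs from ~17 nm to ~35 nm, so the matching fields fall between roughly
# 0.5 T and 2.3 T, consistent with the 0.5...3 T window quoted in the text.
```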
Finally, the Fermi velocity is roughly half of that in graphene so that \(v/c\sim 1/600\), in terms of the speed of light \(c\). If we consider the fundamental cyclotron resonance associated with the transition (\(m=0,n=1\)) \(\rightarrow\) (\(m=0,n=0\)), the energy of the transition is thus roughly
\[\sqrt{2}\frac{\hbar v}{l_{B}}\simeq 20\,{\rm meV}\times\sqrt{B[{\rm T}]}. \tag{9}\]
This implies the transition rate [4]
\[\Gamma_{(m=0,n=1)\rightarrow(m=0,n=0)} = 2\alpha\left(\frac{v}{c}\right)^{2}\omega\] \[\simeq 2.4\times 10^{6}\,{\rm s}^{-1}\times\sqrt{B[{\rm T}]},\]
in terms of the fine-structure constant \(\alpha=1/137\), if we consider dipolar light coupling. This is roughly a factor of four smaller than in graphene due to the reduced Fermi velocity \(v\). Notice that interaction-induced decay processes take place at much shorter time scales, typically in the fs range. In the case of almost resonant levels, as discussed in the previous section [see _e.g._ the levels \((m=1,n=0)\) and \((m=0,n=1)\) in Fig. 2(a) and (b)], the decay rate from the higher to the lower level is on the order of [31]

\[\Gamma\sim\frac{2\pi}{\hbar}\left(\frac{e^{2}}{\epsilon l_{B}}\right)^{2}\left(\frac{\tau}{\hbar}\right)\simeq\epsilon^{-1}\times 10^{16}\,\mathrm{s}^{-1}\times B[\mathrm{T}], \tag{11}\]

where \(\epsilon\) is the dielectric constant of the host material.

Figure 3: Sketch of a four-level scheme for stimulated cyclotron emission. Ideally, the Fermi level is now situated below the \((m=0,n=0)\) level so that the zero-energy level is empty. We consider pumping in the Voigt geometry for \(\sqrt{2}\hbar v/l_{B}<\Delta_{\rm VP}\) and a situation where the \(n=0\) LL of the \(m=1\) VP state is in the negative-energy branch. The \((m=1,n=1)\) level is populated via pumping (dashed blue arrow) from the \((m=0,n=-1)\) level, and the pumped electrons can decay radiatively to the \((m=0,n=1)\) level by emitting light also in the Voigt geometry. Electrons in the \((m=0,n=1)\) LL can then decay to the zero-energy level \((m=0,n=0)\) emitting cyclotron light in the direction perpendicular to the interface (in the Faraday geometry, dashed red arrows). Furthermore, this transition is resonant with the transition \((m=0,n=0)\rightarrow(m=0,n=-1)\), and a second photon with the cyclotron frequency can thus be emitted. Auger processes, shown in green, now enhance the depopulation of the zero-energy LL \((m=0,n=0)\) and are therefore helpful for cyclotron emission.
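Putting these numbers together, a short order-of-magnitude script (all parameter values as quoted above) evaluates the transition energy of Eq. (9) and the radiative rate of Eq. (10) for a few magnetic fields:

```python
import math

HBAR = 6.582e-16        # eV s
ALPHA = 1.0 / 137.0     # fine-structure constant
V_OVER_C = 1.0 / 600.0  # Fermi velocity over the speed of light
V_F = 5.0e5             # m/s

for B in [0.5, 1.0, 3.0]:                       # magnetic field in T
    l_B = 26e-9 / math.sqrt(B)                  # magnetic length in m
    E_cyc = math.sqrt(2.0) * HBAR * V_F / l_B   # Eq. (9): ~ 20 meV * sqrt(B)
    omega = E_cyc / HBAR                        # transition frequency in rad/s
    rate = 2.0 * ALPHA * V_OVER_C ** 2 * omega  # Eq. (10): radiative rate in 1/s
    print(f"B = {B:3.1f} T:  E = {E_cyc * 1e3:5.1f} meV,  rate ~ {rate:.1e} 1/s")
# The radiative rate comes out of order 10^6 s^-1, many orders of magnitude below
# the interaction-induced scale of Eq. (11), consistent with the text above.
```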
Finally, one should notice that the \((m=1,n=1)\) level, used _e.g._ in the above four-level scheme, can also be brought into close energetic vicinity of the bottom of the _bulk_ conduction band. While the schemes discussed above in terms of resonant pumping in a specific geometry themselves require a THz source, one might then alternatively use the conduction band as a target of pulsed or continuous pumping at higher energies and rely on rapid decay processes towards the band bottom, which then serves as a reservoir for the \((m=1,n=1)\) level. However, to test this possibility, one would need to rely on a decay towards this target level at the interface that is more rapid than the bulk recombination. The quantitative study of these decay processes is beyond the scope of the present paper.
## VI Conclusions
In conclusion, I have argued that the particular surface-state spectrum that is formed in smooth interfaces between a trivial and a topological insulator is a promising path towards the LL laser. In addition to the chiral surface state, which may be described in terms of a massless Dirac fermion, VP states are formed if the gap parameter varies over a width \(\ell\) that must be larger than the intrinsic length scale \(\lambda_{C}=\hbar v/\Delta\), in terms of the bulk gap \(\Delta\). These surface bands have the form of a massive 2D Dirac fermion, and each of the bands gives rise to LLs if a magnetic field is applied perpendicular to the interface. One is thus confronted with families of LLs the energy of which can to a great extent be controlled, by the magnetic field for the LL separation and by the interface width for the energy separation between the VP bands. While the latter is given by the sample growth, it can still be varied _in situ_ with the help of an inplane magnetic field that effectively reduces the interface width and thus increases the gap between the VP bands. The magnetic field does not only allow one to change the cyclotron frequency, at which light is emitted in certain setups, but also to bring LLs associated with different VP bands into close energetic vicinity. When the gap between the VP bands is on the same order of magnitude as the typical LL separation (this situation can be easily achieved experimentally, _e.g._ in Pb\({}_{1-x}\)Sn\({}_{x}\)Se crystals), the LL spectra are neither equidistant nor follow a square-root law, so that both Auger and reabsorption processes are maximally suppressed.
Another highly unusual and, for devices, potentially extremely fertile aspect of light emission in VP LLs is the direction of propagation of the absorbed and emitted photons. Indeed, photons with a wave vector perpendicular to the interface (Faraday geometry) are absorbed and emitted in transitions involving adjacent LL indices \(n\) and \(n\pm 1\) but the same VP band index \(m\), regardless of whether the LLs are formed in the positive- or negative energy branch of the VP bands. On the other hand, photons propagate inside the interface (Voigt geometry) for transitions \((m,n)\rightarrow(m\pm 1,n)\). This would allow for a smart design of the Fabry-Perot cavities such that the extension in the \(z\)- and \(x/y\)-directions match the photon wavelength of the respective transitions, especially if pumping and emission are associated with the two different geometries (Faraday and Voigt). Finally, I have argued that the often detrimental Auger processes may be less efficient in the proposed setup so that they do not hinder the population inversion required for a LL laser, in contrast to most proposals for LL lasers.
###### Acknowledgements.
I would like to thank Gauthier Krizman, Louis-Anne de Vaulcher, and Milan Orlita for fruitful discussions.
|
2303.00599 | LS-IQ: Implicit Reward Regularization for Inverse Reinforcement Learning | Recent methods for imitation learning directly learn a $Q$-function using an
implicit reward formulation rather than an explicit reward function. However,
these methods generally require implicit reward regularization to improve
stability and often mistreat absorbing states. Previous works show that a
squared norm regularization on the implicit reward function is effective, but
do not provide a theoretical analysis of the resulting properties of the
algorithms. In this work, we show that using this regularizer under a mixture
distribution of the policy and the expert provides a particularly illuminating
perspective: the original objective can be understood as squared Bellman error
minimization, and the corresponding optimization problem minimizes a bounded
$\chi^2$-Divergence between the expert and the mixture distribution. This
perspective allows us to address instabilities and properly treat absorbing
states. We show that our method, Least Squares Inverse Q-Learning (LS-IQ),
outperforms state-of-the-art algorithms, particularly in environments with
absorbing states. Finally, we propose to use an inverse dynamics model to learn
from observations only. Using this approach, we retain performance in settings
where no expert actions are available. | Firas Al-Hafez, Davide Tateo, Oleg Arenz, Guoping Zhao, Jan Peters | 2023-03-01T15:46:12Z | http://arxiv.org/abs/2303.00599v1 | # LS-IQ: Implicit Reward Regularization for
###### Abstract
Recent methods for imitation learning directly learn a \(Q\)-function using an implicit reward formulation rather than an explicit reward function. However, these methods generally require implicit reward regularization to improve stability and often mistreat absorbing states. Previous works show that a squared norm regularization on the implicit reward function is effective, but do not provide a theoretical analysis of the resulting properties of the algorithms. In this work, we show that using this regularizer under a mixture distribution of the policy and the expert provides a particularly illuminating perspective: the original objective can be understood as squared Bellman error minimization, and the corresponding optimization problem minimizes a bounded \(\chi^{2}\)-Divergence between the expert and the mixture distribution. This perspective allows us to address instabilities and properly treat absorbing states. We show that our method, Least Squares Inverse Q-Learning (LS-IQ), outperforms state-of-the-art algorithms, particularly in environments with absorbing states. Finally, we propose to use an inverse dynamics model to learn from observations only. Using this approach, we retain performance in settings where no expert actions are available.1
Footnote 1: The code is available at [https://github.com/robfiras/ls-iq](https://github.com/robfiras/ls-iq)
## 1 Introduction
Inverse Reinforcement Learning (IRL) techniques have been developed to robustly extract behaviors from expert demonstrations and to solve the problems of classical Imitation Learning (IL) methods (Ng et al., 1999; Ziebart et al., 2008). Among the recent methods for IRL, the Adversarial Imitation Learning (AIL) approach (Ho and Ermon, 2016; Fu et al., 2018; Peng et al., 2021), which casts the optimization over rewards and policies into an adversarial setting, has proven particularly successful. These methods, inspired by Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), alternate between learning a discriminator and improving the agent's policy w.r.t. a reward function computed from the discriminator's output. These _explicit reward_ methods require many interactions with the environment, as they learn both a reward and a value function. Recently, _implicit reward_ methods (Kostrikov et al., 2020; Arenz and Neumann, 2020; Garg et al., 2021) have been proposed. These methods directly learn the \(Q\)-function, significantly accelerating the policy optimization. Among the _implicit reward_ approaches, Inverse soft Q-Learning (IQ-Learn) is the current state of the art. This method modifies the distribution-matching objective by including reward regularization on the expert distribution, which results in a minimization of the \(\chi^{2}\)-divergence between the policy and the expert distribution. However, whereas their derivations only consider regularization on the expert distribution, their practical implementations on continuous control tasks have shown that regularizing the reward on both the expert and the policy distribution achieves significantly better performance.
The contribution of this paper is twofold: First, when using this regularizer, we show that the resulting objective minimizes the \(\chi^{2}\) divergence between the expert and a mixture distribution between the expert and the policy. We then investigate the effects of regularizing w.r.t. the mixture distribution on the theoretical properties of IQ-Learn. We show that this divergence is bounded, which translates
to bounds on the reward and \(Q\)-function, significantly improving learning stability. Indeed, the resulting objective corresponds to least-squares Bellman error minimization and is closely related to Soft Q-Imitation Learning (SQIL) (Reddy et al., 2020). Second, we formulate Least Squares Inverse Q-Learning (LS-IQ), a novel IRL algorithm. By following the theoretical insights coming from the analysis of the \(\chi^{2}\) regularizer, we tackle many sources of instability in the IQ-Learn approach: the arbitrariness of the \(Q\)-function scale, exploding \(Q\)-function targets, and reward bias (Kostrikov et al., 2019), i.e., the assumption that absorbing states provide a null reward. We derive the LS-IQ algorithm by exploiting structural properties of the \(Q\)-function and heuristics based on expert optimality. This results in increased performance on many tasks and, in general, more stable learning and less variance in the \(Q\)-function estimation. Finally, we extend the implicit reward methods to the IL-from-observations setting by training an Inverse Dynamics Model (IDM) to predict the expert actions, which are no longer assumed to be available. Even in this challenging setting, our approach retains performance similar to the setting where expert actions are known.
**Related Work.** The vast majority of IRL and IL methods build upon the Maximum Entropy (MaxEnt) IRL framework (Ziebart, 2010). In particular, Ho & Ermon (2016) introduce Generative Adversarial Imitation Learning (GAIL), which applies GANs to the IL problem. While the original method minimizes the Jensen-Shannon divergence to the expert distribution, the approach is extended to general \(f\)-divergences (Ghasemipour et al., 2019), building on the work of Nowozin et al. (2016). Among the \(f\)-divergences, the Pearson \(\chi^{2}\) divergence improves the training stability for GANs (Mao et al., 2017) and for AIL (Peng et al., 2021). Kostrikov et al. (2019) introduce a replay buffer for off-policy updates of the policy and discriminator. The authors also point out the problem of reward bias, which is common in many imitation learning methods. Indeed, AIL methods implicitly assign a null reward to absorbing states, leading to survival or termination biases, depending on the chosen divergence. Kostrikov et al. (2020) improve on the previous work by introducing recent advances from offline policy evaluation (Nachum et al., 2019). Their method, ValueDice, uses an inverse Bellman operator, which expresses the reward function in terms of the \(Q\)-function, to minimize the reverse Kullback-Leibler (KL) divergence to the expert distribution. Arenz & Neumann (2020) derive a non-adversarial formulation based on trust-region updates on the policy. Their method, O-NAIL, uses a standard Soft-Actor Critic (SAC) (Haarnoja et al., 2018) update for policy improvement. O-NAIL can be understood as an instance of the more general IQ-Learn algorithm (Garg et al., 2021), which can optimize different divergences depending on an implicit reward regularizer. Garg et al. (2021) also show that their algorithm achieves better performance using the \(\chi^{2}\) divergence instead of the reverse KL. Reddy et al. (2020) propose a method that uses SAC and assigns fixed binary rewards to the expert and the policy. Swamy et al. (2021) provide a unifying perspective on many of the methods mentioned above, explicitly showing that GAIL, ValueDice, MaxEnt-IRL, and SQIL can be viewed as moment matching algorithms. Lastly, Sikchi et al. (2023) propose a ranking loss for AIL, which trains a reward function using a least-squares objective with ranked targets.
## 2 Preliminaries
**Notation.** A Markov Decision Process (MDP) is a tuple \((\mathcal{S},\mathcal{A},P,r,\gamma,\mu_{0})\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(P:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}^{+}\) is the transition kernel, \(r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the reward function, \(\gamma\) is the discount factor, and \(\mu_{0}:\mathcal{S}\rightarrow\mathbb{R}^{+}\) is the initial state distribution. At each step, the agent observes a state \(s\in\mathcal{S}\) of the environment, samples an action \(a\in\mathcal{A}\) using the policy \(\pi:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}^{+}\), and transitions with probability \(P(s^{\prime}|s,a)\) into the next state \(s^{\prime}\in\mathcal{S}\), where it receives the reward \(r(s,a)\). We define an occupancy measure \(\rho_{\pi}(s,a)=\pi(a|s)\sum_{t=0}^{\infty}\gamma^{t}\mu_{t}^{\pi}(s)\), where \(\mu_{t}^{\pi}(s^{\prime})=\int_{s,a}\mu_{t-1}^{\pi}(s)\pi(a|s)P(s^{\prime}|s,a)\,ds\,da\) is the state distribution for \(t>0\), with \(\mu_{0}^{\pi}(s)=\mu_{0}(s)\). The occupancy measure allows us to denote the expected reward under policy \(\pi\) as \(\mathbb{E}_{\rho_{\pi}}[r(s,a)]\triangleq\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})]\), where \(s_{0}\sim\mu_{0}\), \(a_{t}\sim\pi(\cdot|s_{t})\) and \(s_{t+1}\sim P(\cdot|s_{t},a_{t})\) for \(t\geq 0\). Furthermore, \(\mathbb{R}^{\mathcal{S}\times\mathcal{A}}=\{x:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\}\) denotes the set of functions in the state-action space and \(\overline{\mathbb{R}}\) denotes the extended real numbers \(\mathbb{R}\cup\{+\infty\}\). We refer to the soft value functions as \(\tilde{V}(s)\) and \(\tilde{Q}(s,a)\), while we use \(V(s)\) and \(Q(s,a)\) to denote the value functions without entropy bonus.
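To make the occupancy-measure identity concrete, the following minimal Python sketch estimates \(\mathbb{E}_{\rho_{\pi}}[r(s,a)]\) by Monte-Carlo rollouts. The interfaces `env_reset`, `env_step`, and `policy` are illustrative placeholders, not the API of any specific library.

```python
import numpy as np

def discounted_return_estimate(env_reset, env_step, policy, gamma=0.99,
                               n_episodes=100, horizon=1000):
    """Monte-Carlo estimate of E_{rho_pi}[r(s,a)] = E[sum_t gamma^t r(s_t,a_t)].

    Assumed interfaces: env_reset() -> s0, env_step(s, a) -> (s', r, done),
    policy(s) -> a.
    """
    returns = []
    for _ in range(n_episodes):
        s, g, disc = env_reset(), 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)
            s, r, done = env_step(s, a)
            g += disc * r      # accumulate discounted reward
            disc *= gamma
            if done:
                break
        returns.append(g)
    return np.mean(returns)
```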
**Inverse Reinforcement Learning as an Occupancy Matching Problem.** Given a set of demonstrations consisting of states and actions sampled from an expert policy \(\pi_{E}\), IRL aims at finding a reward function \(r(s,a)\) from a family of reward functions \(\mathcal{R}=\mathbb{R}^{\mathcal{S}\times\mathcal{A}}\) assigning high reward to
samples from the expert policy \(\pi_{E}\) and low reward to other policies. We consider the framework presented in Ho and Ermon (2016), which derives the maximum entropy IRL objective with an additional convex reward regularizer \(\psi_{\rho}:\mathbb{R}^{\mathcal{S}\times\mathcal{A}}\to\overline{\mathbb{R}}\) from an occupancy matching problem
\[\max_{r\in\mathcal{R}}\min_{\pi\in\Pi}L_{\rho}(r,\pi)=\max_{r\in\mathcal{R}}\left(\min_{\pi\in\Pi}-\beta H_{\rho}(\pi)-\mathbb{E}_{\rho_{\pi}}[r(s,a)]\right)+\mathbb{E}_{\rho_{\pi_{E}}}[r(s,a)]-\psi_{\rho}(r), \tag{1}\]
with the space of policies \(\Pi=\mathbb{R}^{\mathcal{S}\times\mathcal{A}}\), the discounted cumulative entropy bonus \(H_{\rho}(\pi)=\mathbb{E}_{\rho_{\pi}}[-\log(\pi(a|s))]\), and a constant \(\beta\) controlling the entropy bonus. Note that the inner optimization is a maximum entropy Reinforcement Learning (RL) objective (Ziebart, 2010), for which the optimal policy is given by
\[\pi^{*}(a|s)=\frac{1}{Z_{s}}\exp{(\tilde{Q}(s,a))}, \tag{2}\]
where \(Z_{s}=\int_{a}\exp{(\tilde{Q}(s,a))}\,da\) is the partition function and \(\tilde{Q}(s,a)\) is the soft action-value function, which is given for a certain reward function by the soft Bellman operator \((\tilde{\mathcal{B}}^{\pi}\tilde{Q})(s,a)=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim P(\cdot|s,a)}\tilde{V}^{\pi}(s^{\prime})\), where \(\tilde{V}^{\pi}(s)=\mathbb{E}_{a\sim\pi(\cdot|s)}[\tilde{Q}(s,a)-\log\pi(a|s)]\).
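For a discrete action space, the partition function \(Z_{s}\) is a finite sum and the optimal MaxEnt policy of Equation 2 can be computed directly. A minimal sketch (the numerically stabilized softmax is our own implementation choice, not prescribed by the paper):

```python
import numpy as np

def maxent_policy(q_soft_row):
    """Optimal MaxEnt policy pi*(a|s) = exp(Q_soft(s,a)) / Z_s for a
    discrete action space; q_soft_row is the vector Q_soft(s, .)."""
    z = q_soft_row - q_soft_row.max()  # stabilize the exponentials
    p = np.exp(z)
    return p / p.sum()                 # normalize by the partition function

# Example: three actions in some state s
print(maxent_policy(np.array([1.0, 2.0, 0.5])))
```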
Garg et al. (2021) transform Equation 1 from reward-policy space to \(\tilde{Q}\)-policy space using the _inverse_ soft Bellman operator \((\tilde{\mathcal{T}}^{\pi}\tilde{Q})(s,a)=\tilde{Q}(s,a)-\gamma\mathbb{E}_{s^ {\prime}\sim P(.|s,a)}\tilde{V}^{\pi}(s^{\prime})\) to get a one-to-one correspondence between the reward and the \(\tilde{Q}\)-function. This operator allows to change the objective function \(L_{\rho}\) from reward-policy to \(\tilde{Q}\)-policy space, from now on denoted as \(\mathcal{J}_{\rho}\)
\[\max_{r\in\mathcal{R}}\min_{\pi\in\Pi}L_{\rho}(r,\pi)=\max_{\tilde{Q}\in\tilde{\Omega}}\min_{\pi\in\Pi}\mathcal{J}_{\rho}(\tilde{Q},\pi)\,, \tag{3}\]
where \(\tilde{\Omega}=\mathbb{R}^{\mathcal{S}\times\mathcal{A}}\) is the space of \(\tilde{Q}\)-functions. Furthermore, they use Equation 2 to extract the optimal policy \(\pi_{\tilde{Q}}\) given a \(\tilde{Q}\)-function to drop the inner optimization loop in Equation 1 such that
\[\max_{\tilde{Q}\in\tilde{\Omega}}\mathcal{J}_{\rho}(\tilde{Q},\pi_{\tilde{Q}})=\max_{\tilde{Q}\in\tilde{\Omega}}\ \mathbb{E}_{\rho_{\pi_{E}}}\left[\tilde{Q}(s,a)-\gamma\mathbb{E}_{s^{\prime}\sim P(\cdot|s,a)}[\tilde{V}^{\pi}(s^{\prime})]\right]-\beta H_{\rho}(\pi_{\tilde{Q}}) \tag{4}\] \[-\mathbb{E}_{\rho_{\pi}}\left[\tilde{Q}(s,a)-\gamma\mathbb{E}_{s^{\prime}\sim P(\cdot|s,a)}[\tilde{V}^{\pi}(s^{\prime})]\right]-\psi_{\rho}(\tilde{\mathcal{T}}^{\pi}\tilde{Q}).\]
**Practical Reward Regularization.** Garg et al. (2021) derive a regularizer enforcing an L\({}_{2}\) norm penalty on the reward on state-action pairs from the expert, such that \(\psi_{\pi_{E}}(r)=c\,\mathbb{E}_{\rho_{\pi_{E}}}\left[r(s,a)^{2}\right]\), with \(c\) being a regularizer constant. However, in continuous action spaces, this regularizer causes instabilities. In practice, Garg et al. (2021) address these instabilities by applying the regularizer to the mixture
\[\psi_{\rho}(r)=\alpha\,c\,\mathbb{E}_{\rho_{\pi_{E}}}\left[r(s,a)^{2}\right]+ (1-\alpha)\,c\,\mathbb{E}_{\rho_{\pi}}\left[r(s,a)^{2}\right]\,, \tag{5}\]
where \(\alpha\) is typically set to \(0.5\). It is important to note that this change of regularizer does not allow the direct extraction of the policy from Equation 1 anymore. Indeed, the regularizer in Equation 5 also depends on the policy. Prior work did not address this issue. In the following sections, we will provide an in-depth analysis of this regularizer, allowing us to address the aforementioned issues and derive the correct policy update. Before we introduce our method, we use Proposition A.1 in Appendix A to change the objectives \(L_{\rho}\) and \(\mathcal{J}_{\rho}\) from expectations under occupancy measures to expectations under state-action distributions \(d_{\pi_{E}}\) and \(d_{\pi}\), from now on denoted as \(L\) and \(\mathcal{J}\), respectively.
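On sampled mini-batches, the mixture regularizer of Equation 5 is simply a weighted sum of squared implicit rewards. A minimal PyTorch sketch, assuming the implicit rewards \(r=Q-\gamma V^{\prime}\) have already been computed on expert and policy batches (all names are placeholders):

```python
import torch

def mixture_reward_penalty(r_expert, r_policy, c=0.5, alpha=0.5):
    """psi_rho(r) = alpha*c*E_expert[r^2] + (1-alpha)*c*E_policy[r^2],
    estimated on mini-batches of implicit rewards r = Q - gamma * V'."""
    return alpha * c * (r_expert ** 2).mean() \
        + (1.0 - alpha) * c * (r_policy ** 2).mean()
```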
## 3 Least Squares Inverse Q-Learning
In this section, we introduce our proposed imitation learning algorithm, which is based on the occupancy matching problem presented in Equation 1 using the regularizer defined in Equation 5. We start by giving an interpretation of the resulting objective as a \(\chi^{2}\) divergence between the expert distribution and a mixture distribution of the expert and the policy. We then show that the regularizer allows us to cast the original objective into a Bellman error minimization problem with fixed binary rewards for the expert and the policy. An RL problem with fixed rewards is a unique setting, which we can utilize to bound the \(Q\)-function target, provide fixed targets for the \(Q\)-function on expert states instead of doing bootstrapping, and adequately treat absorbing states. However, these techniques need to be applied on hard \(Q\)-functions. Therefore, we switch from soft action-value functions \(\tilde{Q}\) to hard \(Q\)-functions, by introducing an additional entropy critic. We also present a regularization critic allowing us to recover the correct policy update corresponding to the regularizer in Equation 5. Finally, we propose to use an IDM to solve the imitation learning from observations problem.
### Interpretation as a Statistical Divergence
Ho & Ermon (2016) showed that their regularizer results in a Jensen-Shannon divergence minimization between the expert's and the policy's state-action distribution. Similarly, Garg et al. (2021) showed that their regularizer \(\psi_{\pi_{E}}(r)\) results in a minimization of the \(\chi^{2}\) divergence. However, the regularizer presented in Equation 5 has not been investigated yet. We show that this regularizer minimizes a \(\chi^{2}\) divergence between the expert's state-action distribution and a mixture distribution between the expert and the policy. Therefore, we start with the objective presented in Equation 1 and note that strong duality, \(\max_{r\in\mathcal{R}}\min_{\pi\in\Pi}L=\min_{\pi\in\Pi}\max_{r\in\mathcal{R}}L\), follows straightforwardly from the minimax theorem (Von Neumann, 1928) as \(-H(\pi)\) and \(-\mathbb{E}_{d_{\pi}}[r(s,a)]\) are convex in \(\pi\), and \(-\mathbb{E}_{d_{\pi}}[r(s,a)]\), \(\mathbb{E}_{d_{\pi_{E}}}[r(s,a)]\) and \(-\psi(r)\) are concave in \(r\) (Ho & Ermon, 2016). We express the \(\chi^{2}\) divergence between the expert's distribution and the mixture distribution using its variational form,
\[2\chi^{2}\Big(d_{\pi_{E}}\,\Big\|\,\underbrace{\tfrac{d_{\pi_{E}}+d_{\pi}}{2}}_{\bar{d}}\Big)=\sup_{r}2\Big(\mathbb{E}_{d_{\pi_{E}}}[r(s,a)]-\mathbb{E}_{\bar{d}}\big[r(s,a)+\tfrac{r(s,a)^{2}}{4}\big]\Big)=\sup_{r}\mathbb{E}_{d_{\pi_{E}}}[r(s,a)]-\mathbb{E}_{d_{\pi}}[r(s,a)]-c\alpha\,\mathbb{E}_{d_{\pi_{E}}}\big[r(s,a)^{2}\big]-c(1-\alpha)\,\mathbb{E}_{d_{\pi}}\big[r(s,a)^{2}\big], \tag{6}\]
with the regularizer constant \(c=\nicefrac{{1}}{{2}}\) and \(\alpha=\nicefrac{{1}}{{2}}\). Now, if the optimal reward is in \(\mathcal{R}\), the original objective from Equation 1 becomes an entropy-regularized \(\chi^{2}\) divergence minimization problem
\[\max_{r\in\mathcal{R}}\min_{\pi\in\Pi}L=\min_{\pi\in\Pi}2\chi^{2}\Big(d_{\pi_{E}}\,\Big\|\,\tfrac{d_{\pi_{E}}+d_{\pi}}{2}\Big)-\beta H(\pi)\,. \tag{7}\]
Equation 7 tells us that the regularized IRL objective optimizes the reward to match a divergence while optimizing the policy to minimize the latter. When the divergence to be matched is unbounded, the optimal reward is also unbounded, causing instability during learning. Unlike the \(\chi^{2}\)-divergence between the agent's and the expert's distribution, the \(\chi^{2}\)-divergence to the mixture distribution is bounded, as shown in Proposition A.3, and its optimal reward
\[r^{*}(s,a)=\frac{1}{c}\frac{d_{\pi_{E}}(s,a)-d_{\pi}(s,a)}{d_{\pi_{E}}(s,a)+d _{\pi}(s,a)}\,, \tag{8}\]
is also bounded in the interval \([-\nicefrac{{1}}{{c}},\nicefrac{{1}}{{c}}]\) as shown in Proposition A.2.
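A quick numeric sanity check of the bound on the optimal reward in Equation 8, using random positive stand-ins for the two densities (the values are arbitrary and only for illustration):

```python
import numpy as np

c = 0.5
rng = np.random.default_rng(0)
d_e = rng.uniform(0.1, 1.0, 10_000)  # stand-ins for d_pi_E(s,a) > 0
d_p = rng.uniform(0.1, 1.0, 10_000)  # stand-ins for d_pi(s,a)  > 0
r_star = (d_e - d_p) / (c * (d_e + d_p))
assert np.all(np.abs(r_star) <= 1.0 / c)  # bounded in [-1/c, 1/c]
print(r_star.min(), r_star.max())
```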
### A Reinforcement Learning Perspective on Distribution Matching
In the following, we present a novel perspective on Equation 4 allowing us to better understand the effect of the regularizer. Indeed, for the regularizer defined in Equation 5, we can interpret this objective as an entropy-regularized least squares problem, as shown by the following proposition:
**Proposition 3.1**: _Let \(r_{\tilde{Q}}(s,a)=(\tilde{\mathcal{T}}^{\pi}\tilde{Q})(s,a)\) be the implicit reward function of a \(\tilde{Q}\)-function, then for \(\psi(r_{\tilde{Q}})=c\,\mathbb{E}_{\tilde{d}}[r_{\tilde{Q}}(s,a)^{2}]\) with \(\tilde{d}(s,a)=\alpha d_{\pi_{E}}(s,a)+(1-\alpha)d_{\pi}(s,a)\), the solution of Equation 4 under state-action distributions equals the solution of an entropy-regularized least squares minimization problem such that \(\arg\min_{\tilde{Q}\in\tilde{\Omega}}\mathcal{L}(\tilde{Q},\pi_{\tilde{Q}})=\arg\max_{\tilde{Q}\in\tilde{\Omega}}\mathcal{J}(\tilde{Q},\pi_{\tilde{Q}})\) with_
\[\mathcal{L}(\tilde{Q},\pi_{\tilde{Q}})=\alpha\mathbb{E}_{d_{\pi_{E}}}\left[ \big{(}r_{\tilde{Q}}(s,a)-r_{\text{max}}\big{)}^{2}\right]+(1-\alpha)\mathbb{ E}_{d_{\pi_{\tilde{Q}}}}\left[\big{(}r_{\tilde{Q}}(s,a)-r_{\text{min}} \big{)}^{2}\right]+\frac{\beta}{c}H(\pi_{\tilde{Q}})\,, \tag{9}\]
_where \(r_{\text{max}}=\frac{1}{2\alpha c}\) and \(r_{\text{min}}=-\frac{1}{2(1-\alpha)c}\)._
The proof is provided in Appendix A.3. The resulting objective in Equation 9 is very similar to the one in the Least Squares Generative Adversarial Networks (LSGANs) (Mao et al., 2017) setting, where \(r_{\tilde{Q}}(s,a)\) can be interpreted as the discriminator, \(r_{\text{max}}\) can be interpreted as the target for expert samples, and \(r_{\text{min}}\) can be interpreted as the target for samples under the policy \(\pi\). For \(\alpha=0.5\) and \(c=1\), resulting in \(r_{\text{max}}=1\) and \(r_{\text{min}}=-1\), Equation 9 differs from the discriminator's objective in the LSGANs setting only by the entropy term.
Now replacing the implicit reward function with the inverse soft Bellman operator and rearranging the terms yields
\[\mathcal{L}(\tilde{Q},\pi_{\tilde{Q}})= \alpha\mathbb{E}_{d_{\pi_{E}}}\left[\left(\tilde{Q}(s,a)-(r_{\text {max}}+\gamma\mathbb{E}_{s^{\prime}\sim P(.|s,a)}[\tilde{V}^{\pi}(s^{\prime})] )\right)^{2}\right] \tag{10}\] \[+(1-\alpha)\mathbb{E}_{d_{\pi_{\tilde{Q}}}}\left[\left(\tilde{Q}( s,a)-(r_{\text{min}}+\gamma\mathbb{E}_{s^{\prime}\sim P(.|s,a)}[\tilde{V}^{\pi}(s^{ \prime})])\right)^{2}\right]+\frac{\beta}{c}H(\pi_{\tilde{Q}})\] \[= \alpha\,\delta^{2}(d_{\pi_{E}},r_{\text{max}})+(1-\alpha)\,\delta ^{2}(d_{\pi},r_{\text{min}})+\frac{\beta}{c}H(\pi_{\tilde{Q}})\,, \tag{11}\]
where \(\delta^{2}\) is the squared soft Bellman error. We can deduce the following from Equation 11:
\(\chi^{2}\)**-regularized IRL under a mixture can be seen as an RL problem** with fixed rewards \(r_{\text{max}}\) and \(r_{\text{min}}\) for the expert and the policy. This insight allows us to understand the importance of the regularizer constant \(c\): it defines the target rewards and, therefore, the scale of the \(Q\)-function. The resulting objective shows strong relations to the SQIL algorithm, in which also fixed rewards are used. However, SQIL uses \(r_{\text{max}}=1\) and \(r_{\text{min}}=0\), which is infeasible in our setting for \(\alpha<1\). While the entropy term appears to be another difference, we note that it does not affect the critic update, where \(\pi_{\tilde{Q}}\) is fixed. As in SQIL, the entropy is maximized by extracting the MaxEnt policy using Equation 2.
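A minimal sketch of the resulting critic loss, i.e., the weighted squared Bellman errors of Equation 11 with fixed rewards \(r_{\text{max}}\) and \(r_{\text{min}}\). The entropy term is omitted since, as noted above, it does not affect the critic update; all tensor names are illustrative assumptions.

```python
import torch

def ls_bellman_loss(q, v_next, is_expert, r_max=1.0, r_min=-1.0,
                    gamma=0.99, alpha=0.5):
    """Weighted squared Bellman errors with fixed rewards (cf. Eq. 11).
    q = Q(s,a), v_next = E_{s'}[V(s')] (treated as a fixed target),
    is_expert = boolean mask over a mixed expert/policy mini-batch."""
    e = is_expert.float()
    target_r = r_max * e + r_min * (1.0 - e)   # fixed reward per sample
    weight = alpha * e + (1.0 - alpha) * (1.0 - e)
    err = q - (target_r + gamma * v_next.detach())
    return (weight * err ** 2).mean()
```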
**Stabilizing the training in a fixed reward setting is straightforward.** We can have a clean solution to the reward bias problem (cf. Section 3.4), and we can provide fixed \(Q\)-function targets for the expert and clipped \(Q\)-function targets for the policy (cf. Sections 3.5 and 3.7) to improve learning stability significantly. However, we must switch from soft to hard action-value functions by introducing an entropy critic to apply these techniques. Additionally, we show how to recover the correct policy update corresponding to the regularizer in Equation 5 by introducing a regularization critic.
### Entropy and Regularization Critic
We express the \(\tilde{Q}\)-function implicitly using \(\tilde{Q}(s,a)=Q(s,a)+\mathcal{H}^{\pi}(s,a)\) decomposing it into a hard \(Q\)-function and an _entropy critic_
\[\mathcal{H}^{\pi}(s,a)=\mathbb{E}_{P,\pi}\left[\sum_{t^{\prime}=t}^{\infty}-\gamma^{t^{\prime}-t+1}\beta\log\pi(a_{t^{\prime}+1}|s_{t^{\prime}+1})\,\middle|\,s_{t}=s,a_{t}=a\right]. \tag{12}\]
This procedure allows us to stay in the MaxEnt formulation while retaining the ability to operate on the hard \(Q\)-function. We replace the soft inverse Bellman operator with the hard optimal inverse Bellman operator \((\mathcal{T}Q)(s,a)=Q(s,a)-\gamma\mathbb{E}_{s^{\prime}\sim P(\cdot|s,a)}V^{*} (s^{\prime})\), with the optimal value function \(V^{*}(s)=\max_{a}Q(s,a)\).
As mentioned before, the regularizer introduced in Equation 5 incorporates yet another term depending on the policy. Indeed, the inner optimization problem in Equation 1--the term in the brackets--is not purely the MaxEnt problem anymore, but includes the term \(-k\mathbb{E}_{d_{\pi}}[r(s,a)^{2}]\) with \(k=c(1-\alpha)\). To incorporate this term into our final implicit action-value function \(Q^{\dagger}(s,a)\), we learn an additional _regularization critic_
\[\mathcal{C}(s,a)=\mathbb{E}_{P,\pi}\left[\sum_{t^{\prime}=t}^{\infty}\gamma^{t^{\prime}-t}r(s_{t^{\prime}},a_{t^{\prime}})^{2}\,\middle|\,s_{t}=s,a_{t}=a\right], \tag{13}\]
such that \(Q^{\dagger}(s,a)=Q(s,a)+\mathcal{H}^{\pi}(s,a)+k\,\mathcal{C}(s,a)\). Using \(Q^{\dagger}\) in Equation 2, we obtain the exact solution to the inner minimization problem in Equation 1. In practice, we learn a single critic \(\mathcal{G}^{\pi}\) combining \(\mathcal{H}^{\pi}\) and \(\mathcal{C}\). We train the latter independently using the following objective
\[\min_{\mathcal{G}^{\pi}}\delta^{2}_{\mathcal{G}}=\min_{\mathcal{G}^{\pi}}\mathbb{E}_{d_{\pi}}\Big[\Big(\mathcal{G}^{\pi}(s,a)-\big(k\,r_{Q}(s,a)^{2}+\gamma\,\mathbb{E}_{\substack{s^{\prime}\sim P\\ a^{\prime}\sim\pi}}\big[-\beta\log\pi(a^{\prime}|s^{\prime})+\mathcal{G}^{\pi}(s^{\prime},a^{\prime})\big]\big)\Big)^{2}\Big]\,, \tag{14}\]
which is an entropy-regularized Bellman error minimization problem given the squared implicit reward \(r_{Q}\) scaled by \(k\).
### Treatment of Absorbing States
Another technical aspect neglected by IQ-Learn is the proper treatment of absorbing states. Garg et al. (2021) treat absorbing states by adding an indicator \(\nu\)--where \(\nu=1\) if \(s^{\prime}\) is a terminal state--in front of the discounted value function in the inverse Bellman operator
\[(\mathcal{T}^{\pi}_{u}Q)(s,a)=Q(s,a)-(1-\nu)\gamma\mathbb{E}_{s^{\prime}\sim P (\cdot|s,a)}V^{\pi}(s^{\prime})\,. \tag{15}\]
This inverse Bellman operator is obtained by solving the forward Bellman operator for \(r(s,a)\) under the assumption that the value of absorbing states is zero. However, as pointed out by Kostrikov et al. (2019), such an assumption may introduce termination or survival bias; the value of absorbing states also needs to be learned. Our perspective provides a clear understanding of the effect of the inverse Bellman operator in Equation 15: the objective in Equation 10 will regress the \(Q\)-function of transitions into absorbing states towards \(r_{\text{max}}\) or \(r_{\text{min}}\), respectively. However, based on Equation 9, the implicit _reward_ of absorbing states should be regressed toward \(r_{\text{max}}\) or \(r_{\text{min}}\).
Instead, we derive our inverse operator from the standard Bellman operator while exploiting that the value of the absorbing state \(s_{A}\) is independent of the policy \(\pi\)
\[(\mathcal{T}^{\pi}_{\text{lsiq}}Q)(s,a)=Q(s,a)-\gamma\,\mathbb{E}_{s^{\prime}\sim P(\cdot|s,a)}\big((1-\nu)V^{\pi}(s^{\prime})+\nu V(s_{A})\big). \tag{16}\]
We further exploit that the value of the absorbing state can be computed in closed form as \(V(s_{A})=\frac{r_{A}}{1-\gamma}\), where \(r_{A}\) equals \(r_{\text{max}}\) on expert states and \(r_{\text{min}}\) on policy states. Please note that the corresponding forward Bellman operator converges to the same \(Q\)-function, despite using the analytic value of absorbing states instead of bootstrapping, as we show in Appendix A.5. When applying our inverse operator in Equation 16 to Equation 9, we correctly regress the \(Q\)-function of transitions into absorbing states towards their discounted return. We show the resulting full objective in Appendix A.4.
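A minimal sketch of the discounted next-value term in our operator (Equation 16), using the closed-form value of absorbing states; the boolean masks and tensor shapes are illustrative assumptions.

```python
import torch

def discounted_next_value(v_next, absorbing, is_expert,
                          r_max=1.0, r_min=-1.0, gamma=0.99):
    """gamma * ((1-nu)*V(s') + nu*V(s_A)) from Eq. 16, where
    V(s_A) = r_A / (1 - gamma) with r_A = r_max on expert data
    and r_min on policy data."""
    e = is_expert.float()
    r_a = r_max * e + r_min * (1.0 - e)
    v_absorbing = r_a / (1.0 - gamma)      # closed-form absorbing value
    nu = absorbing.float()                 # 1 if s' is terminal, else 0
    return gamma * ((1.0 - nu) * v_next + nu * v_absorbing)
```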
We show the effect of our modified operator on the toy task depicted in Figure 1 (top), where the black point mass is spawned in either of the four dark blue squares and has to reach the green area in the middle. Once the agent enters the red area, the episode terminates. The expert always takes the shortest path to the green area, never visiting the red area. The operator proposed by IQ-Learn does not sufficiently penalize the agent for reaching absorbing states, preventing the IQ-Learn agent from reaching the goal consistently, as can be seen from the orange graph in Figure 1 (bottom). In contrast, when using our operator \(\mathcal{T}_{\text{lsiq}}\), the agent solves the task successfully.
### An Alternative Formulation for the Expert Residual Minimization
The first term in Equation 9 defines the squared Bellman error minimization problem on the distribution \(d_{\pi_{E}}\)
\[\alpha\,\delta^{2}(d_{\pi_{E}},r_{\text{max}})=\alpha{\mathbb{E}}_{d_{\pi_{E}}} \left[(r_{Q}(s,a)-r_{\text{max}})^{2}\right]\,. \tag{17}\]
Due to bootstrapping, this minimization can become challenging, even for a fixed expert policy, as it does not fix the scale of the \(Q\)-function unless the trajectory reaches an absorbing state. This problem arises particularly on expert data for cyclic tasks, where we generate trajectories up to a fixed horizon. The lack of a fixed scale increases the variance of the algorithm, affecting the performance negatively.
Therefore, we propose a modified objective based on an analysis of Equation 17. The minimum of this term is achieved when \(r_{Q}(s,a)=r_{\text{max}}\) for all reachable \((s,a)\) under \(d_{\pi_{E}}\). Thus, this term pushes the reward on expert trajectories towards \(r_{\text{max}}\). At this minimum, each transition in the expert's trajectory has the following \(Q\)-value:
\[Q^{\pi_{E}}(s,a)=\sum_{t=0}^{\infty}\gamma^{t}r_{\text{max}}=\frac{r_{\text{ max}}}{1-\gamma}=Q_{\text{max}},\quad\text{with}\,s,a\sim d_{\pi_{E}}(s,a). \tag{18}\]
As minimizing this term on the expert distribution is equivalent to pushing the value of the expert's states and actions towards \(Q_{\text{max}}\), we propose to replace the bootstrapped target with the fixed target \(Q_{\text{max}}\), resulting in the following new objective:
\[\mathcal{L}_{\text{lsiq}}(Q)=\alpha\,\mathbb{E}_{d_{\pi_{E}}}\big[(Q(s,a)-Q_{\text{max}})^{2}\big]+(1-\alpha)\,\mathbb{E}_{d_{\pi}}\Big[\big(Q(s,a)-(r_{\text{min}}+\gamma\,\mathbb{E}_{s^{\prime}\sim P(\cdot|s,a)}[V^{*}(s^{\prime})])\big)^{2}\Big]. \tag{19}\]
Note that we skip the terminal state treatment for clarity. The full objective is shown in Appendix A.4. Also, we omit the entropy term as we incorporate the latter now in \(\mathcal{H}^{\pi}(s,a)\). This new objective incorporates a bias toward expert data. Therefore, it is not strictly equivalent to the original problem formulation. However, it updates the \(Q\)-function toward the same ideal target, while providing a simpler and more stable optimization landscape. Empirically, we experienced that this modification, while only justified intuitively, has a very positive impact on the algorithm's performance.
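A minimal sketch of the modified objective in Equation 19, with the fixed expert target \(Q_{\max}=r_{\max}/(1-\gamma)\); terminal-state handling is omitted here, as in the equation above, and all names are placeholders.

```python
import torch

def lsiq_loss(q_expert, q_policy, v_next_policy, r_min=-1.0, r_max=1.0,
              gamma=0.99, alpha=0.5):
    """Eq. 19: expert Q-values are regressed to the fixed target
    Q_max = r_max / (1 - gamma); policy Q-values to the bootstrapped
    target r_min + gamma * V*(s')."""
    q_max = r_max / (1.0 - gamma)
    expert_term = ((q_expert - q_max) ** 2).mean()
    policy_term = ((q_policy
                    - (r_min + gamma * v_next_policy.detach())) ** 2).mean()
    return alpha * expert_term + (1.0 - alpha) * policy_term
```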
Figure 1: Point mass toy task (top) with success rate plot (bottom). Here, we compare the standard IQ-Learn operator to the modified operator.
### Learning from Observations
In many real-world tasks, we do not have access to expert actions, but only to observations of the expert's behavior (Torabi et al., 2019). In this scenario, AIL methods, such as GAIfO (Torabi et al., 2019), can be easily adapted by learning a discriminator depending only on the current and the next state. Unfortunately, it is not straightforward to apply the same approach to implicit reward algorithms that learn a \(Q\)-function. The IQ-Learn method (Garg et al., 2021) relies on a simplification of the original objective to perform updates not using expert actions but rather actions sampled from the policy on expert states. However, this reformulation is not able to achieve good performance on standard benchmarks, as shown in our experimental results.
A common practice used in the literature is to train an IDM. This approach has been previously used in behavioral cloning (Torabi et al., 2018; Nair et al., 2017) and for reinforcement learning from demonstrations (Guo et al., 2019; Pavse et al., 2020; Radosavovic et al., 2021). Following the same idea, we generate an observation-only version of our method by training an IDM online on policy data and using it to predict the unobserved expert actions. We modify the objective in Equation 19 to
\[\mathcal{L}_{\text{lsiq-o}}(Q)=\alpha\,\mathbb{E}_{d_{\pi_{E}}}\Big[\big(Q(s,\Gamma_{\omega}(s,s^{\prime}))-Q_{\max}\big)^{2}\Big]+\bar{\alpha}\,\mathbb{E}_{d_{\pi}}\Big[\big(Q(s,a)-(r_{\text{min}}+\gamma\,\mathbb{E}_{s^{\prime}\sim P(\cdot|s,a)}[V^{*}(s^{\prime})])\big)^{2}\Big], \tag{20}\]
with the dynamics model \(\Gamma_{\omega}(s,s^{\prime})\), its parameters \(\omega\) and \(\bar{\alpha}=(1-\alpha)\). We omit the notation for absorbing states and refer to Appendix A.4 instead. Notice that the IDM is only used to evaluate the expert actions, and is trained by solving the following optimization problem
\[\min_{\omega}\mathcal{L}_{\Gamma}(\omega)=\min_{\omega}\mathbb{E}_{d_{\pi}, \mathcal{P}}\left[\|\Gamma_{\omega}(s,s^{\prime})-a\|_{2}^{2}\right], \tag{21}\]
where the expectation is performed on the state distribution generated by the learner policy \(\pi\). While the mismatch between the training distribution and the evaluation distribution could potentially cause problems, our empirical evaluation shows that on the benchmarks we achieve performance similar to the action-aware algorithm. We give more details on this approach in Appendix B.
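A minimal sketch of the IDM and its training step (Equation 21); the small-MLP architecture is our own illustrative choice, not a detail prescribed by the paper.

```python
import torch
import torch.nn as nn

class InverseDynamicsModel(nn.Module):
    """Gamma_omega(s, s') -> predicted action; trained on policy data
    (Eq. 21) and used to label expert state transitions."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim))

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))

def idm_update(idm, optimizer, s, a, s_next):
    # Squared L2 regression of predicted actions onto observed actions.
    loss = ((idm(s, s_next) - a) ** 2).sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```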
### Practical Algorithm
We now instantiate a practical version of our algorithm. An overview of our method is shown in Algorithm 1. In practice, we use parametric functions to approximate \(Q\), \(\pi\), \(\mathcal{G}\) and \(\Gamma\), and optimize them using gradient-based updates on surrogate objective functions that approximate the expectations under \(d_{\pi}\) and \(d_{\pi_{E}}\) using the datasets \(\mathcal{D}_{\pi}\) and \(\mathcal{D}_{\pi_{E}}\). Further, we use target networks, as already suggested by Garg et al. (2021). However, while the objective in Equation 4 lacked intuition about the usage of target networks, the objective in Equation 11 is equivalent to a reinforcement learning objective, in which target networks are a well-known tool for stabilization. Further, we exploit our access to the hard \(Q\)-function as well as our fixed reward target setting to calculate the minimum and maximum \(Q\)-values possible, \(Q_{\min}=\frac{r_{\text{min}}}{1-\gamma}\) and \(Q_{\max}=\frac{r_{\text{max}}}{1-\gamma}\), and clip the output of the target network to that range. Note that this also holds for the absorbing states. In doing so, we ensure that the target \(Q\) always remains in the desired range, which was often not the case with IQ-Learn. Target clipping prevents the explosion of the \(Q\)-values that can occur due to the use of neural approximators. This technique allows the algorithm to recover from poor value function estimates and prevents the \(Q\)-function from leaving the set of admissible functions. Finally, we found that training the policy on a small fixed expert dataset anneals the entropy bonus of expert trajectories, even if the policy never visits these states and actions. To address this problem, we clip the entropy bonus on expert states to a running average of the maximum entropy on policy states.
```
Initialize: Q_theta, pi_phi, G_zeta, and optionally Gamma_omega
for step t in {1, ..., N} do
    Sample mini-batches D_pi and D_pi_E
    (opt.) Predict actions for D_pi_E using Gamma_omega:
        D_pi_E = { (s, Gamma_omega(s, s'), s') for all (s, s') in D_pi_E }
    Update the Q-function using Eq. 19:
        theta_{t+1} <- theta_t - kappa_Q * grad_theta[ L(theta, D_pi, D_pi_E) ]
    (opt.) Update the G-function using Eq. 14:
        zeta_{t+1} <- zeta_t - kappa_G * grad_zeta[ delta_G^2(zeta, D_pi) ]
    Update the policy pi_phi by minimizing the KL:
        phi_{t+1} <- phi_t - kappa_pi * grad_phi[ D_KL(pi_phi || pi_Q) ]
    (opt.) Update Gamma_omega using Eq. 21:
        omega_{t+1} <- omega_t - kappa_Gamma * grad_omega[ L_Gamma(omega, D_pi) ]
end for
```
**Algorithm 1** LS-IQ
In continuous action spaces, \(Z_{s}\) is intractable, which is why we cannot directly extract the optimal policy using Equation 2. As done in previous work (Haarnoja et al., 2018; Garg et al., 2021), we use a parametric policy \(\pi_{\phi}\) to approximate \(\pi_{\tilde{Q}}\) by minimizing the KL divergence \(D_{\text{KL}}(\pi_{\phi}\parallel\pi_{\tilde{Q}})\). In our implementation, we found it unnecessary to use a double-critic update. This choice reduces the computational and memory requirements of the algorithm, making it comparable to SAC. Finally, we replace \(V^{*}(s)\) with \(V^{\pi}(s)\) in the policy expectation, as \(V^{*}(s)\) is intractable in continuous action spaces.
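A minimal sketch of the target clipping described above, keeping target \(Q\)-values inside the admissible range implied by the fixed rewards:

```python
import torch

def clipped_target(q_target_net_out, r_min=-1.0, r_max=1.0, gamma=0.99):
    """Clip target-network Q-values to [r_min/(1-gamma), r_max/(1-gamma)],
    the only admissible range under fixed rewards (incl. absorbing states)."""
    q_min = r_min / (1.0 - gamma)
    q_max = r_max / (1.0 - gamma)
    return torch.clamp(q_target_net_out, q_min, q_max)
```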
## 4 Experiments
We evaluate our method on six MuJoCo environments: Ant-v3, Walker2d-v3, Hopper-v3, HalfCheetah-v3, Humanoid-v3, and Atlas. The latter is a novel locomotion environment introduced by us and is further described in Appendix C.1. We select the following baselines: GAIL (Ho and Ermon, 2016), VAIL (Peng et al., 2019), IQ-Learn (Garg et al., 2021) and SQIL (Reddy et al., 2020). For a fair comparison, all methods are implemented in the same framework, MushroomRL (D'Eramo et al., 2021). We verify that our implementations achieve results comparable to the original implementations by the authors. We use the hyperparameters proposed by the original authors for the respective environments and perform a grid search on novel environments. The original implementation of IQ-Learn evaluates two different algorithm variants depending on the given environment. We refer to these variants as IQv0 (which uses telescoping (Garg et al., 2021) to evaluate the agent's expected return in Equation 4) and IQ (which directly uses Equation 4), and evaluate both variants on all environments. For our method, we use the same hyperparameters as IQ-Learn, except for the regularizer coefficient \(c\) and the entropy coefficient \(\beta\), which we tune on each environment. We only consider equal mixing, i.e., \(\alpha=0.5\).
In our first experiment, we perform ablations on the different design choices of LSIQ. We evaluate the following variants: LSIQ-HC uses a (combined) entropy critic and regularization critic, LSIQ-H only uses the entropy critic, and LSIQ does not use any additional critic, similar to IQ-Learn. We use ten seeds and five expert trajectories for these experiments. For the Atlas environment, we use 100 trajectories. We also consider IQ, IQv0, and SQIL as baselines and show the learning curves for four environments in Figure 2. The learning curves on the HalfCheetah environment can be found in Appendix C.6. It is interesting to note that IQ-Learn without telescoping does not perform well on Atlas, Walker, and Hopper, where absorbing states are more likely compared to Ant and HalfCheetah, which almost always terminate after a fixed amount of steps. We hypothesize that the worse performance on Walker and Hopper is caused by reward bias, as absorbing states are not sufficiently penalized. IQv0 would suffer less from this problem as it treats all states visited by the agent as initial states, which results in stronger reward penalties for these states. We conduct further ablation studies showing the influence of the proposed techniques, including an ablation study on the effect of fixed targets, clipping on the target \(Q\)-value, entropy clipping for the expert, as well as
Figure 2: Comparison of different versions of LS-IQ. Ordinate shows the normalized discounted cumulative reward. Abscissa shows the number of training steps (\(\times 10^{3}\)). The first row shows the results and an exemplary trajectory (here the trained LS-IQ agent) on a locomotion task using an Atlas robot. The second row shows 4 MuJoCo Gym tasks, for which the expert’s cumulative rewards are: Hopper: 3299.81, Walker2d: 5841.73, Ant: 6399.04, Humanoid: 6233.45.
the treatment of absorbing states in Appendix C. Our results show that the additional critics have little effect, while fixing the targets significantly increases the performance.
For our main experiments, we only evaluate LSIQ and LSIQ-H, which achieve the best performance in most environments. We compare our method to all baselines for four different numbers of expert demonstrations, \(1\), \(5\), \(10\), and \(25\), and always use five seeds. We perform each experiment with and without expert actions. When actions are not available, we use a state transition discriminator (Torabi et al., 2019) for GAIL and VAIL, and IDMs for LSIQ (c.f., Section 3.6). In contrast, IQ-Learn uses actions predicted on expert states by the current policy when no expert actions are available. In the learning-from-observation setting, we do not evaluate SQIL, and we omit the plots for IQ, which does not converge in any environment, and focus only on IQv0. Figure 3 shows the final expected return over different numbers of demonstrations for four of the environments. All learning curves, including the HalfCheetah environment, can be found in Appendix C.6 for the state-action setting and in Appendix C.5 for the learning-from-observation setting. Our experiments show that LSIQ achieves on-par or better performance compared to all baselines. In particular, in the learning-from-observation setting, LSIQ performs very well, achieving a similar return compared to the setting where states and actions are observed.
## 5 Conclusion
Inspired by the practical implementation of IQ-Learn, we derive a distribution matching algorithm using an implicit reward function and a squared L\({}_{2}\) penalty on the mixture distribution of the expert and the policy. We show that this regularizer minimizes a bounded \(\chi^{2}\)-divergence to the mixture distribution and results in modified updates for the \(Q\)-function and policy. Our analysis reveals an interesting connection to SQIL (which is not derived from an adversarial distribution matching objective) and shows that IQ-Learn suffers from reward bias. We build on our insights to propose a novel method, LS-IQ, which uses a modified inverse Bellman operator to address reward bias, together with target clipping, fixed reward targets for policy samples, and fixed \(Q\)-function targets for expert samples. We also show that the policy optimization of IQ-Learn is not consistent with regularization on the mixture distribution, and show how this can be addressed by learning an additional regularization critic. In our experiments, LS-IQ outperforms strong baseline methods, particularly when learning from observations, where we train an IDM to predict expert actions. In future work, we will quantify the bias introduced by the fixed \(Q\)-function target and investigate why this heuristic is fundamental for stabilizing learning. We will also analyze the error propagation in the \(Q\)-function target and derive theoretical guarantees on the \(Q\)-function approximation error.
Figure 3: Ablation study on the effect of the number of expert trajectories on different MuJoCo environments. Ordinate shows the normalized cumulative reward. Abscissa shows the number of expert trajectories. The first row shows the performance when considering states and actions, while the second row shows the performance when using states only. Expert cumulative rewards are identical to Figure 2.
#### Acknowledgments
Calculations for this research were conducted on the Lichtenberg high-performance computer of the TU Darmstadt. This work was supported by the German Science Foundation (DFG) under grant number SE1042/41-1. Research presented in this paper has been partially supported by the German Federal Ministry of Education and Research (BMBF) within the subproject "Modeling and exploration of the operational area, design of the AI assistance as well as legal aspects of the use of technology" of the collaborative KIARA project (grant no. 13N16274).
|
2307.11163 | Numerical study of multiparticle production in $λφ^4$ theory | We study multiparticle production in the unbroken $(3+1)$-dimensional
$\lambda\phi^4$ theory using the semiclassical method of singular solutions. We
show that the probabilities of these processes are exponentially suppressed in
terms of a small coupling constant $\lambda \ll 1$ if the multiplicity of the
final state is large: $n \gg 1$. At $ n \ll \lambda^{-1}$ the probabilities
agree with well-known perturbative results. At $ n \gg \lambda^{-1}$ they are
dominated by loop effects and decrease exponentially with $n$, as we show for
the first time. | S. Demidov, B. Farkhtdinov, D. Levkov | 2023-07-20T18:00:11Z | http://arxiv.org/abs/2307.11163v1 | # Nr-Th-2023-011
###### Abstract
We study multiparticle production in the unbroken \((3+1)\)-dimensional \(\lambda\phi^{4}\) theory using the semiclassical method of singular solutions. We show that the probabilities of these processes are exponentially suppressed in terms of a small coupling constant \(\lambda\ll 1\) if the multiplicity of the final state is large: \(n\gg 1\). At \(n\ll\lambda^{-1}\) the probabilities agree with well-known perturbative results. At \(n\gg\lambda^{-1}\) they are dominated by loop effects and decrease exponentially with \(n\), as we show for the first time.
\({}^{a}\)Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia
\({}^{b}\)Moscow Institute of Physics and Technology, Dolgoprudny 141700, Russia
\({}^{c}\)Institute for Theoretical and Mathematical Physics MSU, Moscow 119991, Russia
## Introduction
We consider multiparticle production in the weakly-coupled theory of a real scalar field \(\phi(t,\mathbf{x})\) with the potential
\[V(\phi)=m^{2}\phi^{2}/2+\lambda\phi^{4}/4\;, \tag{1}\]
where \(m^{2}>0\) is the particle mass and \(\lambda\ll 1\) is the coupling constant. In the multiparticle processes, \(n\gg 1\) particles are created in the collision of a few particles. The difficulty in description of these processes appears when \(n\) exceeds the inverse coupling constant of the theory \(\lambda^{-1}\).
Indeed, the amplitude of producing \(n\) particles at the mass threshold from one off-shell particle equals [1, 2]
\[{\cal A}_{n}={\cal A}_{n}^{\rm tree}\left[1+B\lambda n^{2}+O(\lambda^{2}n^{4} )\right]\,,\qquad{\cal A}_{n}^{\rm tree}=n!\left(\frac{\lambda}{8m^{2}}\right) ^{(n-1)/2}\,, \tag{2}\]
where \({\cal A}_{n}^{\rm tree}\) comes from tree diagrams, the term \(B\lambda n^{2}\) with known [2] numerical constant \(B\) represents one-loop correction and multiloop terms are hidden inside \(O(\lambda^{2}n^{4})\). At \(\lambda n\ll 1\), the tree contribution is the largest, and the leading correction comes from one-loop diagrams. But at \(n\gtrsim\lambda^{-1}\) all contributions are comparable and the series (2) break down [3, 4]. It was argued [5] that the leading parts of multiloop contributions at \(n\sim O\left(\lambda^{-1}\right)\) can be resummed into the exponent
\[{\cal A}_{n}={\cal A}_{n}^{\rm tree}\;{\rm e}^{F_{\cal A}/\lambda}\,,\qquad{ \rm where}\qquad F_{\cal A}=B(\lambda n)^{2}+O(\lambda^{3}n^{3})\,. \tag{3}\]
However, the function \(F_{\cal A}(\lambda n)\) is presently unknown apart from the leading term of its Taylor expansion at \(\lambda n\ll 1\) in Eq. (3).
One wonders whether the full multiparticle amplitude \({\cal A}_{n}\) may become of order one at some \(n\sim O\left(\lambda^{-1}\right)\), similarly to the tree-level result \({\cal A}_{n}^{\rm tree}\). If that is the case, few-particle scattering may lead to a spectacular multiparticle production at sufficiently high collision energies \(E\sim mn\).
The exponential form (3) of the amplitude suggests that multiparticle processes can be described semiclassically [6, 7, 8, 9]. The most promising development in this direction is D.T. Son's method of singular solutions [8] which is applicable at \(\lambda\ll 1\) and \(n\gg 1\). In this talk we describe its numerical implementation and demonstrate some preliminary results in the unbroken \(\lambda\phi^{4}\) theory, see Ref. [10] for the follow-up publication.
## Singular semiclassical solutions - numerically
Following Ref. [8], we consider inclusive probability of transition few \(\to n\) at energy \(E\):
\[{\cal P}_{n}(E)\equiv\sum_{f}|\langle f;E,n|\hat{\cal S}\,\hat{\cal O}|0 \rangle|^{2}\;, \tag{4}\]
where the operator \(\hat{\cal O}\) creates a few-particle initial state, the sum runs over all final states \(f\) with fixed energy \(E\) and multiplicity \(n\), and \(\hat{\cal S}\) is the scattering matrix. To be concrete, we consider the in-state operator of the form
\[\hat{\cal O}\to\hat{\cal O}_{J}=\exp\left(-\int d^{3}\mathbf{x}\;J( \mathbf{x})\hat{\phi}(0,\mathbf{x})\right),\;\;\;J(\mathbf{x})=\frac{j_{0}}{\sqrt{\lambda}}\mbox{e}^{-\mathbf{x}^{2} /(2\sigma^{2})}, \tag{5}\]
that includes a localized classical source \(J(\mathbf{x})\) of strength \(j_{0}\) and width \(\sigma\) acting at \(t=0\).
The derivation of the semiclassical method [8] consists of writing down a path integral for Eq. (4) and evaluating it in the saddle-point approximation. The result for the probability is
\[{\cal P}_{n}^{(J)}(E)\sim\mbox{e}^{F_{J}(\lambda n,\,\varepsilon)/\lambda}\,, \tag{6}\]
where \(\varepsilon=E/n-m\) is the mean kinetic energy of out-particles and we ignore the prefactor. The semiclassical exponent \(F_{J}\) is determined by the value of the classical action \(S\left[\phi_{\rm cl}\right]\) on the saddle-point configuration \(\phi_{\rm cl}(t,\mathbf{x})\). The latter is complex and satisfies the field equation
\[\Box\phi_{\rm cl}+m^{2}\phi_{\rm cl}+\phi_{\rm cl}^{3}=iJ(\mathbf{x} )\delta(t)\;. \tag{7}\]
It is convenient to obtain \(\phi_{\rm cl}\) on the complex time contour depicted in Fig. 1a. Boundary conditions for this solution are specified by the initial and final states of the process: vacuum state at \(t\to+i\infty\) and multiparticle state with fixed \(E\) and \(n\) at \(t\to+\infty\). In a nutshell, our semiclassical method consists of solving the above boundary value problem for \(\phi_{\rm cl}\) and calculating \(F_{J}\) using the solution.
Figure 1: (a) A contour in the complex time plane for Eq. (7). Arrows show the direction from past to future. (b) Deformation of the linear solution A into the nonlinear solution B, and then — into the singular solution C with \(j_{0}=\sigma=0\). Units with \(m=1\) are used.
It is worth noting that the operator \(\hat{\mathcal{O}}_{J}\) creates \(O(J^{2}/\lambda)\) particles from the vacuum, so that the few-particle initial state is achieved only at sufficiently small \(J\). As a consequence, the probability (4) of the process few \(\to n\) is restored in the limit
\[F(\lambda n,\varepsilon)=\lim_{J\to 0}F_{J}(\lambda n,\varepsilon)\qquad\text{ and}\qquad\mathcal{P}_{n}(E)\sim\text{e}^{F(\lambda n,\varepsilon)/\lambda}\,. \tag{8}\]
It was conjectured [5] that the value of \(F\) is insensitive to the details of the few-particle initial state \(\hat{\mathcal{O}}|0\rangle\) and hence independent of the particular form of \(J(\boldsymbol{x})\) in the limit (8). We send \(j_{0}\to 0\) while keeping \(j_{0}/\sigma=\text{const}\), which corresponds to an infinitesimally weak Gaussian source supported at \(\boldsymbol{x}=0\). One can show that the saddle-point configuration \(\phi_{\text{cl}}\) becomes singular in this limit.
We solve the boundary value problem for Eq. (7) numerically. To this end we adopt units with \(m=1\), substitute the spherically-symmetric Ansatz \(\phi_{\text{cl}}=\phi_{\text{cl}}(t,r)\), and introduce a rectangular spacetime lattice \(\{t_{j},r_{i}\}\) with \(t_{j}\) covering the complex contour in Fig. 1a. We use the Newton-Raphson method [11] to solve the lattice equations at every accessible value of \(\lambda n\), \(\varepsilon\), \(j_{0}\), and \(\sigma\). Once the exponent \(F_{J}\) is found, we extrapolate it to \(j_{0}=0\).
We select physically relevant saddle-point configurations using the following strategy. At small values of \(j_{0}\) and \(\lambda n\propto j_{0}^{2}\) the interaction term in Eq. (7) can be ignored -- hence the semiclassical solution \(\phi_{\text{cl}}\) describes creation of particles by the source \(J\) in the almost free theory. The respective solutions of the linearized equation (7) -- unique and definitely physical -- can be obtained analytically. Using one of them as the first approximation, we numerically solve the full nonlinear equation and arrive at the true saddle-point configurations, see the point A in Fig. 1b. After that we increase \(j_{0}\) and \(\lambda n\propto j_{0}^{2}\) in small steps building a continuous branch of numerical solutions, see the chain of circles AB in Fig. 1b. Once the solution B was found, we consider the limit (8). To this end, we decrease \(j_{0}\) at _fixed_\(\lambda n\), \(\varepsilon\), and \(j_{0}/\sigma\), see the circles between B and C in Fig. 1b. A polynomial extrapolation of \(F_{J}\) to \(j_{0}=0\) (solid line in Fig. 1b) gives the final result for the few-particle exponent \(F(\lambda n,\,\varepsilon)\) (point C).
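The final extrapolation step can be illustrated with a few lines of NumPy; the quadratic fit order and the sample values of \(F_{J}\) below are purely illustrative, not actual data from our computation.

```python
import numpy as np

# Hypothetical values of F_J at fixed (lambda*n, eps, j0/sigma) and
# decreasing source strength j0 (cf. the chain of circles B -> C).
j0 = np.array([0.20, 0.15, 0.10, 0.07, 0.05])
F_J = np.array([-1.52, -1.48, -1.45, -1.435, -1.428])

coeffs = np.polyfit(j0, F_J, deg=2)       # polynomial fit in j0
F_few_particle = np.polyval(coeffs, 0.0)  # extrapolated exponent at j0 = 0
print(F_few_particle)
```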
## Results
Our numerical data for the suppression exponent \(F(\lambda n,\,\varepsilon)\) are shown by empty circles, squares, and triangles in Fig. 2a. In the previous publication [12] we demonstrated that they are close to the well-known tree-level result at \(\lambda n\ll 1\):
\[F(\varepsilon,\lambda n)=\lambda n\ln\left(\frac{\lambda n}{16}\right)+\left[ f(\varepsilon)-1\right]\;\lambda n+O(\lambda n)^{2}\qquad\text{at}\qquad \lambda n\ll 1\,, \tag{9}\]
where the function \(f(\varepsilon)\) is known numerically, see Refs. [13, 14]. Notably, we find that in the opposite limit \(\lambda n\gg 1\) the graphs in Fig. 2a are almost linear:
\[F(\varepsilon,\lambda n)=f_{\infty}(\varepsilon)\lambda n+g_{\infty}( \varepsilon)\qquad\text{at}\qquad\lambda n\gg 1\;. \tag{10}\]
In practice, it is convenient to describe all numerical data with the function
\[F(\varepsilon,\lambda n)=\lambda nf_{\infty}(\varepsilon)-\frac{\lambda n}{2} \ln\left[\left(\frac{16}{\lambda n}\right)^{2}\text{e}^{2-2f(\varepsilon)+2f_ {\infty}(\varepsilon)}-\frac{2g_{\infty}(\varepsilon)}{\lambda n}+1\right] \tag{11}\]
that interpolates between the asymptotic regimes (9) and (10). Fitting numerical results with Eq. (11), we obtain solid lines in Fig. 2a and values of \(f_{\infty}(\varepsilon)\) and \(g_{\infty}(\varepsilon)\). The slope \(f_{\infty}(\varepsilon)\) of the semiclassical exponent is plotted in Fig. 2b (circles). Notably, it is negative and can be approximated by the function
\[f_{\infty}(\varepsilon)=-\frac{3}{4}\ln\left[\left(\frac{d_{1}m}{\varepsilon }\right)^{2}+d_{2}\right],\quad\text{where}\quad d_{1}\approx 11.5\,,\;d_{2} \approx 40.2\,, \tag{12}\]
that has intuitively correct behaviors at \(\varepsilon\to 0\) and \(\varepsilon\to+\infty\); see the solid line in Fig. 2b.
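For convenience, a minimal sketch evaluating the fits (11) and (12); the functions \(f(\varepsilon)\) and \(g_{\infty}(\varepsilon)\) are known only numerically, so they enter as user-supplied inputs.

```python
import numpy as np

def f_inf(eps, d1=11.5, d2=40.2):
    """Large lambda*n slope, Eq. (12); eps in units of m."""
    return -0.75 * np.log((d1 / eps) ** 2 + d2)

def F(lam_n, eps, f_eps, g_inf):
    """Interpolating formula, Eq. (11). f_eps = f(eps) and g_inf =
    g_inf(eps) must be supplied from the numerical data."""
    fi = f_inf(eps)
    arg = (16.0 / lam_n) ** 2 * np.exp(2 - 2 * f_eps + 2 * fi) \
        - 2 * g_inf / lam_n + 1.0
    return lam_n * fi - 0.5 * lam_n * np.log(arg)
```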
In a nutshell, our results imply that multiparticle production is exponentially suppressed in the unbroken \(\lambda\phi^{4}\) theory at any \(\lambda n\) and \(\varepsilon\).
## Conclusion
We numerically implemented D.T. Son's method of singular solutions in the unbroken \(\lambda\phi^{4}\) theory. Using this method, we computed the probability of multiparticle production
Figure 2: (a) Suppression exponent \(F\) as a function of \(\lambda n\) at fixed \(\varepsilon/m=0.5;\ 1;\ 3;\ 5\). The limit \(j_{0}\to 0\) has already been evaluated. (b) The slope \(f_{\infty}(\varepsilon)\) of the exponent at large \(\lambda n\) as a function of \(\varepsilon/m\).
few \(\to n\). We explicitly demonstrated that the process remains exponentially suppressed at any energy \(E\) and multiplicity \(n\) if the latter is large: \(n\gg 1\). Moreover, at \(\lambda n\gg 1\) the probability decreases exponentially with \(n\). We provided compact fitting expressions for all of our numerical results.
Numerical calculations were performed on the Computational cluster of the Theoretical Division of INR RAS.
|
2306.06736 | Efficient Skip Connections Realization for Secure Inference on Encrypted
Data | Homomorphic Encryption (HE) is a cryptographic tool that allows performing
computation under encryption, which is used by many privacy-preserving machine
learning solutions, for example, to perform secure classification. Modern deep
learning applications yield good performance for example in image processing
tasks benchmarks by including many skip connections. The latter appears to be
very costly when attempting to execute model inference under HE. In this paper,
we show that by replacing (mid-term) skip connections with (short-term) Dirac
parameterization and (long-term) shared-source skip connection we were able to
reduce the skip connections burden for HE-based solutions, achieving x1.3
computing power improvement for the same accuracy. | Nir Drucker, Itamar Zimerman | 2023-06-11T18:06:06Z | http://arxiv.org/abs/2306.06736v1 | # Efficient Skip Connections Realization for Secure Inference on Encrypted Data
###### Abstract
Homomorphic Encryption (HE) is a cryptographic tool that allows performing computation under encryption, which is used by many privacy-preserving machine learning solutions, for example, to perform secure classification. Modern deep learning applications yield good performance, for example in image processing benchmarks, by including many skip connections. The latter appear to be very costly when attempting to execute model inference under HE. In this paper, we show that by replacing (mid-term) skip connections with (short-term) Dirac parameterization and (long-term) shared-source skip connections we were able to reduce the skip connection burden for HE-based solutions, achieving a \(\times 1.3\) computing power improvement for the same accuracy.
Keywords: shared-source skip connections, Dirac networks, Dirac parameterization, homomorphic encryption, privacy preserving machine learning, PPML, encrypted neural networks, deep neural networks
## 1 Introduction
The use of Homomorphic Encryption (HE) to construct Privacy-Preserving Machine Learning (PPML) solutions, e.g., secure Deep Neural Network (DNN) inference on the cloud, is becoming more and more realistic. For example, Gartner [11] predicted that in 2025, 50% of large enterprises will adopt HE-based solutions. In addition, we see many companies and academic institutes collaborating in global activities such as HEBench [22] and the HE standardization efforts [2]. The main reason is, of course, that HE allows finance and health organizations to comply with regulations such as GDPR [10] and HIPAA [5] when uploading sensitive data to the cloud.
One principal scenario of HE-based PPML solutions involves two entities: a user and a semi-honest cloud server that performs Machine Learning (ML) computation on HE-encrypted data. Specifically, the cloud offers an ML as a Service (MLaaS) solution, where it first trains a model in the clear, e.g., a DNN, and then uses it for inference operations on the clients' data. On the other side, the client first generates its own HE keys, stores the secret key, and uploads the public and evaluation keys to the cloud. Subsequently, upon demand, it encrypts secret samples and submits them to the cloud that uses the client's public and
evaluation keys to perform the model inference operation. The final encrypted results are sent back to the client who decrypts them using its private key.
The clients' data is kept confidential from the server during the entire protocol due to HE, while the cloud model is never sent to the client, which allows the cloud to monetize its MLaaS service. In this paper, we focus on this scenario but stress that our study can be used almost without changes in many other threat models.
One downside of HE-based solutions is their latency. While there are many software and hardware improvements that make HE-based solutions practical, such as [1, 15], there is still a gap between computing on encrypted data and computing on cleartexts, and our goal is to reduce that gap. Our starting point is a recent study [4] that pointed out skip connections as a major contributor to the overall latency of secure inference solutions that use DNNs. The authors of [4] suggested removing the skip connections at the cost of some accuracy degradation or replacing them using several heuristics. We continue this line of work by using modern techniques that allow training DNNs while maintaining good accuracy. Specifically, we replace mid-term skip connections in DNNs with short-term (Dirac parameterization) [24] and long-term (shared-source skip connections) [21] alternatives.
**Our contribution.** We used ResNet50, a state-of-the-art network in terms of size that can run efficiently under HE [4, 15], as our baseline. We modified it to be HE-friendly, a term that we explain later, and applied the above techniques. Our experiments show that using this approach we were able to reduce the number of HE bootstrap operations by \(\times 1.36-1.75\) and thus the overall CPU time by \(\times 1.3\).
**Organization.** The document is organized as follows. In section 2 we provide some background about HE. We describe skip connections and their variants in Section 3. Our experiments and results are presented in Section 4 and we conclude in Section 5.
## 2 Homomorphic Encryption (HE)
HE is an encryption scheme that encrypts input plaintext from a ring \(\mathcal{R}_{1}(+,*)\) into ciphertexts in another ring \(\mathcal{R}_{2}(\oplus,\odot)\), i.e., it contains the encryption function \(\mathrm{Enc}:\mathcal{R}_{1}\rightarrow\mathcal{R}_{2}\) and decryption function \(\mathrm{Dec}:\mathcal{R}_{2}\rightarrow\mathcal{R}_{1}\), where a scheme is correct if \(\mathrm{Dec}(\mathrm{Enc}(x))=x\). In addition, HE schemes include homomorphic addition and multiplication operations such that \(\mathrm{Dec}(\mathrm{Enc}(x)\oplus\mathrm{Enc}(y))=x+y\) and \(\mathrm{Dec}(\mathrm{Enc}(x)\odot\mathrm{Enc}(y))=x*y\) see survey in [13]. In our experiment, we use CKKS [6, 7] an approximately correct scheme, i.e., for some small \(\epsilon>0\) that is determined by the key, it follows that \(|x-\mathrm{Dec}(\mathrm{Enc}(x))|\leq\epsilon\) and the same modification applies to the other equations.
**Chain Index and Bootstrapping.** HE ciphertexts and particularly CKKS ciphertexts have a limit on the number of multiplications they can be involved with before a costly bootstrap operation is required. To this end, every ciphertext
includes an extra metadata parameter called the "multiplication chain index" (a.k.a. modulus chain index) or CIdx. Ciphertexts start with a CIdx of 0 and after every multiplication of two ciphertexts with CIdx of \(x\) and \(y\), the result has a CIdx of \(\max(x,y)+1\), where at least a ReScale operation is required. This process continues until the ciphertext reaches the predefined limit, which was originally set by the client to achieve the desired level of security and performance. To enable further computation on a ciphertext, a Bootstrap operation is performed to reduce its CIdx, or even reset it back to 0. In general, many HE-based applications attempt to minimize the number of Bootstrap invocations and this is also our goal in this paper.
There are two options for adding or multiplying two ciphertexts \(c_{1}\), \(c_{2}\) with \(\text{CIdx}=x,y\), respectively, where w.l.o.g. \(x>y\): a) adjust \(c_{1}\) to have a CIdx of \(y\) by invoking ReScale(Bootstrap\((c_{1}),y)\); or b) invoke ReScale\((c_{2},x)\). The first option is costlier because it invokes both ReScale and Bootstrap up front, while the second leaves the bootstrap handling to future operations. However, the first approach is preferred when \(c_{1}\) is expected to be added to multiple ciphertexts with lower chain indices: in that case, we perform only one Bootstrap operation on \(c_{1}\) instead of many on the other operations' results. An automatic bootstrapping placement mechanism is expected to take the above into account.
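The following toy bookkeeping model (our own illustration; `Ct`, `mul`, and `add` are hypothetical names, not a real HE library) captures these rules: multiplication raises the CIdx, hitting the limit forces a Bootstrap, and addition ReScales the lower-CIdx operand:

```python
# Toy model of the chain-index rules above (our illustration, not a real HE
# library). `limit` is the maximal multiplication depth fixed by the client.
class Ct:
    def __init__(self, cidx=0):
        self.cidx = cidx

def mul(a, b, limit):
    c = Ct(max(a.cidx, b.cidx) + 1)   # multiplication consumes one level
    if c.cidx >= limit:               # limit reached: Bootstrap to reset
        c.cidx = 0
    return c

def add(a, b):
    lo, hi = sorted((a, b), key=lambda t: t.cidx)
    lo.cidx = hi.cidx                 # option (b): ReScale the lower operand;
    return Ct(hi.cidx)                # option (a) would Bootstrap `hi` instead

x = mul(Ct(0), Ct(0), limit=12)       # result has CIdx 1
y = add(x, Ct(0))                     # fresh ciphertext rescaled up to CIdx 1
```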
**HE Packing.** Some HE schemes, such as CKKS [6], operate on ciphertexts in a homomorphic Single Instruction Multiple Data (SIMD) fashion. This means that a single ciphertext encrypts a fixed-size vector, and the homomorphic operations on the ciphertext are performed slot-wise on the elements of the plaintext vector. To utilize the SIMD feature, we need to pack and encrypt more than one input element in every ciphertext. The packing method can significantly impact bandwidth, latency, and memory requirements. In this paper we rely on IBM HELayers, which provides efficient packing capabilities for DNNs through the use of a data structure called tile tensors [1]. We stress that adding or multiplying two ciphertexts that represent different tile tensor shapes is problematic, and an extra transformation is needed. While HELayers handles this automatically, one of our goals is also to avoid these transformations.
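As a rough intuition for the SIMD behavior (numpy standing in for the encrypted backend; tile tensors add a further packing layer on top of this):

```python
# Slot-wise SIMD intuition only: one "ciphertext" holds a fixed-size vector,
# and + / * act per slot (numpy stands in for the encrypted backend).
import numpy as np

slots = 8
a = np.arange(slots, dtype=float)   # packed plaintext vector of one ciphertext
b = 2.0 * np.ones(slots)
print(a + b)                        # homomorphic addition acts slot-wise
print(a * b)                        # homomorphic multiplication acts slot-wise
```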
## 3 Skip connections
Skip connections, a.k.a. residual connections [14], are crucial components of modern network architectures. Given several layers \(f(x)\), applying a skip connection to them means wrapping \(f(x)\) with a function \(S_{f}(x)=f(x)+x\). For real-world applications, networks without skip connections are hard to train, especially very deep ones. Skip connections mitigate optimization issues such as (i) vanishing gradients, (ii) exploding gradients [19], and (iii) shattered gradients [3]. In practice, modern architectures heavily rely on skip connections, e.g., Transformers [23], ViT [9], LLMs, CNNs [16], WaveNet [17], GPT [20], and Residual Networks (ResNet), which has become one of the most cited DNNs of the 21st century. When considering cleartext networks, skip connections require only a simple addition, and thus provide an efficient mechanism that enables easier optimization of DNNs.
Moreover, they also play a fundamental role in modern Deep Learning (DL) solutions. While skip-less networks exist, and new variants appear from time to time, e.g., [18; 25], they are rarely used in real-world applications, as they tend to perform poorly in complex scenarios and on noisy data.
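For reference, a minimal PyTorch-style sketch of the wrapper \(S_{f}(x)=f(x)+x\) (our illustration, not the paper's code; any module \(f\) with matching input/output shapes works):

```python
# Minimal sketch of S_f(x) = f(x) + x (illustrative only).
import torch.nn as nn

class Skip(nn.Module):
    def __init__(self, f: nn.Module):
        super().__init__()
        self.f = f

    def forward(self, x):
        # On cleartexts this '+' is trivial; under HE it may force a
        # ReScale or Bootstrap when the two operands' CIdx differ.
        return self.f(x) + x
```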
### Handling skip connections in HE
Observation 3.1 of [4] explains the relation between skip connections and bootstrapping operations.
**Observation 1** (observation 3.1 [4]): _Given a skip connection layer \(S_{f}(x)=x+f(x)\), where \(f\) is a combination of some layers. When running under HE,_
1. \(\mathrm{CIdx}\left(S_{f}(x)\right)\in\{\mathrm{CIdx}(x),\mathrm{CIdx}(f(x))\}\)_._
2. _When_ \(\mathrm{CIdx}(x)\neq\mathrm{CIdx}(f(x))\) _the skip connection implementation invokes either a_ \(\mathrm{ReScale}\) _or a_ Bootstrap _operation and may increase the overall multiplication depth of the network by_ \(|\mathrm{CIdx}(x)-\mathrm{CIdx}(f(x))|\)_._
In addition, the authors of [4] explained that the cost of \(S_{f}(x)\) can be even higher because the input \(x\) or \(f(x)\) may need to go through some transformations before adding them together, which is the case with the HELayers SDK. Given the latency costs associated with implementing skip connections under HE, [4] proposed either removing skip connections entirely, by first training a network and then gradually removing the connections at the cost of some accuracy degradation, or rerouting these connections using several heuristics, which offers a tradeoff between latency and accuracy. Here, we suggest another heuristic that brings knowledge from the AI domain into the HE domain. Specifically, we replace the DNN's skip connections with Dirac parameterization [24] and shared-source skip connections [21]. Informally speaking, shared-source skip connections connect the output of the initial layer, or the input, with the outputs of different locations in the network. The reason this reduces the number of bootstraps is that after the initial layer, the chain index is still very low. In addition, we can aim to add these connections only to layer outputs that share the same tile tensor shape, thereby saving reshape operations. Dirac parameterization works as follows: let \(\sigma(x)\) be a function that combines the non-linearity and batch normalization; a standard convolutional layer with a skip connection in ResNet then has the form \(y=x+\sigma(W\odot x)\), whereas a Dirac-parameterized layer has the form \(y=\sigma((\mathrm{diag}(a)I+W)\odot x)=\sigma(\mathrm{diag}(a)x+W\odot x)\), where \(I\) is the identity (Dirac) kernel. This addition helps training and does not affect the latency of the secure inference.
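The following sketch shows our reading of a Dirac-parameterized convolution (the class name `DiracConv` and the initialization details are ours): the identity kernel is folded into the convolution weight, so the residual term costs no extra HE addition:

```python
# Sketch of Dirac parameterization (our illustration): the identity kernel is
# merged into the conv weight, so y = sigma(diag(a) x + W (*) x) is a single
# convolution -- no separate skip-connection addition under HE.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiracConv(nn.Module):
    def __init__(self, channels, k=3):
        super().__init__()
        self.pad = k // 2
        self.weight = nn.Parameter(0.01 * torch.randn(channels, channels, k, k))
        self.a = nn.Parameter(torch.ones(channels))            # diag(a)
        eye = torch.zeros(channels, channels, k, k)
        idx = torch.arange(channels)
        eye[idx, idx, k // 2, k // 2] = 1.0                    # Dirac kernel I
        self.register_buffer("eye", eye)

    def forward(self, x):
        w = self.a.view(-1, 1, 1, 1) * self.eye + self.weight  # diag(a)I + W
        return F.conv2d(x, w, padding=self.pad)                # caller applies sigma
```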
Figure 1 shows a standard ResNet network with skip connections in their original places. In contrast, Figure 2 shows our modification. Because HE only supports polynomial operations, we first replace non-polynomial layers with polynomial layers (red-font layers) to achieve an HE-friendly network. Specifically, we replace MaxPool with AVGPooling and ReLU activations with polynomial activations, similar to [12]. Subsequently, we remove all mid-term skip connections and add long-term shared-source connections (green arrows) from the output of the first convolution layer to each of the four layers' outputs. To ensure that the dimensions match, we add \(1\times 1\) convolutional and average-pooling layers to these connections. Note that these layers can be performed on the server side, but also by the client if we consider a split network where the first layer is performed on the client side; this offers a tradeoff between latency and bandwidth. Finally, we add short-term Dirac parameterization to the first two convolution layers of every block, where the stride is 1 (orange blocks).
## 4 Experiments
In our experiments, we used a single NVIDIA A100-SXM4-40GB GPU with 40GB of memory for training and, for secure inference, an Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz machine with 44 cores (88 threads) and 750GB memory. In addition, for inference we used HELayers [1] version 1.5.2, where we set the underlying HE library to HEAaN [8] targeting 128-bit security. Specifically, we used the HELayers simulator, which considers the underlying platform capabilities and provides us with the CPU-time of every run, i.e., the compute resources needed for the run. Note that this measurement accumulates the run time of all used CPUs.

Figure 1: An illustration of ResNet50; every layer contains several blocks and there is a skip connection between every block.

Figure 2: An illustration of our modified HE-friendly ResNet50. Every layer contains several blocks and there is a shared-source skip connection from the first layer's output to the output of the four other layers. Red layers were modified to make the network HE-friendly as in [12].
Table 1 summarizes the test accuracy results of four HE-friendly ResNet50 variants: a) a reference HE-friendly ResNet50 network; b) our modified network with shared-source skip connections and Dirac parameterization; c) the reference network without skip connections but with Dirac parameterization; d) the reference network without skip connections. All networks used activation polynomials of degree 8 and were trained on CIFAR-10. For training, we used PyTorch as our library, AdamW as the optimizer (with all default hyperparameters and a learning rate of \(1e{-}3\)), a batch size of 50, and the standard cross-entropy loss in all our experiments. For simplicity, we did not use dropout or learning-rate scheduling. We trained all the networks for 120 epochs. We see that our proposed design provides an interesting tradeoff between CPU-runtime and accuracy: it has almost the same accuracy as the reference implementation but almost the latency of a network without skip connections, with an overall CPU-time improvement of around \(\times 1.3\). Furthermore, in vanilla networks, the latency and the number of bootstrap operations are proportional to the number of skip connections. In contrast, in our architecture, these properties are independent of the number of skip connections, which is crucial for larger networks.
Figure 3 compares the training status of the three HE-friendly ResNet50 variants by reporting the test accuracy (y-axis) per training epoch (x-axis). The reference HE-friendly network is represented by the blue line, our modified network by the red line, and a network with the skip connections completely removed by the green line.
Table 2 extends Table 1 and shows the accumulated CPU-time improvement when using our modified HE-friendly ResNet50, as reported by the HELayers simulator [1], with activation functions of different polynomial degrees.

Table 1: Comparison of accumulated CPU-time and accuracy for different HE-friendly ResNet50 DNNs over CIFAR-10.

| Network architecture | CPU-time (h) | # bootstraps | Test accuracy (non-HE-friendly) | Test accuracy (HE-friendly) |
| --- | --- | --- | --- | --- |
| Reference | 18.06 | 2,568 | 91.67 | 91.46 |
| W/o skip connections | 12.9 | 1,888 | 88.74 | 88.68 |
| W/ Dirac params | 12.9 | 1,888 | 90.87 | 90.74 |
| **Our variant** | 13.4 | 1,888 | 91.25 | 91.08 |

We observe that, compared to the reference network, we got an improvement of \(\times 1.18\), \(\times 1.3\), and \(\times 1.34\) in the consumed compute resources when using polynomial activations of degrees 2, 4, and 8, respectively. On the other hand, our network consumed slightly more compute resources compared to a network completely without skip connections, specifically, \(\times 0.88\), \(\times 0. |
2305.10404 | $\mathbb{F}_q\mathcal{R}$-skew cyclic codes and their application to
quantum codes | Let $p$ be a prime and $\mathbb{F}_q$ be the finite field of order $q=p^m$.
In this paper, we study $\mathbb{F}_q\mathcal{R}$-skew cyclic codes where
$\mathcal{R}=\mathbb{F}_q+u\mathbb{F}_q$ with $u^2=u$. To characterize
$\mathbb{F}_q\mathcal{R}$-skew cyclic codes, we first establish their algebraic
structure and then discuss the dual-containing properties by considering a
non-degenerate inner product. Further, we define a Gray map over
$\mathbb{F}_q\mathcal{R}$ and obtain their $\mathbb{F}_q$-Gray images. As an
application, we apply the CSS (Calderbank-Shor-Steane) construction on Gray
images of dual containing $\mathbb{F}_q\mathcal{R}$-skew cyclic codes and
obtain many quantum codes with better parameters than the best-known codes
available in the literature. | Om Prakash, Shikha Patel, Habibul Islam | 2023-05-17T17:46:57Z | http://arxiv.org/abs/2305.10404v1 | # \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes and their application to quantum codes
###### Abstract
Let \(p\) be a prime and \(\mathbb{F}_{q}\) be the finite field of order \(q=p^{m}\). In this paper, we study \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes where \(\mathcal{R}=\mathbb{F}_{q}+u\mathbb{F}_{q}\) with \(u^{2}=u\). To characterize \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes, we first establish their algebraic structure and then discuss the dual-containing properties by considering a non-degenerate inner product. Further, we define a Gray map over \(\mathbb{F}_{q}\mathcal{R}\) and obtain their \(\mathbb{F}_{q}\)-Gray images. As an application, we apply the CSS (Calderbank-Shor-Steane) construction on Gray images of dual containing \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes and obtain many quantum codes with better parameters than the best-known codes available in the literature.
Keywords: Cyclic codes; Skew cyclic codes; Self-orthogonal codes; Gray map; CSS construction; Quantum codes. MSC: 11T71; 11T06; 94B05; 94B15.
## 1 Introduction
The family of skew cyclic codes is an emerging and interesting class of linear codes introduced by Boucher et al. [14] in 2007. In 2009, they obtained some new linear codes with record-breaking parameters from skew cyclic codes [15]. Notably, these works considered the length \(n\) of the code to be an integral multiple of the order of the underlying automorphism. In 2011, Siap et al. [42] removed this restriction on \(n\) and investigated skew cyclic codes of any length over a finite field. In addition to quasi-cyclic (QC) and quasi-twisted (QT) codes over commutative rings [2; 34], skew linear codes such as skew cyclic, skew QC, and skew QT codes produce some excellent parameters [14; 15], and over the last two decades skew cyclic codes have yielded several new parameters. Therefore, exploring skew cyclic codes remains an active research area. Currently, many researchers are working on these codes over different alphabets, particularly to produce new codes over finite rings; we refer to [22; 35; 42]. Also, the number of skew cyclic codes grows with the number of factorizations of \(x^{n}-1\). Since the skew polynomial ring is not a unique factorization domain, it generally admits more factorizations of a given polynomial than a commutative ring does, which is one more reason to study such noncommutative skew codes.
On the other hand, many researchers have also worked on codes over mixed alphabets for the last two decades. Linear codes over mixed alphabets were introduced by Brouwer et al. [16] in 1998. In 2010, Borges et al. [11] studied \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear codes and discussed their generator matrices and parity-check matrices. In 2014, Abualrub et al. [1] investigated \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-cyclic codes and proved that the dual of a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-cyclic code is again a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-cyclic code. Further, in 2015 Aydogdu et al. [7] generalized these additive codes to \(\mathbb{Z}_{p^{r}}\mathbb{Z}_{p^{s}}\). Again, in 2018 Aydogdu et al. [6] generalized \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-cyclic codes to \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-cyclic codes. For more related works on mixed alphabets, we refer to [4, 5, 8, 12, 28, 37, 45]. Recently, in 2020, Benbelkacem et al. [10] studied mixed alphabet skew cyclic codes. Here, we discuss structural properties of skew cyclic codes of length \((\alpha,\beta)\) over the mixed alphabet \(\mathbb{F}_{q}\mathcal{R}\) where \(\mathcal{R}=\mathbb{F}_{q}+u\mathbb{F}_{q}\) with \(u^{2}=u\). In particular, if \(\alpha=0\), we recover the skew cyclic codes over \(\mathcal{R}\) discussed by Gursoy et al. [22], and if \(\beta=0\), we recover skew cyclic codes over \(\mathbb{F}_{q}\). Therefore, we generalize both codes over \(\mathbb{F}_{q}\) and codes over \(\mathcal{R}\), which is one of the motivations for this study.
On the other hand, quantum information theory is a fascinating area of research. It brings together concepts from computer science, classical information theory, and quantum mechanics. Results and techniques from various branches of mathematics, mathematical physics, and quantum statistical physics find applications in this fast-growing field. Classical information theory deals with information-processing tasks like the storage and transmission of information, whereas quantum information theory studies how these tasks can be accomplished using quantum mechanical systems. In classical information theory, error correction operates on bits; in quantum information theory, it operates on quantum states (qubits). Both are concerned with one of the fundamental problems of communication, namely the reliable transmission of information over a noisy channel. The implementation of quantum error-correcting codes is widely acknowledged as necessary for the development of large-scale, viable quantum computers and the use of quantum communication. Quantum error-correcting codes (QECCs) are used to protect quantum information from errors caused by decoherence and other types of quantum noise. A classical computer represents each bit with a large number of electrons, so when one goes wrong, the effect is not severe. In a quantum computer, however, a single qubit will probably be encoded in just one or a small number of particles, which already creates a need for some sort of error correction. Therefore, to build reliable quantum computers, quantum error correction will necessarily be needed, while classical computers can largely operate without it [21]. The construction of quantum error-correcting codes from classical error-correcting codes was independently invented by Shor [41] and Steane [43]. In 1998, Calderbank et al. [17] proposed a systematic method to obtain quantum codes from classical codes, known as the CSS (Calderbank-Shor-Steane) construction. Under this construction, several investigations have been carried out; for instance, quantum codes from cyclic codes over \(\mathbb{F}_{4}+u\mathbb{F}_{4},u^{2}=0\), are found in [32]. In the literature [18, 19, 25, 30, 38, 39], binary and non-binary quantum codes from cyclic codes over finite non-chain rings are well studied. For a detailed study of quantum codes from linear codes, we refer to [3, 20, 23, 26, 27, 29, 40]. The number of (dual-containing) skew codes depends on the factorization of \(x^{n}-1\); hence the major reason for investigating skew cyclic codes in the construction of quantum codes lies in this factorization, which often offers more choices for \(h(x)\) in a skew polynomial ring. We note that quantum codes from skew codes have appeared in very few papers [9, 31, 36, 44].
Further, we notice that in the case of mixed alphabets, researchers mainly focused on exploring the structural properties such as generator matrices, parity check matrices, generating polynomials, minimal generating sets, generating polynomials for dual codes, etc. Presently, only a few works are available in the literature on the applications of mixed alphabet codes [27, 33]. Recently, Li et al. [33] studied \(\mathbb{F}_{q}R\)-linear skew constacyclic codes and constructed quantum codes by the Hermitian construction. In continuation with that
framework, here we obtain quantum codes from noncommutative mixed alphabet codes. Two significant contributions of the paper are given below.
1. We discuss the algebraic structure of \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes and their dual codes. Then a necessary and sufficient condition of these codes to contain their dual is provided.
2. As an application, we obtain quantum codes with better parameters (compared to the recent literature) from dual containing codes.
The paper is structured as follows: In Section 2, we present some basic results and definitions followed by the algebraic structure of the ring \(\mathbb{F}_{q}\mathcal{R}\). Section 3 presents the algebraic properties of \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes. We have shown that the dual of an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code is again an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code. Section 4 includes the Gray map through which we discuss images of \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes. Moreover, we have provided some examples of \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes and their Gray images over \(\mathbb{F}_{q}\). Section 5 gives a method to construct quantum codes. Section 6 concludes the paper.
## 2 Preliminaries
Let \(\mathbb{F}_{q}\) be a finite field and \(q=p^{m}\) for some prime \(p\) and a positive integer \(m\). A \(k\)-dimensional subspace \(\mathcal{C}\) of \(\mathbb{F}_{q}^{n}\) is said to be a linear code of length \(n\) over \(\mathbb{F}_{q}\) and every element \(c=(c_{0},c_{1},\ldots,c_{n-1})\in\mathcal{C}\) is called a codeword. Let \(c\in\mathbb{F}_{q}^{n}\). The Hamming weight of \(c\), denoted by \(w_{H}(c)\), is defined as the number of nonzero coordinates in \(c\). For any two vectors \(c\) and \(c^{\prime}\) in \(\mathbb{F}_{q}^{n}\), the Hamming distance between \(c\) and \(c^{\prime}\), denoted by \(d_{H}(c,c^{\prime})\), is defined as the number of places in which \(c\) and \(c^{\prime}\) differ. For a linear code \(\mathcal{C}\), the Hamming distance is given by
\[d_{H}(\mathcal{C})=\min\{d_{H}(c,c^{\prime})\ |\ c\neq c^{\prime},\text{for all }c,c^{\prime}\in\mathcal{C}\}.\]
Let \(c=(c_{0},c_{1},\ldots,c_{n-1})\) and \(c^{\prime}=(c^{\prime}_{0},c^{\prime}_{1},\ldots,c^{\prime}_{n-1})\) be two vectors. The Euclidean inner product of \(c\) and \(c^{\prime}\) in \(\mathbb{F}_{q}^{n}\) is defined by \(c\cdot c^{\prime}=\sum_{i=0}^{n-1}c_{i}c^{\prime}_{i}\). The dual code of \(\mathcal{C}\) is defined by \(\mathcal{C}^{\perp}=\{c\in\mathbb{F}_{q}^{n}\ |\ c\cdot c^{\prime}=0,\text{ for all }c^{\prime}\in \mathcal{C}\}\). A linear code \(\mathcal{C}\) is self-orthogonal if \(\mathcal{C}\subseteq\mathcal{C}^{\perp}\) and self-dual if \(\mathcal{C}=\mathcal{C}^{\perp}\). The parameters of a code with minimum distance \(d\) and dimension \(k\) are written as \([n,k,d]\). If \(\mathcal{C}\) is an \([n,k,d]\) code, then from the Singleton bound, its minimum distance is bounded above by
\[d\leq n-k+1.\]
A code meeting the above bound with equality is called maximum-distance-separable (MDS). We call a code almost MDS if its minimum distance is one unit less than the MDS bound. A code is called optimal if it has the highest possible minimum distance for a given length and dimension.
Throughout this paper, \(\mathcal{R}\) represents the quotient ring \(\mathbb{F}_{q}[u]/\langle u^{2}-u\rangle\). It can easily be seen that \(\mathcal{R}\) is a finite commutative non-chain ring containing \(q^{2}\) elements with characteristic \(p\). It is a semi-local ring with maximal ideals \(\langle u\rangle\) and \(\langle 1-u\rangle\). Let \(\xi_{1}=1-u,\xi_{2}=u\). Then \(\xi_{i}^{2}=\xi_{i}\), \(\xi_{i}\xi_{j}=0\) and \(\sum_{k=1}^{2}\xi_{k}=1\) where \(i,j=1,2\) and \(i\neq j\). Thus, we have
\[\mathcal{R}=\xi_{1}\mathcal{R}\oplus\xi_{2}\mathcal{R}.\]
Further, \(\xi_{i}\mathcal{R}\cong\xi_{i}\mathbb{F}_{q}\), \(i=1,2\). Therefore, any \(r\in\mathcal{R}\) can be uniquely written as \(r=\sum_{i=1}^{2}\xi_{i}a_{i}\) where \(a_{i}\in\mathbb{F}_{q}\) for \(i=1,2\). A linear code \(\mathcal{C}\) of length \(n\) over \(\mathcal{R}\) is an \(\mathcal{R}\)-submodule of \(\mathcal{R}^{n}\). Let \(B_{i}\) be a linear code over \(\mathcal{R}\), for \(i=1,2\). Then the operations \(\oplus\) and \(\otimes\) are defined as
\[B_{1}\oplus B_{2}=\{b_{1}+b_{2}\ |\ b_{i}\in B_{i}\text{ for all }i\}\]
\[B_{1}\otimes B_{2}=\{(b_{1},b_{2})\ |\ b_{i}\in B_{i}\ \text{for all}\ i\},\]
respectively. We know that any element of the ring \(\mathcal{R}\) can be uniquely written as
\[a+ub=(a+b)u+a(1-u)\]
where \(a,b\in\mathbb{F}_{q}\). Therefore, \(a+ub\in\mathcal{R}\) is a unit if and only if \(a,a+b\in\mathbb{F}_{q}^{*}\). Now, following [22], let \(\mathcal{C}\) be a linear code of length \(n\) over \(\mathcal{R}\) and
\[\mathcal{C}_{1} =\{\mathbf{a}\in\mathbb{F}_{q}^{n}\ |\ \mathbf{a}+u\mathbf{b}\in \mathcal{C},\ \text{for some}\ \mathbf{b}\in\mathbb{F}_{q}^{n}\};\] \[\mathcal{C}_{2} =\{\mathbf{a}+\mathbf{b}\in\mathbb{F}_{q}^{n}\ |\ \mathbf{a}+u \mathbf{b}\in\mathcal{C}\}\]
Then \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are linear codes of length \(n\) over \(\mathbb{F}_{q}\) and \(\mathcal{C}\) can be uniquely written as \(\mathcal{C}=\xi_{1}\mathcal{C}_{1}\oplus\xi_{2}\mathcal{C}_{2}\). Let \(s=a+ub\in\mathcal{R}\) and \(c=(c_{0},c_{1},\ldots,c_{n-1})\in\mathcal{C}\) where \(c_{i}=a_{i}+ub_{i}\) with \(a_{i},b_{i}\in\mathbb{F}_{q}\) for \(0\leq i\leq n-1\). The \(i^{th}\)-entry of \(sc\) is given as
\[(a+ub)(a_{i}+ub_{i}) =((a+b)u-a(u-1))(a_{i}+ub_{i})\] \[=aa_{i}+u(ba_{i}+(b+a)b_{i})\] \[=(1-u)r+u(r+t)\]
where \(r=aa_{i}\) and \(t=ba_{i}+bb_{i}+ab_{i}\). Hence, \(sc\) can be written in terms of \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) with \(r=a(a_{0},a_{1},\ldots,a_{n-1})\) and \(t=(a+b)(b_{0},b_{1},\ldots,b_{n-1})+b(a_{0},a_{1},\ldots,a_{n-1})\).
Now, we introduce two classes of noncommutative rings \(\mathbb{F}_{q}[x;\Theta]\) and \(\mathcal{R}[x;\theta]\).
**Definition 1**: Let \(\Theta\) be an automorphism of \(\mathbb{F}_{q}\) defined by \(\Theta(a)=a^{p^{i}}\) and \(q=p^{m}\). We consider the set
\[\mathbb{F}_{q}[x;\Theta]=\{b_{e}x^{e}+\cdots+b_{1}x+b_{0}\ |\ b_{i}\in \mathbb{F}_{q}\ \text{and}\ e\in\mathbb{N}\}.\]
Then \(\mathbb{F}_{q}[x;\Theta]\) is a noncommutative ring unless \(\Theta\) is the identity under the usual addition of polynomials and multiplication defined by \((ax^{i})(bx^{j})=a\Theta^{i}(b)x^{i+j}\) for \(a,b\in\mathbb{F}_{q}\) and associative and distributive rules over \(\mathbb{F}_{q}[x;\Theta]\).
Now, consider
\[\mathcal{R}[x;\theta]=\{a_{l}x^{l}+\cdots+a_{1}x+a_{0}\ |\ a_{i}\in\mathcal{R} \ \text{and}\ l\in\mathbb{N}\}\]
where \(\theta\) is the automorphism of \(\mathcal{R}\) defined by \(\theta(a+ub)=a^{p^{i}}+ub^{p^{i}}\). Hence the order of the automorphism is \(|\langle\theta\rangle|=\frac{m}{\gcd(m,i)}\), and in particular \(|\langle\theta\rangle|=\frac{m}{i}\) when \(i|m\). Also, the subring \(\mathbb{F}_{p^{i}}+u\mathbb{F}_{p^{i}}\) is invariant under \(\theta\). Then \(\mathcal{R}[x;\theta]\) is a noncommutative ring unless \(\theta\) is the identity, under the usual addition of polynomials and the multiplication defined by \((sx^{i})(s^{\prime}x^{j})=s\theta^{i}(s^{\prime})x^{i+j}\) for \(s,s^{\prime}\in\mathcal{R}\), extended by the associative and distributive rules over \(\mathcal{R}[x;\theta]\). This ring is known as a skew polynomial ring.
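To make the twisted multiplication rule concrete, here is a small sketch of our own (assuming only the definitions above) in \(\mathbb{F}_{9}[x;\Theta]\) with \(\Theta(a)=a^{3}\):

```python
# Sketch of multiplication in F_9[x; Theta], Theta(a) = a^3, where
# F_9 = F_3(w) with w^2 = w + 1 (mod 3); elements are pairs (c0, c1)
# standing for c0 + c1*w.
def gf9_add(a, b):
    return ((a[0] + b[0]) % 3, (a[1] + b[1]) % 3)

def gf9_mul(a, b):
    # (a0 + a1 w)(b0 + b1 w) with w^2 = w + 1
    c0 = (a[0] * b[0] + a[1] * b[1]) % 3
    c1 = (a[0] * b[1] + a[1] * b[0] + a[1] * b[1]) % 3
    return (c0, c1)

def frob(a, times=1):
    for _ in range(times):                  # Theta(a) = a^3, applied i times
        a = gf9_mul(a, gf9_mul(a, a))
    return a

def skew_mul(f, g):
    # (a x^i)(b x^j) = a * Theta^i(b) * x^(i+j): the coefficients of g are
    # twisted by Theta^i before the usual convolution.
    h = [(0, 0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = gf9_add(h[i + j], gf9_mul(a, frob(b, i)))
    return h

w, one = (0, 1), (1, 0)
print(skew_mul([w, one], [one, w]))  # differs from skew_mul([one, w], [w, one]):
                                     # the ring is noncommutative
```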
**Definition 2**: A linear code \(\mathscr{C}\) of length \(\beta\) over \(\mathcal{R}\) is said to be a skew cyclic code with respect to the automorphism \(\theta\) if for any codeword \(c=(c_{0},c_{1},...,c_{\beta-1})\in\mathscr{C}\) we have \(\delta(c)=(\theta(c_{\beta-1}),\theta(c_{0}),...,\theta(c_{\beta-2}))\in \mathscr{C}\) where \(\delta\) is a skew cyclic shift of \(\mathscr{C}\).
Here, \(\frac{\mathcal{R}[x;\theta]}{\langle x^{\beta}-1\rangle}\) forms a ring when \(|\theta|\) divides \(\beta\). But for \(|\theta|\nmid\beta\), \(\frac{\mathcal{R}[x;\theta]}{\langle x^{\beta}-1\rangle}\) fails to be a ring. However, in that case it is a left \(\mathcal{R}[x;\theta]\)-module under the left multiplication defined by \(f(x)\big{(}g(x)+\langle x^{\beta}-1\rangle\big{)}=f(x)g(x)+\langle x^{\beta }-1\rangle\) where \(f(x),g(x)\in\mathcal{R}[x;\theta]\). As usual, it is convenient to identify each codeword \(c=(c_{0},c_{1},...,c_{\beta-1})\in\mathscr{C}\subseteq\mathcal{R}^{\beta}\) with the polynomial \(c(x)=c_{0}+c_{1}x+\cdots+c_{\beta-1}x^{\beta-1}\in\frac{\mathcal{R}[x;\theta]} {\langle x^{\beta}-1\rangle}\) under the correspondence \(c=(c_{0},c_{1},...,c_{\beta-1})\longrightarrow c(x)=c_{0}+c_{1}x+\cdots+c_{ \beta-1}x^{\beta-1}\). As a consequence, every code of length \(\beta\) over \(\mathcal{R}\) can be viewed as a subset of \(\frac{\mathcal{R}[x;\theta]}{\langle x^{\beta}-1\rangle}\). The following theorems are used to derive the results in later sections.
**Theorem 1** ([22, Theorem 3]): _Let \(\mathscr{C}=\xi_{1}\mathcal{C}_{1}\oplus\xi_{2}\mathcal{C}_{2}\) be a linear code of length \(\beta\) over \(\mathcal{R}\). Then \(\mathscr{C}\) is a skew cyclic code of length \(\beta\) over \(\mathcal{R}\) if and only if \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are skew cyclic codes of length \(\beta\) over \(\mathbb{F}_{q}\)._
**Theorem 2** ([22, Theorem 5]): _Let \(\mathscr{C}=\xi_{1}\mathcal{C}_{1}\oplus\xi_{2}\mathcal{C}_{2}\) be a skew cyclic code of length \(\beta\) over \(\mathcal{R}\). Let \(g_{1}(x)\) and \(g_{2}(x)\) be generator polynomials of \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\), respectively. Then \(\mathscr{C}=\langle\xi_{1}g_{1}(x)+\xi_{2}g_{2}(x)\rangle\)._
Let \(s=a+ub\) be an element of \(\mathcal{R}\). Then we define a map \(\eta:\mathcal{R}\rightarrow\mathbb{F}_{q}\) as follows
\[\eta(s)=a.\]
It is clear that \(\eta\) is a ring homomorphism. Let \(\mathbb{F}_{q}\mathcal{R}=\{(x,y)\ |\ x\in\mathbb{F}_{q}\ \text{and}\ y\in \mathcal{R}\}\). Now for any \(s\in\mathcal{R}\) and \((x,y)\in\mathbb{F}_{q}\mathcal{R}\), we define an \(\mathcal{R}\)-scalar multiplication on \(\mathbb{F}_{q}\mathcal{R}\) as
\[*:\mathcal{R}\times\mathbb{F}_{q}\mathcal{R}\longrightarrow\mathbb{F}_{q} \mathcal{R}\]
such that
\[s*(x,y)=(\eta(s)x,sy).\]
It is easy to verify that \(\mathbb{F}_{q}\mathcal{R}\) is an \(\mathcal{R}\)-module under this multiplication. Further, it can be extended componentwise to \(\mathbb{F}_{q}^{\alpha}\times\mathcal{R}^{\beta}\) as follows:
\[*:\mathcal{R}\times(\mathbb{F}_{q}^{\alpha}\times\mathcal{R}^{\beta}) \longrightarrow\mathbb{F}_{q}^{\alpha}\times\mathcal{R}^{\beta}\]
where
\[s*l=(\eta(s)x_{0},\eta(s)x_{1},\ldots,\eta(s)x_{\alpha-1},sy_{0},sy_{1},\ldots,sy_{\beta-1});\]
with \(s\in\mathcal{R}\) and \(l=(x_{0},x_{1},\ldots,x_{\alpha-1},y_{0},y_{1},\ldots,y_{\beta-1})\in\mathbb{ F}_{q}^{\alpha}\times\mathcal{R}^{\beta}\). Under this multiplication, \(\mathbb{F}_{q}^{\alpha}\times\mathcal{R}^{\beta}\) forms an \(\mathcal{R}\)-module.
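As an illustration of this action (a sketch over a prime field \(\mathbb{F}_{p}\) for simplicity; the pair \((a,b)\) stands for \(a+ub\)):

```python
# Sketch of the scalar action s * l = (eta(s) x, s y) over F_p (p prime,
# chosen for simplicity); an R-element a + u b is stored as the pair (a, b).
p = 3

def r_mul(s, t):
    # (a + u b)(a' + u b') = a a' + u(a b' + b a' + b b'), using u^2 = u
    (a, b), (a2, b2) = s, t
    return (a * a2 % p, (a * b2 + b * a2 + b * b2) % p)

def act(s, x_part, y_part):
    eta = s[0]                                   # eta(a + u b) = a
    return ([eta * x % p for x in x_part],
            [r_mul(s, y) for y in y_part])

print(act((1, 2), [1, 2, 0], [(2, 1), (0, 1)]))  # s = 1 + 2u acting on l
```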
**Definition 3**: A nonempty subset \(\mathcal{C}\) of \(\mathbb{F}_{q}^{\alpha}\times\mathcal{R}^{\beta}\) is said to be an \(\mathbb{F}_{q}\mathcal{R}\)-linear code of length \((\alpha,\beta)\) if \(\mathcal{C}\) is an \(\mathcal{R}\)-submodule of \(\mathbb{F}_{q}^{\alpha}\times\mathcal{R}^{\beta}\).
Now, we define an inner product between \(l=(x_{0},x_{1},\ldots,x_{\alpha-1},y_{0},y_{1},\ldots,y_{\beta-1})\) and \(l^{\prime}=(x_{0}^{\prime},x_{1}^{\prime},\ldots,x_{\alpha-1}^{\prime},y_{0}^ {\prime},y_{1}^{\prime},\ldots,y_{\beta-1}^{\prime})\) as
\[l\cdot l^{\prime}=u\sum_{i=0}^{\alpha-1}x_{i}x_{i}^{\prime}+\sum_{j=0}^{\beta- 1}y_{j}y_{j}^{\prime}.\]
Let \(\mathcal{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-linear code of length \((\alpha,\beta)\). Then the dual code of \(\mathcal{C}\) is defined as
\[\mathcal{C}^{\perp}=\{l^{\prime}\in\mathbb{F}_{q}^{\alpha}\times\mathcal{R}^{ \beta}\ |\ l\cdot l^{\prime}=0\ \forall\ l\in\mathcal{C}\}.\]
Denote \(R_{\alpha,\beta}=\frac{\mathbb{F}_{q}[x;\Theta]}{\langle x^{\alpha}-1\rangle}\times \frac{\mathcal{R}[x;\theta]}{\langle x^{\beta}-1\rangle}\), \(k(x)=k_{0}+k_{1}x+\cdots+k_{\alpha-1}x^{\alpha-1}\in\frac{\mathbb{F}_{q}[x; \Theta]}{\langle x^{\alpha}-1\rangle}\) and \(t(x)=t_{0}+t_{1}x+\cdots+t_{\beta-1}x^{\beta-1}\in\frac{\mathcal{R}[x;\theta] }{\langle x^{\beta}-1\rangle}\). Then any vector \(c=(k_{0},k_{1},\ldots,k_{\alpha-1},t_{0},t_{1},\ldots,t_{\beta-1})\in\mathbb{ F}_{q}^{\alpha}\times\mathcal{R}^{\beta}\) can be identified with a polynomial of the form
\[c(x)=(k(x),t(x))\in R_{\alpha,\beta}\]
which gives a one-to-one correspondence between \(\mathbb{F}_{q}^{\alpha}\times\mathcal{R}^{\beta}\) and \(R_{\alpha,\beta}\). Let \(r(x)=r_{0}+r_{1}x+\cdots+r_{\gamma-1}x^{\gamma-1}\in\mathcal{R}[x;\theta]\) and \((k(x),t(x))\in R_{\alpha,\beta}\). Then the multiplication \(*\) in \(R_{\alpha,\beta}\) is defined as follows:
\[r(x)*(k(x),t(x))=(\eta(r(x))k(x),r(x)t(x))\]
where \(\eta(r(x))=\eta(r_{0})+\eta(r_{1})x+\cdots+\eta(r_{\gamma-1})x^{\gamma-1}\). Then under the above defined multiplication \(*\), \(R_{\alpha,\beta}\) forms a left \(\mathcal{R}[x;\theta]\)-module.
## 3 Structural Properties of \(\mathbb{F}_{q}\mathcal{R}\)-Skew Cyclic Codes
In this section, first, we define \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes. Then algebraic properties of these codes are discussed in detail. Also, the form of a generator polynomial of an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code in \(R_{\alpha,\beta}\) is established.
Definition 4: An \(\mathcal{R}\)-submodule \(\mathcal{C}\) of \(\mathbb{F}_{q}^{\alpha}\times\mathcal{R}^{\beta}\) is said to be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \(n=(\alpha,\beta)\) if for any \(c=(x_{0},x_{1},\ldots,x_{\alpha-1},y_{0},y_{1},\ldots,y_{\beta-1})\in\mathcal{C}\) we have \(\sigma(c):=(\Theta(x_{\alpha-1}),\Theta(x_{0}),\Theta(x_{1}),\ldots,\Theta(x_{ \alpha-2}),\theta(y_{\beta-1}),\theta(y_{0}),\theta(y_{1}),\ldots,\theta(y_{ \beta-2}))\in\mathcal{C}\) where \(\sigma\) is the skew cyclic shift of \(c\).
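The shift of Definition 4 can be sketched as follows (the maps `Theta` and `theta` are passed in as functions; this is our illustration, not library code):

```python
# Sketch of the shift sigma from Definition 4: apply Theta/theta
# coordinate-wise and rotate each block one position to the right.
def sigma(x_part, y_part, Theta, theta):
    x = [Theta(x_part[-1])] + [Theta(v) for v in x_part[:-1]]
    y = [theta(y_part[-1])] + [theta(v) for v in y_part[:-1]]
    return x, y

# Example over F_3, where the Frobenius a -> a^3 acts as the identity:
frob = lambda a: a**3 % 3
print(sigma([1, 2, 0], [2, 1], frob, frob))      # ([0, 1, 2], [1, 2])
```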
Theorem 3: _Let \(\mathcal{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \(n=(\alpha,\beta)\) and let \(|\langle\theta\rangle|\) divide \(\beta\). Then \(\mathcal{C}^{\perp}\) is also an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \(n=(\alpha,\beta)\)._
Proof: Let \(\mathcal{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \(n=(\alpha,\beta)\) and \(l^{\prime}=(x_{0}^{\prime},x_{1}^{\prime},\ldots,x_{\alpha-1}^{\prime},y_{0}^ {\prime},y_{1}^{\prime},\)\(\ldots,y_{\beta-1}^{\prime})\in\mathcal{C}^{\perp}\). Assume that \(l=(x_{0},x_{1},\ldots,x_{\alpha-1},y_{0},y_{1},\ldots,y_{\beta-1})\in\mathcal{C}\) and \(lcm(\alpha,\beta)=\mathcal{L}\). Now, we have to show that \(\sigma(l^{\prime})\in\mathcal{C}^{\perp}\). Consider the inner product of \(l\) and \(\sigma(l^{\prime})\). We have
\[l\cdot\sigma(l^{\prime})= (x_{0},x_{1},\ldots,x_{\alpha-1},y_{0},y_{1},\ldots,y_{\beta-1})\cdot\] \[(\Theta(x_{\alpha-1}^{\prime}),\Theta(x_{0}^{\prime}),\Theta(x_{ 1}^{\prime}),\ldots,\Theta(x_{\alpha-2}^{\prime}),\theta(y_{\beta-1}^{\prime}),\theta(y_{0}^{\prime}),\theta(y_{1}^{\prime}),\ldots,\theta(y_{\beta-2}^{ \prime}))\] \[= u(x_{0}\Theta(x_{\alpha-1}^{\prime})+x_{1}\Theta(x_{0}^{\prime})+ \cdots+x_{\alpha-1}\Theta(x_{\alpha-2}^{\prime}))\] \[+(y_{0}\theta(y_{\beta-1}^{\prime})+y_{1}\theta(y_{0}^{\prime})+ \cdots+y_{\beta-1}\theta(y_{\beta-2}^{\prime})).\]
We only need to show that \(x_{0}\Theta(x_{\alpha-1}^{\prime})+x_{1}\Theta(x_{0}^{\prime})+\cdots+x_{ \alpha-1}\Theta(x_{\alpha-2}^{\prime})=0\) and \(y_{0}\theta(y_{\beta-1}^{\prime})+y_{1}\theta(y_{0}^{\prime})+\cdots+y_{\beta -1}\theta(y_{\beta-2}^{\prime})=0\). As \(\mathcal{C}\) is an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code, \(\sigma^{\mathcal{L}-1}(l)\in\mathcal{C}\) where \(\sigma^{\mathcal{L}-1}(l)=(\Theta^{\mathcal{L}-1}(x_{1}),\Theta^{\mathcal{L}-1 }(x_{2}),\ldots,\Theta^{\mathcal{L}-1}(x_{\alpha-1}),\Theta^{\mathcal{L}-1}(x _{0}),\theta^{\mathcal{L}-1}(y_{1}),\theta^{\mathcal{L}-1}(y_{2}),\ldots,\)\(\theta^{\mathcal{L}-1}(y_{\beta-1}),\theta^{\mathcal{L}-1}(y_{0}))\). Now, we get that \(\sigma^{\mathcal{L}-1}(l)\cdot l^{\prime}=0\), where
\[\sigma^{\mathcal{L}-1}(l)\cdot l^{\prime}=u\sum_{i=0}^{\alpha-1}\Theta^{ \mathcal{L}-1}(x_{i+1})x_{i}^{\prime}+\sum_{j=0}^{\beta-1}\theta^{\mathcal{L}- 1}(y_{j+1})y_{j}^{\prime}.\]
Comparing the coefficients on both sides, we get
\[\Theta^{\mathcal{L}-1}(x_{0})x_{\alpha-1}^{\prime}+\Theta^{\mathcal{L}-1}(x_{1 })x_{0}^{\prime}+\cdots+\Theta^{\mathcal{L}-1}(x_{\alpha-1})x_{\alpha-2}^{ \prime}=0\]
and
\[\theta^{\mathcal{L}-1}(y_{0})y_{\beta-1}^{\prime}+\theta^{\mathcal{L}-1}(y_{1 })y_{0}^{\prime}+\cdots+\theta^{\mathcal{L}-1}(y_{\beta-1})y_{\beta-2}^{\prime}=0.\]
Since the order of \(\theta\) divides \(\beta\), \(|\langle\theta\rangle|\) divides \(lcm(\alpha,\beta)=\mathcal{L}\) and hence \(\theta^{\mathcal{L}}(a)=\Theta^{\mathcal{L}}(a)=a\) for any \(a\in\mathbb{F}_{q}\). Then \(\theta(\theta^{\mathcal{L}-1}(y_{0})y_{\beta-1}^{\prime}+\theta^{\mathcal{L}- 1}(y_{1})y_{0}^{\prime}+\cdots+\theta^{\mathcal{L}-1}(y_{\beta-1})y_{\beta-2}^ {\prime})=\theta(0)=0\) and \(\Theta(\Theta^{\mathcal{L}-1}(x_{0})x_{\alpha-1}^{\prime}+\Theta^{\mathcal{L}- 1}(x_{1})x_{0}^{\prime}+\cdots+\Theta^{\mathcal{L}-1}(x_{\alpha-1})x_{\alpha-2 }^{\prime})=\Theta(0)=0\). This implies \(y_{0}\theta(y_{\beta-1}^{\prime})+y_{1}\theta(y_{0}^{\prime})+\cdots+y_{\beta-1 }\theta(y_{\beta-2}^{\prime})=0\) and \(x_{0}\Theta(x_{\alpha-1}^{\prime})+x_{1}\Theta(x_{0}^{\prime})+\cdots+x_{\alpha-1 }\Theta(x_{\alpha-2}^{\prime})=0\). Therefore, \(l\cdot\sigma(l^{\prime})=0\) and hence \(\sigma(l^{\prime})\in\mathcal{C}^{\perp}\). Thus, \(\mathcal{C}^{\perp}\) is an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\).
Theorem 4: _A code \(\mathcal{C}\) is an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\) if and only if \(\mathcal{C}\) is a left \(\mathcal{R}[x;\theta]\)-submodule of \(R_{\alpha,\beta}\)._
Proof: Let \(\mathcal{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\) and \(c=(k_{0},k_{1},\ldots,k_{\alpha-1},t_{0},t_{1},\ldots,\)\(t_{\beta-1})\in\mathcal{C}\); and the corresponding element of \(c\) in \(R_{\alpha,\beta}\) be \(c(x)=(k(x),t(x))\). Now, consider
\[x*c(x)=(\Theta(k_{\alpha-1})+\Theta(k_{0})x+\cdots+\Theta(k_{\alpha-2})x^{\alpha -1},\theta(t_{\beta-1})+\theta(t_{0})x+\cdots+\theta(t_{\beta-2})x^{\beta-1})\]
which corresponds to the skew cyclic shift \((\Theta(k_{\alpha-1}),\Theta(k_{0}),\ldots,\Theta(k_{\alpha-2}),\theta(t_{\beta-1}),\theta(t_{0}),\ldots,\theta(t_{\beta-2}))\) of \((k_{0},k_{1},\ldots,k_{\alpha-1},t_{0},t_{1},\ldots,t_{\beta-1})\). Therefore, \(x\ast c(x)\in\mathcal{C}\). Then by the linearity of \(\mathcal{C}\), \(s(x)\ast c(x)\in\mathcal{C}\) for any \(s(x)\in\mathcal{R}[x;\theta]\), and hence \(\mathcal{C}\) is a left \(\mathcal{R}[x;\theta]\)-submodule of \(R_{\alpha,\beta}\).
The converse part follows from the definition.
Let \(\mathcal{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\). Let \(c(x)=(k(x),t(x))\in\mathcal{C}\) and \(m(x)\in\frac{\mathbb{F}_{q}[x;\Theta]}{\langle x^{\alpha}-1\rangle}\). Now, we consider
\[M=\{t(x)\in\mathcal{R}[x;\theta]/\langle x^{\beta}-1\rangle\ |\ (m(x),t(x))\in \mathcal{C}\}\]
and
\[N=\{k(x)\in\mathbb{F}_{q}[x;\Theta]/\langle x^{\alpha}-1\rangle\ |\ (k(x),0)\in \mathcal{C}\}.\]
Next, we present the algebraic properties of these two sets.
Lemma 1: _The above defined set \(N\) is a left \(\mathbb{F}_{q}[x;\Theta]\)-submodule of \(\mathbb{F}_{q}[x;\Theta]/\langle x^{\alpha}-1\rangle\) generated by a right divisor of \(x^{\alpha}-1\)._
Proof: Let \(k_{1}(x)\) and \(k_{2}(x)\) be two elements of \(N\). Then from definition, \((k_{1}(x),0)\in\mathcal{C}\) and \((k_{2}(x),0)\in\mathcal{C}\). Further, \((k_{1}(x),0)+(k_{2}(x),0)=(k_{1}(x)+k_{2}(x),0)\in\mathcal{C}\). This implies that \(k_{1}(x)+k_{2}(x)\in N\). Again, let \(t^{\prime}(x)\in\mathbb{F}_{q}[x;\Theta]/\langle x^{\alpha}-1\rangle\) and \(k(x)\in N\). Then \((k(x),0)\in\mathcal{C}\). As \(\mathcal{C}\) is a left \(\mathcal{R}[x;\theta]\)-submodule, we have
\[t^{\prime}(x)\ast(k(x),0)=(t^{\prime}(x)k(x),0)\in\mathcal{C}.\]
This implies \(t^{\prime}(x)k(x)\in N\) where \(t^{\prime}(x)k(x)\) is taken modulo \(x^{\alpha}-1\). Hence, \(N\) is a left \(\mathbb{F}_{q}[x;\Theta]\)-submodule of \(\mathbb{F}_{q}[x;\Theta]/\langle x^{\alpha}-1\rangle\) generated by a right divisor \(f(x)\) of \(x^{\alpha}-1\).
Lemma 2: _The set \(M\) is a left \(\mathcal{R}[x;\theta]\)-submodule of \(\mathcal{R}[x;\theta]/\langle x^{\beta}-1\rangle\) which is generated by a single element._
Proof: Let \(t_{1}(x)\) and \(t_{2}(x)\) be two elements of \(M\). Then from definition there exist \(m_{1}(x)\) and \(m_{2}(x)\) in \(\mathbb{F}_{q}[x;\Theta]/\langle x^{\alpha}-1\rangle\) such that \((m_{1}(x),t_{1}(x))\in\mathcal{C}\), \((m_{2}(x),t_{2}(x))\in\mathcal{C}\). Thus,
\[(m_{1}(x),t_{1}(x))+(m_{2}(x),t_{2}(x))=(m_{1}(x)+m_{2}(x),t_{1}(x)+t_{2}(x)) \in\mathcal{C}.\]
This implies that \(t_{1}(x)+t_{2}(x)\in M\). Let \(s(x)\in\mathcal{R}[x;\theta]/\langle x^{\beta}-1\rangle\) and \((m(x),t(x))\in\mathcal{C}\). Since \(\mathcal{C}\) is a left \(\mathcal{R}[x;\theta]\)-submodule of \(R_{\alpha,\beta}\), we have
\[s(x)\ast(m(x),t(x))=(\eta(s(x))m(x),s(x)t(x))\in\mathcal{C}.\]
This implies \(s(x)t(x)\in M\), where \(s(x)t(x)\) is taken modulo \(x^{\beta}-1\) and \(\eta(s(x))m(x)\) is taken modulo \(x^{\alpha}-1\). Therefore, \(M\) is a left \(\mathcal{R}[x;\theta]\)-submodule of \(\mathcal{R}[x;\theta]/\langle x^{\beta}-1\rangle\). Further, from Theorem 2, \(M=\langle g(x)\rangle\) where \(g(x):=\xi_{1}g_{1}(x)+\xi_{2}g_{2}(x)\).
Theorem 5: _Let \(\mathcal{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\). Then \(\mathcal{C}\) is generated as a left \(\mathcal{R}[x;\theta]\)-submodule of \(R_{\alpha,\beta}\) by \((f(x),0)\) and \((m(x),g(x))\) where \(m(x)\in\mathbb{F}_{q}[x;\Theta]/\langle x^{\alpha}-1\rangle\), \(g(x)=\xi_{1}g_{1}(x)+\xi_{2}g_{2}(x)\) and \(f(x)\) is a right divisor of \(x^{\alpha}-1\)._
Proof: Let \(c=(c_{1},c_{2})\in\mathcal{C}\) with \(c_{1}\in\mathbb{F}_{q}^{\alpha}\) and \(c_{2}\in\mathcal{R}^{\beta}\). It has the polynomial representation as \(c(x)=(c_{1}(x),c_{2}(x))\) in \(R_{\alpha,\beta}\). Then \(c_{2}(x)\in M\) and from Lemma 2, \(c_{2}(x)=h(x)g(x)\) for some \(h(x)\in\mathcal{R}[x;\theta]/\langle x^{\beta}-1\rangle\). As \(g(x)\in M\), there exist \(m(x)\in\mathbb{F}_{q}[x;\Theta]/\langle x^{\alpha}-1\rangle\) such that \((m(x),g(x))\in\mathcal{C}\). Now,
\[c(x) =(c_{1}(x),c_{2}(x))\] \[=(c_{1}(x),0)+(0,h(x)g(x))\] \[=(c_{1}(x),0)+p(\eta(h(x))m(x),0)+(0,h(x)g(x))\] \[=(c_{1}(x),0)+(p-1)(\eta(h(x))m(x),0)+(\eta(h(x))m(x),0)+(0,h(x)g(x))\] \[=(c_{1}(x),0)+(p-1)(\eta(h(x))m(x),0)+(\eta(h(x))m(x),h(x)g(x))\] \[=(c_{1}(x)+(p-1)\eta(h(x))m(x),0)+(\eta(h(x))m(x),h(x)g(x)).\]
where \(p\) is the characteristic of \(\mathbb{F}_{q}\). This gives \((c_{1}(x)+(p-1)\eta(h(x))m(x),0)\in\mathcal{C}\) and hence \(c_{1}(x)+(p-1)\eta(h(x))m(x)\in N\). Moreover, by Lemma 1 there exists \(d(x)\in\mathbb{F}_{q}[x;\Theta]\) such that \(c_{1}(x)+(p-1)\eta(h(x))m(x)=d(x)f(x)\). Therefore, \(c(x)=(\eta(h(x))m(x),h(x)g(x))+(d(x)f(x),0)=h(x)*(m(x),g(x))+d(x)*(f(x),0)\).
Theorem 6: _Let \(\mathcal{C}=\langle(f(x),0),(m(x),g(x))\rangle\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\) with \(m(x)=0\). Then \(\mathcal{C}=\mathcal{C}^{\prime}\otimes\mathscr{C}\) where \(\mathcal{C}^{\prime}\) is a skew cyclic code of length \(\alpha\) over \(\mathbb{F}_{q}\) and \(\mathscr{C}\) is a skew cyclic code of length \(\beta\) over \(\mathcal{R}\)._
Proof: Let \(c=(c_{1},c_{2})\in\mathcal{C}\) with \(c_{1}\in\mathbb{F}_{q}^{\alpha}\) and \(c_{2}\in\mathcal{R}^{\beta}\). Then, from Lemma 1 and Lemma 2, \(c_{1}\in\mathcal{C}^{\prime}=\langle f(x)\rangle\) and \(c_{2}\in\mathscr{C}=\langle g(x)\rangle\). This implies \(\mathcal{C}=\mathcal{C}^{\prime}\otimes\mathscr{C}\) where \(\mathcal{C}^{\prime}=\langle f(x)\rangle\) and \(\mathscr{C}=\langle g(x)\rangle\).
Corollary 1: _Let \(\gcd(\alpha,|\langle\Theta\rangle|)=1\) and \(\mathcal{C}=\langle(f(x),0),(0,g(x))\rangle\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\). Then \(\mathcal{C}=\mathcal{C}^{\prime}\otimes\mathscr{C}\) where \(\mathcal{C}^{\prime}\) is a cyclic code of length \(\alpha\) over \(\mathbb{F}_{q}\) and \(\mathscr{C}\) is a skew cyclic code of length \(\beta\) over \(\mathcal{R}\)._
Proof: Straightforward.
Definition 5: Let \(\mathcal{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-linear code of length \((\alpha,\beta)\) and let \(\mathcal{C}_{\alpha}\) (respectively, \(\mathcal{C}_{\beta}\)) be the canonical projection of \(\mathcal{C}\) on the first \(\alpha\) (respectively, the last \(\beta\)) coordinates. The code \(\mathcal{C}\) is called separable if \(\mathcal{C}\) is the direct product of \(\mathcal{C}_{\alpha}\) and \(\mathcal{C}_{\beta}\), i.e., \(\mathcal{C}=\mathcal{C}_{\alpha}\otimes\mathcal{C}_{\beta}\).
Let \(\mathcal{C}^{\prime}\) be a skew cyclic code over \(\mathbb{F}_{q}\), and \(\mathscr{C}\) be a skew cyclic code over \(\mathcal{R}\). If \(\mathcal{C}\) is separable, then \(\mathcal{C}=\mathcal{C}^{\prime}\otimes\mathscr{C}\), i.e. \(\mathcal{C}=\langle(f(x),0),(0,g(x))\rangle\) where \(\mathcal{C}^{\prime}=\langle f(x)\rangle\) with \(f(x)\) is a right divisor of \(x^{\alpha}-1\) and \(\mathscr{C}=\langle g(x)\rangle\) with \(g(x)\) is a right divisor of \(x^{\beta}-1\).
Theorem 7: _Let \(\mathcal{C}=\mathcal{C}^{\prime}\otimes\mathscr{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\) where \(\mathcal{C}^{\prime}\) and \(\mathscr{C}\) are skew cyclic codes over \(\mathbb{F}_{q}\) and \(\mathcal{R}\), respectively, and \(\mathcal{C}=\langle(f(x),0),(0,g(x))\rangle\). Then \(\mathcal{C}^{\prime\perp}\subseteq\mathcal{C}^{\prime}\) and \(\mathscr{C}^{\perp}\subseteq\mathscr{C}\) if and only if \(\mathcal{C}^{\perp}\subseteq\mathcal{C}\)._
Proof: Let \(\mathcal{C}^{\prime\perp}\subseteq\mathcal{C}^{\prime}\) and \(\mathscr{C}^{\perp}\subseteq\mathscr{C}\). Since \(\mathcal{C}=\mathcal{C}^{\prime}\otimes\mathscr{C}\) is separable, \(\mathcal{C}^{\perp}=\mathcal{C}^{\prime\perp}\otimes\mathscr{C}^{\perp}\). Hence any \(c=(c_{1},c_{2})\in\mathcal{C}^{\perp}\) satisfies \(c_{1}\in\mathcal{C}^{\prime\perp}\subseteq\mathcal{C}^{\prime}\) and \(c_{2}\in\mathscr{C}^{\perp}\subseteq\mathscr{C}\), so that \(c\in\mathcal{C}^{\prime}\otimes\mathscr{C}=\mathcal{C}\). Therefore, \(\mathcal{C}^{\perp}\subseteq\mathcal{C}\).
Converse follows directly from the definition.
## 4 Gray Map
Any arbitrary element of \(\mathbb{F}_{q}\mathcal{R}\) can be uniquely written as \((a,r)=(a,\xi_{1}r_{1}+\xi_{2}r_{2})\), where \(a\in\mathbb{F}_{q}\) and \(r\in\mathcal{R}\). Let \(GL_{2}(\mathbb{F}_{q})\) be the set of all \(2\times 2\) invertible matrices over \(\mathbb{F}_{q}\). Consider a Gray map (see [33; 36] for similar mapping) \(\phi:\mathcal{R}\longrightarrow\mathbb{F}_{q}^{2}\) defined by
\[\phi(r_{1}+ur_{2})=(r_{1},r_{2})M,\]
where \(M\in GL_{2}(\mathbb{F}_{q})\) such that \(MM^{T}=\gamma I_{2}\), \(M^{T}\) is the transpose matrix of \(M\), \(\gamma\in\mathbb{F}_{q}^{*}\) and \(I_{2}\) is the identity matrix of order \(2\). Then \(\phi\) is a linear bijection and can be extended to \(\mathcal{R}^{n}\) componentwise. Particularly, when \(M=I_{2}\), we have the following Gray map \(\phi_{1}:\mathcal{R}\longrightarrow\mathbb{F}_{q}^{2}\)
\[\phi_{1}(r_{1}+ur_{2})=(r_{1},r_{2}). \tag{1}\]
Now, we define a Gray map \(\varphi:\mathbb{F}_{q}\mathcal{R}\longrightarrow\mathbb{F}_{q}^{3}\) as follows:
\[\varphi(a,r)=(a,\phi(r))=(a,(r_{1},r_{2})M).\]
It is easy to verify that \(\varphi\) is an \(\mathbb{F}_{q}\)-linear map and can be extended componentwise to \(\mathbb{F}_{q}^{\alpha}\mathcal{R}^{\beta}\) in the following manner:
\[\varphi:\mathbb{F}_{q}^{\alpha}\mathcal{R}^{\beta}\longrightarrow \mathbb{F}_{q}^{\alpha+2\beta}\]
\[\varphi(a_{0},a_{1},\ldots,a_{\alpha-1},r_{0},r_{1},\ldots,r_{ \beta-1})=(a_{0},a_{1},\ldots,a_{\alpha-1},(r_{0,1},r_{0,2})M,(r_{1,1},r_{1,2} )M,\] \[\ldots,(r_{\beta-1,1},r_{\beta-1,2})M),\]
where \((a_{0},a_{1},\ldots,a_{\alpha-1})\in\mathbb{F}_{q}^{\alpha}\) and \((r_{0},r_{1},\ldots,r_{\beta-1})\in\mathcal{R}^{\beta}\) such that \(r_{j}=\xi_{1}r_{j,1}+\xi_{2}r_{j,2}\in\mathcal{R}\) for \(j=0,1,\ldots,\beta-1\). The Lee weight of an element \((a,r)\in\mathbb{F}_{q}^{\alpha}\times\mathcal{R}^{\beta}\) is defined as \(w_{L}(a,r)=w_{H}(a)+w_{L}(r)\), where \(w_{H}\) denotes the Hamming weight and \(w_{L}\) denotes the Lee weight. Furthermore, the Lee distance between \(c,c^{\prime}\in\mathbb{F}_{q}^{\alpha}\times\mathcal{R}^{\beta}\) is defined as \(d_{L}(c,c^{\prime})=w_{L}(c-c^{\prime})=w_{H}(\varphi(c-c^{\prime}))\).
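A small numerical sketch of \(\varphi\) and the Lee weight (our own, over the prime field \(\mathbb{F}_{3}\) for simplicity, with the same matrix \(M\) used in Example 1 below):

```python
# Sketch of the Gray map varphi over F_3 (p prime, for simplicity); an
# R-element r = r1 + u*r2 is stored as the pair (r1, r2).
import numpy as np

p = 3
M = np.array([[1, 1], [1, -1]]) % p           # M in GL_2(F_3) with M M^T = 2 I_2

def gray(a, r):
    """varphi(a, r) = (a, phi(r)) with phi(r1 + u r2) = (r1, r2) M."""
    a = np.asarray(a) % p                     # F_q^alpha part is copied through
    phi_r = (np.asarray(r) @ M) % p           # each row of r is a pair (r1, r2)
    return np.concatenate([a, phi_r.reshape(-1)])

def lee_weight(a, r):
    # w_L(a, r) = w_H(varphi(a, r)), since varphi is a weight-preserving bijection
    return int(np.count_nonzero(gray(a, r)))

a = [1, 0, 2]                                 # alpha = 3 coordinates in F_3
r = [(1, 2), (0, 0)]                          # beta = 2 coordinates r1 + u r2
print(gray(a, r), lee_weight(a, r))
```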
Proposition 1: _The Gray map \(\varphi\) is an \(\mathbb{F}_{q}\)-linear and distance preserving map from \(\mathbb{F}_{q}^{\alpha}\mathcal{R}^{\beta}\) (Lee distance) to \(\mathbb{F}_{q}^{\alpha+2\beta}\) (Hamming distance)._
Proof: Let \(c=(a,r),c^{\prime}=(a^{\prime},r^{\prime})\in\mathbb{F}_{q}^{\alpha}\mathcal{ R}^{\beta}\), where \(a=(a_{0},a_{1},\ldots,a_{\alpha-1})\in\mathbb{F}_{q}^{\alpha}\), \(r=(r_{0},r_{1},\ldots,r_{\beta-1})\in\mathcal{R}^{\beta}\), \(a^{\prime}=(a_{0}^{\prime},a_{1}^{\prime},\ldots,a_{\alpha-1}^{\prime})\in \mathbb{F}_{q}^{\alpha}\), \(r^{\prime}=(r_{0}^{\prime},r_{1}^{\prime},\ldots,r_{\beta-1}^{\prime})\in \mathcal{R}^{\beta}\) such that \(r_{j}=\xi_{1}r_{j,1}+\xi_{2}r_{j,2}\) and \(r_{j}^{\prime}=\xi_{1}r_{j,1}^{\prime}+\xi_{2}r_{j,2}^{\prime}\) for \(j=0,1,\ldots,\beta-1\). Then
\[\varphi(c+c^{\prime})= \varphi(a+a^{\prime},r+r^{\prime})\] \[= \varphi(a_{0}+a_{0}^{\prime},a_{1}+a_{1}^{\prime},\ldots,a_{ \alpha-1}+a_{\alpha-1}^{\prime},r_{0}+r_{0}^{\prime},r_{1}+r_{1}^{\prime}, \ldots,r_{\beta-1}+r_{\beta-1}^{\prime})\] \[= \varphi(a_{0}+a_{0}^{\prime},a_{1}+a_{1}^{\prime},\ldots,a_{ \alpha-1}+a_{\alpha-1}^{\prime},\xi_{1}(r_{0,1}+r_{0,1}^{\prime})+\xi_{2}(r_{0,2}+r_{0,2}^{\prime}),\] \[\xi_{1}(r_{1,1}+r_{1,1}^{\prime})+\xi_{2}(r_{1,2}+r_{1,2}^{ \prime}),\ldots,\xi_{1}(r_{\beta-1,1}+r_{\beta-1,1}^{\prime})+\xi_{2}(r_{\beta -1,2}+r_{\beta-1,2}^{\prime})\] \[= (a_{0}+a_{0}^{\prime},a_{1}+a_{1}^{\prime},\ldots,a_{\alpha-1}+a_{ \alpha-1}^{\prime},(r_{0,1}+r_{0,1}^{\prime},r_{0,2}+r_{0,2}^{\prime})M,(r_{1,1 }+r_{1,1}^{\prime},r_{1,2}\] \[+r_{1,2}^{\prime})M,\ldots,(r_{\beta-1,1}+r_{\beta-1,1}^{\prime},r _{\beta-1,2}+r_{\beta-1,2}^{\prime})M)\] \[= (a_{0},a_{1},\ldots,a_{\alpha-1},(r_{0,1},r_{0,2})M,(r_{1,1},r_{1,2})M,\ldots,(r_{\beta-1,1},r_{\beta-1,2})M)+(a_{0}^{\prime},a_{1}^{\prime},\ldots,\] \[a_{\alpha-1}^{\prime},(r_{0,1}^{\prime},r_{0,2}^{\prime})M,(r_{1,1 }^{\prime},r_{1,2}^{\prime})M,\ldots,(r_{\beta-1,1}^{\prime},r_{\beta-1,2}^{ \prime})M)\] \[= \varphi(a,r)+\varphi(a^{\prime},r^{\prime})\] \[= \varphi(c)+\varphi(c^{\prime}).\]
Now, for any \(\lambda\in\mathbb{F}_{q}\), we have
\[\varphi(\lambda c)=\varphi(\lambda a,\lambda r)\] \[=(\lambda(a_{0},a_{1},\ldots,a_{\alpha-1}),\lambda(r_{0,1},r_{0,2})M,\lambda(r_{1,1},r_{1,2})M,\ldots,\lambda(r_{\beta-1,1},r_{\beta-1,2})M)\] \[=\lambda(a_{0},a_{1},\ldots,a_{\alpha-1},(r_{0,1},r_{0,2})M,(r_{1,1},r_{1,2})M,\ldots,(r_{\beta-1,1},r_{\beta-1,2})M)\] \[=\lambda\varphi(a,r)\] \[=\lambda\varphi(c).\]
Thus, \(\varphi\) is an \(\mathbb{F}_{q}\)-linear map. Moreover, \(d_{L}(c,c^{\prime})=w_{L}(c-c^{\prime})=w_{H}(\varphi(c-c^{\prime}))=w_{H}( \varphi(c)-\varphi(c^{\prime}))=d_{H}(\varphi(c),\varphi(c^{\prime}))\). Therefore, \(\varphi\) is a distance preserving map.
**Theorem 8**: _If \(\mathcal{C}\) is an \([n,k,d_{L}]\)\(\mathbb{F}_{q}\mathcal{R}\)-linear code, then \(\varphi(\mathcal{C})\) is an \([\alpha+2\beta,k,d_{H}]\) linear code over \(\mathbb{F}_{q}\)._
Proof: It follows directly from Proposition 1 and the definition of the Gray map.
Now, the following result shows the Gray map \(\varphi\) preserves the orthogonality.
**Theorem 9**: _Let \(\mathcal{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\). Then \(\varphi(\mathcal{C})^{\perp}=\varphi(\mathcal{C}^{\perp})\). Further, \(\mathcal{C}\) is self-dual if and only if \(\varphi(\mathcal{C})\) is self-dual._
Proof: Let \(\mathcal{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\). Assume that \(l=(x_{0},x_{1},\ldots,x_{\alpha-1},\)\(y_{0},y_{1},\ldots,y_{\beta-1})\in\mathcal{C}^{\perp}\) where \(y_{j}=\xi_{1}a_{j}+\xi_{2}b_{j}\) for \(0\leq j\leq\beta-1\). Then \(\varphi(l)\in\varphi(\mathcal{C}^{\perp})\). To show that \(\varphi(l)\in\varphi(\mathcal{C})^{\perp}\), we consider \(l^{\prime}=(x_{0}^{\prime},x_{1}^{\prime},\ldots,x_{\alpha-1}^{\prime},y_{0}^ {\prime},y_{1}^{\prime},\ldots,y_{\beta-1}^{\prime})\in\mathcal{C}\) where \(y_{j}^{\prime}=\xi_{1}a_{j}^{\prime}+\xi_{2}b_{j}^{\prime}\) for \(0\leq j\leq\beta-1\). Now, \(l\cdot l^{\prime}=0\) implies that
\[l\cdot l^{\prime} =u\sum_{i=0}^{\alpha-1}x_{i}x_{i}^{\prime}+\sum_{j=0}^{\beta-1}y _{j}y_{j}^{\prime}\] \[=u\sum_{i=0}^{\alpha-1}x_{i}x_{i}^{\prime}+\sum_{j=0}^{\beta-1}( \xi_{1}a_{j}a_{j}^{\prime}+\xi_{2}b_{j}b_{j}^{\prime})=0.\]
Further, it gives \(\sum_{i=0}^{\alpha-1}x_{i}x_{i}^{\prime}=0\) and \(\sum_{j=0}^{\beta-1}(a_{j}a_{j}^{\prime}+b_{j}b_{j}^{\prime})=0\). Next, we have
\[\varphi(l) =(x_{0},x_{1},\ldots,x_{\alpha-1},(a_{0},b_{0})M,(a_{1},b_{1})M, \cdots,(a_{\beta-1},b_{\beta-1})M)\] \[=(x_{0},x_{1},\ldots,x_{\alpha-1},c_{0}M,c_{1}M,\cdots,c_{\beta-1 }M)\] \[\varphi(l^{\prime}) =(x_{0}^{\prime},x_{1}^{\prime},\ldots,x_{\alpha-1}^{\prime},(a_ {0}^{\prime},b_{0}^{\prime})M,(a_{1}^{\prime},b_{1}^{\prime})M,\cdots,(a_{ \beta-1}^{\prime},b_{\beta-1})^{\prime}M)\] \[=(x_{0}^{\prime},x_{1}^{\prime},\ldots,x_{\alpha-1}^{\prime},c_{ 0}^{\prime}M,c_{1}^{\prime}M,\cdots,c_{\beta-1}^{\prime}M)\]
where \(c_{j}=(a_{j},b_{j})\) and \(c_{j}^{\prime}=(a_{j}^{\prime},b_{j}^{\prime})\) for \(0\leq j\leq\beta-1\). Also,
\[\varphi(l)\cdot\varphi(l^{\prime}) =\varphi(l)\varphi(l^{\prime})^{T}=u\sum_{i=0}^{\alpha-1}x_{i}x_{i}^{\prime}+\sum_{j=0}^{\beta-1}c_{j}MM^{T}c_{j}^{\prime T}\] \[=u\sum_{i=0}^{\alpha-1}x_{i}x_{i}^{\prime}+\gamma\sum_{j=0}^{\beta-1}c_{j}c_{j}^{\prime T}\] \[=u\sum_{i=0}^{\alpha-1}x_{i}x_{i}^{\prime}+\gamma\sum_{j=0}^{\beta-1}(a_{j}a_{j}^{\prime}+b_{j}b_{j}^{\prime}).\]
Then, \(\varphi(l)\in\varphi(\mathcal{C})^{\perp}\) and hence \(\varphi(\mathcal{C}^{\perp})\subseteq\varphi(\mathcal{C})^{\perp}\). As \(\varphi\) is a bijection, \(|\varphi(\mathcal{C}^{\perp})|=|\varphi(\mathcal{C})^{\perp}|\). Thus, \(\varphi(\mathcal{C}^{\perp})=\varphi(\mathcal{C})^{\perp}\).
Let \(\mathcal{C}\) be self-dual, i.e., \(\mathcal{C}=\mathcal{C}^{\perp}\). Then \(\varphi(\mathcal{C})=\varphi(\mathcal{C}^{\perp})=\varphi(\mathcal{C})^{\perp}\), i.e., \(\varphi(\mathcal{C})\) is self-dual. Conversely, let \(\varphi(\mathcal{C})\) be self-dual. Then \(\varphi(\mathcal{C})=\varphi(\mathcal{C}^{\perp})=\varphi(\mathcal{C})^{\perp}\). Since, \(\varphi\) is a bijection, \(\mathcal{C}=\mathcal{C}^{\perp}\). Hence, \(\mathcal{C}\) is self-dual.
**Theorem 10**: _Let \(\mathcal{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\). Then \(\varphi(\mathcal{C})=\mathcal{C}^{\prime}\otimes\mathcal{C}_{1}\otimes\mathcal{C}_{2}\), where \(\mathcal{C}^{\prime}\) is a skew cyclic code of length \(\alpha\) in \(\mathbb{F}_{q}[x;\Theta]/\langle x^{\alpha}-1\rangle\) and \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\) are skew cyclic codes of length \(\beta\) over \(\mathbb{F}_{q}\). Furthermore, \(|\varphi(\mathcal{C})|=|\mathcal{C}^{\prime}||\mathcal{C}_{1}||\mathcal{C}_{2}|\)._
Proof: Consider an element \(c=(a,r)\in\mathcal{C}\) such that \(a=(a_{0},a_{1},\ldots,a_{\alpha-1})\in\mathbb{F}_{q}^{\alpha}\) and \(r=(r_{0},r_{1},\ldots,r_{\beta-1})\in\mathcal{R}^{\beta}\) where \(r_{i}=b_{i}+uc_{i}\) for \(i=0,1,\ldots,\beta-1\). Now, as \(c\) runs over \(\mathcal{C}\), we define

\[\mathcal{C}^{\prime}:=\{(a_{0},a_{1},\ldots,a_{\alpha-1})\},\quad\mathcal{C}_{1}:=\{(b_{0},b_{1},\ldots,b_{\beta-1})\},\quad\mathcal{C}_{2}:=\{(c_{0},c_{1},\ldots,c_{\beta-1})\}.\]
Next, a given codeword \(c^{\prime}=(a_{0},a_{1},\ldots,a_{\alpha-1})\in\mathcal{C}^{\prime}\) corresponds to a codeword \(c=(a_{0},a_{1},\ldots,a_{\alpha-1},b_{0}+uc_{0},b_{1}+uc_{1},\ldots,b_{\beta-1} +uc_{\beta-1})\in\mathcal{C}\). As \(\mathcal{C}\) is an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code, the \(\sigma\)-skew cyclic shift of \(c\) is given by \(\sigma(c)=(\Theta(a_{\alpha-1}),\Theta(a_{0}),\ldots,\Theta(a_{\alpha-2}), \theta(b_{\beta-1}+uc_{\beta-1}),\theta(b_{0}+uc_{0}),\theta(b_{1}+uc_{1}), \ldots,\theta(b_{\beta-2}+uc_{\beta-2}))\in\mathcal{C}\). Therefore, \((\Theta(a_{\alpha-1}),\Theta(a_{0}),\ldots,\)\(\Theta(a_{\alpha-2}))\in\mathcal{C}^{\prime}\). Hence, \(\mathcal{C}^{\prime}\) is a skew cyclic code of length \(\alpha\) in \(\mathbb{F}_{q}[x;\Theta]/\langle x^{\alpha}-1\rangle\). Similarly, we get \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\) are skew cyclic codes of length \(\beta\) over \(\mathbb{F}_{q}\). Thus, \(\varphi(\mathcal{C})=\mathcal{C}^{\prime}\otimes\mathcal{C}_{1}\otimes \mathcal{C}_{2}\). As \(\varphi\) is bijective, \(|\mathcal{C}|=|\varphi(\mathcal{C})|\) and hence \(|\mathcal{C}|=|\mathcal{C}^{\prime}||\mathcal{C}_{1}||\mathcal{C}_{2}|\).
Now, we present an example of an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code. To compute the Gray image \(\varphi(\mathcal{C})\) of an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code \(\mathcal{C}\), we first find the corresponding generator polynomials: \(f(x)\) of the skew cyclic code \(\mathcal{C}^{\prime}\) of length \(\alpha\) over \(\mathbb{F}_{q}\), and \(g(x)\) of the skew cyclic code \(\mathscr{C}\) of length \(\beta\) over \(\mathcal{R}\), with \(g(x)=g_{1}(x)\xi_{1}+g_{2}(x)\xi_{2}\), where \(g_{1}(x)\) and \(g_{2}(x)\) are the generator polynomials of the skew cyclic codes \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) over \(\mathbb{F}_{q}\), respectively. Then, with the help of the Magma computational algebra system [13], we compute \(\varphi(\mathcal{C})\).
_Example 1_ Let \(q=9\), \(\alpha=18\), \(\beta=36\) and \(\mathcal{R}=\mathbb{F}_{9}[u]/\langle u^{2}-u\rangle\) where \(\mathbb{F}_{9}=\mathbb{F}_{3}(w)\) and \(w^{2}+2w+2=0\). In \(\mathbb{F}_{9}\), the Frobenius automorphism \(\Theta:\mathbb{F}_{9}\longrightarrow\mathbb{F}_{9}\) is defined by \(\Theta(a)=a^{3}\) for all \(a\in\mathbb{F}_{9}\). Therefore, \(\mathbb{F}_{9}[x;\Theta]\) is a skew polynomial ring. In \(\mathbb{F}_{9}[x;\Theta]\), we have
\[x^{18}-1=(x^{15}+w^{5}x^{14}+x^{13}+2x^{12}+x^{9}+w^{5}x^{8}+x^{7}+2x^{6}+x^{3}+w^{5}x^{2}+x+2)(x^{3}+w^{3}x^{2}+x+1),\]
\[x^{36}-1=(x^{35}+w^{2}x^{34}+x^{33}+w^{2}x^{32}+x^{31}+w^{2}x^{30}+x^{29}+w^{2}x^{28}+x^{27}+w^{2}x^{26}+x^{25}+w^{2}x^{24}+x^{23}+w^{2}x^{22}+x^{21}+w^{2}x^{20}+x^{19}+w^{2}x^{18}+x^{17}+w^{2}x^{16}+x^{15}+w^{2}x^{14}+x^{13}+w^{2}x^{12}+x^{11}+w^{2}x^{10}+x^{9}+w^{2}x^{8}+x^{7}+w^{2}x^{6}+x^{5}+w^{2}x^{4}+x^{3}+w^{2}x^{2}+x+w^{2})(x+w^{2})\]
\[=(w^{6}x^{34}+2x^{33}+w^{5}x^{32}+2x^{31}+x^{30}+w^{2}x^{28}+x^{27}+wx^{26}+x^{25}+2x^{24}+w^{6}x^{22}+2x^{21}+w^{5}x^{20}+2x^{19}+x^{18}+w^{2}x^{16}+x^{15}+wx^{14}+x^{13}+2x^{12}+w^{6}x^{10}+2x^{9}+w^{5}x^{8}+2x^{7}+x^{6}+w^{2}x^{4}+x^{3}+wx^{2}+x+2)(w^{2}x^{2}+x+1).\]
Now, let \(f(x)=x^{3}+w^{3}x^{2}+x+1\), \(g_{1}(x)=x+w^{2}\) and \(g_{2}(x)=w^{2}x^{2}+x+1\). Then \(\mathcal{C}\) is an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((18,36)\). Let
\[M=\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\in GL_{2}(\mathbb{F}_{9})\]
such that \(MM^{T}=2I_{2}\). Then \(\varphi(\mathcal{C})\) is a \([90,84,4]_{9}\) linear code over \(\mathbb{F}_{9}\) which is BKLC (best-known linear code) as per the database [24].
## 5 Quantum Codes from \(\mathbb{F}_{q}\mathcal{R}\)-Skew Cyclic Codes
Recall that the \(n\)-fold tensor product \((\mathbb{C}^{q})^{\otimes n}=\underbrace{\mathbb{C}^{q}\otimes\mathbb{C}^{q}\otimes\cdots\otimes\mathbb{C}^{q}}_{n\text{ times}}\) is a \(q^{n}\)-dimensional Hilbert space, where \(\mathbb{C}^{q}\) is a \(q\)-dimensional complex Hilbert space. A \(q\)-ary quantum code of length \(n\), dimension \(k\) and minimum distance \(d\), denoted by \([[n,k,d]]_{q}\), is a \(q^{k}\)-dimensional subspace of \((\mathbb{C}^{q})^{\otimes n}\). A quantum code \([[n,k,d]]_{q}\) is known as quantum MDS (maximum-distance-separable) if it attains the quantum Singleton bound \(k+2d\leq n+2\) with equality. Note that a quantum code \([[n,k,d]]_{q}\) is said to be better than \([[n^{\prime},k^{\prime},d^{\prime}]]_{q}\) if either of the following holds:
1. \(d>d^{\prime}\) when the code rate \(\frac{k}{n}=\frac{k^{\prime}}{n^{\prime}}\) (Larger distance with same code rate).
2. \(\frac{k}{n}>\frac{k^{\prime}}{n^{\prime}}\) when the distance \(d=d^{\prime}\) (Larger code rate with same distance).
The relationship between quantum information and classical information is a matter currently receiving much attention from researchers. One of the fascinating developments in the study of linear codes is the construction of quantum codes from classical codes. The first quantum code was independently studied by Shor [41] and Steane [43]. However, the construction of quantum codes from classical codes, their existence proofs and correction methods were discussed by Calderbank et al. [17]. One of the widely used techniques to obtain quantum codes from classical linear codes is the CSS construction (Lemma 3) in which dual containing linear codes play an instrumental role.
Lemma 3 ([23], Theorem 3): _Let \(\mathcal{C}_{1}=[n,k_{1},d_{1}]_{q}\) and \(\mathcal{C}_{2}=[n,k_{2},d_{2}]_{q}\) be two linear codes over \(GF(q)\) with \(\mathcal{C}_{2}^{\perp}\subseteq\mathcal{C}_{1}\). Then there exists a quantum error-correcting code \(\mathcal{C}=[[n,k_{1}+k_{2}-n,d]]_{q}\) where \(d=\min\{w(v):v\in(\mathcal{C}_{1}\backslash\mathcal{C}_{2}^{\perp})\cup(\mathcal{C}_{2}\backslash\mathcal{C}_{1}^{\perp})\}\geq\min\{d_{1},d_{2}\}\). Further, if \(\mathcal{C}_{1}^{\perp}\subseteq\mathcal{C}_{1},\) then there exists a quantum error-correcting code \(\mathcal{C}=[[n,2k_{1}-n,d_{1}]]_{q}\), where \(d_{1}=\min\{w(v):v\in\mathcal{C}_{1}\backslash\mathcal{C}_{1}^{\perp}\}\)._
Now, we recall from [15] that a skew cyclic code of length \(n\) over \(\mathbb{F}_{q}\) is a principally generated left \(\mathbb{F}_{q}[x;\Theta]\)-submodule of \(\frac{\mathbb{F}_{q}[x;\Theta]}{\langle x^{n}-1\rangle}\) and \(\mathcal{C}=\langle g(x)\rangle\) where \(x^{n}-1=h(x)g(x)\), i.e., \(g(x)\) is a right divisor of \(x^{n}-1\). Its dual \(\mathcal{C}^{\perp}\) is also a skew cyclic code of length \(n\) and \(\mathcal{C}^{\perp}=\langle h^{\dagger}(x)\rangle\) where \(h^{\dagger}(x)=h_{n-r}+\Theta(h_{n-r-1})x+\cdots+\Theta^{n-r}(h_{0})x^{n-r}\) for \(h(x)=h_{0}+h_{1}x+\cdots+h_{n-r}x^{n-r}\). If \(\Theta=id\), then \(h^{\dagger}(x)=h^{*}(x)\) where \(h^{*}(x)=h_{n-r}+h_{n-r-1}x+\cdots+h_{0}x^{n-r}\). Further, we recall the dual containing property for skew cyclic codes.
Lemma 4 (_25_): _Let \(\gcd(\alpha,|\langle\Theta\rangle|)=1\) and \(\mathcal{C}=\langle f(x)\rangle\) be a skew cyclic code of length \(\alpha\) over \(\mathbb{F}_{q}\). Then \(\mathcal{C}^{\perp}\subseteq\mathcal{C}\) if and only if \(x^{\alpha}-1\equiv 0\pmod{f(x)f^{*}(x)},\) where \(f^{*}(x)\) is the reciprocal polynomial of \(f(x)\)._
Lemma 5 ([9, 33]): _Let \(|\langle\Theta\rangle|\) divide \(\alpha\) and \(\mathcal{C}=\langle g(x)\rangle\) be a skew cyclic code of length \(\alpha\) over \(\mathbb{F}_{q}\) where \(x^{\alpha}-1=h(x)g(x)\). Then \(\mathcal{C}^{\perp}\subseteq\mathcal{C}\) if and only if \(h^{\dagger}(x)h(x)\) is divisible by \(x^{\alpha}-1\) from the right side._
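For instance, to illustrate the map \(h(x)\mapsto h^{\dagger}(x)\) used in Lemma 5, take \(\mathbb{F}_{9}[x;\Theta]\) with \(\Theta(a)=a^{3}\) and \(h(x)=wx^{2}+x+1\), so that \(h_{0}=1\), \(h_{1}=1\) and \(h_{2}=w\). Then
\[h^{\dagger}(x)=h_{2}+\Theta(h_{1})x+\Theta^{2}(h_{0})x^{2}=w+x+x^{2}.\]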
Now, we derive the necessary and sufficient conditions for \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes to contain their duals.
**Theorem 11**: _Let \(|\langle\Theta\rangle|\) divide \(\alpha\), \(|\langle\theta\rangle|\) divide \(\beta\) and \(\mathcal{C}=\mathcal{C}^{\prime}\otimes\mathscr{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\) where \(\mathcal{C}^{\prime}=\langle f(x)\rangle\) and \(\mathscr{C}=\xi_{1}\mathcal{C}_{1}\oplus\xi_{2}\mathcal{C}_{2}=\langle g(x)\rangle\) are skew cyclic codes over \(\mathbb{F}_{q}\) and \(\mathcal{R}\) respectively, with \(g(x)=\xi_{1}g_{1}(x)+\xi_{2}g_{2}(x)\) and \(x^{\alpha}-1=h(x)f(x)\), \(x^{\beta}-1=h_{i}(x)g_{i}(x)\). Then \(\mathcal{C}^{\perp}\subseteq\mathcal{C}\) if and only if \(h^{\dagger}(x)h(x)\) and \(h^{\dagger}_{i}(x)h_{i}(x)\) are divisible by \(x^{\alpha}-1\) and \(x^{\beta}-1\), respectively for \(i=1,2\) from the right side._
Proof: Let \(\mathcal{C}^{\perp}\subseteq\mathcal{C}\). Then by Theorem 4.1, \(\mathcal{C}^{\prime\perp}\subseteq\mathcal{C}^{\prime}\) and \(\mathscr{C}^{\perp}\subseteq\mathscr{C}\). Also, \(\mathcal{C}^{\perp}_{1}\subseteq\mathcal{C}_{1}\) and \(\mathcal{C}^{\perp}_{2}\subseteq\mathcal{C}_{2}\) where \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are skew cyclic codes of length \(\beta\) over \(\mathbb{F}_{q}\). Then from Lemma 5, we have \(h^{\dagger}(x)h(x)\) and \(h^{\dagger}_{i}(x)h_{i}(x)\) are divisible by \(x^{\alpha}-1\) and \(x^{\beta}-1\), respectively from the right side for \(i=1,2\).
Conversely, suppose \(h^{\dagger}(x)h(x)\) and \(h^{\dagger}_{i}(x)h_{i}(x)\) are divisible by \(x^{\alpha}-1\) and \(x^{\beta}-1\), respectively from the right side for \(i=1,2\). Then by Lemma 5, we have \(\mathcal{C}^{\prime\perp}\subseteq\mathcal{C}^{\prime}\), \(\mathcal{C}^{\perp}_{1}\subseteq\mathcal{C}_{1}\) and \(\mathcal{C}^{\perp}_{2}\subseteq\mathcal{C}_{2}\). This implies \(\mathscr{C}^{\perp}\subseteq\mathscr{C}\). Therefore, by Theorem 4.1, \(\mathcal{C}^{\perp}\subseteq\mathcal{C}\).
Finally, we present our main result for the construction of quantum codes from \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes with the help of Theorem 11.
**Theorem 12**: _Let \(|\langle\Theta\rangle|\) divide \(\alpha\), \(|\langle\theta\rangle|\) divide \(\beta\) and \(\mathcal{C}=\mathcal{C}^{\prime}\otimes\mathscr{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\) with \(x^{\alpha}-1=h(x)f(x)\), \(x^{\beta}-1=h_{i}(x)g_{i}(x)\). If \(h^{\dagger}(x)h(x)\) and \(h_{i}^{\dagger}(x)h_{i}(x)\) are divisible by \(x^{\alpha}-1\) and \(x^{\beta}-1\), respectively from the right side for \(i=1,2\), then there exists a quantum code \([[\alpha+2\beta,2k-(\alpha+2\beta),d_{H}]]_{q}\) where \(d_{H}\) denotes the Hamming distance and \(k\) denotes the dimension of the code \(\varphi(\mathcal{C})\)._
Proof: Let \(h^{\dagger}(x)h(x)\) and \(h_{i}^{\dagger}(x)h_{i}(x)\) be divisible by \(x^{\alpha}-1\) and \(x^{\beta}-1\), respectively from the right side for \(i=1,2\). Then by Theorem 11, we have \(\mathcal{C}^{\perp}\subseteq\mathcal{C}\). As \(\varphi(\mathcal{C})=\mathcal{C}^{\prime}\otimes\mathcal{C}_{1}\otimes\mathcal{C}_{2}\), it is easy to check that \(\varphi(\mathcal{C})^{\perp}=\mathcal{C}^{\prime\perp}\otimes\mathcal{C}_{1}^{\perp}\otimes\mathcal{C}_{2}^{\perp}\). Also, \(\mathcal{C}^{\prime\perp}\subseteq\mathcal{C}^{\prime}\), \(\mathcal{C}_{1}^{\perp}\subseteq\mathcal{C}_{1}\) and \(\mathcal{C}_{2}^{\perp}\subseteq\mathcal{C}_{2}\). Therefore, \(\varphi(\mathcal{C})\) is a dual containing linear code with parameters \([\alpha+2\beta,k,d_{H}]\). Hence, by Lemma 3, there exists a quantum code with parameters \([[\alpha+2\beta,2k-(\alpha+2\beta),d_{H}]]_{q}\).
Now, with the help of Corollary 1, Theorem 12 can also be stated in the following form:
**Corollary 2**: _Let \(|\langle\theta\rangle|\) divide \(\beta\), \(\gcd(\alpha,|\langle\Theta\rangle|)=1\) and \(\mathcal{C}=\mathcal{C}^{\prime}\otimes\mathscr{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\) where \(\mathcal{C}^{\prime}\) is a cyclic code over \(\mathbb{F}_{q}\) and \(\mathscr{C}\) is a skew cyclic code over \(\mathcal{R}\) with \(x^{\alpha}-1=h(x)f(x)\), \(x^{\beta}-1=h_{i}(x)g_{i}(x)\). If \(x^{\alpha}-1\) is divisible by \(f(x)f^{*}(x)\) and \(h_{i}^{\dagger}(x)h_{i}(x)\) is divisible by \(x^{\beta}-1\) from the right side for \(i=1,2\), then there exists a quantum code \([[\alpha+2\beta,2k-(\alpha+2\beta),d_{H}]]_{q}\)._
_Remark 1_: Notice that the length of a quantum code obtained from codes over the ring \(\mathcal{R}\) alone must be an integral multiple of \(2\), whereas the code length in Theorem 12 has no such restriction, i.e., we can find codes of any length \(\alpha+2\beta\) with suitable choices of \(\alpha\) and \(\beta\). For example, to construct a code of length \(60\), there are several choices for \(\alpha\) and \(\beta\) such that \(\alpha+2\beta=60\), whereas in the case of \(\mathcal{R}\) alone, the only choice is \(\beta=30\). This is one of the advantages of studying \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes in quantum code constructions.
Now, with the help of our derived results, we construct several quantum codes that are new or have better parameters than the existing ones appearing in [9, 20, 44]. All our computations in the examples are carried out using the Magma computational algebra system [13].
_Example 2_: Let \(q=9\), \(\alpha=49\), \(\beta=36\) and \(\mathcal{R}=\mathbb{F}_{9}[u]/\langle u^{2}-u\rangle\) where \(\mathbb{F}_{9}=\mathbb{F}_{3}(w)\) and \(w^{2}+2w+2=0\). In \(\mathbb{F}_{9}\), the Frobenius automorphism \(\Theta:\mathbb{F}_{9}\longrightarrow\mathbb{F}_{9}\) is defined by \(\Theta(a)=a^{3}\) for all \(a\in\mathbb{F}_{9}\). Therefore, \(\mathbb{F}_{9}[x;\Theta]\) is a skew polynomial ring. In \(\mathbb{F}_{9}[x;\Theta]\), we have
\[x^{36}-1=(w^{2}x^{34}+2x^{33}+w^{7}x^{32}+2x^{31}+x^{30}+w^{6}x^{28}+x^{27}+w^{3}x^{26}+x^{25}+2x^{24}+w^{2}x^{22}+2x^{21}+w^{7}x^{20}+2x^{19}+x^{18}+w^{6}x^{16}+x^{15}+w^{3}x^{14}+x^{13}+2x^{12}+w^{2}x^{10}+2x^{9}+w^{7}x^{8}+2x^{7}+x^{6}+w^{6}x^{4}+x^{3}+w^{3}x^{2}+x+2)(w^{6}x^{2}+x+1)\]
\[=(w^{6}x^{34}+2x^{33}+w^{5}x^{32}+2x^{31}+x^{30}+w^{2}x^{28}+x^{27}+wx^{26}+x^{25}+2x^{24}+w^{6}x^{22}+2x^{21}+w^{5}x^{20}+2x^{19}+x^{18}+w^{2}x^{16}+x^{15}+wx^{14}+x^{13}+2x^{12}+w^{6}x^{10}+2x^{9}+w^{5}x^{8}+2x^{7}+x^{6}+w^{2}x^{4}+x^{3}+wx^{2}+x+2)(w^{2}x^{2}+x+1).\]
In \(\mathbb{F}_{9}[x]\), we have
\[x^{49}-1 =(x+2)(x^{3}+wx^{2}+w^{7}x+2)(x^{3}+w^{3}x^{2}+w^{5}x+2)(x^{21}+ wx^{14}+w^{7}x^{7}+2)\] \[(x^{21}+w^{3}x^{14}+w^{5}x^{7}+2).\]
Now, let \(f(x)=x^{3}+w^{3}x^{2}+w^{5}x+2\), \(g_{1}(x)=w^{6}x^{2}+x+1\) and \(g_{2}(x)=w^{2}x^{2}+x+1\). Then \(\mathcal{C}\) is an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((49,36)\). Let
\[M=\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\in GL_{2}(\mathbb{F}_{9})\]
such that \(MM^{T}=2I_{2}\). Then \(\varphi(\mathcal{C})\) is a \([121,114,4]_{9}\) linear code over \(\mathbb{F}_{9}\). Now, we have
\[h_{1}(x) =w^{2}x^{34}+2x^{33}+w^{7}x^{32}+2x^{31}+x^{30}+w^{6}x^{28}+x^{27}+ w^{3}x^{26}+x^{25}+2x^{24}+w^{2}x^{22}\] \[+2x^{21}+w^{7}x^{20}+2x^{19}+x^{18}+w^{6}x^{16}+x^{15}+w^{3}x^{14} +x^{13}+2x^{12}+w^{2}x^{10}+2x^{9}\] \[+w^{7}x^{8}+2x^{7}+x^{6}+w^{6}x^{4}+x^{3}+w^{3}x^{2}+x+2,\] \[h_{1}^{\dagger}(x) =2x^{34}+x^{33}+w^{3}x^{32}+x^{31}+w^{6}x^{30}+x^{28}+2x^{27}+w^{7 }x^{26}+2x^{25}+w^{2}x^{24}+2x^{22}\] \[+x^{21}+w^{3}x^{20}+x^{19}+w^{6}x^{18}+x^{16}+2x^{15}+w^{7}x^{14} +2x^{13}+w^{2}x^{12}+2x^{10}+x^{9}\] \[+w^{3}x^{8}+x^{7}+w^{6}x^{6}+x^{4}+2x^{3}+w^{7}x^{2}+2x+w^{2},\] \[h_{1}^{\dagger}(x)h_{1}(x) =(w^{6}x^{32}+w^{5}x^{31}+wx^{30}+w^{6}x^{29}+w^{7}x^{28}+w^{2}x^{ 27}+2x^{26}+2x^{25}+2x^{24}\] \[+w^{6}x^{23}+w^{7}x^{22}+w^{2}x^{21}+wx^{20}+w^{7}x^{19}+w^{6}x^{ 18}+w^{2}x^{14}+wx^{13}+w^{5}x^{12}\] \[+w^{2}x^{11}+w^{3}x^{10}+w^{6}x^{9}+x^{8}+x^{7}+x^{6}+w^{2}x^{5}+ w^{3}x^{4}+w^{6}x^{3}+w^{5}x^{2}+w^{3}x\] \[+w^{2})(x^{36}-1),\]
and
\[h_{2}(x) =w^{6}x^{34}+2x^{33}+w^{5}x^{32}+2x^{31}+x^{30}+w^{2}x^{28}+x^{27 }+wx^{26}+x^{25}+2x^{24}+w^{6}x^{22}\] \[+2x^{21}+w^{5}x^{20}+2x^{19}+x^{18}+w^{2}x^{16}+x^{15}+wx^{14}+x^ {13}+2x^{12}+w^{6}x^{10}+2x^{9}\] \[+w^{5}x^{8}+2x^{7}+x^{6}+w^{2}x^{4}+x^{3}+wx^{2}+x+2,\] \[h_{2}^{\dagger}(x) =2x^{34}+x^{33}+wx^{32}+x^{31}+w^{2}x^{30}+x^{28}+2x^{27}+w^{5}x^ {26}+2x^{25}+w^{6}x^{24}+2x^{22}\] \[+x^{21}+wx^{20}+x^{19}+w^{2}x^{18}+x^{16}+2x^{15}+w^{5}x^{14}+2x^ {13}+w^{6}x^{12}+2x^{10}+x^{9}\] \[+wx^{8}+x^{7}+w^{2}x^{6}+x^{4}+2x^{3}+w^{5}x^{2}+2x+w^{6},\] \[h_{2}^{\dagger}(x)h_{2}(x) =(w^{2}x^{32}+w^{7}x^{31}+w^{3}x^{30}+w^{2}x^{29}+w^{5}x^{28}+w^{6 }x^{27}+2x^{26}+2x^{25}+2x^{24}\] \[+w^{2}x^{23}+w^{5}x^{22}+w^{6}x^{21}+w^{3}x^{20}+w^{5}x^{19}+w^{2 }x^{18}+w^{6}x^{14}+w^{3}x^{13}+w^{7}x^{12}\] \[+w^{6}x^{11}+wx^{10}+w^{2}x^{9}+x^{8}+x^{7}+x^{6}+w^{6}x^{5}+wx^{4 }+w^{2}x^{3}+w^{7}x^{2}+wx\] \[+w^{6})(x^{36}-1).\]
Therefore, \(h_{1}^{\dagger}(x)h_{1}(x)\) and \(h_{2}^{\dagger}(x)h_{2}(x)\) are divisible by \(x^{36}-1\) from the right side. Also, \(x^{49}-1\) is divisible by \(f(x)f^{*}(x)\). Hence, by Corollary 2, we have a quantum code with parameters \([[121,107,4]]_{9}\), which has the same length and minimum distance but a larger code rate than the existing code \([[121,106,4]]_{9}\) given by [20].
In Table 1, we present some quantum codes from \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes for \(q=9,25,49\). To compute their Gray images, we use
\[M=\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\in GL_{2}(\mathbb{F}_{q})\]
satisfying \(MM^{T}=2I_{2}\). Let \(\mathcal{C}=\mathcal{C}^{\prime}\otimes\mathscr{C}\) be an \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic code of length \((\alpha,\beta)\) where \(|\langle\theta\rangle|\) divides \(\beta\) and \(\gcd(\alpha,|\langle\Theta\rangle|)=1\). In Table 1, we provide generator polynomials \(f(x)\) for \(\mathcal{C}^{\prime}\) and \(g_{1}(x),g_{2}(x)\) for \(\mathscr{C}=\xi_{1}\mathcal{C}_{1}\oplus\xi_{2}\mathcal{C}_{2}\) such that \(x^{\alpha}-1=h(x)f(x)\), \(x^{\beta}-1=h_{i}(x)g_{i}(x)\) for \(i=1,2\). Also, we construct these codes under the conditions that \(x^{\alpha}-1\) is divisible by \(f(x)f^{*}(x)\) and \(h_{i}^{\dagger}(x)h_{i}(x)\) is divisible by \(x^{\beta}-1\) from the right side for \(i=1,2\). Therefore, by Corollary 2, we construct quantum codes (in the \(5^{\text{th}}\) column), which have better parameters than the existing codes (in the \(6^{\text{th}}\) column). To represent polynomials \(f(x),g_{i}(x)\) for \(i=1,2\), we write the vectors consisting of their coefficients in decreasing order of power of \(x\). For instance, we write the vector \(1ww^{7}2\) to represent the polynomial \(x^{3}+wx^{2}+w^{7}x+2\).
## 6 Conclusion
In this paper, we have first discussed the structure of the ring \(\mathbb{F}_{q}\mathcal{R}\) and studied the structural properties of \(\mathbb{F}_{q}\mathcal{R}\)-skew cyclic codes. As an application of our established results, we have constructed many quantum codes with better parameters using the CSS construction. To show the novelty of our obtained codes, we have compared them with the best-known codes available in the literature. We believe that this work still has scope for further study by taking the direct product of cyclic, constacyclic, skew cyclic and skew constacyclic codes over finite rings.
## Acknowledgements
The first and second authors thank the Department of Science and Technology (DST), Govt. of India, for financial support under CRG/2020/005927, vide Diary No. SERB/F/6780/2020-2021 dated 31 December, 2020, and Ref No. DST/INSPIRE/03/2016/001445, respectively. H. Islam thanks the University of St. Gallen, Switzerland, for financial support under the International Postdoctoral Fellowship (IPF). Also, the authors would like to thank the anonymous referee(s) and the Editor for their valuable comments to improve the presentation of the paper.
## Data Availability
The authors declare that the data supporting the findings of this study are available within the article. Any clarification may be requested from the corresponding author provided it is essential.
**Competing interests**: The authors declare that there is no conflict of interest regarding the publication of this manuscript.
|
2301.05298 | Open Case Studies: Statistics and Data Science Education through
Real-World Applications | With unprecedented and growing interest in data science education, there are
limited educator materials that provide meaningful opportunities for learners
to practice statistical thinking, as defined by Wild and Pfannkuch (1999), with
messy data addressing real-world challenges. As a solution, Nolan and Speed
(1999) advocated for bringing applications to the forefront in undergraduate
statistics curriculum with the use of in-depth case studies to encourage and
develop statistical thinking in the classroom. Limitations to this approach
include the significant time investment required to develop a case study --
namely, to select a motivating question and to create an illustrative data
analysis -- and the domain expertise needed. As a result, case studies based on
realistic challenges, not toy examples, are scarce. To address this, we
developed the Open Case Studies (https://www.opencasestudies.org) project,
which offers a new statistical and data science education case study model.
This educational resource provides self-contained, multimodal, peer-reviewed,
and open-source guides (or case studies) from real-world examples for active
experiences of complete data analyses. We developed an educator's guide
describing how to most effectively use the case studies, how to modify and
adapt components of the case studies in the classroom, and how to contribute
new case studies. (https://www.opencasestudies.org/OCS_Guide). | Carrie Wright, Qier Meng, Michael R. Breshock, Lyla Atta, Margaret A. Taub, Leah R Jager, John Muschelli, Stephanie C. Hicks | 2023-01-12T21:22:15Z | http://arxiv.org/abs/2301.05298v1 | # Open Case Studies: Statistics and Data Science Education through Real-World Applications
###### Abstract
With unprecedented and growing interest in data science education, there are limited educator materials that provide meaningful opportunities for learners to practice _statistical thinking_, as defined by Wild and Pfannkuch [1], with messy data addressing real-world challenges. As a solution, Nolan and Speed [2] advocated for bringing applications to the forefront in undergraduate statistics curriculum with the use of in-depth _case studies_ to encourage and develop statistical thinking in the classroom. Limitations to this approach include the significant time investment required to develop a case study - namely, to select a motivating question and to create an illustrative data analysis - and the domain expertise needed. As a result, case studies based on realistic challenges, not toy examples, are scarce. To address this, we developed the Open Case Studies (opencasestudies.org) project, which offers a new statistical and data science education case study model. This educational resource provides self-contained, multimodal, peer-reviewed, and open-source guides (or case studies) from real-world examples for active experiences of complete data analyses. We developed an educator's guide describing how to most effectively use the case studies, how to modify and adapt components of the case studies in the classroom, and how to contribute new case studies. (opencasestudies.org/OCS_Guide).
**Keywords**: applied statistics, data science, statistical thinking, case studies, education, computing
## 1 Introduction
A major challenge in the practice of teaching data science and statistics is the limited availability of courses and course materials that provide meaningful opportunities for students to practice and apply _statistical thinking_, as defined by Wild and Pfannkuch [1], with messy data addressing real-world challenges across diverse context domains. To address this problem, Nolan and Speed [2] presented a model for developing _case studies_ (also known as 'labs') for use in undergraduate statistics courses with a specific goal to "encourage and develop statistical thinking". Specifically, the model calls for each case study to be:
"a substantial exercise with nontrivial solutions that leave room for different analyses, and for it to be a central part of the course. The lab should offer motivation and a framework for studying theoretical statistics, and it should give students experience with how statistics can be used to answer scientific questions. An important goal of this approach is to encourage and develop statistical thinking while imparting knowledge in mathematical statistics." [2]
In 2018, Hicks and Irizarry [3] stated that one of their five principles for teaching data science was to "organize the course around a set of diverse case studies" based on the model by Nolan and Speed [2], with a goal of practicing statistical thinking and bringing real-world applications into the classroom. Case studies are also being used in the classroom across a diverse set of fields, including statistics [4, 5, 6, 7, 8], evolutionary biology [9], engineering [10], and environmental science [11].
However, there are several limiting factors to scaling up the use of case studies. First, the process of selecting motivating questions [12], finding real-world and motivating data [13, 14], describing the context around the data [15, 16], and preparing diverse didactic data analyses requires a large initial investment in time and effort [3]. Second, the individuals who are most primed to develop effective and insightful case studies are practitioner-instructors [17], or practicing applied statisticians and data scientists, who teach and practice in a field-specific context. For these individuals, successfully constructing a diverse set of case studies across a wide range of contextual topics may require collaboration with individuals in other disciplines; this can be hard without protected time and effort from their academic institutions [18]. Third, while there are rich repositories of data sets [7], there are few collections of associated data analyses that show how the data can be used to demonstrate fundamental data science and statistical concepts, potentially with unexpected outcomes [19]. This is especially true for complex and messy data, where analysis decisions must go beyond what can be summarized in a brief summary about the data, such as a README file [20, 21]. These challenges have resulted in a scarcity of case studies based on real-world challenges instead of simple toy examples. Moreover, many data repositories have different recommended processing and analysis of subsets of data, which are commonly used as "the" analysis, without proper discussion of alternative choices along the research pathway.
To address these challenges, we developed an open-source educational resource, the Open Case Studies (OCS) project (opencasestudies.org). This resource contains in-depth, self-contained, multimodal, and peer-reviewed experiential guides (or case studies) that demonstrate illustrative data analyses covering a diverse range of statistical and data science topics to teach learners how to effectively derive knowledge from data. These guides can be used by instructors to bring applications to the forefront in the classroom or they can be used by independent learners outside of the classroom. Finally, we developed an educator's guide describing how to most effectively use the case studies, how to modify and adapt components of the case studies in the classroom, and how to contribute new case studies. (opencasestudies.org/OCS_Guide).
The rest of the manuscript is organized as follows. First, we provide an overview and discuss individual components of the Open Case Studies model (**Section 2**), a new model that extends the case studies model of Nolan and Speed [2]. Second, we describe the Open Case Studies educational resource (**Section 3**). Third, we give guidance based on our experience about how others can create their own case studies (**Section 4**), including how to create interactive case studies. We conclude with a summary about the utility of such case studies inside and outside of the classroom (**Section 5**).
## 2 Putting the OCS model into practice
### An overview of the Open Case Studies model
The case-studies model described by Nolan and Speed [2] divides each case study into five main components: (i) introduction, (ii) data description, (iii) background, (iv) investigations, and (v) theory, with an optional section for advanced analyses or related theoretical material. In our Open Case Studies (OCS) model, we expand upon these components to thirteen components. Table 1 describes the components of the OCS model as well as the mapping between our model and the original model of Nolan and Speed [2].
We highlight that while the structures of the two case-study models are similar, our OCS model has a different purpose than the one proposed by Nolan and Speed [2]. Briefly, Nolan and Speed [2] designed case studies to be either (i) used in open-ended discussions in lecture or (ii) used as open-ended lab exercises where students do extensive analyses outside of class and write reports containing their observations and solutions. In both applications, the case studies are designed to be open-ended; the background may be initially discussed in class or as part of an assignment, but students work independently or in a group to create their own solutions and summarize their own findings in a full-length report to answer the original question. In contrast, we made a design choice to build case studies that are full-length, in-depth experiential guides that walk learners through the entire process of data analysis, with an emphasis on computing [22], starting from a motivating question and ending with a summary of the results. Our goal is for educators either to directly use an entire case study in the classroom or to adapt a subset of the material for their use. For example, an educator can choose to show the solutions provided in the case study, show a different solution, or leave the discussion open-ended. Our reasoning for providing full-length guides is that it is typically easier for an educator to remove or modify material instead of creating it from scratch. In this way, we aim to reach a broader audience than just educators in a classroom, as any learner interested in a particular topic can walk through the case study to see an example of a complete data analysis. In addition, this method is particularly helpful for instructors who may not feel confident creating an analysis from scratch, especially if it is outside their main area of expertise, as our case studies are built with domain experts and are peer-reviewed.
### Components of the Open Case Studies model
We will describe the thirteen individual components of our Open Case Studies model (**Table 1**) using one case study as an example. Currently all of our case studies showcase how to use the R statistical programming language [23] for data analyses, although other programming languages could be used with our model. Here, we use the "Exploring CO\({}_{2}\) emissions across time" case study (opencasestudies.org/ocs-bp-co2-emissions), which explores global and country level carbon dioxide (CO\({}_{2}\)) emissions from the 1700s to 2014 (**Figure 1**). This case study also investigates how CO\({}_{2}\) emission rates may relate to increasing temperatures and increasing rates of natural disasters in the United States (US). We also describe four other case studies (**Table 2**) and give
example topics covered in all case studies (**Table S1**).
**1. Motivation**. Each case study begins with a motivating data visualization. This idea originated from Dr. Mine Cetinkaya-Rundel's talk entitled 'Let Them Eat Cake (First)!', presented at the Harvard University Statistics Department's 2018 David K. Pickard Memorial Lecture [24]. She argues that, similar to a recipe book about baking cakes, showing a learner a visualization first can be motivating and give learners a sense of what they will be doing. This practice of showing a visualization at the start of a data analysis and then showing learners the code for how to produce the data visualization enables the learners to have a better sense of the final product and can be motivating to learn the more challenging concepts needed to make the visualization.
The motivating figure from the CO\({}_{2}\) emissions case study (**Figure 1**) is reproduced here. In the case studies, we also include text explaining the motivation for the case study. Our case studies are often motivated by a recent report or publication investigating a specific scientific question. In this section, we explain why the topic is of interest and define any terms that are needed to understand the main questions of interest (described in the next section).
**2. Main questions**. In this section, we highlight and explicitly state a precise set of scientific question(s) or problem(s) before beginning the analysis [25]. When the case study is motivated by a previous publication, these questions may not be exactly the same as what was originally investigated in the paper or report. For example, a case study may only investigate a small subset of the results presented in the report or publication. Alternatively, a case study may not investigate the same question(s) at all, but rather use the data from the report or publication to demonstrate a specific data science or statistics learning objective. This framework also reiterates that many problems have a set of questions prior to analysis; finding an answer and engineering the question post hoc is not recommended. Data exploration is a large component of the analysis framework and is shown in the case studies, but the OCS model stresses that thoughtful questions should be determined prior to analysis.
In the CO\({}_{2}\) emissions case study, the scientific questions are:
1. How have global CO\({}_{2}\) emission rates changed over time? In particular, how have they changed for the US, and how does the US compare to other countries?
2. Are CO\({}_{2}\) emissions in the US, global temperatures, and natural disaster rates in the US associated?
**3. Learning objectives**. Each case study consists of a set of didactic learning objectives. We categorize each objective as related to either (i) data science or (ii) statistics, where the latter are concepts traditionally taught in a statistics curriculum, such as linear regression, multiple testing correction, and significance, and the former are
**Mapping of components between two case study models**

| **Open Case Studies model: Component** | **Description** | **Nolan and Speed [2] model: Component** | **Description** |
| --- | --- | --- | --- |
| 1. Motivation | Motivating figure and text at the start of the case study | Introduction | Describes context of scientific question and motivation |
| 2. Main questions | Scientific question(s) | | |
| 3. Learning objectives | Both data science and statistics learning objectives | | |
| 4. Context | Context of question(s) or data | Background | Information to put question in context using non-technical language |
| 5. Limitations | Any limitations in case study or with data used | | |
| 6. What are the data? | Summary of where the data came from and what the data contain | Data description | Documentation for data collected to address the question |
| 7. Data import | Analyses for importing data | Investigations | Suggestions for answering the question (varies in difficulty) |
| 8. Data wrangling and exploration | Analyses for data wrangling and exploration | | |
| 9. Data visualization | Analyses for data visualization | | |
| 10. Data analysis | Analyses containing statistical concepts and methods to answer question(s) | Theory | Describes relevant statistical concepts and methodologies to answer the question |
| 11. Summary | Summary of results | Extended material (optional) | Describes advanced analyses or related theoretical material |
| 12. Suggested homework | Question(s) to explore further | | |
| 13. Additional information | Helpful links or packages used | | |
Table 1: **Components of an Open Case Study** Descriptions of the components of our Open Case Studies model (left) and their mapping to the components of the case studies model proposed by Nolan and Speed [2] (right). We note that the model from Nolan and Speed [2] orders ‘Data description’ before ‘Background’. However, Background is listed first here to more easily map to our Open Case Studies model.
concepts often appearing outside of a traditional statistics course, such as re-coding data values, scraping data from a website, or creating a dashboard for a data set. Other categories could be considered depending on the purpose of the case study. This separation also allows educators to adapt the material to computational frameworks and languages other than R, such as Python.
We include these learning objectives for three reasons: (i) to help educators select a case study that meets the objectives they want to teach, (ii) to help learners select a case study that demonstrates what they want to learn, and (iii) to provide both educators and learners with a clear understanding of the goals of a particular case study. For example, a study of the use of learning objectives in an undergraduate science course found that students find learning objectives helpful for narrowing and organizing their studying [26].
For the CO\({}_{2}\) emissions case study, we designed the case study around the following learning objectives:
(i) Data Science Learning Objectives:
* Importing data from various types of Excel files and CSV files
* Applying action verbs in dplyr [27] for data wrangling
* How to pivot between "long" and "wide" data sets
* Joining together multiple datasets using dplyr
* How to create effective longitudinal data visualizations with ggplot2 [28]
* How to add text, color, and labels to ggplot2 plots
* How to create faceted ggplot2 plots
(ii) Statistical Learning Objectives:
* Correlation coefficient as a summary statistic
Figure 1: **Example of a motivating figure in the “Exploring CO\({}_{2}\) emissions across time” case study** The complete case study can be found at (opencasestudies.org/ocs-bp-co2-emissions). **Top row**: Line plot showing the increase in CO\({}_{2}\) emissions over time (left). Longitudinal heatmap plot highlighting that the US has been one of the top emission producing countries historically and currently (right). **Bottom row**: Scatter plots showing the trends between CO\({}_{2}\) emissions and temperature across time.
* Relationship between correlation and linear regression
* Correlation is not causation
In addition, by stating these objectives within the case studies, students may begin to identify how they can apply these concepts for future analyses. Finally, we provide an interactive search table of learning objectives on the Open Case Studies website (opencasestudies.org) to make it easier to find a case study that would demonstrate a particular technique, method, or concept that an instructor or learner might be interested in.
**4. Context**. The context section provides background information needed to understand the context of the question(s) of interest and the data that will be used to answer the questions [15, 16]. This may include information from the publication on which the case study is based, but also additional background literature. For an example from public health, the case study may describe what is currently known (or not known) about the health impact of the topic. This serves to demonstrate how the specific question(s) fit into a larger scientific context.
For the CO\({}_{2}\) case study, the context section includes a discussion of the potential impacts of climate change on human health, an overview of the likely progression of warming in the coming years, and potential impacts on other components of the environment such as ocean acidity and rainfall quantities.
**5. Limitations**. In addition to the motivation and context for each case study, it is important to formally describe limitations of the analysis presented as it provides important context for the educator or learner [7]. Examples of limitations include (i) limitations due to the available data, such as the use of surrogate variables or indicators, (ii) limitations in the methods used, such as annual average estimates for quantities that are likely to vary daily or monthly, and (iii) selection biases due to sampling of observed data. A key concept in data science is that the conclusions from an analysis can only be as good as the data that go into it and the methods used to analyze them, so presenting these limitations provides a valuable learning opportunity.
In the CO\({}_{2}\) case study, we describe limitations about how the data are incomplete because only certain countries reported CO\({}_{2}\) emissions for certain years. We describe how additional emissions were also produced
Table 2: **Example case studies in the OCS resource.** For each case study, the table lists the topic, question(s), data source(s), raw data, data science skills, and statistical concepts covered. For example, the Air Pollution case study asks whether annual fine particulate air pollution can be predicted using predictors such as population density, urbanization, and satellite data.
by countries that are not included in the data. This helps the learners to understand that while the data will help us understand CO\({}_{2}\) emissions, they will not provide the full picture.
**6. What are the data?** To provide transparency about the data sources, we describe where and how the raw data were obtained and used in the case study. If the data are obtained from a website, survey or report, and where possible, we also describe how the data were originally collected. We typically describe what the variables are in each dataset later in the case study to better match the experience of the learners discovering the data after they import and explore it.
The data sources for the CO\({}_{2}\) case study are from Gapminder (gapminder.org) (originally from the World Bank) and the United States National Oceanic and Atmospheric Administration. In the case study, we present a table with the different data sources and a brief description of each one, including sources to cite.
**7. Data import**. Next, we describe the steps and give the code required to read the raw data into the analysis environment. Currently, all of our case studies describe analyses in the R programming language. In some cases, importing the raw data is fairly straightforward, and this section is quite short. Other case studies have longer and more involved data import sections that involve scraping data from a PDF, accessing data using an Application Programming Interface (API), or writing functions to efficiently access data from multiple files with the same format. Importantly, we describe all of our use of code in the case studies in a literate programming way [29], meaning that we describe each step in a way that is understandable by learners.
Since the data for the CO\({}_{2}\) case study are stored in Excel and comma-separated-variable (CSV) files, we use standard data import functions read_excel() from the readxl package and read_csv() from the readr R package [30] to import our data.
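To make this step concrete, here is a minimal sketch of the import code; the file names below are hypothetical placeholders for the raw files stored in the case-study repository.

```r
library(readxl)
library(readr)

# Read yearly CO2 emissions from an Excel file (hypothetical file name)
co2_wide <- read_excel("data/raw/yearly_co2_emissions.xlsx")

# Read temperature data from a CSV file (hypothetical file name)
temperature <- read_csv("data/raw/global_temperature.csv")
```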
**8. Data wrangling and exploration**. Typically one of the longest sections for many of our case studies is the wrangling section, which describes all of the steps required to take the imported raw data and get it into a state that is ready for analysis and creating visualizations. We also demonstrate how to perform exploratory data analysis [31].
For example in the CO\({}_{2}\) case study, the raw data needs to be converted from a "wide" to "long" format so that each country-year observation is in a single row. After wrangling the data from each source, we demonstrate how to join together data sets from different sources by matching on country-year combinations. Ultimately, we create one large data set containing all the variables we want to use for our analysis (in the columns) with one record for each country-year combination (in the rows).
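The pivoting and joining steps can be sketched with a small, self-contained example; the column names and values below are made up purely for illustration.

```r
library(dplyr)
library(tidyr)
library(tibble)

# Toy "wide" data: one row per country, one column per year
co2_wide <- tribble(
  ~country, ~`2013`, ~`2014`,
  "USA",      5400,    5250,
  "China",    9500,    9800
)

temperature <- tribble(
  ~country, ~year, ~temperature,
  "USA",     2013,         14.5,
  "USA",     2014,         14.7
)

# Pivot so that each country-year observation is a single row
co2_long <- co2_wide %>%
  pivot_longer(cols = -country, names_to = "year", values_to = "emissions") %>%
  mutate(year = as.integer(year))

# Join data sets from different sources on country-year combinations
combined <- co2_long %>%
  inner_join(temperature, by = c("country", "year"))
```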
**9. Data visualization**. We show both simple and complex data visualizations to explore and demonstrate a variety of graphical design choices, including plot type and other aesthetic choices to best show the types of variables of interest. In addition, most case studies describe how to facet or combine plots together so that all the major findings of the case study are illustrated in a single data visualization.
In the CO\({}_{2}\) case study, we create data visualizations for a subset of the variables. For example, we use line plots to visualize how CO\({}_{2}\) emissions, in metric tons, have varied over time globally (**Figure 1**). We go into detail around coloring and labeling the lines, zooming in and out on the time-scale axis, as well as including informative plot titles and axis labels. We demonstrate that when looking at CO\({}_{2}\) emissions from different countries across time, special consideration for labeling is required. We show that a heat map or tile plot does a great job of illustrating top country differences in a less overwhelming manner (**Figure 1**). We also demonstrate the utility of faceted plots to simultaneously visualize more variables over time. We also show how to start looking for associations or trends in the data through scatter plots with smoothed lines or linear regression lines added.
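A minimal ggplot2 sketch of the kind of longitudinal line plot described above, reusing the toy co2_long data from the wrangling sketch:

```r
library(ggplot2)

ggplot(co2_long, aes(x = year, y = emissions, color = country)) +
  geom_line() +
  labs(
    title = "CO2 emissions over time",
    x = "Year",
    y = "CO2 emissions (metric tons)"
  )
```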
**10. Data analysis**. Our case studies are intended to introduce how a particular statistical test or data science technique might be implemented and interpreted to answer the scientific question(s) of interest. Where relevant, we also walk the learner through an unexpected outcome and how we diagnosed it [19]. We provide background information about statistical concepts and how these concepts apply to our example analysis.
The main topic of the analysis section for our CO\({}_{2}\) emissions Open Case Study is correlation and how correlation is related to linear regression. We discuss background information such as a description about what summary statistics are, what the correlation coefficient is, and how the correlation coefficient is mathematically calculated. We also describe the limitations of correlation analysis and how correlation does not imply causation. We demonstrate how to implement assessments of correlation and how to interpret the results.
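The statistical core of this section can be summarized in a few lines of R; the numbers below are made up purely to illustrate the link between the correlation coefficient and simple linear regression.

```r
# Toy US-level data (made-up values)
us <- data.frame(
  emissions   = c(5.0, 5.2, 5.4, 5.3, 5.6),
  temperature = c(14.1, 14.3, 14.4, 14.2, 14.6)
)

# Pearson correlation coefficient as a summary statistic
r <- cor(us$emissions, us$temperature)

# Simple linear regression; with one predictor, R-squared equals r^2
fit <- lm(temperature ~ emissions, data = us)
summary(fit)$r.squared  # same value as r^2
```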
**11. Summary**. In this section we provide a summary figure that visually indicates some of the major findings of the case study. The goal of this visualization is to demonstrate how to communicate the results of the analysis to a broader audience [6]. This often involves combining plots and adding annotations. This summary figure is the motivating figure used at the beginning of the case study. Along with this figure, we provide a synopsis of the case study in which the motivation, context, and
scientific questions are restated and summarized, while the major steps of wrangling, data exploration, and analysis are described. The main findings of the analysis are discussed, with emphasis on what these findings might indicate for the larger context of the scientific question, in addition to what still remains unknown.
In the CO\({}_{2}\) emissions Open Case Study, the summary figure (**Figure 1**) combines several of the plots from the case study together to demonstrate the major findings. The synopsis recaps what data we worked with (CO\({}_{2}\) emissions for some countries from 1751- 2014) and what we have shown in the analysis, including touching on the learning objectives outlined at the beginning. We give a simpler explanation about the statistical concepts that were discussed in the analysis section, in this case about correlation and regression. We discuss more about what we were able to answer or not answer in terms of the questions of interest. We describe how we discovered a dramatic increase in global CO\({}_{2}\) emissions over time and that some countries appear to be especially responsible. We discuss that although the data suggests a relationship between temperature and CO\({}_{2}\) emissions in the US, there are many other important factors to consider based on what we know about climate change. These include: the influence of CO\({}_{2}\) emissions from other countries in the atmosphere, the influence of other greenhouse gases, the fact that the already existing CO\({}_{2}\) in the atmosphere continues to trap heat for many years, and the fact that heat trapped in the ocean due to previous emissions causes delayed changes in surface temperatures. We also point out what the results of our analysis might mean for how we should consider mitigating climate change effects and how warming temperatures may impact society in the future.
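A hedged sketch of how such a summary figure could be assembled with the patchwork package, where p1 and p2 stand for any two ggplot2 plots built earlier in the case study (here, the toy plots from the previous sketches):

```r
library(ggplot2)
library(patchwork)

p1 <- ggplot(co2_long, aes(year, emissions, color = country)) +
  geom_line()
p2 <- ggplot(us, aes(emissions, temperature)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE)

# Stack the panels and add an overall annotation
(p1 / p2) + plot_annotation(title = "CO2 emissions and temperature over time")
```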
**12. Suggested homework**. Each case study suggests a homework activity for students to try on their own. These activities typically require the students to use the skills that they have learned on a new data set or to expand the analysis to evaluate another subset of the data. Students may also be asked to make visualizations based on these analyses.
The suggested homework activities for the CO\({}_{2}\) emissions Open Case Study are to:
* Create a plot with labels showing the countries with the lowest CO\({}_{2}\) emission levels.
* Plot CO\({}_{2}\) emissions and other variables (e.g., energy use) on a scatter plot, calculate the Pearson correlation coefficient, and discuss the results.
These suggestions would require learners to practice their visualization and analytic skills to further investigate the data with less guidance.
**13. Additional information**. This section includes additional information about the broader scientific topic of the case study, the methods used to analyze the data, and the specific data sets used in the analysis. Information is provided as links to external online resources such as blog posts, scientific articles, scientific reports, and educational websites. We also provide links to documentation about the R packages used, as well as the specific package versions that were used. We also link to information about the specific subject-matter experts who contributed to the development of the case study.
The CO\({}_{2}\) emissions Open Case Study includes links to resources for learning more about the various R packages used in the case study (such as here [32], readxl [33], readr [30], dplyr [27], magrittr [34], stringr [35], purrr [36], tidyr [37], tibble [38], forcats [39], ggplot2 [28], directlabels [40], ggrepel [41], broom [42], patchwork [43]) and how they were used, as well as information about the statistical topics touched on, including correlation, regression and time series analysis. These go beyond some of the material presented in the case study, to help point instructors or learners to additional resources for topics of interest.
## 3 The OCS educational resource
The OCS resource can be found online (opencasestudies.org). In addition, we created an educator's guide describing how to most effectively use the case studies, how to modify and adapt components of the case studies in the classroom, and how to contribute new case studies. (opencasestudies.org/OCS_Guide).
### Open Case Study website and search tool
Our case study resource is hosted on our Open Case Studies (OCS) website (**Figure 2**). To navigate the case studies, we provide an interactive search table, built using the DT package [44], that allows those interested to search through our case studies by topic, statistical learning objective, data science learning objective, and R packages demonstrated. This table includes links to the code and data for each case study, as well as links to websites that are rendered versions of each case study where the entire analysis can be read in full.
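As a rough sketch, a searchable table like this can be produced with a single call to the DT package; the metadata data frame below is a hypothetical stand-in for the real table of case studies.

```r
library(DT)

case_studies <- data.frame(
  case_study = "Exploring CO2 emissions across time",
  packages   = "readxl, readr, dplyr, tidyr, ggplot2",
  objective  = "Correlation and linear regression"
)

# Render an interactive table with per-column filters
datatable(case_studies, filter = "top", options = list(pageLength = 10))
```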
### Open Case Studies on GitHub
The code and data for each case study are hosted in a GitHub repository (**Figure 2**). Our case studies are built in R Markdown, allowing text, images, and gifs that describe the context and data analytic process to be interspersed with code chunks that show the actual code used in the analysis [29]. We developed these prior to the release of the quarto publishing system (quarto.org/quarto). These case studies are then "knit" into rendered html-formatted files using GitHub actions [45] for continuous integration and deployment. By continuous integration, we mean that changes are tracked
Figure 2: **An overview of the OCS educational resource** The Open Case Studies website contains a searchable database of all available case studies. Users can search by case study name, R packages used, learning objectives, and category. Each case study links to a website with a rendered version of the entire analysis and to the Github repository. The Github repository hosts the online lesson and all of the related code, data, image, plot, and document files needed to follow along or conduct new analyses. Some case studies now have interactive versions that include live quizzes and coding tutorials.
and a history of the code from various authors is saved to a single main version [46] using Git and GitHub. By continuous deployment, we mean that the website versions of the case studies are automatically rendered and available to the public once a new version is established on GitHub. These website versions of the case studies are also hosted on GitHub. Currently our case studies are all written using the R programming language; however, our current format could be extended to support tutorials using other programming languages as well. Our case studies have a table of contents that allows instructors and learners to easily navigate from section to section, so that they can focus on the materials most useful for their needs. In addition, each case study starts with a graphic or plot that describes the basic findings of the case study. Each case study is organized with the same basic structure so that learners can navigate case studies more easily, and see patterns across case studies on how analysis is performed (**Figure S1**).
### Open Case Study file structure
Each case-study repository has a similar file structure, with a data directory containing both raw data and versions of the data in various processed forms to allow instructors/learners to modularize the case studies for their own purposes (**Figure 3**). For example, an instructor could skip the data import and wrangling sections of the case study and focus on the visualizations and analysis pieces using a fully cleaned data set. To support this modular style of instruction, each case study includes commands at the beginning of each section that imports the data in the final state of the previous section. These different stages of the data are organized in a data folder with five categories: raw, imported, wrangled, simpler import, and extra. The raw data directory includes files in their original unaltered condition and in the original file format from the original data source (in some cases raw files are CSV files, Excel files, PDFs among other file formats). The imported data directory includes files containing the data in a format that is directly compatible with R, such as RData files which are often abbreviated as Rda. The wrangled data directory also includes an RData file that contains a clean and tidy version of the data that has been pre-processed and is ready for analysis, as well as csv files for instructors that wish to demonstrate a simpler version of data import. The simpler import folder contains raw files that have been converted to CSV file format or other formats that can be more easily imported into R. The extra data folder contains data files that allow for individuals to conduct analyses beyond what was done in the case study (the file format for these extra files varies). Each repository also contains a README file [20; 21] that explains the modular aspect of the case study, as well as other information about how to use the case study for educational purposes (**Table S2**).
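For example, an instructor who wants to skip the import and wrangling sections could start directly from the wrangled data with something like the following sketch; the exact file name differs from repository to repository.

```r
library(here)

# Load the pre-wrangled data saved as an RData (.rda) file
load(here("data", "wrangled", "wrangled_CO2_data.rda"))
```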
### Interactive elements in Open Case Studies.
To make our case studies more experiential, we have introduced interactive elements including quizzes and coding exercises using the learnr [47] and gradethis [48] packages.
We include a mix of multiple choice questions and coding exercises in each case study. Coding exercises are embedded throughout the content of the case studies and give students a chance to write code for a specific step in the analysis. The answers to these exercises (the code/output used in the case study to complete these steps) are then hidden in a click-to-expand section right after the exercise window. Students can compare their own code and output with these answers. We also create exercise subsections at the end of the main sections of the case study. These exercise subsections include both multiple choice questions and coding exercises. Students can use them to test their understanding of the content in each section. All multiple-choice questions provide real-time feedback, giving hints after wrong answers and allowing students to retry the questions if they submitted a wrong answer. For most of the coding exercises, hints and/or solutions are available. With the help of the gradethis package, some of these coding problems also provide real-time feedback after students submit their code.
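A minimal sketch of what one of these interactive elements looks like inside a learnr tutorial chunk; the question text is our own illustrative example rather than one taken from a case study.

```r
library(learnr)

quiz(
  question(
    "Which function pivots a data set from 'wide' to 'long' format?",
    answer("pivot_longer()", correct = TRUE),
    answer("pivot_wider()"),
    answer("inner_join()"),
    allow_retry = TRUE
  )
)
```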
## 4 Building your own case studies
For educators interested in constructing their own case studies, in this next section, we describe our recommendations for the process based on our experiences and challenges throughout this project. We also describe these recommendations in our Educator's guide (opencasestudies.org/OCS_Guide).
### Identifying questions and data for case studies
The process of choosing data sources and questions of interest is arguably the most important part of constructing a case study. We can either identify an interesting and publicly available data set and then ask a timely and engaging question about a topic related to the data, or we can identify an interesting question and then work to find publicly available data to answer this question. This process of linking a question to publicly available data often involves some trial and error and reshaping of the question, while keeping in mind (and potentially adjusting) what the case study is meant to demonstrate.
In our experience developing case studies, we found that identifying a data set first was often easier than relying on finding a data set to answer a particular question. While many of our case studies were specifically designed to address a public health challenge, we sometimes struggled to find publicly available data that was appropriate for the question or set of questions of interest. Collaboration with subject-matter experts can be especially helpful in addressing this challenge. For our case studies, we worked with public health experts in order to both identify interesting, timely, and testable questions and to find a public source of data to answer our questions.
We found we could use the difficulty of obtaining data in a standard format (e.g., Excel, CSV) as a teaching opportunity, and that being open-minded about the source of the data allowed us to demonstrate unconventional skills. For example, when we could not easily access the data stored in a table in a published report, we illustrated the data science skill of pulling data directly from a PDF. As future data scientists, our students need the skills to be flexible to access data that cannot simply be read in or imported as-is into R.
While we typically started developing each case study with a set of data science and statistical learning objectives in mind, there was sometimes a tension between finding a data set that would allow us to meet these specific objectives and allowing the data to guide the direction of the case study. We found that following opportunities presented by the data itself led us to give examples that were more authentic to a real-world data analysis situation. We recorded some of these challenges within the case studies themselves so students could better understand the process of finding the right data to answer a question of interest (and the potential need to refocus a question). The limitations section in particular provides some of the most useful material for class discussions about the types of questions the data can and cannot answer and how sometimes we must simplify our analysis to reflect the limitations of the data available to us.
As educators working during a time of reflection and social change around issues of gender and race in research, we also took care to point out some historically overlooked aspects of our data sets. For example, collecting data with surveys that provide a limited number of options about ethnicity or race or racial and gender intersections limits our ability to accurately capture the diversity of the population being studied. As an example, we refer the reader to the case study about youth disconnection (opencasestudies.org/ocs-bp-youth-disconnection).
Figure 3: **An overview of the data file structure on GitHub** A tree illustrating the repository data directory structure. Each bubble describes the type of data files that can be found in the sub-folders.
For some case studies, we focused on finding mostly clean and complete data to allow us to demonstrate certain concepts, like machine learning or how to create a dashboard. In these case studies where we knew that the analytical material was going to get quite intensive and lengthy, we specifically sought to find data sets that would allow us to jump right in with little difficulty in terms of gathering, cleaning, and importing the data.
Our overall suggestions for starting a case study are:
* **Be open-minded and flexible about data sources:** Unlike performing a real analysis where an analyst might choose to avoid complications in accessing the data (when the option is available to go with a data set that is easier to access), such complications can provide teaching opportunities to prepare students for cases where they will not have a simpler option available.
* **Determine the level of flexibility based on the goals of the case study:** If the case study is intended to demonstrate a specific statistical method or data import method, more effort may be required to find the right data to meet this specific teaching expectation. In our case we knew we were planning to make several case studies, and thus we were able to let some of the case studies naturally flow in directions we didn't initially intend. This ultimately led to some teaching opportunities we did not expect. However, for some of our case studies we were more rigid about our data needs.
* **Think about the scope of the case study:** Keep in mind the 1) type of learners that the case study is intended for and 2) data analysis method goals that the case study is intended to demonstrate. Try to avoid a case study that is both intensive for data import/wrangling and intensive for data analysis. At a later point reevaluation of the overall direction and scope of the case study may be needed. If the case study is too long, consider splitting it into multiple case studies.
* **Keep it simple:** Explaining a process at a beginner level often took more space within a case study than we anticipated. Keeping case study plans simple can help, as unexpected teaching opportunities may arise that require more instruction.
### Do the analysis first but with a learner in mind
To present an analysis narrative, it is necessary to first perform the analysis before working on the narrative description. However, the case study itself should not simply be a reproduction of the process used to analyze the data. Instead it should contain simplifications and modifications to create a clear and coherent presentation for students. To do this, it is crucial to keep a good record of all the steps taken during this initial analysis, including explanations and comments to justify the analysis choices made along the way. Special care should be taken to record exactly how the raw data is obtained.
Often the way we would typically perform an analysis ourselves might not always be the best for instruction purposes. For example, an experienced data analyst might start by writing a function that wrangles multiple similar data files. However, this would not be the appropriate way to start a case study for beginners. Instead one might choose to focus on wrangling a single specific file in great detail before trying to generalize the code as a function. Thus we try to determine an overall process of data import and wrangling for the intended level of audience before really generating the dialogue that describes this process.
We also found that often the data exploration steps and the steps involved in the decision-making process of how to wrangle the data needed to be simplified for a case study. For example, we may ultimately decide to remove a data source from our analysis because we find errors in the data and dealing with these errors is beyond the scope for our intended audience. While it may be useful to tell students about these data errors [19] and how to address them, we also need to keep an appropriate level of detail so as not to overwhelm them.
Another situation where we might modify the analysis is if a process requires a considerable level of trial and error. Rather than showing the students all the iterations of the trial and error and all of the decisions around this process, we may only demonstrate a small portion so as not to make the case study too lengthy. In a case study about machine learning, for example, we aimed to achieve a certain level of performance so we spent a fair amount of time demonstrating how to optimize and tune parameters. While we briefly described our tuning strategy, we did not show all intermediate models, but ultimately showed two that were interesting and useful for describing parameter tuning.
To conclude, we may have gone through a learning process in our own analysis, eventually arriving at a more refined approach. Instead of describing the entire process to get to this point, we would sometimes simply present the final approach, yet describe in the narrative that in practice more effort would be required. While we do want to present a realistic depiction of the data analysis process, we also need to achieve clarity and focus.
### Creating the case study narrative
Once an analyst has performed the analysis to address the questions of interest, it is time to start writing the narrative. First, we introduce and motivate the main topic by presenting some research related to the particular question evaluated in the case study.
Next, we describe the data import, wrangling, and analysis processes. As mentioned above, this will likely not be a faithful reproduction of our own analysis process, but will be recreated to best meet the pedagogical goals of the case study. In terms of added narrative, we
do our best to guide students through the new information we are presenting. The first time we use a function, we describe what it does, its main arguments, and which package it comes from. We describe the thoughts behind our decision-making process from one step to the next, sometimes illustrating times where we try something and it does not work, to reflect a real-world data analysis.
We also describe jargon and background information where possible with click-to-expand sections so as not to disrupt the general flow of the case study. For example, an expanded section would explain how "piping" works (passing objects through a series of steps) to avoid slowing down students who are already familiar with the concept, while not losing students who have never seen piping before. Other material for such expandable sections includes describing the "grammar of graphics" for the ggplot2 package or providing background statistical information before performing a statistical test. In some cases we describe a concept at great length in another case study, so we link to the description there, but in general we at least minimally describe most concepts and methods in each case study to keep them as self-contained as possible. Similarly, we found including portions of RStudio cheat sheets [49] to be very useful for certain topics, such as describing regular expressions or joining functions. In some cases we found it best to explain a concept or challenge with a simpler example first, using a smaller data set imported into R or created in R ourselves. This material is also included in click-to-expand sections for students who might already be familiar with such concepts.
While constructing the narrative, we think about where we can include question opportunities. These opportunities include places for an instructor to start a discussion about the analysis decision-making process, such as why a particular graph choice is not always effective or why a wrangling method might not be reproducible. We may prompt students to try to remember how to perform a task that has already been shown in the case study previously. In our interactive case studies we also include quiz questions and coding exercises, as described in Section 3.4.
Finally, we end the narrative by summarizing how to communicate the major findings of the analysis [6]. We also describe how the results fit into the greater context of the field, what the implications are, the limitations of the study, and what is still unknown. We finish by compiling a list of all the resources shown throughout the case study.
Through the process of creating this resource, we discovered a variety of challenges, as well as strategies that we used to overcome these challenges, as described in **Table S3** and guidelines for creating new case studies (Supplemental Note S1).
### Creating interactive case studies
We have also included interactive elements in a subset of our case studies using packages (learnr [47] and gradethis [48]) that build on the shiny [50] package, which allows R users to more easily create web applications. The learnr package allows users to create multiple-choice questions and coding exercises, while gradethis allows for customization of the feedback provided to learners as they answer questions or perform exercises.
There are two methods for embedding these interactive elements. One method is to host each exercise as an individual Shiny application and then embed these applications in the case study using inline frames (HTML 'iframe'). The second method is to create one single application that incorporates the exercises within the case study (Supplemental Note S2 and Supplemental Note S3).
## 5 Discussion and Conclusions
In this paper, we introduce a model for creating fully open-source, peer-reviewed, and complete case studies to create an archive of examples of best practices to guide students through data analyses involving real, complicated, messy, and interesting data. Our archive can be used in the classroom by instructors to guide students through any part of our case studies, due to their easy navigation and the common modular architecture used to structure the case studies. These can also be used by independent learners due to the thorough narrative, interactive elements, and complete analyses. Students and learners can learn about new topics or return to a case study to brush up on details of a particular method or technique. The data within our case studies and the narrated data analyses and data science methods can be used by instructors educating undergraduate and graduate students, as well as high school students, in a variety of topics including statistics, public health, programming, and data science. This provides an opportunity for instructors to use data that is relevant to current public health concerns and therefore of interest to a large variety of students, without the work required to identify such data or to determine what analyses are possible with such data. This will free instructors to focus on challenging the students with more interactive discussions in class and allow students to learn more about the decision processes required for analyzing data.
In summary, OCS provides a consistent framework grounded in [2], is fully open, and additionally provides recommendations on how to teach the material. With the OCS resources, educators can also build their own case studies and, by contributing them back, expand OCS. We believe these additions help bridge the gaps in the last mile of data analysis education.
## Back Matter
### Author Contributions
Contributions listed according to the CRediT system.
* Conceptualization: CW, MAT, LRJ, SCH
* Software: all co-authors
* Formal analysis: all co-authors
* Investigation: CW, SCH
* Data Curation: CW, SCH
* Original Draft: CW, SCH
* Review & Editing: all co-authors
* Visualization: CW, MB
* Supervision: CW, MAT, LRJ, SCH
* Funding acquisition: CW, MAT, LRJ, SCH
## Acknowledgements
We would like to thank the Johns Hopkins Data Science lab (jhudatascience.org), in particular Roger Peng, Jeff Leek, Brian Caffo, and Jessica Crowell for their support and valuable feedback on the Open Case Studies project. We would like to thank Ira Gooding for his feedback on incorporating case studies into the Coursera platform. In addition we would like to thank all the data science and statistics reviewers of our case studies, including: Shannon Ellis, Nicholas Horton, Leslie Myint, Mine Cetinkaya-Rundel, Michael Love, and Christina P. Knudson, as well as the following student reviewers: Jensen Stanton, Tina Trinh, and Ruby Ho. We would also like to acknowledge the topic reviewers including: Roger Peng, Tamar Mendelson, Brendan Salomer, Rene Johnson, Jessica Fanzo, Daniel Webster, Elizabeth Stuart, Aboozar Hadavand, Megan Latshaw, Kirsten Koehler, and Alexander McCourt. We would also like to acknowledge Ashkan Afshin and Erin Mullany for giving us access to the data for the case study titled "Exploring global patterns of dietary behaviors associated with health risk." We would also like to thank the Johns Hopkins Bloomberg School of Public Health Department of Biostatistics for initially funding this project.
## Funding
The Open Case Study project reported in this publication was supported by a High-Impact Project grant in 2019-2020 by the Bloomberg American Health Initiative to create the majority of the case studies currently part of the project. A 2020 Digital Education & Learning Technology Acceleration (DELTA) Grant from the Office of the Provost at the Johns Hopkins University supported the creation of interactive case studies and many of the tools that support their use, such as the search tool. The Open Case Studies guide was funded as an extension to funding for the Genomic Data Science Community Network (GDSCN). The GDSCN is supported through a contract to Johns Hopkins University (75N92020P00235) NHGRI. JM was supported by Streamline Data Science, U24HG010263-01 (ANVIL), UL1TR003098 (NIH/NCATS): Institutional Clinical and Translational Science Award, and UE5CA254170.
### Conflict of Interest
The co-authors Carrie Wright and Stephanie Hicks receive royalties on a book available on Leanpub and a course on Coursera titled "Tidyverse Skills for Data Science", both of which incorporate three case studies from the Open Case Studies project.
|
2307.01370 | Multilingual Language Models are not Multicultural: A Case Study in
Emotion | Emotions are experienced and expressed differently across the world. In order
to use Large Language Models (LMs) for multilingual tasks that require
emotional sensitivity, LMs must reflect this cultural variation in emotion. In
this study, we investigate whether the widely-used multilingual LMs in 2023
reflect differences in emotional expressions across cultures and languages. We
find that embeddings obtained from LMs (e.g., XLM-RoBERTa) are Anglocentric,
and generative LMs (e.g., ChatGPT) reflect Western norms, even when responding
to prompts in other languages. Our results show that multilingual LMs do not
successfully learn the culturally appropriate nuances of emotion and we
highlight possible research directions towards correcting this. | Shreya Havaldar, Sunny Rai, Bhumika Singhal, Langchen Liu, Sharath Chandra Guntuku, Lyle Ungar | 2023-07-03T21:54:28Z | http://arxiv.org/abs/2307.01370v2 | # Multilingual Language Models are not Multicultural: A Case Study in Emotion
###### Abstract
Emotions are experienced and expressed differently across the world. In order to use Large Language Models (LMs) for multilingual tasks that require emotional sensitivity, LMs must reflect this cultural variation in emotion. In this study, we investigate whether the widely-used multilingual LMs in 2023 reflect differences in emotional expressions across cultures and languages. We find that embeddings obtained from LMs (e.g., XLM-RoBERTa) are Anglo-centric, and generative LMs (e.g., ChatGPT) reflect Western norms, even when responding to prompts in other languages. Our results show that multilingual LMs do not successfully learn the culturally appropriate nuances of emotion and we highlight possible research directions towards correcting this.
## 1 Introduction
The global reach of Large Language Models (LMs) today prompts an important question - _Are multilingual LMs also multicultural?_ We are specifically interested in the multicultural behavior of LMs from the lens of emotion. LMs are used for many multilingual tasks that require emotional sensitivity and therefore must be able to reflect cultural variation in emotion. For instance, LM-powered Therapy Bots must delicately adapt the way they speak to patients in different languages (Wang et al., 2021), LMs as creative writing assistants must produce content that will elicit the appropriate emotional response in an author's desired audience (Shakeri et al., 2021), LMs used for workplace communication must understand the subtleties of interpersonal interaction (Thiergart et al., 2021), etc.
We define cultural variation in emotion as _the nuances in meaning and usage of emotion words across cultures_. For example, in English, we have many different words that express Anger. One can say "I feel angry," but may also choose to say "frustrated", "irritated", or "furious." The Anger invoked by a baby crying on an airplane is different from the Anger invoked by an unfair grade on an exam; different situations that cause Anger will invoke different language to best express it. These nuances in meaning and usage patterns of emotion words exist differently across cultures (Mesquita et al., 1997; Wierzbicka, 1999).
Therefore, there is not a perfect one-to-one mapping between languages for emotion words coupled with their meaning and usage patterns. The direct translation for "I feel frustrated" from English to Chinese (simplified), for example, is "我感到沮丧". However, in a situation where a native English speaker would likely say "I feel frustrated," a native Chinese speaker may use a different phrase than "我感到沮丧", based on situation, context, and the cultural norms of emotion expression in China.
As we rely on multilingual LMs today for emotionally sensitive tasks, they must reflect this cultural variation in emotion. However, the widely used multilingual LMs are trained on Anglocentric corpora and encourage alignment of other languages with English Reimers and Gurevych (2020), both implicitly and explicitly, during training. The key problem in this approach to building multilingual LMs is that any form of alignment destroys a model's ability to encode subtle differences, like the difference between "I feel frustrated" in the United States and "我感到沮丧" in China.

Figure 1: Do LMs always generate culturally-aware emotional language? We prompt GPT-4 to answer "How would you feel about confronting your friend in their home?" like someone from Japan. We provide cultural context either via English (stating "You live in Japan" in the prompt) or via a Japanese prompt. GPT-4 returns two drastically different completions, with the Japanese completion annotated as not culturally appropriate.

## 3 Investigating multilingual LM embeddings

Multilingual LMs are pre-trained on corpora that span multiple languages. The nature of Wikipedia, which has topic-aligned articles in different languages, causes _implicit alignment_ in training. Worse, XLM-RoBERTa variants trained via multilingual knowledge distillation Reimers and Gurevych (2020) enforce English sentences and their translations to map to the same point in embedding space, giving _explicit alignment_ of other languages with English.
This section investigates the effect of alignment - both implicit and explicit - by analyzing emotion embeddings from monolingual, multilingual, and aligned RoBERTa models (See Table A2). We further investigate whether this anchoring impacts our ability to visualize known cultural differences (e.g. differences between Pride and Shame in the US vs. Japan Tsai et al. (2006)) when projecting embeddings into the two-dimensional Valence-Arousal plane Russell (1980).
### Does implicit and explicit alignment inappropriately anchor emotion embeddings to English?
We analyze whether implicitly aligned embeddings become Anglocentric by comparing emotion embeddings from XLM-RoBERTa to emotion embeddings learned in a parallel, monolingual setting. We further analyze explicit alignment by comparing embeddings from vanilla XLM-RoBERTa to an explicitly aligned variant of XLM-RoBERTa Reimers and Gurevych (2020).
**Distance-Based Similarity.** How do we compare the emotion embeddings of two models? Let us take Joy, one of the six Ekman emotions Ekman et al. (1999), as an example - can we compare the similarity of embeddings from two models for the phrase "I feel joy"? 2 A direct numerical comparison is challenging, as we would need to align the embedding spaces of these two models and possibly distort the Joy embeddings. Taking this into account, we pose the following solution:
Footnote 2: We prepend each emotion word with the phrases ”I feel” and ”I am” to add context and circumvent polysemy when generating embeddings for analysis.
The more similar two models are, the more similarly we expect them to embed the same phrases in embedding space. For example, let us embed phrases x, y, and z using Model A and Model B. This gives us the embedding vectors \(\vec{x}_{A},\vec{y}_{A},\vec{z}_{A}\) and \(\vec{x}_{B},\vec{y}_{B},\vec{z}_{B}\) respectively. Figure 2 illustrates this, showing the embeddings of Joy, Anger, Elation, Sadness, and Happiness using a monolingual and multilingual RoBERTa model.
If Model A and Model B have embedded phrases x, y, and z in a similar way, then we expect to see a high correlation between the numerical distances \(x\to y,x\to z,\) and \(y\to z\) in the respective embedding spaces of Model A and B. We calculate the correlation between the following two vectors:
\(<\|\vec{x}_{A}-\vec{y}_{A}\|,\|\vec{x}_{A}-\vec{z}_{A}\|,\|\vec{y}_{A}-\vec{ z}_{A}\|>\)
\(<\|\vec{x}_{B}-\vec{y}_{B}\|,\|\vec{x}_{B}-\vec{z}_{B}\|,\|\vec{y}_{B}-\vec{ z}_{B}\|>\)
to inform how similar the embeddings of x, y, and z are between Model A and Model B.
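As a concrete illustration, the snippet below sketches how this distance-based similarity could be computed. It is a minimal sketch, not the paper's implementation: it assumes the emotion embeddings of each model are available as dictionaries of NumPy vectors (`emb_a` and `emb_b` are illustrative names), and it uses Euclidean distance with Pearson correlation, matching the setup described above (the paper's Appendix A ablates both choices).

```python
from itertools import combinations

import numpy as np
from scipy.stats import pearsonr

def distance_similarity(emb_a, emb_b, phrases):
    """Correlate pairwise distances between two embedding spaces.

    emb_a, emb_b: dicts mapping a phrase (e.g., "I feel joy") to a
    1-D numpy vector from Model A and Model B respectively.
    """
    dists_a, dists_b = [], []
    for x, y in combinations(phrases, 2):
        dists_a.append(np.linalg.norm(emb_a[x] - emb_a[y]))
        dists_b.append(np.linalg.norm(emb_b[x] - emb_b[y]))
    r, _ = pearsonr(dists_a, dists_b)
    return r  # high r => the two models arrange these phrases similarly
```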
Using this idea, we can compare the _distances_ from "I feel joy" to other contextualized emotion phrases (e.g. "I feel anger", "I feel happiness", etc.) in embedding space A to those same distances in embedding space B. For example, if the monolingual and multilingual RoBERTa models shown in Figure 2 have learned similar representations of Joy, then we can expect to see a high Pearson correlation between the vectors \(<13.05, 9.85, 12.55, 2.23>\) and \(<28.44, 6.68, 28.48, 4.25>\). We use this distance-based similarity metric to answer the following three questions:
1. Do implicitly aligned multilingual LMs embed emotion words differently than monolingual LMs?
2. Do implicitly aligned multilingual LMs embed emotion words in an Anglocentric way?
3. Does explicit alignment further anchor multilingual emotion embeddings to English?
Figure 2: We determine the similarity between the embeddings of monolingual Joy and multilingual Joy by comparing the distances from Joy to other emotion embeddings in both settings. Specifically, we calculate the correlation between \(<13.05, 9.85, 12.55, 2.23>\) and \(<28.44, 6.68, 28.48, 4.25>\) to infer similarity.
**Do implicitly aligned multilingual LMs embed emotion words differently than monolingual LMs?** We compare the emotion representations from _monolingual_ and _multilingual_ RoBERTa models across English, Spanish, Chinese, and Japanese. We select the four monolingual RoBERTa models most downloaded on Huggingface, additionally ensuring the four models selected have the same number of parameters. Table A2 contains additional details on the models used in our experiments.3
Footnote 3: We note that differences in training data for the monolingual RoBERTa models affect how these models are able to capture emotion. However, it is important to investigate LMs actively used in NLP research rather than explicitly creating a perfectly parallel set of monolingual models.
Figure 2 illustrates this experiment. In practice, we use a list of 271 emotions Davis (2023) for our distance-based similarity computation. Additionally, to account for variance in descriptions of experiencing emotion, we average the embedding of two contextualized phrases for each emotion - "I feel <_emotion_>" and "I am <_emotion_>".
For non-English languages, we machine translate the two contextualized English phrases for each emotion (e.g. a representation of Joy in English is the average of the embeddings of "I feel joy" and "I am joyful". The representation of Joy in Spanish is the average of the embeddings "siento alegria" and "soy alegre", etc.). In order to ensure quality, we have native speakers evaluate a subset of the machine-translated emotion phrases, and we find that translation does yield sufficient results.
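A sketch of how such contextualized emotion embeddings might be produced is shown below. It assumes mean pooling over the final hidden states of a Hugging Face model (`xlm-roberta-base` stands in for the multilingual setting); the pooling strategy and the fixed templates are simplifications of the procedure described above, where template inflections (e.g., "I am joyful") vary per emotion.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")
model.eval()

def embed(sentence):
    """Mean-pooled sentence embedding (one pooling choice among several)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

def emotion_embedding(emotion):
    # Average the two contextualized templates, as described above.
    return (embed(f"I feel {emotion}") + embed(f"I am {emotion}")) / 2
```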
We then apply our distance-based similarity metric to compare the monolingual and multilingual emotion embeddings across languages. The "Mono vs. Multi" column in Table 1 shows the average distance-based similarity across all 271 emotions. The lower similarities for non-English languages indicate that _XLM-RoBERTa embeds non-English emotions differently compared to monolingual models_. We can thus say that multilingual LMs do not preserve the embedding space of monolingual non-English LMs.
**Do implicitly aligned multilingual LMs embed emotion words in an Anglocentric way?** We compare the emotion representations of _English_ vs. _non-English_ languages. We apply our distance-based similarity metric to measure the similarity between English and non-English emotion representations in two settings - monolingual and multilingual. Figure 3 illustrates this experiment.
The "English vs. Non-English" columns in Table 1 show the average distance-based similarity between English and non-English emotion embeddings across all 271 emotions, in monolingual and multilingual settings respectively. Results reveal low similarity between non-English and English emotion embeddings in monolingual space. _In a multilingual setting, however, the non-English emotion embeddings become more similar to English ones._ This suggests that implicit alignment in multilingual LMs anchors non-English emotion embeddings to their English counterparts.
**Does explicit alignment further anchor multilingual emotion embeddings to English?** We compare emotion embeddings from an _unaligned_ RoBERTa model to a RoBERTa model trained via _forced alignment_ across English, Spanish, Chinese, and Japanese Reimers and Gurevych (2020).
The average distance-based similarity between aligned and unaligned emotion embeddings across all 271 emotions is shown in column "Aligned vs. Unaligned" in Table 1. _Emotion embeddings from explicitly aligned models are most similar to unaligned embeddings in English_, indicating explicitly aligned embedding space fails to preserve the structure of non-English embedding spaces.
**Finding 1:** Multilingual LMs embed non-English emotion words differently from their monolingual counterparts, whereas English emotion embeddings are more stable and similar in all settings. We demonstrate that _implicit and explicit alignment in multilingual LMs anchor non-English emotion embeddings to English emotions._ All observed trends persist under ablation studies on the effect of distance metric and correlation function (see Appendix A).

Figure 3: We compare the similarity between the embeddings of Joy in English and Joy (Alegria) in Spanish by comparing the distances from Joy to other emotion embeddings in both languages. Specifically, we calculate the correlation between \(<13.05, 9.85, 12.55, 2.23>\) and \(<0.39, 0.41, 0.37, 0.35>\) to infer similarity.
### Do emotion embeddings reflect known psychological cultural differences?
Though emotion embeddings from multilingual LMs are Anglocentric, we nonetheless investigate whether they encode any information about known cultural variation in emotion. Prior work Tsai (2017); Russell et al. (1989) underlines the differences in emotional expression across cultures, and often illustrates these differences via the circumplex model of affect Russell (1980). The circumplex model assumes all emotions can be classified along two independent dimensions - _arousal_ (the magnitude of intensity or activation) and _valence_ (how negative or positive).
Pride and Shame are two widely researched emotions when investigating cultural differences in emotional expression (Lewis et al., 2010; Wong and Tsai, 2007). Shame is expressed more commonly and has a more desirable affect in Eastern cultures than in Western cultures. Similarly, Pride is openly expressed in Western cultures whereas Eastern cultures tend to inhibit the feeling of Pride Lim (2016). Moreover, these proclivities are deeply ingrained in society and thus acquired at a very young age Furukawa et al. (2012).
For our experiments, we consider the US and Japan, as the subtle differences in expression of Pride and Shame between these two cultures are well-studied Kitayama et al. (2000); Tsai et al. (2006). We project emotion embeddings from English and Japanese onto the Valence-Arousal plane to visualize whether multilingual LMs capture the expected differences in Pride and Shame. When comparing the embeddings, we expect to specifically observe:
1. The embedding for English Pride should have a more positive valence. _(as Pride is more accepted in the US than Japan)_ Furukawa et al. (2012)
2. The embedding for English Shame should have a more negative valence. _(as Shame is more embraced in Japan than the US)_ Furukawa et al. (2012)
3. The embeddings for English Pride should have higher arousal _(as Pride is more internally and culturally regulated in Japan than the US)_ Lim (2016)
**Projection into the Valence-Arousal plane.** In order to define the valence and arousal axes, we first generate four axis-defining points by averaging the contextualized embeddings of the emotions listed in Table A1. This gives us four vectors in embedding space that best represent positive valence (\(PV\)), negative valence (\(NV\)), high arousal (\(HA\)), and low arousal (\(LA\)). We can now project any emotion embedding onto the plane defined by the valence axis (\(NV\to PV\)) and the arousal axis (\(LA\to HA\)). We give a more formal, mathematical description of this projection method in Appendix B. Figure 4 shows the six Ekman emotions Ekman et al. (1999) projected into the Valence-Arousal plane, indicating that our projection method successfully recreates the circumplex.

\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Mono vs. Multi & English vs. Non-English & Aligned vs. Unaligned \\ \hline Language (L) & \(\bar{r}(L_{mono},L_{multi})\) & \(\bar{r}(En,L)_{mono}\) & \(\bar{r}(En,L)_{multi}\) & \(\bar{r}(L_{align},L_{unalign})_{multi}\) \\ \hline English (En) & **0.758** (0.35) & \(-\) & \(-\) & **0.483** (0.22) \\ Spanish & 0.318\({}^{*}\) (0.20) & 0.222\({}^{*}\) (0.14) & **0.628\({}^{*}\)** (0.36) & 0.280\({}^{*}\) (0.19) \\ Chinese & 0.378\({}^{*}\) (0.10) & 0.213\({}^{*}\) (0.12) & **0.437\({}^{*}\)** (0.35) & 0.102\({}^{*}\) (0.06) \\ Japanese & 0.332\({}^{*}\) (0.18) & 0.055\({}^{*}\) (0.09) & **0.485\({}^{*}\)** (0.39) & 0.332\({}^{*}\) (0.18) \\ \hline \hline \end{tabular}
\end{table}
Table 1: We report the average distance-based similarity across 271 emotions for each of our experiments (standard deviation given in parentheses). \({}^{*}\) indicates the difference in mean correlation between English vs. non-English settings (for Mono vs. Multi, Aligned vs. Unaligned) and monolingual vs. multilingual settings (for English vs. Non-English) is statistically significant (\(p<0.05\)); we compute this using an independent t-test. See Table A2 for models used in each setting.

Figure 4: The six Ekman emotions projected onto the Valence-Arousal plane. We replicate the circumplex model of affect, enabling visualization and theoretical analysis of multi-dimensional emotion embeddings.
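The formal projection is given in Appendix B of the paper; the following is only a plausible reconstruction consistent with the description above. It assumes the four axis-defining points are already computed, normalizes each axis direction, and takes the centroid of the four anchors as the plane's origin (the origin choice is our assumption).

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def project_to_circumplex(e, nv, pv, la, ha):
    """Project embedding e onto the Valence-Arousal plane.

    nv/pv/la/ha: axis-defining vectors for negative/positive valence
    and low/high arousal, each the average of anchor emotion embeddings.
    """
    valence_axis = unit(pv - nv)   # NV -> PV
    arousal_axis = unit(ha - la)   # LA -> HA
    origin = (nv + pv + la + ha) / 4.0
    valence = float(np.dot(e - origin, valence_axis))
    arousal = float(np.dot(e - origin, arousal_axis))
    return valence, arousal
```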
To visualize Pride and Shame in the Valence-Arousal plane, we manually translate the axis-defining emotions to Japanese and average the English and Japanese points of each axis category to define _multilingual valence and arousal axes_. We then project the contextualized sentence embeddings "I am proud" and "I am ashamed" in English and Japanese. We experiment with both aligned and unaligned RoBERTa models; these plots are shown in Figure 5.
Looking at the plots, we observe that English Pride is slightly higher in valence than Japanese Pride, and English Shame is slightly lower in valence than Japanese Shame. This does serve as a weak confirmation of the first two hypotheses. However, we do not observe English Pride to have higher arousal than Japanese Pride. This discrepancy suggests our results are inconclusive, and we cannot confirm whether multilingual RoBERTa encodes cultural variation in English vs. Japanese Pride and Shame.
**Finding 2:** By projecting emotion embeddings into the Valence-Arousal plane, we show that _LMs are not guaranteed to encode the nuances in meaning and usage of emotion words across cultures_. Researchers who utilize embeddings from multilingual LMs for emotion-related tasks assume these pre-trained models have learned adequate representations of emotion across languages. However, implicit and explicit alignment during training causes multilingual LMs to ignore the subtle differences in emotion expression across cultures.
## 4 Investigating multilingual LM generation
We now turn from investigating embeddings to analyzing language generated by Language Models (GPT-3, GPT-3.5, and GPT-4) to see if multilingual LM completions reflect cultural variation in emotion. In order for LMs to be used for tasks that require emotional sensitivity, their responses must align with cultures' socio-cultural norms Genesee (1982); generated text must reflect users' cultural tendencies and expected affect Tsai (2017).
We first analyze token-level completion probabilities from GPT-3, to see if they reflect cultural differences between American and Japanese Shame and Pride. We then prompt GPT-3.5 and GPT-4 in English and non-English languages to respond to scenarios that should elicit different emotional responses across cultures and assess their cultural appropriateness in a small-scale user study.
### Do LMs reflect known psychological cultural differences?
Continuing our example of English vs. Japanese Pride and Shame, we evaluate whether this known cultural difference is reflected in OpenAI's GPT-3.
Figure 5: We project English and Japanese Pride and Shame embeddings into the Valence-Arousal plane. We use an aligned (top) and unaligned (bottom) RoBERTa model to embed the contextualized emotions. In both cases, we do not see all of our hypotheses confirmed.

We design a set of 24 prompts (See Table A5) for GPT-3 (davinci) based on six scenarios that would invoke a combination of Pride and Shame in the form <context><feeling>. For example, "I received an award in front of my coworkers. I feel proud." One might feel proud for receiving an award or embarrassed for being publicly praised. We then prompt GPT-3 using various <context><feeling> prompts, and analyze the log probability of each token of the prompt. Finally, we sum the log probability of each token in the <feeling> sentence to get a sense of how likely the <feeling> is to follow the <context>. Based on cultural norms about how one would react in situations that elicit both Pride and Shame, we expect to see a higher probability for "I feel happy" and "I feel proud" in English, and a higher probability for "I feel embarrassed" and "I feel ashamed" in Japanese across scenarios.
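A sketch of this scoring procedure using the legacy OpenAI Completions endpoint (the interface available for davinci, since deprecated) is shown below; setting `max_tokens=0` with `echo=True` returns log-probabilities for the prompt's own tokens, and we sum those belonging to the <feeling> sentence. Details such as the exact prompt concatenation are our assumptions.

```python
import openai  # legacy (pre-1.0) OpenAI Python client

def feeling_logprob(context, feeling, model="davinci"):
    """Sum the token log-probs of <feeling> when it follows <context>."""
    prompt = f"{context} {feeling}"
    resp = openai.Completion.create(
        model=model, prompt=prompt,
        max_tokens=0, echo=True, logprobs=0,  # score the prompt itself
    )
    lp = resp["choices"][0]["logprobs"]
    start = len(context) + 1  # character offset where <feeling> begins
    return sum(
        token_lp
        for offset, token_lp in zip(lp["text_offset"], lp["token_logprobs"])
        if offset >= start and token_lp is not None  # first token is None
    )

# e.g., compare feeling_logprob(scenario, "I feel proud.") between the
# English and Japanese versions of the same scenario.
```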
Figure 6 shows the results of this for the prompt "I received an award in front of my coworkers. I feel ____" where we test two Pride words, "proud" and "happy", and two Shame words, "ashamed" and "embarrassed". We replicate this experiment in Japanese, and compare the summed log probabilities of "I feel ____" between English and Japanese across emotions. The full results, along with the remaining prompts, are given in Appendix Table A5. Analyzing the results across six scenarios (see Appendix C), we do not see any consistent evidence that Pride is more likely to be expressed in English or Shame is more likely to be expressed in Japanese. In Figure 6, for example, we see contradicting results for "proud", "happy", and "embarrassed".
**Finding 3:** These results suggest that _GPT-3 lacks knowledge of Pride and Shame and the norms surrounding their expression in the US and Japan._ This is a major limitation; such a failure to capture cultural variation is likely to limit both the utility and applicability of LMs in downstream emotionally-sensitive tasks.
### Do LMs provide culturally-aware emotional responses?
To further investigate whether LM completions reflect cultural norms, we conduct a small-scale user study to see if GPT-3.5 and GPT-4 are capable of appropriately adapting when prompted in different languages. Annotators assess whether the completions parallel the accepted emotional responses associated with the user's culture.
**Prompting with cultural context.** Prior psychological research has detailed scenarios that reveal how emotional expressions vary across cultures Mesquita (2022). We use this work to design a set of 19 questions (see Table A6) that should elicit different emotional responses across cultures. For example, the question "How would you feel if your guests chose to keep their shoes on when entering your home?" would likely elicit a different response from someone culturally American vs. Chinese.
We use these scenarios to prompt GPT-3.5 (gpt-3.5-turbo) and GPT-4 (gpt-4) in the form <prefix>+<question> (see Table 2). In order to include cultural context and coax the LM into returning a culturally appropriate emotional response, we experiment with providing cultural context in two ways (using Chinese as an example):
1. _via English_ - we add the sentence "You live in China." to the prefix. The LM returns an English completion.
2. _via Native Language_ - we translate the <prefix>+<question> prompt to Chinese. The LM returns a Chinese completion.
Figure 1 details these two cultural context modes and how they may cause conflicting LM responses.
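The sketch below illustrates the two prompting modes with the OpenAI chat API (openai>=1.0); the English prefix follows Table 2, while the Chinese prompt is our own translation, added purely for illustration.

```python
from openai import OpenAI

client = OpenAI()

PREFIX_EN = (
    "You are a helpful chatbot. Your goal is to answer my questions like "
    "you are a human capable of feelings and emotions. You live in China. "
    'Answer the following question using a single sentence that begins '
    'with "I would feel...". '
)
QUESTION_EN = "Your neighbor offers you food. How would you feel turning it down?"

def ask(prompt, model="gpt-4"):
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

# Mode 1: cultural context via English ("You live in China." in the prefix).
english_mode = ask(PREFIX_EN + QUESTION_EN)

# Mode 2: cultural context via the native language (translated prompt,
# without the "You live in China." sentence).
PROMPT_ZH = (
    "你是一个乐于助人的聊天机器人。请像一个有情感的人一样回答我的问题。"
    "请用一句以“我会感到……”开头的话回答下面的问题："
    "邻居请你吃东西，拒绝时你会有什么感受？"
)
chinese_mode = ask(PROMPT_ZH)
```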
Figure 6: A comparison of GPT-3 sentence completion probabilities in English and Japanese. We show the log probabilities for the sentence “I feel X.” following the scenario ”I received an award in front of my coworkers.” and test emotion words associated with Pride or Shame in English and Japanese. Contrary to cultural expectation, we do not observe a pattern where Pride words have a higher likelihood in English or Shame words have a higher likelihood in Japanese.
**User Study.** To assess the quality of the LM completions, we perform a small-scale user study using eight volunteers, consisting of four pairs fluent in English, Spanish, Chinese, and Japanese respectively. We ask our volunteers to annotate GPT-3.5 and GPT-4's responses for cultural awareness along two axes - _linguistic norms_ (how you would expect a native speaker to talk), and _cultural norms_ (what you would expect a native speaker to say). As these two norms are deeply correlated, annotators are instructed to take both of these dimensions into account and give a single rating to each completion. We use a scale of 1-7, where 7 indicates the LM's response is fully expected of a native speaker.
Across languages, we observe a high agreement within each pair of volunteers. Figure 7 details the average score across annotators and questions for GPT-4 and GPT-3.5 completions. We provide the annotator agreement statistics in Appendix Table A4. Analyzing the completions and annotations, we notice some interesting trends:
* We see a large difference in quality between the LM responses returned using the two cultural context prompting modes (even though the questions are identical.)
* For Chinese and Japanese, the LM returns a less culturally-appropriate response using the _Native Language_ cultural context mode.
* English completions are the most culturally-aware across languages, and English response quality is unaffected by cultural context mode.
**Finding 4:** GPT-3.5 and GPT-4 fail to infer that a prompt in a non-English language suggests a response that aligns with the linguistic and cultural norms of a native speaker. Additionally, the LM completions reflect culturally appropriate emotion much better in Western languages than Eastern ones.
\begin{table}
\begin{tabular}{p{227.6pt}} \hline \hline _Prefix_ \\ \hline You are a helpful chatbot. Your goal is to answer my questions like you are a human capable of feelings and emotions. (**You live in China.**) Answer the following question using a single sentence that begins with “I would feel...”. \\ \hline _Questions_ \\ \hline Your neighbor offers you food. How would you feel turning it down? \\ How would you feel if your parents moved in with you right after you were married? \\ How would you feel if your guests chose to keep their shoes on when entering your home? \\ \hline \hline \end{tabular}
\end{table}
Table 2: Example prompts (<prefix>+<question>) designed to evaluate whether GPT-3.5 and GPT-4 can adapt to account for cultural variation in emotion. In our first set of experiments, we include the bold sentence ”You live in China.” and prompt GPT in English. In our second set of experiments, we do NOT include the bold sentence, and instead provide cultural context by translating our <prefix>+<question> prompt to Chinese. The full set of questions is given in Appendix Table A6.
Figure 7: Average cultural awareness scores across annotations for GPT-3.5 and GPT-4 completions in each language. We observe a consistently higher quality of English completions, and poor performance of Eastern languages compared to Western, especially when prompted using the _Native Language_ context mode.
## 5 Conclusion
We find that multilingual models fail to fully capture cultural variations associated with emotion, and predominantly reflect the cultural values of the Western world. Emotion embeddings from multilingual LMs are anchored to English, and the text completions generated in response to non-English prompts are not in tune with the emotional tendencies of users' expected culture. For instance, when GPT-4 is prompted in Japanese, it responds as an American fluent in Japanese but unaware of Japanese culture or values.
Our results caution against blindly relying on emotion representations learned by LMs for downstream applications. Using machine translation to transfer labels or utilizing multilingual LMs in a zero-shot setting for unseen languages has risks - the multilingual representations of emotion learned by these models do not perfectly reflect how their corresponding cultures express emotion.
**Future Research Directions.** Our paper motivates the need for future work that transcends current Anglocentric LMs. This could take the form of higher-performing non-English models in a monolingual setting, or of multilingual models trained on more linguistically and culturally balanced corpora. Future work should additionally investigate whether state-of-the-art monolingual models in non-English languages succeed in encoding the respective culture's norms. Furthermore, we encourage the evaluation of multilingual models on benchmarks that measure cultural awareness in addition to standard metrics.
## 6 Limitations
We only analyze four high-resource languages in this study; our analysis could have benefited from more languages, especially low-resource ones. Additionally, we only analyze Japanese and English Pride/Shame as a known cultural difference; analyzing other differences could provide stronger results. We perform a small user study, and our work could have benefited from a larger-scale study with more annotators and completions analyzed.
We recognize the added complexity of investigating Pride embeddings from a culture where explicit expressions of Pride are discouraged; we note this may be a contributing factor to our results indicating that LMs do not reflect the culturally appropriate nuances of Shame and Pride. Additionally, we acknowledge that the experiments outlined in this paper are specific to investigating cultural awareness from the lens of emotion. These experiments are not easily applicable to measuring cultural awareness from different perspectives; therefore, results may not be generalizable.
At a higher level, we equate _language_ with _culture_. Psychologists have observed higher cultural similarities within languages than between them (Stulz and Williamson, 2003), however, we recognize there are variations within the populations that speak each language. For example, Spanish is spoken by people in Spain, Mexico, and other countries, each having a unique and varied culture.
## 7 Ethical Considerations
Although culturally-aware multilingual LMs are critical in uses such as therapy, storytelling, and interpersonal communication, there are possible misuses for nefarious purposes - persuasion, misinformation generation, etc. Additionally, our analyses treat China, Japan, Spain, and the United States as if each were a single culture with a single set of cultural norms. In reality, this is not the case; we recognize there are huge variations in the way people view emotion within each of these cultures.
|
2301.03831 | Dynamic Grained Encoder for Vision Transformers | Transformers, the de-facto standard for language modeling, have been recently
applied for vision tasks. This paper introduces sparse queries for vision
transformers to exploit the intrinsic spatial redundancy of natural images and
save computational costs. Specifically, we propose a Dynamic Grained Encoder
for vision transformers, which can adaptively assign a suitable number of
queries to each spatial region. Thus it achieves a fine-grained representation
in discriminative regions while keeping high efficiency. Besides, the dynamic
grained encoder is compatible with most vision transformer frameworks. Without
bells and whistles, our encoder allows the state-of-the-art vision transformers
to reduce computational complexity by 40%-60% while maintaining comparable
performance on image classification. Extensive experiments on object detection
and segmentation further demonstrate the generalizability of our approach. Code
is available at https://github.com/StevenGrove/vtpack. | Lin Song, Songyang Zhang, Songtao Liu, Zeming Li, Xuming He, Hongbin Sun, Jian Sun, Nanning Zheng | 2023-01-10T07:55:29Z | http://arxiv.org/abs/2301.03831v1 | # Dynamic Grained Encoder for Vision Transformers
###### Abstract
Transformers, the de-facto standard for language modeling, have been recently applied for vision tasks. This paper introduces sparse queries for vision transformers to exploit the intrinsic spatial redundancy of natural images and save computational costs. Specifically, we propose a Dynamic Grained Encoder for vision transformers, which can adaptively assign a suitable number of queries to each spatial region. Thus it achieves a fine-grained representation in discriminative regions while keeping high efficiency. Besides, the dynamic grained encoder is compatible with most vision transformer frameworks. Without bells and whistles, our encoder allows the state-of-the-art vision transformers to reduce computational complexity by 40%-60% while maintaining comparable performance on image classification. Extensive experiments on object detection and segmentation further demonstrate the generalizability of our approach. Code is available at [https://github.com/StevenGrove/vtpack](https://github.com/StevenGrove/vtpack).
## 1 Introduction
Following the evolution of network architectures in natural language processing (NLP), Vision Transformers [1; 2; 3; 4; 5] have recently attracted increasing research attention and demonstrated promising results on several vision tasks, such as image classification, object detection, and other pixel-level tasks. Vision transformers are notable for modeling long-range dependencies and introducing less inductive bias, and are considered a solid alternative to CNNs for vision tasks.
One of the eminent obstacles for vision transformers is the high computational cost. Vision tasks typically require high-resolution image features to obtain detail and structure representation, which is critical for pixel-level tasks [6; 7; 8; 9; 10]. However, since the encoders in vision transformers need to establish pairwise relationships, high-resolution features could impose unacceptable computational and memory costs. Therefore, similar to the efficient transformers [11; 12; 13] in NLP, many variants [2; 3; 4] of vision transformers are proposed to perform sparse self-attentions with _dense_ queries and _sparse_ key-value pairs based on fixed pattern or heuristic rules.
In this paper, we notice that different from natural language, natural images involve much spatial redundancy, especially in flat or low-texture regions [14; 15; 16; 17; 18]. This could enable the image features to have a low resolution in some regions while maintaining similar representational capabilities.
To verify the spatial redundancy in vision transformers, we give an empirical analysis of DeiT [19] on the ImageNet [20] classification dataset (see Sec. 3.1 for details). It demonstrates the existence
of spatial redundancy in queries, and the complexity can be dramatically reduced by downsampling some highly redundant regions while maintaining comparable performance. These properties allow the queries to use mixed granularity to achieve a balance between effectiveness and efficiency, _i.e._, more tokens in more discriminative regions while fewer tokens in less informative regions. However, the distribution of spatial redundancy varies greatly among different input images, making it difficult for a static method to handle complex and variable features.
We thus attempt to explore a new perspective: _introducing dynamic network mechanism into vision transformers to reduce the spatial redundancy of image features_. As shown in Fig. 1, we propose a Dynamic Grained Encoder (DGE) to replace the vanilla encoder in vision transformers. It could assign a suitable number of queries for each region by using a dynamic grained router, _e.g._, the foreground regions of the cat head in Fig. 1 are assigned more queries than the background regions. Concretely, a reshaped 2D feature is first divided into regions using a fixed window. For each region, the number of patches is decided by a data-dependent routing process, and each patch is average pooled to obtain a 1D token. All the tokens are then concatenated into a sequence as the queries. Since our method focuses on the sparsity of queries, it is compatible with many efficient transformer encoders [2, 11, 12, 13, 3], making our approach available as a _generic plugin_ in most vision transformers [1, 21, 19, 3]. Furthermore, the output of the encoder is restored to the input resolution by an un-pooling operation, and the detailed information is compensated with the input feature.
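The released code linked in the abstract contains the full implementation; the PyTorch fragment below is only a minimal sketch of the routing idea, assuming a Gumbel-softmax gate for differentiable discrete decisions during training (window size and candidate granularities are illustrative).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGrainedRouter(nn.Module):
    """Pick one pooling granularity per window (e.g., 1, 2, or 4 tokens per side)."""

    def __init__(self, dim, window=4, grains=(1, 2, 4)):
        super().__init__()
        self.window, self.grains = window, grains
        self.gate = nn.Linear(dim, len(grains))

    def forward(self, feat):  # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        # Summarize each window, then score the candidate granularities.
        desc = F.adaptive_avg_pool2d(feat, (h // self.window, w // self.window))
        logits = self.gate(desc.permute(0, 2, 3, 1))       # (B, h', w', G)
        if self.training:  # differentiable hard decision during training
            choice = F.gumbel_softmax(logits, hard=True, dim=-1)
        else:
            choice = F.one_hot(logits.argmax(-1), len(self.grains)).float()
        return choice  # one-hot granularity decision per window

# Each window is then average-pooled to grain x grain tokens, the resulting
# sparse query sequence is fed to the encoder block, and the output is
# un-pooled back to the input resolution.
```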
To demonstrate the effectiveness, we conduct extensive experiments on three typical vision transformers, _i.e._, DeiT [19], PVT [3] and DPVT, where DPVT is a new framework based on the deformable attention [2]. In the image classification task, our dynamic grained encoder allows these models to reduce computational complexity by 40%-60% while maintaining comparable performance. Alternatively, with lower computational complexity, the accuracy can be improved by up to 4.4% on the ImageNet _val_ set. In addition, the experiments on object detection and segmentation show the strong robustness and generalization of our method.
## 2 Related Work
### Vision Transformer
Recently, Vision Transformers, inspired by the significant success of transformer [22] achieved in the NLP field, have received more attention in the vision community. ViT [1], which converts the image into a sequence and applies the transformer encoder structure directly on it for image classification, has pioneered this direction in visual recognition. To tackle the issue of training efficiency and data efficiency, DeiT [19] introduces several training strategies to enable learning the vision transformer on ImageNet. PVT [3] further develops a feature pyramid based on the transformer structure and makes it applicable for the various downstream vision tasks. Swin [21] introduces the local window idea to improve the efficiency of the transformer structure. Our work mainly focuses on reducing the spatial redundancy and improving the model efficiency in a data-dependent manner, which is rarely explored in previous works and complementary with various vision transformer structures.
Figure 1: The overall diagram of the proposed dynamic grained encoder. \(\mathbf{x}\) is the input sequence, and \(\mathbf{y}\) is the output sequence. The dynamic grained router automatically splits a 2D feature into mixed-grained patches with a different number of tokens in a patch. Each patch is then flattened as a sparse query by an average pooling operator. The vanilla encoder block can be a standard transformer encoder or other efficient variants. Besides, the dashed lines are only used in the training phase.
### Efficient Transformer
To improve the efficiency of transformers, prior works mainly concentrate on reducing the quadratic computation of self-attention. These works can be roughly summarized as three types: learnable/fixed pattern based methods, low-rank/kernel based methods and memory based methods. Some recent approaches [23, 24, 25, 12, 26] try to reduce the complexity of the self-attention mechanism by using a heuristic method to generate fixed or learnable patterns. Other efforts [11, 13, 27, 28] focus on utilizing the low-rank property of the attention matrix or introducing kernels to avoid computing the attention matrix explicitly. Moreover, some works [29, 30, 31] also explore the memory mechanism to improve efficiency. However, previous attempts mainly concentrate on NLP tasks. Different from language sequences, which carry a highly abstract representation of information, natural images typically have considerable spatial redundancy. This makes vision transformers incur expensive costs for downstream vision tasks, especially dense-prediction tasks, _e.g._, object detection and segmentation. Our work exploits this intrinsic property of natural images to achieve redundancy reduction in a data-dependent manner.
### Dynamic Network
Dynamic networks [32] are proposed to adaptively change the network architecture and parameters according to the input, and have been widely explored in computer vision and natural language processing tasks. Most dynamic networks focus on coarse-grained strategies, dropping blocks [33, 34, 35, 36], pruning channels [37, 38] or adjusting layer-level scales [39, 40]. For instance, MSDNet [34] proposes an early-exiting mechanism to achieve efficient inference for image classification. Switch Transformer [41] uses the Mixture of Experts (MoE) model [42] to select different parameters for each input sample. DRNet [39] attempts to perform adaptive scale transformation in a feature pyramid network for semantic segmentation. The closest works to ours are probably Dynamic Convolution [43] and Dynamic Head [10], which use a learnable mask to skip specific spatial locations. However, they are only applicable to CNN-based networks, and the skipping-location strategy can result in significant performance degradation for vision transformers (refer to Sec. 4.1.2). Different from them, our method adapts the region-level granularity to the input feature for vision transformers, which is more general and flexible.
## 3 Method
### Empirical Analyses on Spatial Redundancy
To investigate the spatial redundancy of vision transformers on image data, we conduct a series of experiments on the ImageNet [20] _val_ set with a pre-trained DeiT-S [19] model. Our main purpose is to explore the relationship among the granularity of queries, computational complexity, and classification performance. Specifically, for each encoder layer in DeiT-S, we reshape its input
Figure 2: Spatial redundancy statistics of the vanilla encoders in DeiT-S [19]. The correlation coefficient is used to measure the similarity of queries in a local region; a higher correlation corresponds to more spatial redundancy. (a) indicates that most queries are highly redundant in a local region. (b) reflects that reducing the queries with high redundancy has little impact on performance. (c) means that the redundancy varies greatly in some layers.
queries (excluding the extra embedding) as a 2D feature map and split it into \(2\times 2\) non-overlap patches. For each patch, we calculate its average token, and measure the similarity of each token in the patch with the average token by using the Pearson Correlation Coefficient (PCC) metric.
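The measurement just described is straightforward to reproduce; below is a minimal PyTorch sketch of it (the tensor layout, the function name, and the patch extraction via `unfold` are our assumptions, not the paper's released code).

```python
import torch

def patch_redundancy(queries, H, W, patch=2):
    """Pearson correlation of each token in a patch with the patch-average token.

    queries: [B, H*W, C] encoder-input queries (extra embeddings removed).
    Returns: [B, num_patches, patch*patch] correlation coefficients.
    """
    B, N, C = queries.shape
    x = queries.transpose(1, 2).reshape(B, C, H, W)
    # Split into non-overlapping patch x patch regions.
    x = x.unfold(2, patch, patch).unfold(3, patch, patch)
    x = x.permute(0, 2, 3, 4, 5, 1).reshape(B, -1, patch * patch, C)
    mean_tok = x.mean(dim=2, keepdim=True)            # average token per patch
    xc = x - x.mean(dim=-1, keepdim=True)             # center over channels
    mc = mean_tok - mean_tok.mean(dim=-1, keepdim=True)
    num = (xc * mc).sum(-1)
    den = xc.norm(dim=-1) * mc.norm(dim=-1) + 1e-8
    return num / den                                  # PCC values in [-1, 1]
```

Histogramming the returned coefficients over the _val_ set yields plots like Fig. 2(a).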
Then we have three valuable observations. _(1) Queries share similar patterns in a local region._ From the correlation coefficient histogram plotted in Fig. 2(a), most of the correlation coefficients are greater than 0.8, which indicates that the queries typically have a strong correlation in a local region. _(2) Large potential for reducing spatial redundancy._ Furthermore, in each patch, we replace the tokens with the average token when their correlation coefficient is above a given threshold. As shown in Fig. 2(b), we illustrate the accuracy/complexity curve under varying correlation thresholds. When the threshold is 0.9, the complexity decreases by 27%, but the top-1 accuracy decreases by only 0.3%. This evidence demonstrates the potential of reducing spatial redundancy in vision transformers. _(3) A static strategy is sub-optimal._ As shown in Fig. 2(c), some encoders show a large variance of correlation coefficients among different images. Thus, using data-independent methods to reduce spatial redundancy is sub-optimal and may lead to considerable performance degradation. These observations motivate us to explore a data-dependent manner of reducing spatial redundancy.
### Dynamic Grained Encoder
#### 3.2.1 Overall Architecture
In this paper, we propose a new encoder block for vision transformers, called _Dynamic Grained Encoder_ (DGE). As shown in Fig. 1, the proposed encoder consists of two main modules, _i.e.,_ dynamic grained router and vanilla encoder block. Specifically, the dynamic grained router adaptively generates mixed-grained patches for a 2D feature. The vanilla encoder block can be a standard encoder block [22] or other efficient variants [2, 9, 11, 12, 13, 44], which is made up of a multi-head attention and a feed-forward network. If there are extra tokens in the input sequence, such as class embedding in ViT [1], we handle them separately with the vanilla encoder. For ease of presentation, the rest of this section only considers the input sequence without extra tokens.
Given an input sequence \(\mathbf{x}\in\mathbb{R}^{(H\times W)\times C}\) for the dynamic grained encoder, \((H,W)\) denotes the resolution of the feature and \(C\) is the number of channels. To be compatible with most vanilla encoders, we only generate sparse queries \(\mathbf{q}\in\mathbb{R}^{N\times C}\) by the dynamic grained router, where \(N\) indicates the number of queries. Then the sparse queries as well as dense keys \(\mathbf{k}\) and values \(\mathbf{v}\) are transformed by a vanilla encoder. It is worth mentioning that keys and values can also be sparse in the vanilla encoder to improve efficiency further. The output sequence of the vanilla encoder is restored to a 2D feature with the original resolution by using an un-pooling operation. Furthermore, to enhance the details of the output feature and alleviate the vanishing gradient problem, we add a residual connection [45] to fuse the input sequence.
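The paragraph above fixes the overall dataflow, which can be summarized in a short PyTorch-style sketch; here `router`, `encoder`, and the returned `unpool` callback are placeholders for the components detailed in Sec. 3.2.2, and all names are ours.

```python
import torch.nn as nn

class DGEBlock(nn.Module):
    """Sketch of the dynamic grained encoder dataflow (not the release code)."""

    def __init__(self, router, encoder):
        super().__init__()
        self.router = router    # yields sparse queries + a restore callback
        self.encoder = encoder  # any vanilla/efficient transformer encoder

    def forward(self, x, H, W):
        # x: [B, H*W, C] input sequence (extra tokens handled separately).
        q, unpool = self.router(x, H, W)   # sparse queries q: [B, N, C]
        y = self.encoder(q, k=x, v=x)      # dense keys/values (may be sparse)
        y = unpool(y)                      # restore to [B, H*W, C]
        return y + x                       # residual fuses input details
```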
#### 3.2.2 Dynamic Grained Router
To achieve dynamically grained patches in space, we first partition the 2D feature, denoted as \(\mathbf{z}\), into multiple regions, which can be done in regular or irregular ways. Although irregular ways, _e.g.,_ superpixels [46] and segmentation [47], may lead to better performance, they are very unfriendly to memory access and induce inefficiency. Therefore, as shown in Fig. 3, we adopt an \(S\times S\) non-overlap window3 to split image features into multiple regular regions. Furthermore, we define a set of candidate granularities \(\Phi=\{\phi_{1},\phi_{2},...,\phi_{K}\}\) to represent the optional patch size in a region, where \(K\) is the number of candidate granularities. The granularity denotes the side length of a patch, _e.g._, \(\phi=8\) corresponds to an \(8\times 8\) patch. Since each patch is pooled into one query in the encoder, larger granularity indicates fewer queries and less computation. For convenience, we set the region size to the maximum granularity, _i.e.,_\(S=\max(\Phi)\), in the experiments.
Footnote 3: Bottom-right padding is adopted on the feature if needed.
**Inference.** For a region \(i\in\{1,2,...,\lceil\frac{H}{S}\rceil\cdot\lceil\frac{W}{S}\rceil\}\), we use a gating network to select a granularity from the set of candidate granularities. Concretely, we reduce the region feature \(\mathbf{z}_{i}\) into a representative token by using the average pooling operation and linearly project it to the gating logits:
\[h(\mathbf{z}_{i})=\frac{1}{S^{2}}\sum_{j=1}^{S^{2}}\mathbf{z}_{i,j}\mathbf{W}+b, \tag{1}\]
where \(\mathbf{W}\in\mathbb{R}^{C\times K}\) and \(b\in\mathbb{R}^{1\times K}\) indicate the weight and bias, respectively. The gating logits are used to decide the granularity for the region by calculating the gating indices:
\[\theta_{i}=\operatorname*{arg\;max}_{k}(h(\mathbf{z}_{i})_{k})\in\{1,2,...,K\}. \tag{2}\]
As shown in Fig. 3, we split the region feature into multiple groups of patches with \(K\) granularities. We then choose a group of specific granularity according to the gating indices. We denote the selected group as \(\mathbf{z^{\prime}}_{i}\in\mathbb{R}^{N_{i}\times\phi_{\theta_{i}}^{2}\times C}\), where \(N_{i}=\lceil\frac{S}{\phi_{\theta_{i}}}\rceil\cdot\lceil\frac{S}{\phi_{\theta_{i}}}\rceil\) is the number of patches in the group.
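A minimal sketch of this routing step is given below; the class name and interface are our assumptions, and only the inference path of Eqs. (1)-(2) is shown.

```python
import torch.nn as nn

class GrainedRouter(nn.Module):
    """Inference-time gating of Eqs. (1)-(2); a sketch, not the release code."""

    def __init__(self, C, granularities=(1, 2, 4)):
        super().__init__()
        self.phi = granularities
        self.S = max(granularities)                    # region size S = max(Phi)
        self.gate = nn.Linear(C, len(granularities))   # W and b of Eq. (1)

    def route(self, z):
        # z: [R, S*S, C], one row per region i.
        logits = self.gate(z.mean(dim=1))   # Eq. (1): average pool + projection
        theta = logits.argmax(dim=-1)       # Eq. (2): gating indices
        return theta                        # region i uses granularity phi[theta[i]]
```

Each region is then split into \(N_{i}=\lceil S/\phi_{\theta_{i}}\rceil^{2}\) patches and every patch is average pooled into one query.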
#### 3.2.3 Budget Constraint
In the absence of a budget constraint, our encoder typically prefers to assign more queries to each region to achieve high performance. To obtain a better balance between effectiveness and efficiency, we define a _computational budget_\(\gamma\in[0,1]\), which corresponds to the desired computational complexity ratio relative to the vanilla encoder without dynamic granularity.
Given a vision transformer with \(L\) dynamic grained encoders, we can calculate the used computational complexity ratio of the transformer by:
\[\beta=\frac{\sum_{l}^{L}\mathcal{C}^{l}\psi^{l}}{\sum_{l}^{L}\mathcal{C}^{l}H^ {l}W^{l}},\;\mathrm{where}\;\psi^{l}=\left\{\begin{array}{ll}\sum_{i}\phi_{ \theta_{i}}^{2}&\mathrm{forward}\\ \sum_{i}p_{i}\cdot\phi_{\theta_{i}}^{2}&\mathrm{backward}\end{array}\right. \tag{7}\]
\(\mathcal{C}^{l}\) indicates the computational complexity required to compute a query in an encoder layer, and \(\psi^{l}\) corresponds to the number of queries; in the backward pass, a straight-through estimator over the gating probabilities \(p_{i}\) enables end-to-end training. This strategy ensures an accurate complexity estimation when computing the training loss. Moreover, we use the Euclidean distance for the budget loss to narrow the computational complexity to a predetermined bound:
\[\mathcal{L}=\mathcal{L}_{\mathrm{task}}+\lambda\mathcal{L}_{\mathrm{budget}},\;\mathrm{where}\;\mathcal{L}_{\mathrm{budget}}=(\beta-\gamma)^{2}. \tag{8}\]
The hyper-parameter \(\lambda\) balances losses among different tasks, making the gradients have the same order of magnitude. Besides, for batched image inputs, \(\beta\) is averaged along the batch dimension to estimate the average load of the network.
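The budget term is simple to implement; the sketch below follows Eqs. (7)-(8) with the forward-branch \(\psi^{l}\), and the argument layout is our assumption (in training, the per-region terms would come from the differentiable gating estimates).

```python
import torch

def budget_loss(phi_sq_per_region, cost_per_query, hw_per_layer, gamma=0.5):
    """Budget term of Eq. (8) from the complexity ratio of Eq. (7); a sketch.

    phi_sq_per_region: per-layer [B, R] tensors holding phi_theta_i^2.
    cost_per_query:    per-layer scalars C^l (FLOPs of one query).
    hw_per_layer:      per-layer scalars H^l * W^l.
    """
    used = sum(c * p.sum(dim=1)              # Eq. (7) numerator, per sample
               for c, p in zip(cost_per_query, phi_sq_per_region))
    full = sum(c * hw for c, hw in zip(cost_per_query, hw_per_layer))
    beta = (used / full).mean()              # averaged over the batch
    return (beta - gamma) ** 2               # L_budget of Eq. (8)
```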
## 4 Experiment
In this section, we apply our encoder to state-of-the-art vision transformers and conduct extensive experiments on image classification, object detection, and segmentation. To demonstrate the generalization of our method, we conduct experiments on three vision transformer frameworks, _i.e._, DeiT [19], PVT [3] and DPVT. DPVT is a new framework we propose, based on the architecture of PVT [3] but using deformable attention [2] as the vanilla encoder. Different from the dense self-attention process in DeiT, PVT and DPVT utilize sparse key-value pairs in position-insensitive and position-sensitive ways, respectively. These three frameworks represent the vanilla encoders used by most vision transformers.
Figure 4: Visualization of predicted gating indices of PVT-S+DGE on ImageNet _val_ set. The candidate granularity set is \(\Phi=\{1,2,4\}\), which are shown in red, green and blue respectively. Higher granularity corresponds to less computational complexity. Our dynamic encoder tends to assign more queries to the representative foreground regions than the background regions, thus significantly reducing the computational cost. The left and right parts of Fig.4(a) come from stage 1 and stage 2 of PVT, respectively. From left to right, the heatmaps of each instance in Fig.4(b) correspond to stage 1, stage 2, and stage 3, respectively.
### Image Classification on ImageNet
#### 4.1.1 Implementation Detail
All the experiments for image classification are based on the ImageNet [20] classification dataset. We use \(256\times 256\) as the input image resolution for training and evaluation. For a fair comparison, we follow the training settings in DeiT and PVT. Specifically, random-size cropping, random horizontal flipping [53] and mixup [54] are used for data augmentation. We use the AdamW [55] optimizer with a weight decay of 0.05 and a momentum of 0.9. The learning rate is initially set to 0.001 and decreases according to the cosine schedule [56]. All the models are trained for 300 epochs with 128 images per batch. The label-smoothing regularization is used in the training phase. Besides, for the dynamic grained encoders, \(\lambda\) is set to 1.0 and \(\Phi\) is set to {1, 2, 4} by default. During the training phase, we use four compute nodes with 32 Nvidia Tesla V100 GPUs. For instance, we spend about 1.2 days training the PVT-S with DGE model for 300 epochs. For the runtime evaluation, we measure the frameworks on both an Intel Xeon Gold 6130 CPU and an Nvidia Tesla V100 GPU to demonstrate the efficiency of our dynamic networks.
Figure 5: Visualization of accuracy and computational complexity of different configurations. (a), (b) and (c) are evaluated on the ImageNet _val_ set. The PVT and PVT+DGE in (a) are scaled by model size, _i.e._, “tiny”, “small”, “medium” and “large”. (b) indicates the performance of our method with different budget constraints. (c) reflects the distribution of computational complexity in different encoder layers of the DeiT-S+DGE. (d) is evaluated on the ADE-20K _val_ set with varying image resolutions.
#### 4.1.2 Ablation Study
**Where are Fine-Grained Queries Assigned?** To reveal the underlying properties of our dynamic grained encoder, we illustrate the predicted gating indices \(\theta\) on the ImageNet _val_ set in Fig. 4. Without additional supervision other than classification, our dynamic network can generate instance-aware masks with rich details. It allows the encoder to assign more queries to the foreground regions with discriminative features than to background regions. This ensures that the network can consume less computational cost while maintaining a fine-grained representation. In addition, as presented in Fig. 4(b), the predicted gating indices have similar patterns among different stages in the PVT. It demonstrates the effectiveness for a pyramid network, which is crucial for applying the method to downstream tasks.
**Dynamic vs Static.** To demonstrate the superiority of the dynamic mechanism, we give a comparison on the PVT framework with different model sizes in Fig. 5(a). For convenience, we fix the budget constraint \(\gamma\) at 0.5. Our dynamic grained encoder can reduce the computational complexity by half while maintaining comparable performance. On the other hand, with similar computational complexity, our method can improve the static transformers by up to 4.4%. The results demonstrate the effectiveness of our method even on the efficient vision transformers. In addition, as shown in Fig. 5(c), we calculate the complexity ratio of each layer in DeiT-S with DGE, where the complexity of the network in the middle layers varies significantly due to the dynamic mechanism. Interestingly, the deeper layer has lower average computational complexity, which means the deeper layer tends to assign fewer queries. Thus, _DeiT is turned into a dynamic feature pyramid structure, which is consistent with the observation in CNNs._
**Budget Constraint and Candidate Granularity Set.** As illustrated in Fig. 5(b), we give a comparison of varying the budget constraints \(\gamma\), which is selected from \(\{0.25,0.5,0.75,1.0\}\) respectively. The redundancy in space allows the network to achieve comparable performance with much less computational cost even on the efficient transformers, _e.g._, PVT and DPVT. Our encoder achieves the optimal balance between effectiveness and efficiency when the budget is about half. Therefore, we set the budget constraint to 0.5 for other experiments by default. In addition, we report the performance of PVT-S with DGE with different candidate granularity set \(\Phi\) in Tab. 1. When \(\Phi=\{0,1\}\), the gating indices degenerate into a learnable binary mask similar to dynamic convolutions [10, 43], but this strategy results in significant performance degradation. There is no significant difference in performance between other granularity settings. The performance is highest when \(\Phi=\{1,2,4\}\), which becomes our default setting.
**Region-wise Routing vs Layer-wise Routing.** Fig. 4 clearly demonstrates that DGE can perform dynamic granularity in space to adapt to different object structures. Nevertheless, most previous dynamic networks are based on layer-wise routing [32]. To demonstrate the advantages of our method, we set the region size \(S\times S\) to the input feature size so that DGE can be degraded from region-wise routing to layer-wise routing. As shown in Tab. 1, region-wise gating achieves 1.1% absolute gains over layer-wise gating with similar complexity, which agrees well with the empirical analysis in Sec. 3.1.
\begin{table}
\begin{tabular}{l|c|c|c|c c|c|c} \hline \hline
**Framework** & **Dynamic** & **Region** & \(\Phi\) & **Top1 Acc** & **Top5 Acc** & **FLOPs** & **\#Param** \\ \hline
PVT-S & ✗ & - & - & 80.2 & 95.2 & 6.2G & 28.2M \\ \hline
PVT-S+DGE & ✓ & ✗ & \(1,2,4\) & 79.1 & 94.5 & 3.4G & +12.1K \\ \hline
PVT-S+DGE & ✓ & ✓ & \(0,1\) & 78.8 & 94.4 & 3.5G & +8.1K \\
PVT-S+DGE & ✓ & ✓ & \(1,2\) & 80.0 & 95.0 & 3.5G & +8.1K \\
PVT-S+DGE & ✓ & ✓ & \(1,2,4\) & 80.2 & 95.0 & 3.5G & +12.1K \\
PVT-S+DGE & ✓ & ✓ & \(1,2,4,8\) & 79.9 & 95.0 & 3.4G & +16.1K \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Performance of the dynamic grained encoder with different configurations on the ImageNet _val_ set. The budget for DGE is set to 0.5. “Region” means using region-wise routing instead of layer-wise routing in the encoder.
### Experiments for Downstream Tasks
#### 4.2.1 Object Detection/Instance Segmentation on COCO
We apply our models for object detection and instance segmentation on the COCO dataset [58]. We resize the images so that the shorter side is 768 pixels. All experiments are conducted on 8 GPUs with 2 images per GPU (effective minibatch size 16) for 90K iterations. The learning rate is initialized to 1e-4 and decreased by a factor of 10 at the 60K and 80K iterations. Following the settings in PVT [3], we report the performance with the 1x training schedule [57, 59].
The results are reported in Tab. 2. When equipped with DGE, PVT-S achieves comparable performance at 40.1 AP\({}_{box}\) with a significant complexity reduction (185G vs 251G) and a 22% inference speed-up. Even with larger models or different vanilla encoders, our method remains effective and efficient. In addition, the proposed vision transformer variant, _i.e._, DPVT, is also competitive in terms of parameters, computational cost and performance. Moreover, DPVT-M+DGE achieves 45.8 AP\({}_{box}\) with 169G FLOPs, even more efficient than the ResNet-50 backbone.
#### 4.2.2 Semantic Segmentation on ADE-20K
We further evaluate our models as the backbones for Semantic-FPN [61] on the ADE-20K [62] dataset. All the experiments are based on the MM-Segmentation toolkit [63]. In the training phase, we follow the settings in PVT [3] and set the learning rate to 1e-4, which gradually decreases to 0 by the poly strategy [64]. The images are cropped to \(512\times 512\) and augmented with random scaling (from 0.5 to 2.0) and flipping. All models are trained for 80k iterations with a batch size of 32.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline
**Backbone** & **\#Param** & **FLOPs** & **mIoU** \\ & (M) & (G) & (\%) \\ \hline ResNet-50 [45] & 28.5 & 184 & 36.7 \\ PVT-S [3] & 28.2 & 226 & 41.8 \\ Swin-Ti [21] & 31.9 & 187 & 41.5 \\ Twins-S [60] & 28.3 & 174 & 43.2 \\ \hline
**DPVT-S+DGE** & **21.7** & **121** & **44.4** \\ \hline ResNet-101 [45] & 47.5 & 262 & 38.8 \\ PVT-M [3] & 48.0 & 316 & 44.0 \\ Swin-S [21] & 53.2 & 280 & 44.9 \\ Twins-B [60] & 60.4 & 318 & 45.3 \\ \hline
**DPVT-M+DGE** & **34.3** & **148** & **46.1** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparisons with state-of-the-art vision transformers on ADE-20K _val_ set. FLOPs is tested on 512\(\times\)2048 resolution.
\begin{table}
\begin{tabular}{l|c|c|c c|c|c c c c c c} \hline \hline
**Backbone** & **Size** & **\#Param** & \multicolumn{2}{c|}{**Latency**} & **FLOPs** & \multicolumn{6}{c}{**Mask R-CNN (1x)**} \\ & & (M) & C(ms) & G(ms) & (G) & AP\({}_{b}\) & AP\({}_{b}^{50}\) & AP\({}_{b}^{75}\) & AP\({}_{m}\) & AP\({}_{m}^{50}\) & AP\({}_{m}^{75}\) \\ \hline ResNet & 50 & 44.2 & - & - & 189 & 38.0 & 59.6 & 41.4 & 34.4 & 55.1 & 36.7 \\ PVT & Small & 44.3 & 880 & 33 & 251 & 40.4 & 62.9 & 43.8 & 37.8 & 60.1 & 40.3 \\
**PVT+DGE** & Small & 44.3 & 440 & 26 & 185 & 40.1 & 62.6 & 43.2 & 37.5 & 59.7 & 40.0 \\ \hline DPVT & Small & 37.7 & 1090 & 50 & 186 & 44.0 & 65.9 & 48.2 & 40.3 & 62.9 & 43.4 \\
**DPVT+DGE** & Small & 37.7 & 720 & 34 & 147 & 43.8 & 65.7 & 47.7 & 40.0 & 62.6 & 43.2 \\ \hline ResNet & 101 & 63.2 & - & - & 263 & 40.4 & 61.1 & 44.2 & 36.4 & 57.7 & 38.8 \\ ResNeXt & 101(32x4) & 62.8 & - & - & 354 & 41.9 & 62.5 & 45.9 & 37.5 & 59.4 & 40.2 \\ PVT & Medium & 63.9 & 1260 & 73 & 339 & 42.0 & 64.4 & 45.6 & 39.0 & 61.6 & 42.1 \\
**PVT+DGE** & Medium & 63.9 & 620 & 40 & 228 & 41.7 & 64.1 & 45.0 & 38.3 & 62.0 & 40.6 \\ \hline DPVT & Medium & 49.9 & 1800 & 75 & 236 & 46.4 & 68.0 & 51.1 & 42.0 & 65.2 & 45.2 \\
**DPVT+DGE** & Medium & 49.9 & 1240 & 50 & 169 & 45.8 & 67.2 & 50.0 & 41.4 & 64.5 & 44.6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of dynamic grained encoder on COCO _val_ set. All experiments are conducted with 1x schedule [57]. Time and FLOPs are measured on an \(800\times 1280\) image. ”C” and ”G” indicate the backbone latency on CPU (Xeon 6130) and GPU (Tesla V100). All the budget for DGE is 0.5.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline
**Backbone** & **\#Param** & **FLOPs** & **mIoU** & **Latency** \\ & (M) & (G) & (\%) & C(ms) & G(ms) \\ \hline PVT-S & 28.2 & 226 & 41.8 & 1350 & 65 \\
**PVT+DGE** & 28.2 & 155 & 41.7 & 720 & 42 \\ \hline PVT-M & 48.0 & 316 & 44.0 & 1910 & 100 \\
**PVT-M+DGE** & 48.0 & 202 & 43.9 & 1100 & 64 \\ \hline DPVT-S & 21.7 & 157 & 44.4 & 1470 & 55 \\
**DPVT-S+DGE** & 21.7 & 121 & 44.4 & 860 & 32 \\ \hline DPVT-M & 34.3 & 209 & 46.8 & 1990 & 110 \\
**DPVT-M+DGE** & 34.3 & 148 & 46.1 & 1260 & 50 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of different backbones for semantic segmentation on the ADE-20K _val_ set. The inference time (backbone) is measured for a 512 \(\times\) 2048 input image. ”C” and ”G” indicate the latency on CPU and GPU.
We conduct several ablation studies by introducing the DGE block into PVT [3] and our proposed DPVT. As shown in Tab. 3, with our dynamic grained encoder, DPVT+DGE and PVT+DGE both achieve competitive performance with a significant computation cost reduction of about 30% in FLOPs. On the other hand, PVT-M+DGE achieves 2.1% mIoU absolute gains over PVT-S but with less computational complexity. As illustrated in Fig. 5(d), this phenomenon also occurs for different image sizes on the same framework, _e.g._, our method has up to 1.2% mIoU absolute gains against the baseline with similar computational complexity. In addition, as shown in Tab. 4, our DPVT models with DGE are superior to state-of-the-art vision transformers in terms of parameters, computational complexity and performance. These results demonstrate the generalization ability and robustness of our method.
## 5 Conclusion
In this paper, we analyze the spatial redundancy in vision transformers and propose a dynamic grained encoder to speed up inference. Our encoder can adaptively yield a suitable number of queries for different regions to reduce spatial redundancy while maintaining comparable performance. Besides, our encoder is compatible with many efficient transformers and can be trained in an end-to-end manner. The extensive experiments demonstrate the effectiveness and generalization of our method. In general, this paper explores a new perspective, _i.e._, leveraging the intrinsic properties of natural images with the dynamic network mechanism to achieve efficient vision transformers. We hope that our dynamic grained encoder can provide insights into future works and beyond.
## Acknowledgments and Disclosure of Funding
This research was supported by National Key R&D Program of China (No. 2017YFA0700800), National Natural Science Foundation of China (No. 61790563 and 61774125), Shanghai Science and Technology Program (No. 21010502700).
## Appendix A Additional Experiments
### Quantitative Analysis on Dynamic Grained Router
We follow the weakly supervised segmentation protocol [64] to show how well the dense-query regions capture the foreground. The metric in [64] is used to evaluate the gating scores in each DGE layer. Specifically, we set the candidate granularities \(\Phi\) to \(\{1,2\}\), so that the finer-grained gating scores can be taken as a soft segmentation of the image. We adopt the evaluation protocol in [64] to report the quantitative segmentation results. As shown in Tab. 5 and Tab. 6, our gating scores show significant superiority even over the weakly supervised method, _i.e._, GradCAM. These results demonstrate that the DGE guides the transformer to focus on the foreground regions, which is consistent with the visualization.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline
**Metric** & **Random** & **Layer 1** & **Layer 6** & **Layer 11** & **Layer 16** \\ \hline Accuracy & 50.0 & 55.4 & 49.1 & **67.8** & 65.5 \\ mAP & 50.0 & 68.0 & 45.2 & 71.3 & **79.4** \\ mIoU & 31.9 & 34.5 & 32.5 & **50.2** & 46.6 \\ \hline \hline \end{tabular}
\end{table}
Table 6: The quantitative analysis on PVT-S with DGE (\(\gamma=0.5\)).
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline
**Metric** & **Random** & **GradCAM [64]** & **Layer 1** & **Layer 4** & **Layer 8** \\ \hline Accuracy & 50.0 & 64.4 & 55.4 & 56.3 & **67.6** \\ mAP & 50.0 & 71.6 & 63.5 & 60.7 & **78.8** \\ mIoU & 31.9 & 40.8 & 36.4 & 37.7 & **48.2** \\ \hline \hline \end{tabular}
\end{table}
Table 5: The quantitative analysis on DeiT-S with DGE (\(\gamma=0.5\)).
### Runtime Analysis on GPUs
The efficiency of our DGE modules on GPUs mainly relies on the throughput of sparse matrix multiplication, which is dependent on hardware architecture and code optimization. To demonstrate the potential of our method for parallel devices, we implement an optimized CUDA kernel with multiple streams for batched sparse matrix multiplication. With this kernel, we report the runtime comparison of different backbones for multiple downstream tasks on a Tesla V100 GPU. The results are reported in Tab. 7 and Tab. 8, where the latency indicates the runtime of the backbone.
### Implementation Details for Complexity Computation
We report the FLOPs following the conventional protocol of dynamic networks [32]. Specifically, we split the entire network into static and dynamic parts. The complexity of the static part, _i.e._, the modules without the dynamic mechanism, including the gating networks in DGE, is computed in the standard way [1, 3, 19]. For the complexity of the dynamic part, _i.e._, the dynamic modules in DGE, we accumulate the complexity associated with each enabled query according to the gating indices.
|
2305.00617 | Unique continuation for a mean field game system | For a mean field game system, we prove the unique continuation which asserts
that if Cauchy data are zero on arbitrarily chosen lateral subboundary, then
the solution identically vanishes. | Oleg Imanuvilov, Hongyu Liu, Masahiro Yamamoto | 2023-05-01T01:23:27Z | http://arxiv.org/abs/2305.00617v1 | # Unique continuation for a mean field game system
###### Abstract.
For a mean field game system, we prove the unique continuation which asserts that if Cauchy data are zero on arbitrarily chosen lateral subboundary, then the solution identically vanishes.
\({}^{1}\) Department of Mathematics, Colorado State University, 101 Weber Building, Fort Collins CO 80523-1874, USA e-mail: [email protected] \({}^{2}\) Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong SAR, China email: [email protected] \({}^{3}\) Graduate School of Mathematical Sciences, The University of Tokyo, Komaba, Meguro, Tokyo 153-8914, Japan e-mail: [email protected]
###### Contents

* 1 Introduction
* 2 Key Carleman estimate for the heat equation
* 3 Proof of Theorem 1

## 2. Key Carleman estimate for the heat equation
where \(a\in C^{2}(\overline{Q_{I}})\), \(a>0\) on \(\overline{Q_{I}}\), and
\[|R(x,t,v)|\leq C_{0}(|v(x,t)|+|\nabla v(x,t)|),\quad(x,t)\in Q_{I}. \tag{2.2}\]
Moreover, let
\[\varphi(x,t)=e^{\lambda(d(x)-\beta(t-t_{0})^{2})},\]
where \(\lambda>0\) is a sufficiently large parameter and \(\beta>0\) is arbitrarily given. Henceforth \(C>0\) denotes generic constants which are independent of \(s>0\) but depend on \(\lambda,\beta,C_{0}\) in (2.2). Then
**Lemma 1**.: _There exist constants \(s_{0}>0\) and \(C>0\) such that_
\[\int_{Q_{I}}\left(\frac{1}{s}(|\partial_{t}v|^{2}+|\Delta v|^{2})+s|\nabla v|^ {2}+s^{3}|v|^{2}\right)e^{2s\varphi}dxdt\leq Cs^{4}\int_{Q_{I}}|P_{k}v|^{2}e^{2 s\varphi}dxdt+C\mathcal{B}(v),\quad k=1,2 \tag{2.3}\]
_for all \(s>s_{0}\) and \(v\in H^{2,1}(Q_{I})\) satisfying \(v\in H^{1}(\partial\Omega\times I)\). Here and henceforth we set_
\[\mathcal{B}(v):=e^{Cs}\|v\|_{H^{1}(\Gamma\times I)}^{2}+s^{3}\int _{(\partial\Omega\setminus\Gamma)\times I}(|v|^{2}+|\nabla_{x,t}v|^{2})e^{2s }dSdt\] \[+s^{2}\int_{\Omega}(|v(x,t_{0}-\delta)|^{2}+|\nabla v(x,t_{0}- \delta)|^{2}+|v(x,t_{0}+\delta)|^{2}+|\nabla v(x,t_{0}+\delta)|^{2})e^{2s \varphi(x,t_{0}-\delta)}dx.\]
The proof of the lemma with \(k=1\) is done similarly to Lemma 7.1 (p.186) in Bellassoued and Yamamoto [2] or Theorem 3.2 in Yamamoto [17] by keeping all the boundary integral terms \(v|_{\partial Q}\) which are produced by integration by parts and using \(d|_{\partial\Omega\setminus\Gamma}=0\) in (2.1). The proof for \(k=2\) follows directly from the case \(k=1\) by setting \(V(x,t):=v(x,2t_{0}-t)\) and using \(\varphi(x,t)=\varphi(x,2t_{0}-t)\) for \((x,t)\in Q_{I}\).
We emphasize that the backward parabolic Carleman estimate is the same as the forward parabolic Carleman estimate thanks to the symmetry of the weight \(\varphi(x,t)\) with respect to \(t\) centered at \(t_{0}\).
Using the Carleman estimate (2.3) we prove a Carleman estimate for a mean field game system. Setting \(y:=u-\widetilde{u}\) and \(z:=v-\widetilde{v}\) and subtracting the system (1.1) with \((\widetilde{u},\widetilde{v},\widetilde{F},\widetilde{G})\) from (1.1) with \((u,v,F,G)\), we reach
\[\begin{cases}&\partial_{t}y+a_{1}(x,t)\Delta y+R_{1}(x,t,y)=h(x,t)z+F- \widetilde{F},\\ &\partial_{t}z-a_{2}(x,t)\Delta z+R_{2}(x,t,z)=\kappa v\Delta y+R_{3}(x,t,y)+ G-\widetilde{G}\quad\text{in }Q_{I}.\end{cases} \tag{2.4}\]
Here by (1.2) and (1.3), we can verify
\[|R_{j}(x,t,y)|\leq C_{0}\sum_{k=0}^{1}|\nabla^{k}y(x,t)|,\quad j=1,3,\quad|R_ {2}(x,t,z)|\leq C_{0}\sum_{k=0}^{1}|\nabla^{k}z(x,t)|,\quad(x,t)\in Q_{I}. \tag{2.5}\]
We apply Carleman estimate (2.3) to the first equation in (2.4) for \(y\), multiply the resulting inequality by \(s\), and obtain
\[\int_{Q_{I}}(|\partial_{t}y|^{2}+|\Delta y|^{2}+s^{2}|\nabla y|^{2}+s^{4}|y|^{2} )e^{2s\varphi}dxdt\]
\[\leq C\int_{Q_{I}}s|hz|^{2}e^{2s\varphi}dxdt+C\int_{Q_{I}}s|F-\widetilde{F}|^{2 }e^{2s\varphi}dxdt+Cs\mathcal{B}(y) \tag{2.6}\]
for all \(s>s_{0}\). In terms of (2.5), application of (2.3) with \(k=1\) to \(z\) yields
\[\int_{Q_{I}}\left(\frac{1}{s}(|\partial_{t}z|^{2}+|\Delta z|^{2})+s|\nabla z|^ {2}+s^{3}|z|^{2}\right)e^{2s\varphi}dxdt\]
\[\leq C\int_{Q_{I}}(|\kappa\Delta y|^{2}+|y|^{2}+|\nabla y|^{2})e^{2s\varphi} dxdt+C\int_{Q_{I}}|G-\widetilde{G}|^{2}e^{2s\varphi}dxdt+C\mathcal{B}(z) \tag{2.7}\]
for all \(s>s_{0}\).
Using \(\kappa\in L^{\infty}(Q_{I})\) and substituting (2.6) into the terms including \(\Delta y,\nabla y,y\) on the right-hand side of (2.7), we have
\[\int_{Q_{I}}\left(\frac{1}{s}(|\partial_{t}z|^{2}+|\Delta z|^{2}) +s|\nabla z|^{2}+s^{3}|z|^{2}\right)e^{2s\varphi}dxdt\] \[\leq C\int_{Q_{I}}s|z|^{2}e^{2s\varphi}dxdt+C\int_{Q_{I}}(s|F- \widetilde{F}|^{2}+|G-\widetilde{G}|^{2})e^{2s\varphi}dxdt+Cs(\mathcal{B}(y) +\mathcal{B}(z))\]
for all large \(s>0\). Absorbing the first term on the right-hand side into the left-hand side by choosing \(s>0\) sufficiently large, we see
\[\int_{Q_{I}}\left(\frac{1}{s}(|\partial_{t}z|^{2}+|\Delta z|^{2})+s|\nabla z|^ {2}+s^{3}|z|^{2}\right)e^{2s\varphi}dxdt\]
\[\leq C\int_{Q_{I}}(s|F-\widetilde{F}|^{2}+|G-\widetilde{G}|^{2})e^{2s\varphi} dxdt+Cs(\mathcal{B}(y)+\mathcal{B}(z)) \tag{2.8}\]
for all \(s>s_{0}\). Adding (2.8) and (2.6) and choosing \(s>0\) large again to absorb the term \(\int_{Q_{I}}s|hz|^{2}e^{2s\varphi}dxdt\) on the right-hand side into the left-hand side, we obtain
**Theorem 2** (Carleman estimate for a mean field game).: _There exist constants \(s_{0}>0\) and \(C>0\) such that_
\[\int_{Q_{I}}\biggl{(}|\partial_{t}(u-\widetilde{u})|^{2}+|(\Delta(u- \widetilde{u})|^{2}+s^{2}|\nabla(u-\widetilde{u})|^{2}+s^{4}|u-\widetilde{u}| ^{2}+\frac{1}{s}(|\partial_{t}(v-\widetilde{v})|^{2}+|\Delta(v-\widetilde{v}) |^{2})\]
\[+s|\nabla(v-\widetilde{v})|^{2}+s^{3}|v-\widetilde{v}|^{2}\biggr{)}e^{2s \varphi}dxdt\leq C\int_{Q_{I}}(s|F-\widetilde{F}|^{2}+|G-\widetilde{G}|^{2})e^ {2s\varphi}dxdt\]
\[+Cs(\mathcal{B}(u-\widetilde{u})+\mathcal{B}(v-\widetilde{v}))\quad\text{for all $s>s_{0}$.}\]
## 3. Proof of Theorem 1
We arbitrarily choose \(t_{0}\in(0,T)\) and \(\delta>0\) such that \(0<t_{0}-\delta<t_{0}+\delta<T\). We define
\[d_{0}:=\min_{x\in\overline{\Omega}}d(x),\quad d_{1}:=\max_{x\in \overline{\Omega}}d(x),\quad 0<r<\left(\frac{d_{0}}{d_{1}}\right)^{\frac{1}{2}}<1. \tag{3.1}\]
We note that \(0<r<1\).
We now show
**Lemma 2**.: _Under regularity condition (1.3), \(u=\widetilde{u}\), \(v=\widetilde{v}\), \(\nabla u=\nabla\widetilde{u}\) and \(\nabla v=\nabla\widetilde{v}\) on \(\Gamma\times(t_{0}-\delta,\,t_{0}+\delta)\) imply \(u=\widetilde{u}\) and \(v=\widetilde{v}\) in \(\Omega\times(t_{0}-r\delta,\,t_{0}+r\delta)\)._
For the proof of Theorem 1, it suffices to prove Lemma 2. Indeed, since \(t_{0}\in(0,T)\) and \(\delta>0\) can be arbitrarily chosen and the Carleman estimate is invariant with respect to \(t_{0}\) provided that \(0<t_{0}-\delta<t_{0}+\delta<T\), we can apply Lemma 2 by changing \(t_{0}\) over \((\delta,T-\delta)\) to obtain \(u=\widetilde{u}\) and \(v=\widetilde{v}\) in \(\Omega\times((1-r)\delta,\,T-(1-r)\delta)\). Since \(\delta>0\) can be arbitrary, this means that \(u=\widetilde{u}\) and \(v=\widetilde{v}\) in \(\Omega\times(0,T)\).
**Proof of Lemma 2.** Once we have derived the relevant Carleman estimate in Theorem 2, the proof of Lemma 2 proceeds similarly to Proposition 2 in [3] as follows. First we determine the constant \(\beta>0\) in the weight of the Carleman estimate such that
\[\frac{d_{1}-d_{0}}{\delta^{2}-r^{2}\delta^{2}}<\beta<\frac{d_{0}}{r^{2}\delta ^{2}}. \tag{3.2}\]
Here we note that (3.1) verifies \(0<\frac{d_{1}-d_{0}}{\delta^{2}-r^{2}\delta^{2}}<\frac{d_{0}}{r^{2}\delta^{2}}\), which allows us to choose \(\beta\) satisfying (3.2).
For short descriptions, we set
\[M_{1}:=\sum_{k=0}^{1}(\|\nabla_{x,t}^{k}(u-\widetilde{u})\|_{L^{2}((\partial\Omega\setminus\Gamma)\times I)}^{2}+\|\nabla_{x,t}^{k}(v-\widetilde{v})\|_{L^{2}((\partial\Omega\setminus\Gamma)\times I)}^{2}),\] \[M_{2}:=\sum_{k=0}^{1}(\|(u-\widetilde{u})(\cdot,t_{0}+(-1)^{k}\delta)\|_{H^{1}(\Omega)}^{2}+\|(v-\widetilde{v})(\cdot,t_{0}+(-1)^{k}\delta)\|_{H^{1}(\Omega)}^{2}),\]
and \(\mu_{1}:=e^{\lambda(d_{1}-\beta\delta^{2})}\). Since \(u=\widetilde{u}\) and \(v=\widetilde{v}\) on \(\Gamma\times I\), Theorem 2 yields
\[s^{3}\int_{Q_{I}}(|u-\widetilde{u}|^{2}+|v-\widetilde{v}|^{2})e^{2s\varphi}dxdt \leq Cs^{5}M_{1}e^{2s}+Cs^{5}M_{2}e^{2s\mu_{1}}\]
for all large \(s>0\). We shrink the integration region of the left-hand side to \(\Omega\times(t_{0}-r\delta,\,t_{0}+r\delta)\). Then, since \(\varphi(x,t)=e^{\lambda(d(x)-\beta(t-t_{0})^{2})}\geq e^{\lambda(d_{0}-\beta r ^{2}\delta^{2})}=:\mu_{2}\) in \(\Omega\times(t_{0}-r\delta,\,t_{0}+r\delta)\), we obtain
\[e^{2s\mu_{2}}\int_{\Omega\times(t_{0}-r\delta,\,t_{0}+r\delta)}(|u- \widetilde{u}|^{2}+|v-\widetilde{v}|^{2})dxdt\leq Cs^{2}M_{1}e^{2s}+Cs^{2}M_{2} e^{2s\mu_{1}},\]
that is,
\[\|u-\widetilde{u}\|_{L^{2}(\Omega\times(t_{0}-r\delta,\,t_{0}+r\delta))}^{2}+\|v- \widetilde{v}\|_{L^{2}(\Omega\times(t_{0}-r\delta,\,t_{0}+r\delta))}^{2}\leq Cs ^{2}M_{1}e^{-2s(\mu_{2}-1)}+Cs^{2}M_{2}e^{-2s(\mu_{2}-\mu_{1})} \tag{3.3}\]
for all large \(s>0\). Here, by (3.2), we see that \(\mu_{2}>\max\{1,\,\mu_{1}\}\), and so we let \(s\to\infty\) in (3.3), so that \(u=\widetilde{u}\) and \(v=\widetilde{v}\) in \(\Omega\times(t_{0}-r\delta,\,t_{0}+r\delta)\). Thus the proof of Lemma 2, and so Theorem 1 are complete. \(\blacksquare\)
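For the reader's convenience, we spell out why (3.2) gives \(\mu_{2}>\max\{1,\,\mu_{1}\}\): since \(\lambda>0\) and \(t\mapsto e^{\lambda t}\) is increasing,
\[\mu_{2}>\mu_{1}\iff d_{0}-\beta r^{2}\delta^{2}>d_{1}-\beta\delta^{2}\iff\beta>\frac{d_{1}-d_{0}}{\delta^{2}-r^{2}\delta^{2}},\qquad\mu_{2}>1\iff\beta<\frac{d_{0}}{r^{2}\delta^{2}},\]
which are exactly the two inequalities in (3.2).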
**Acknowledgments.** The work was supported by Grant-in-Aid for Scientific Research (A) 20H00117 of Japan Society for the Promotion of Science.
|
2301.04186 | Elliptic flow measurement of $J/ψ$ in PHENIX Run14 Au+Au at
$\sqrt{s_{NN}}=200$ GeV | We obtain the first measurement of $J/\psi$ elliptic flow at RHIC energies in
forward rapidity using data from the PHENIX detector and applying an event
plane method. The dataset used contains 19 billion events from the PHENIX
experiment's Run 14 Au + Au dataset at $\sqrt{s_{NN}}=200$ GeV. PHENIX has
measured a $J/\psi$ $v_2$ in a centrality range of $10-60\%$ that is consistent
with zero. Taken together with results from LHC the measurement of $v_2$, which
is consistent with zero may indicate that $J/\psi$ production by coalescence is
not significant at forward rapidity at RHIC energy. | Luis Bichon III | 2023-01-10T19:49:55Z | http://arxiv.org/abs/2301.04186v1 | # Elliptic flow measurement of \(J/\psi\) in PHENIX Run14 Au+Au at \(\sqrt{s_{NN}}=200\) GeV +
###### Abstract
We obtain the first measurement of \(J/\psi\) elliptic flow at RHIC energies in forward rapidity using data from the PHENIX detector and applying an event plane method. The dataset used contains 19 billion events from the PHENIX experiment's Run 14 Au + Au dataset at \(\sqrt{s_{NN}}=200\) GeV. PHENIX has measured a \(J/\psi\)\(v_{2}\) in a centrality range of \(10-60\%\) that is consistent with zero. Taken together with results from LHC the measurement of \(v_{2}\), which is consistent with zero may indicate that \(J/\psi\) production by coalescence is not significant at forward rapidity at RHIC energy.
## 1 Introduction
The QGP has been found to exhibit nearly perfect fluid behavior [1]. This behavior manifests itself as strong correlations between particles produced in nuclear collisions. Presently, the detailed interactions of heavy quarks in the QGP medium are under investigation and, because heavy flavor quarks have relatively large masses, they may not be thermalized and flow with the medium. The production of \(J/\psi\) in p+p collisions is theoretically well understood because they are produced in hard scattering processes. This feature, in addition to their production in the initial stages of the collision, makes them ideal probes for testing the properties of the QGP medium. However, in nucleus+nucleus collisions some of the produced \(J/\psi\) mesons may be dissolved by the QGP, which may create anisotropies in the observed \(J/\psi\) azimuthal distributions due to the different path lengths in the medium. Additionally, a similar signal may be created if the \(J/\psi\) thermalizes inside the medium and follows the pressure gradients as lighter particles do, or the \(J/\psi\) may dissociate and the charm quarks could equilibrate, which could lead to \(J/\psi\) regeneration. We present a preliminary result for \(J/\psi\)\(v_{2}\) using the PHENIX Run14 Au+Au dataset at \(\sqrt{s_{NN}}=200\) GeV.
## 2 Data Analysis & Methodology
### Dataset and Detectors
In this analysis, we use the Run 14 Au+Au Muon Arm dataset at \(\sqrt{s_{NN}}=200\) GeV containing 19 billion events. The dimuon decay channel is used to reconstruct candidate \(J/\psi\) mesons. The PHENIX experiment has a unique coverage at forward rapidity with muon identification. This in addition to the large dataset of Au+Au collisions collected in 2014 allows for a statistically improved measurement of \(J/\psi\) elliptic flow at RHIC energies.
The key detector in this analysis is the Forward Silicon Vertex Detector (FVTX). With the FVTX, precision vertexing capabilities were added to the muon spectrometers, enabling the rejection of muons from the decays of relatively long-lived particles and providing an additional way of determining the event plane [2].
### Combinatorial Background Subtraction
To obtain a pure \(J/\psi\) signal from the dimuon mass distributions we employ event-mixing as the standard method of removing the background dimuons. For this event-mixing method, the background is constructed from dimuon pairs of opposite sign whose single muons come from different events. Mixed-event dimuon pairs are only formed if two events have a centrality closer than 5%, a \(Z\) vertex closer than 0.75 cm and an event plane angle closer than \(\pi/20\) rad. Mixing whole events instead of individual dimuons increases the likelihood that the constructed pairs are purely combinatorial. A normalization factor must be applied to the background, obtained from the ratio of like-sign pairs from the same event to like-sign pairs from mixed events. The signal is then obtained by subtracting the normalized background from the foreground.
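The pairing logic is easy to state in code; the sketch below illustrates the selection cuts quoted above (field names and the all-pairs loop are ours; in practice the mixing is done with buffered pools of events per class).

```python
import itertools
import math

def mixed_event_pairs(events, cent_tol=5.0, zvtx_tol=0.75, psi_tol=math.pi / 20):
    """Yield opposite-sign dimuons whose muons come from different events.

    events: dicts with 'cent' (%), 'zvtx' (cm), 'psi' (rad) and
            'mu_plus'/'mu_minus' lists of single-muon candidates.
    """
    for a, b in itertools.combinations(events, 2):
        if abs(a['cent'] - b['cent']) >= cent_tol:
            continue
        if abs(a['zvtx'] - b['zvtx']) >= zvtx_tol:
            continue
        if abs(a['psi'] - b['psi']) >= psi_tol:
            continue
        # opposite-sign pairs built across the two events
        for mup in a['mu_plus']:
            for mum in b['mu_minus']:
                yield mup, mum
        for mup in b['mu_plus']:
            for mum in a['mu_minus']:
                yield mup, mum
```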
### Fitting the Dimuon Mass Distribution
In the fitting of the mass distributions, we assume the shape of the \(J/\psi\) signal to be a Crystal Ball function, and given the statistical precision of the dataset, we also apply the same shape to the \(\Psi(2S)\) to avoid their inclusion in the higher mass \(J/\psi\) region. The parameters of the Crystal Ball function are obtained using \(J/\psi\) embedded Monte Carlo simulation data. We produce simulated mass distributions for low/high \(p_{T}\) and South/North
arm rapidities, fitting the distributions while allowing the function's parameters (\(\alpha\), \(n\), \(\bar{x}\), and \(\sigma\)) to float. The \(J/\psi\) count for each distribution is obtained from the integral of the \(J/\psi\) Crystal Ball function in the fit (see Figure 1).
Figure 1: Mass distributions using mixed-event subtraction for the unweighted “standard” set. These are binned by \(p_{T}\) in each column, and rapidity+\(\Delta\phi\) angle for each row. The green/dashed curve is a Crystal Ball fitted to the \(J/\psi\) peak, the blue/dashed-dot curve is a Crystal Ball fitted to the \(\psi(2S)\) peak, the red/dotted curve is an exponential fitted to the remaining background after subtraction, and the black/solid curve is the total fit.
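As an illustration of the fit model, a simplified Python sketch is given below: it lets all Crystal Ball parameters float, whereas in the analysis they are fixed from simulation, and it uses SciPy's left-tailed `crystalball` distribution in place of the Crystal Ball function; the \(\psi(2S)\) reuses the \(J/\psi\) shape shifted by the PDG mass difference.

```python
import numpy as np
from scipy.stats import crystalball
from scipy.optimize import curve_fit

def dimuon_model(x, n_jpsi, n_psi2s, mu, sigma, beta, m, n_bkg, slope):
    """J/psi + psi(2S) Crystal Balls on an exponential continuum (a sketch)."""
    dm = 3.686 - 3.097  # psi(2S) - J/psi mass difference in GeV
    jpsi = n_jpsi * crystalball.pdf(x, beta, m, loc=mu, scale=sigma)
    psi2s = n_psi2s * crystalball.pdf(x, beta, m, loc=mu + dm, scale=sigma)
    return jpsi + psi2s + n_bkg * np.exp(-slope * x)

# Least-squares fit to a background-subtracted mass histogram:
# popt, _ = curve_fit(dimuon_model, bin_centers, counts,
#                     p0=[500, 25, 3.1, 0.15, 1.0, 2.0, 100, 1.0])
# The J/psi yield is then the integral of the fitted J/psi component.
```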
### Event Plane Method and Measuring \(v_{2}\)
We primarily use the In/Out ratio method, an event plane method [3] that uses the \(J/\psi\) counts in bins of \(\Delta\phi\) to measure \(v_{2}\). The method splits the distribution into two bins of \(\Delta\phi\): one in plane with the event plane and the other out of plane. We measure \(v_{2}\) by looking at the difference between these bins; if there is no preference for either plane, we observe a flow consistent with zero.
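In its standard form (our summary, with the usual event-plane resolution correction), the method follows from \(dN/d\Delta\phi\propto 1+2v_{2}\cos(2\Delta\phi)\): integrating this distribution over the in-plane and out-of-plane bins gives
\[v_{2}^{\rm obs}=\frac{\pi}{4}\,\frac{N_{\rm in}-N_{\rm out}}{N_{\rm in}+N_{\rm out}},\qquad v_{2}=\frac{v_{2}^{\rm obs}}{{\rm Res}(\Psi_{2})},\]
where \(N_{\rm in}\) and \(N_{\rm out}\) are the \(J/\psi\) yields in and out of plane and \({\rm Res}(\Psi_{2})\) is the event-plane resolution.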
### Systematic Uncertainties
The systematic uncertainties are determined by changing various aspects of the analysis. As of this time, we have employed changing the primary detector of the analysis from the FVTX to the Central Arm Spectrometers (CNT), which cover a different pseudorapidity range. We have also used a different method for the combinatorial background subtraction, the like-sign method, which constructs the background with dimuon pairs of the same sign (\(\mu^{+}\mu^{+}\) and \(\mu^{-}\mu^{-}\)) that come from the same event. The uncertainty in the normalization factor of the event-mixing method is also incorporated into the systematic uncertainty. The last systematic uncertainty we consider comes from the mass fitting of the dimuon distribution, where the shape of the continuum distribution was assumed to be an exponential function; the uncertainty in this assumption can be explored by assuming no continuum contribution in the \(J/\psi\) mass region.
## 3 Results
Figure 2 shows the \(p_{T}\)-dependent \(J/\psi\)\(v_{2}\). The measurement in this analysis for PHENIX Run 14 at forward rapidity in a centrality range of 10 - 60% is shown in red. The measurement made by STAR at mid-rapidity and in a centrality range of 10-40% is shown in black. The ALICE result at forward rapidity in a centrality range of 20-40% is shown in blue. Boxes surrounding the data points represent systematic uncertainties.
PHENIX observes a larger suppression of \(J/\psi\) yield in forward rapidity when compared to mid-rapidity. This is contrary to expectations, because effects that dissolve the \(J/\psi\) have been determined to be stronger at mid-rapidity [4]. To understand this observation we begin by looking into the production of \(c\bar{c}\) pairs. The majority of \(c\bar{c}\) pairs per event in central collisions at RHIC are produced at mid-rapidity. At LHC energies, less suppression is observed, where many more \(c\bar{c}\) pairs per event in central collisions are produced [5]. To explain this behavior, theoretical models require a contribution of coalescence via a recombination mechanism between charm and anticharm quarks [6]. It was found that the strength of this coalescence
effect increases with the initial number of produced \(c\bar{c}\) pairs relative to the total number of quarks, increasing with the collisions energy.
At LHC energies, a nonzero \(v_{2}\) is observed, this is in line with \(J/\psi\) formed by coalescence in the QGP medium, and carrying the azimuthal anisotropy of the system [7]. At RHIC energies, STAR has measured \(v_{2}\) that is consistent with zero, but due to limited statistics remains inconclusive [8]. With coalescence being the dominant mechanism for nonzero \(J/\psi\)\(v_{2}\) it should follow that systems where fewer \(c\bar{c}\) pairs are formed should have a smaller azimuthal anisotropy.
From the figure we can see the clear nonzero \(v_{2}\) measured by ALICE. Although the ALICE measurement is at a much higher energy, we know
Figure 2: Plot of \(p_{T}\) dependent \(J/\psi\)\(v_{2}\). The PHENIX result in light gray/red/circle is compared to STAR [8] in black/star and ALICE [7] gray/blue/square.
that \(v_{2}\) does not scale simply with energy for the \(J/\psi\), so the clearly nonzero ALICE result provides a meaningful contrast to our measurement. In our measurement, we see a \(v_{2}\) that is clearly consistent with zero across all \(p_{T}\) bins. The systematic uncertainties were conservatively estimated, not taking into account cancellations or correlations of uncertainties from different sources. Additional data from Run 16 of RHIC will be included in the final results, and we expect that both statistical and systematic uncertainties will be significantly reduced.
## 4 Conclusion and Outlook
We have presented PHENIX Run 14 \(p_{T}\)-dependent \(J/\psi\)\(v_{2}\) at forward rapidity at \(\sqrt{s_{NN}}=200\) GeV. PHENIX has measured a \(J/\psi\)\(v_{2}\) that is consistent with zero. We have determined that the ALICE result, where there is clearly nonzero \(v_{2}\), is distinctly different from our measurement, and that forward and mid-rapidity results at RHIC are consistent, but the uncertainties are still large. In the future, we will incorporate Run 16 data in our measurement, essentially doubling the current dataset and reducing statistical uncertainties accordingly. We also plan to study open heavy flavor \(v_{2}\) to obtain a more complete understanding of the heavy flavor dynamics at RHIC.
|
2305.01509 | Report from Dagstuhl Seminar 23031: Frontiers of Information Access
Experimentation for Research and Education | This report documents the program and the outcomes of Dagstuhl Seminar 23031
``Frontiers of Information Access Experimentation for Research and Education'',
which brought together 37 participants from 12 countries.
The seminar addressed technology-enhanced information access (information
retrieval, recommender systems, natural language processing) and specifically
focused on developing more responsible experimental practices leading to more
valid results, both for research as well as for scientific education.
The seminar brought together experts from various sub-fields of information
access, namely IR, RS, NLP, information science, and human-computer interaction
to create a joint understanding of the problems and challenges presented by
next generation information access systems, from both the research and the
experimentation point of views, to discuss existing solutions and impediments,
and to propose next steps to be pursued in the area in order to improve not
also our research methods and findings but also the education of the new
generation of researchers and developers.
The seminar featured a series of long and short talks delivered by
participants, who helped in setting a common ground and in letting emerge
topics of interest to be explored as the main output of the seminar. This led
to the definition of five groups which investigated challenges, opportunities,
and next steps in the following areas: reality check, i.e. conducting
real-world studies, human-machine-collaborative relevance judgment frameworks,
overcoming methodological challenges in information retrieval and recommender
systems through awareness and education, results-blind reviewing, and guidance
for authors. | Christine Bauer, Ben Carterette, Nicola Ferro, Norbert Fuhr | 2023-04-18T10:28:54Z | http://arxiv.org/abs/2305.01509v1 | # Frontiers of Information Access Experimentation for Research and Education
###### Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 23031 "Frontiers of Information Access Experimentation for Research and Education", which brought together 37 participants from 12 countries.
The seminar addressed technology-enhanced information access (information retrieval, recommender systems, natural language processing) and specifically focused on developing more responsible experimental practices leading to more valid results, both for research as well as for scientific education.
The seminar brought together experts from various sub-fields of information access, namely Information Retrieval (IR), Recommender Systems (RS), Natural Language Processing (NLP), information science, and human-computer interaction to create a joint understanding of the problems and challenges presented by next generation information access systems, from both the research and the experimentation point of views, to discuss existing solutions and impediments, and to propose next steps to be pursued in the area in order to improve not also our research methods and findings but also the education of the new generation of researchers and developers.
The seminar featured a series of long and short talks delivered by participants, who helped in setting a common ground and in letting emerge topics of interest to be explored as the main output of the seminar. This led to the definition of five groups which investigated challenges, opportunities, and next steps in the following areas: _reality check, i.e. conducting real-world studies_, _human-machine-collaborative relevance judgment frameworks_, _overcoming methodological challenges in information retrieval and recommender systems through awareness and education_, _results-blind reviewing_, and _guidance for authors_.
January 15-20, 2023 - [http://www.dagstuhl.de/23031](http://www.dagstuhl.de/23031)
Information systems Information retrieval Information systems Recommender systems Computing methodologies Natural language processing Information systems Users and interactive retrieval Information systems Evaluation of retrieval results
and phrases evaluation, experimentation, information access systems, simulation, user interaction
Digital Object Identifier 10.4230/DagRep.13.1.1
Edited in cooperation with Guglielmo Faggioli, University of Padua, IT
###### Abstract
In this paper we propose a new paradigm of experimental evaluation of the proposed approach to evaluate the performance of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach to evaluate the performance of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. 
We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. 
We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. 
We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. 
We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental approach. 
We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental approach. We propose a new paradigm of experimental evaluation of the proposed approach. We propose a new paradigm of experimental approach.
We started the seminar week with a series of long and short talks delivered by participants, also in response to the above questions. This helped in setting a common ground and understanding and in letting emerge the topics and themes that participants wished to explore as the main output of the seminar.
This led to the definition of five groups which explored challenges, opportunities, and next steps in the following areas
* and points to best practices and remaining challenges in both how to do domain-specific or longitudinal studies, how to recruit the right participants, using existing or creating new infrastructure including appropriate data representation, as well as how, why and what to measure.
* **Human-machine-collaborative relevance judgment frameworks**: The working group studied the motivation for using Large Language Models (LLMs) to automatically generate relevance assessments in information retrieval evaluation, and raises research questions about how LLMs can help human assessors with the assessment task, whether machines can replace humans in assessing and annotating, and what are the conditions under which human assessors cannot be replaced by machines.
* **Overcoming methodological challenges in IR and RS through awareness and education**: Given the potential limitations of today's predominant experimentation practices, we find that we need to better equip the various actors in the scientific ecosystem in terms of scientific methods, and we identify a corresponding set of helpful resources and initiatives, which will allow them to adopt a more holistic perspective when evaluating such systems.
* **Results-blind reviewing**: The current review processes lead to undue emphasis on performance, rejecting papers focusing on insights in case they show no performance improvements. We propose to introduce a results-blind reviewing process forcing reviewers to put more emphasis on the theoretical background, the hypotheses, the methodological plan and the analysis plan of an experiment, thus improving the overall quality of the papers being accepted.
* **Guidance for authors**: The Information Retrieval community has over time developed expectations regarding papers, but these expectations are largely implicit. In contrast to adjacent disciplines, efforts in the ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) community have been rather sparse and are mostly due to individuals expressing their own views. Drawing on materials from other disciplines, we have built a draft set of guidelines with the aim of them being understandable, broad, and highly concise. We believe that our proposal is general and uncontroversial, can be used by the main venues, and can be maintained with an open and continuous effort driven by, and for, the community.
## 2 Table of Contents
**Executive Summary**
_Christine Bauer, Ben Carterette, Nicola Ferro, Norbert Fuhr_
**Overview of Talks**
Kickoff on Frontiers of Information Access Experimentation for Research and Education
_Ian Soboroff_
Goodhart's Law and the Lucas Critique
_Justin Zobel_
User-centric Evaluation
_Bart P. Knijnenburg_
Offline Evaluation Based on Preferences
_Charles L. A. Clarke_
The Impact of Human Assessors on Judgements, Labels, Supervised Models, and Evaluation Results
_Gianluca Demartini_
A Plea for Result-Less Reviewing
_Norbert Fuhr_
Understanding your User, Process Tracing as a User-centric Method
_Martijn C. Willemsen_
From Living Lab Studies to Continuous Evaluation
_Philipp Schaer_
An Idea for Evaluating Retrieve & Generate Systems
_Laura Dietz_
Metadata Annotations of Experimental Data with the ir_metadata Schema
_Timo Breuer_
Measuring Fairness
_Maria Maistro_
(Aspects of) Enterprise Search
_Udo Kruschwitz_
Identification of Stereotypes: Retrieval and Monitoring
_Paolo Rosso_
Coordinate Research, Evaluation, and Education in Information Access: Towards
a More Sustainable Environment for the Community
_Nicola Ferro_
Recommender Systems Evaluation 2017-2022
_Alan Said_
**Working Groups**
Reality Check - Conducting Real World Studies
_Bruce Ferwerda, Allan Hanbury, Bart P. Knijnenburg, Birger Larsen, Lien Michiels, Andrea Papenmeier, Alan Said, Philipp Schaer, Martijn Willemsen_
HMC: A Spectrum of Human-Machine-Collaborative Relevance Judgment Frameworks
_Charles L. A. Clarke, Gianluca Demartini, Laura Dietz, Guglielmo Faggioli, Matthias Hagen, Claudia Hauff, Noriko Kando, Evangelos Kanoulas, Martin Potthast, Ian Soboroff, Benno Stein, Henning Wachsmuth_
Overcoming Methodological Challenges in Information Retrieval and Recommender Systems through Awareness and Education
_Christine Bauer, Maik Frobe, Dietmar Jannach, Udo Kruschwitz, Paolo Rosso, Damiano Spina, Nava Tintarev_
Results-blind Reviewing
_Joeran Beel, Timo Breuer, Anita Crescenzi, Norbert Fuhr, Meijie Li_
Guidance for Authors
_Giorgio Maria Di Nunzio, Maria Maistro, Christin Seifert, Julian Urbano, Justin Zobel_
**Participants**
**List of Acronyms**
**Author Guidance Appendix**
SIGPLAN Empirical Evaluation Guidelines with Annotations
Experimental Standards for Deep Learning Guidelines with Annotations
Quick reflections
Common Flaws in Submitted IR Papers
## 3 Overview of Talks
### Kickoff on Frontiers of Information Access Experimentation for Research and Education
_Ian Soboroff (National Institute of Standards and Technology, US, [email protected]) License (c) Creative Commons BY 4.0 International license_
The goal of this talk is to set out a common starting point for the seminar, and I approach this from the perspective of test collections and information retrieval. I start from the structure of a test collection and describe the pooling and relevance assessment process, highlighting known issues in those processes, including incompleteness, assessor disagreement, shallow pooling, and integrating results from multiple test collections. I close the talk with a list of hard problems in evaluation such as handling low run coverage and the absence of external ground truth.
### Goodhart's Law and the Lucas Critique
_Justin Zobel (University of Melbourne, AU, [email protected]) License (c) Creative Commons BY 4.0 International license_
The discipline of IR has a deep literature examining how best to measure performance, in particular the practice of assessing retrieval systems using batch experiments based on collections and relevance judgements. However, this literature has only rarely considered an underlying principle: that measured scores are inherently incomplete as a representation of human behaviour. In other disciplines, the significance of the principle has been examined through the perspectives of Goodhart's law and the Lucas critique. Here I argue that these apply to IR and show that neglect of this principle has consequences in practice, separate from issues that can arise from poor experimental designs or the use of effectiveness measures in ways that are known to be questionable. Specifically, blind pursuit of performance gains based on the optimisation of scores, and analysis based solely on aggregated measurements, can lead to misleading or meaningless outcomes.
This talk was based on SIGIR Forum paper "When Measures Mislead: The Limits of Batch Assessment of Retrieval Systems" [31], available at [https://www.sigir.org/wp-c](https://www.sigir.org/wp-c) ontent/uploads/2022/07/p12.pdf.
### User-centric Evaluation
_Bart P. Knijnenburg (Clemson University, US, [email protected]) License (c) Creative Commons BY 4.0 International license_
I presented an evaluation framework to study the user experience of interactive systems. It involves measuring users' perception and experiences with questionnaires and then trian
gulating these with behaviour. The subjective constructs explain why users' behaviour is different for different systems--this explanation is the main value of our framework.
I also addressed the filter bubble, and proposed to evaluate and build information systems in a way that supports rather than replaces decision-making; covers users' tastes, plural; and focuses on exploration and preference development rather than consumption.
Finally, I addressed the challenge of designing human subjects studies that preserve research participants' privacy and security while still generating robust results.
### Offline Evaluation Based on Preferences
_Charles L. A. Clarke (University of Waterloo, CA, [email protected])_
[0.5cm] (C) Creative Commons BY 4.0 International license Charles L. A. Clarke
Traditional offline evaluation of search, recommender, and other systems involves gathering item relevance labels from human editors. These labels can then be used to assess system performance using offline evaluation metrics. Unfortunately, this approach does not work when evaluating highly-effective ranking systems, such as those emerging from the advances in machine learning. Recent work demonstrates that moving away from pointwise item and metric evaluation can be a more effective approach to the offline evaluation of systems.
### The Impact of Human Assessors on Judgements, Labels, Supervised Models, and Evaluation Results
_Gianluca Demartini (The University of Queensland, AU, [email protected])_
[0.5cm] (C) Creative Commons BY 4.0 International license Gianluca Demartini
When we evaluate systems or train supervised models we make use of human annotations (e.g., judgements or labels). In this talk, I have presented examples of how different people may provide different annotations for the same data items. First, I have shown how misinformation judgements are prone to political background bias [21, 2]. Then, I have shown how human annotators discriminate based on the socio-economic status of the persons depicted in the annotated content [9]. The way human annotators are biased also depends on how the annotation task is framed and on what extra information we provide them with [29]. Finally, I have shown what it means to train supervised models with such biased labels and how these models behave very differently when they are trained with labels provided by different human annotators [20]. It is thus important for us to start considering tracking information about who the human assessors and annotators are and to include this as meta-data of our test collections [7].
### A Plea for Result-Less Reviewing
_Norbert Fuhr (University of Duisburg-Essen, DE, [email protected])_
[0.5cm] License [0.5cm]
Creative Commons BY 4.0 International license
Norbert Fuhr
Scientific experiments aim at testing hypotheses and gaining insights into cause-and-effect for the setting studied. Unfortunately, most IR publications focus on the first aspect, while papers addressing the second aspect get rejected if they fail to show improvements in terms of performance. However, many published papers suffer from severe flaws in their experimental analysis part, which makes their results almost useless. Focusing on performance numbers, top IR conferences and journals accept only papers showing improvements, which also leads to publication bias. As PhD students must publish to get a degree, they might be tempted to cheat if their proposed method does not yield the desired results.
As a way out, we propose to switch to result-less reviewing, which is standard e.g. in some psychological journals. Here reviewers cannot see the actual experimental results and have to base their decision on the theoretical background, the hypotheses, the methodological plan and the analysis plan. In case of acceptance, the experimental results are included in the paper published.
This approach could help to achieve higher scientific quality and better reproducibility of experimental studies in IR.
### Understanding your User, Process Tracing as a User-centric Method
_Martijn C. Willemsen (Eindhoven University of Technology & JADS -'s-Hertogenbosch, NL, [email protected])_
[0.5cm] License [0.5cm]
Creative Commons BY 4.0 International license
Martijn C. Willemsen
In evaluating our information access systems, we get more insights if we combine subjective measures (e.g. satisfaction) with interaction data [17]. However, most interaction data used nowadays, like simple clickstreams, do not provide sufficient insights into the underlying cognitive processes of the user. In this talk, I show how richer process measures (like hovers and eye-tracking) can provide deeper insights into the underlying decision processes of a user. For example, they help to understand when and why users search more superficially or more deeply into a list of results from the algorithm.
#### Process tracing in decision making
In decision-making, process tracing methods are commonly used to better understand human decision processes [25]. In the talk, I demonstrated one technique that I developed myself, called mouselabWEB1. This information board tool allows users to acquire information by hovering over boxes. It can be regarded as a cheap and simple eye-tracker-like tool that can be used in online studies. The tool allows users to easily design a mouselabWEB table and page and takes care of data storage and handling [28].
Footnote 1: [https://github.com/MCWillemsen/mouselabWEB20](https://github.com/MCWillemsen/mouselabWEB20)
#### 3.7.2 Process tracing used in Recommender Systems
We already used process tracing-like measures in earlier RS work to better understand the decision processes. In our work on latent feature diversification [27], we presented diversified lists of movie recommendations by their titles. Only when hovering the titles, additional movie information and poster were shown. This measured how much effort people spend and how many recommendations were inspected. We found that a top-20 list of recommendations resulted in more effort than a top-5 list, which subsequently increased choice difficulty and reduced satisfaction. In work on user inaction [30], we investigated why users do not interact with some recommended items, questioning if we should keep showing these recommendations. We found diverse reasons for inaction and showed that some reasons provide good reasons for not recommending the item again, whereas others indicate that it would actually be very beneficial to show the item again in the next round of recommendations.
### From Living Lab Studies to Continuous Evaluation
_Philipp Schaer (Technische Hochschule Koln, DE, [email protected])_
License (c)Creative Commons BY 4.0 International license
Philipp Schaer
In this short talk, I briefly introduced the basic idea behind using living labs for information retrieval or recommender system evaluation. I also outlined a framework to extend living labs to enable a continuous evaluation environment.
#### 3.8.1 Living labs
Livings labs were introduced in CLEF and TREC by initiatives like NewsREEL [14], OpenSearch [15] or, more recently, LiLAS [24], with a particular focus on academic search evaluation. The general motivation behind living labs is to enable in-vivo evaluation in real-world settings and to extend the Cranfield-style in-vitro evaluations. Limitations of Cranfield studies like being static and not incorporating real-world users should be avoided. Instead of using (domain-specific) experts to evaluate retrieval results, the behaviour of real-world users is logged to measure their usage of different system implementations. Approaches like A/B testing or interleaving allow comparing the amount and type of interactions with these different systems to infer the underlying system performance. By integrating real-world systems and users into the evaluation process, organizers of living lab evaluations can hope to bring more diversity and heterogeneity in the set of evaluators and, therefore, a higher level of realism. In industry, these types of online evaluations in real-world applications are common but not in academia, as access to these systems is usually not possible for external researchers and their systems. Although in principle, systems like STELLA [4] would make this possible, it is rarely used.
Most living lab CLEF and TREC initiatives suffered from a common set of issues, like, the small number of click events gathered in the experiments, therefore long-running experiments, missing user information or anonymous profiles, no differentiating in click events and no possibility to include expert feedback and generally the problem of being confronted with constant change in the systems and their data sets.
#### 3.8.2 Continuous evaluation
A framework for continuous evaluation was outlined to overcome some of the previously outlined issues. The framework is based on a living lab installation within a real-world system but extends it with the following components:
* Different user profiles -(regular) platform users whose user interaction data is logged and expert users that can directly annotate relevance labels on results in the systems.
* The expert assessments will be added to a constantly growing test collection that has to support versioning.
* As both expert and regular user feedback is expected to be insufficiently small at the beginning, different user types or interaction patterns can be simulated based on the interaction and relevance data gathered so far.
These components within the framework can run over a long time and create a constantly growing set useful for evaluating systems - running in the living lab as an online study or using the distilled/simulated evaluation data available for offline evaluation.
A first version of this framework will be implemented in the DFG-funded STELLA II project2.
Footnote 2: [https://stella-project.org/](https://stella-project.org/)
### An Idea for Evaluating Retrieve & Generate Systems
_Laura Dietz (University of New Hampshire, US, [email protected])_
License
C Creative Commons BY 4.0 International license
L Laura Dietz Natural language generation models (like GPT*) are here to stay, and they are a huge opportunity to build systems that combine retrieval and language generation in a combined system.
But: how can we evaluate the quality of such systems?
We discuss an idea for a new paradigm, the EXAM Answerability Metric [23], which uses a Question Answering (QA) system along with some human-written exam questions to evaluate whether the systems retrieve good _information_ (instead of the right terms).
The paradigm has other advantages such as no need for highly trained assessors, no fixed corpus for retrieval (open web is possible), and comparison of retrieval-only systems and fully-generated systems on equal footing. Moreover, additional systems can be added for evaluation later without bias against non-participating systems. There is the possibility to add additional exam questions at a later point, to increase resolution between systems.
We compare the EXAM evaluation metric to the official TREC quality metrics on the TREC Complex Answer Retrieval Y3 track. We observe a Spearman Rank Correlation coefficient of 0.73. In contrast, ROUGE yields a correlation of 0.01.
There are also many open questions about the evaluation paradigm, I would like to discuss with participants in this Dagstuhl Seminar.
### Metadata Annotations of Experimental Data with the ir_metadata Schema
_Timo Breuer (Technische Hochschule Koln, DE, [email protected])_
License \(\copyright\) Creative Commons BY 4.0 International license
Timo Breuer
In this talk, we present the current status of ir_metadata [3] - a metadata schema for annotating run files of information retrieval experiments. We briefly outline the logical plan of the schema that is based on the PRIMAD model (first introduced as part of the Dagstuhl seminar 16041 [12]). The acronym stems from the six components that can possibly affect the reproducibility of an experiment including the Platform, Research Goal, Implementation, Method, Actor, and Data. In addition, we extended the taxonomy with related subcomponents, for which details can be found on the project's website3.
Footnote 3: [https://www.ir-metadata.org/](https://www.ir-metadata.org/)
Furthermore, we demonstrate how run files can be annotated in practice, describe the current software support and include example experiments in the form of reproducibility studies. Open points of discussion include what kinds of additional software features could be implemented to reduce the annotation effort or how the schema can be made a community standard in general. By introducing this resource to the community, we hope to stimulate a more reproducible, transparent, and sustainable use of experimental artefacts.
### Measuring Fairness
_Maria Maistro (University of Copenhagen, DK, [email protected])_
License \(\copyright\) Creative Commons BY 4.0 International license
Maria Maistro
In recent years, the discussion on the fairness of Machine Learning (ML) models has gained increasing attention and involved different research communities, including Information Retrieval (IR) and Recommender Systems (RS). In the ML community, well-defined fairness criteria have been proposed and applied to the risk assignment score returned by classifiers. Assume that there are two (or more) groups, denoted by \(\mathcal{A}\) and \(\mathcal{B}\), defined on attributes that should not be used to discriminate people, e.g., gender, ethnicity, or age. Kleinberg et al. [16] propose 3 fairness criteria: (1) calibration within groups; (2) balance for the positive class; and (3) balance for the negative class. Calibration within groups means that the probability score estimated by a classifier is well-calibrated, i.e., if a classifier returns a probability \(x\) for people in group \(\mathcal{A}\) to belong to the positive class, then an \(x\) percentage of people in \(\mathcal{A}\) should truly belong to the positive class. Balance for the positive class states that the average estimated probability for people truly belonging to the positive class should be the same in groups \(\mathcal{A}\) and \(\mathcal{B}\). Balance for the negative class is the counterpart defined for the negative class. Kleinberg et al. [16] proves that these criteria are incompatible, except for two non-realistic cases.
The above criteria are not directly applicable when the output of a system is a ranking. Ekstrand et al. [8] identify several reasons, some of which are mentioned in the following. First, items are organized in a ranking, where they receive different levels of attention due
to the position bias [5]. Therefore decisions based on model scores, i.e., how to generate the ranking, are not independent and can not be evaluated independently. Second, users can access IR and recommendation systems multiple times over a period of time and decisions based on model predictions are repeated over time. Thus, fairness should be evaluated for the whole process, not at a single point in time. Third, multiple stakeholders are involved with IR and RS systems and they have different fairness constraints. For example, users of the system might be concerned about receiving results that are not biased towards some of their attributes, e.g., gender, while providers might be concerned about their items not being underrepresented in the ranking.
Due to the above reasons, there has been a proliferation of fairness definitions and measures, targeting different nuances of the same problem and trying to adapt more general fairness definitions to the ranking problem. Recent surveys identify more than 6060 different variants of fairness definitions resulting in more than 4040 different fairness measures [26, 1].
In this talk, I argue that there is a need for a better understanding of different fairness definitions and measures. I present some open questions and future research directions which include: an exploration of the relationship between bias, data distribution, and fairness [6]; an analysis of formal properties and pitfalls of fairness measures as done for IR measures [10]; evaluation approaches able to accommodate multiple aspects, e.g., relevance, fairness and credibility [19]; guidelines, benchmarks, and tools to advise researchers and practitioners in designing the most appropriate evaluation protocol for fairness.
### 3.12 (Aspects of) Enterprise Search
_Udo Kruschwitz (University of Regensburg, DE, [email protected])_
Search and IR is commonly associated with Web search but there are plenty of other areas that fall outside the scope of Web search and which are nevertheless interesting and challenging. One example is enterprise search which describes search within companies or other organisations [18]. This is an area that has attracted little attention in academia (as well as in shared tasks and competitions) yet it affects millions of users who try to locate relevant information as part of their everyday work. Key challenges include the silo structure of data sources, privacy issues, the lack of link structure and the fact that there may only be a single relevant document (or none at all) for a given information need. All this has implications, and in the context of this seminar, some of the main challenges include the absence of test collections, problems with data sharing and reproducibility as well as the domain-specific nature of each use case.
### Identification of Stereotypes: Retrieval and Monitoring
_Paolo Rosso (Technical University of Valencia, ES, [email protected])_
License (c) Creative Commons BY 4.0 International license
Polo Rosso
In the short talk, I addressed the problem of the retrieval of text fragments containing implicit and subjective information such as stereotypes, framing them, and annotating them. Part of the work was done in collaboration with OBERAXE, the Spanish observatory of racism and xenophobia. Transcribed speeches of the Spanish Congress of Deputies with immigrants as the target were framed as a threat or victims using a taxonomy where the negative/neutral/positive attitudes of the speaker were taken into account. Moreover, social media memes with women as a target were retrieved and annotated. The low inter-annotator agreement shows the necessity to go beyond the aggregated ground truth and consider the pre-aggregated information of each individual annotator in order to give voice also to minorities in disagreement with the opinion of the majority. Using, for instance, the learning with disagreements paradigm should allow the development of more equitable systems in the name of fairness.
Coordinate Research, Evaluation, and Education in Information Access: Towards a More Sustainable Environment for the Community
_Nicola Ferro (University of Padua, IT, [email protected])_
License (c) Creative Commons BY 4.0 International license
Nicola Ferro
The information access research field is characterized by several areas, such as IR, RS, and NLP. These areas, in turn, offer various venues where the community can meet, discuss, and grow; typically, a mix of _scientific conferences_, _evaluation forn_, and _summer/winter schools_. For example, in the IR area, there are several such venues around the world. In Europe, there is European Conference on Information Retrieval (ECIR)4 as scientific conference; Conference and Labs of the Evaluation Forum (CLEF)5[11] as evaluation forum; and, European Summer School on Information Retrieval (ESSIR)6 as summer school. In America, there is SIGIR7 as scientific conference, which is also the premier international venue for the area; Text REtrieval Conference (TREC)8[13] as evaluation forum; however, they lack a summer/winter school. In Asia, there is the newly born Information Retrieval in the Asia Pacific (SIGIR-AP)9 as scientific conference; NII Testbeds and Community for Information
access Research (NTCIR)10[22] and Forum for Information Retrieval Evaluation (FIRE)11 as evaluation fora; and, Asian Summer School in Information Access (ASSIA)12.
Footnote 10: [https://research.nii.ac.jp/ntcir/index-en.html](https://research.nii.ac.jp/ntcir/index-en.html)
Footnote 11: [http://fire.irsi.res.in/](http://fire.irsi.res.in/)
Footnote 12: [https://goassia.github.io/](https://goassia.github.io/)
All these venues are independent events, coordinated by their own steering committees (or equivalent bodies), with their own vision and strategic goals. Obviously, being the members of the community shared across the different committees and part of most of them, there is some informal level of coordination among these venues, which are cooperating for the overall growth of the community rather than competing for acquiring "shares" of it.
However, the main question of this talk is whether we can make better use of the venues we have in the field in order to fully unveil the potential of (research, evaluation, and education) in a more coordinated way and deliver further benefits to our community in terms of quality and volume of the research produced, robustness of the experimental results achieved, effective and smooth training and education to make our junior members the new leaders.
And, if this were possible in an area, such as IR, what would it mean for the information access field at large? How would we cross the boundaries of the different areas?
#### Examples of Coordination between Research, Evaluation, and Education
In the following, we provide some possible examples of coordination between research, evaluation, and education, considering the case of ECIR, CLEF, and ESSIR.
As a preliminary note, all of them happen in Europe, all of them follow an annual cycle, and their schedules match well enough13:
Footnote 13: The alignment of the schedule is a partially intentional decision by the committees behind these venues.
* ECIR: submission deadline in October, conference in March/April;
* CLEF: evaluation activities in January-May/June, submission deadline in June/July, conference in September;
* ESSIR: school in July-August.
ECIR\(\leftrightarrow\) CLEF: Research\(\leftrightarrow\) Evaluation
There are already some coordination activities in place between ECIR and CLEF:
* ECIR hosts a section dedicated to CLEF labs, in order to stimulate participation in the CLEF evaluation activities;
* CLEF solicits its participants to follow-up their work in the labs with a submission to ECIR.
This link is possible because the new labs for CLEF are selected around July and this matches with the submission deadline to ECIR the next October; moreover, the ECIR session happens in March/April, which is still in due time for allowing participation in a CLEF lab up to May/July. On the other side, CLEF activities end in July (labs, papers), even if the actual event is later on in September; therefore, CLEF participants have time for planning a follow-up submission to ECIR in October.
Why is this link needed? Even if both ECIR and CLEF are part of the same IR area, being it a large community, the audience of ECIR and CLEF is only partially overlapping.
On the other hand, this audience may benefit from participation in both venues, not only because of more opportunities of conducting research but also because of the organized progress of such activities throughout the year, with intermediate delivery points, which help in making it smoother and break-down the overall work.
In his talk, Fuhr, see Section 3.6, argued for the need for a _result-less reviewing_ approach, where papers are assessed on the basis of their methodology, innovation, research questions, soundness of the planned experiments and, if accepted, the actual experiments will be conducted later on, possibly in a follow-up publication.
This could represent another area of coordination between ECIR and CLEF: result-less papers are submitted at ECIR and, if accepted, their experimental part is then submitted to CLEF as a follow-up publication. Also in this case the schedule of the two venues aligns well enough to make this possible. And, again, this would allow the community to have more regular and intermediate steps at which to deliver their research, with the additional benefit of focusing each step on a specific aspect of the research and, possibly, improving the overall quality of the output, both the methodology and the experiments.
#### ECIR \(\leftrightarrow\) Essir: Research \(\leftrightarrow\) Education
There is currently no specific joint activity between ECIR and ESSIR.
A first example of activity could be for ESSIR to offer a mentorship program for the students attending it, in order to help them in preparing their submission to ECIR and getting feedback about it. Conferences sometimes offer mentorship programs to students but these are often asynchronous exchanges of emails or, at best, remote calls. In this case, students and senior researchers would be back-to-back in the same place for a week and this would allow for a much more smother and productive interaction. This link between ECIR and ESSIR would be possible because ESSIR happens in July/August and the submission deadline for ECIR is in October.
During the discussion that followed-up after the presentation, it was correctly asked how this link would compare/relate to a Doctoral Consortium activity. It is true that the two activities would share some commonalities, both being a form of mentorship to students. However, in the case of a Doctoral Consortium, the purpose is to provide students with overall feedback about the PhD theme or thesis; in this case, we would focus on a much more specific goal, which is the submission of a paper to a conference. As a side note, ESSIR already hosts a form of Doctoral Consortium which is Future Directions in Information Access (FDIA).
Another form of activities could be to present at ESSIR "digested" research breakthrough highlights from the latest ECIR edition. In organizing a summer/winter school there is always a trade-off between offering foundational and advanced lectures; in both cases, the lectures are expected to cover in a reasonably complete way the topic they are about. This forces school organizers to select some topics and makes it impossible to cover all the frontier of the research in the field. These "digested highlights" could be a partial solution: they could provide a taste of other areas of the research frontier, still not being fully-fledged lectures.
#### ESSir \(\leftrightarrow\) CLEF: Education \(\leftrightarrow\) Evaluation
There is currently no specific joint activity between ESSIR and CLEF.
A possible activity could be to organize a permanent educational lab at CLEF, focusing on some basic tasks such as ad-hoc retrieval. This would allow us to address another trade
off typical of summer/winter schools: lectures versus hands-on sessions. Indeed, it is often difficult to find the right balance between the two and, due to limited time available or even hardware/software setup, the hands-on sessions are often at risk to be an oversimplification. On the other hand, a permanent lab at CLEF could be seen as a very extensive hands-on session of ESSIR, giving the possibility of exploring further details, also of practical nature. Moreover, this would allow for addressing some foundational concepts (and ensuring they are well understood) before the school, giving them additional freedom when school organizers have to balance between foundational and advanced topics.
#### Towards a More Sustainable Environment for Our Community
The examples discussed in the previous section provide a very basic idea of what better coordination among our venues could look like. At the same time, they should make clear that a change in our perspective is required.
Indeed, we currently adopt a sort of _point-wise_ vision, where we target and optimize for each venue separately, and the venues themselves are somewhat organized and managed in isolation. In a sense, this incurs a _waste of resources_, since we (both organizers and participants) may need to redo part of the same work when passing from one venue to another and, above all, we do not exploit any synergy and interaction among venues.
On the other hand, the approach presented in the previous section would require us to adopt a more _flow-wise_ vision, consisting of progressive stamps of quality, where the different steps of our research and education activities are part of an organized process, whose ultimate goal is to make them proceed more smoothly along the pipeline, possibly also improving the quality of the outputs. Moreover, this could also be of further help for junior researchers, who are often under the "publish or perish" pressure that forces them to scatter submissions across venues, often repeating or slicing their work. In this case, for example, submitting a result-less paper to ECIR and the follow-up experiments to CLEF would preserve the publication volume but in a more controlled way, aimed at ensuring a better quality of each output, methodology first and experiments after.
Obviously, this new vision will require training of both authors and reviewers, who should understand the model and how to properly apply it. For example, if a result-less paper is accepted at ECIR, when reviewing the experimental part at CLEF, its methodology should not be questioned again, especially if the reviewers happen to be different, but the review should focus just on the experimentation and the insights gathered from it.
Overall, this new coordinated vision aims at creating a _more sustainable environment_ for our community, reducing the waste of resources for intermediate steps and optimizing the overall effort for delivering an improved quality.
### Recommender Systems Evaluation 2017-2022
_Alan Said (University of Gothenburg, SE, [email protected])_
Recommender systems research and practice is a fast-developing topic with growing adoption in a wide variety of information access scenarios. In this talk, I presented a snapshot of the evaluation landscape in RS research between 2017 and 2022. The talk is based on a systematic literature review analyzing 64 papers, focusing particularly on the evaluation
methods applied, the datasets utilized, and the metrics used. The study shows that the predominant experiment method is offline experimentation and that online evaluations are primarily used in combination with other experimentation methods, e.g., an offline experiment. The analysis of this six-year snapshot shows that the recommender systems research community has consolidated the majority of its experiments around a few metrics, datasets, and methods.
## References
* [1] Enrique Amigó, Yashar Deldjoo, Stefano Mizzaro, and Alejandro Bellogín. A unifying and general account of fairness measurement in recommender systems. _Inf. Process. Manag._, 60(1):103115, 2023.
* 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14-17, 2020, Proceedings, Part II_, volume 12036 of _Lecture Notes in Computer Science_, pages 207-214. Springer, 2020.
* [3] Timo Breuer, Juri Keller, and Philipp Schaer. ir_metadata: An extensible metadata schema for IR experiments. In _SIGIR_, pages 3078-3089. ACM, 2022.
* Proceedings of the 16th International Symposium of Information Science, ISI 2021, Regensburg, Germany, March 8-10, 2021_, pages 348-362. Werner Hülsbusch, 2021.
* [5] Nick Craswell, Onno Zoeter, Michael J. Taylor, and Bill Ramsey. An experimental comparison of click position-bias models. In Marc Najork, Andrei Z. Broder, and Soumen Chakrabarti, editors, _Proceedings of the International Conference on Web Search and Web Data Mining, WSDM 2008, Palo Alto, California, USA, February 11-12, 2008_, pages 87-94. ACM, 2008.
* [6] Yashar Deldjoo, Alejandro Bellogín, and Tommaso Di Noia. Explaining recommender systems fairness and accuracy through the lens of data characteristics. _Inf. Process. Manag._, 58(5):102662, 2021.
* [7] Gianluca Demartini, Kevin Roitero, and Stefano Mizzaro. Managing bias in human-annotated data: Moving beyond bias removal. _CoRR_, abs/2110.13504, 2021.
* [8] Michael D. Ekstrand, Anubrata Das, Robin Burke, and Fernando Diaz. Fairness in information access systems. _Found. Trends Inf. Retr._, 16(1-2):1-177, 2022.
* 29, 2022_, pages 98-109. ACM, 2022.
* [10] Marco Ferrante, Nicola Ferro, and Maria Maistro. Towards a formal framework for utility-oriented measurements of retrieval effectiveness. In James Allan, W. Bruce Croft, Arjen P. de Vries, and Chengxiang Zhai, editors, _Proceedings of the 2015 International Conference on The Theory of Information Retrieval, ICTIR 2015, Northampton, Massachusetts, USA, September 27-30, 2015_, pages 21-30. ACM, 2015.
* [11] Nicola Ferro and Carol Peters, editors. _Information Retrieval Evaluation in a Changing World - Lessons Learned from 20 Years of CLEF_, volume 41 of _The Information Retrieval Series_. Springer, 2019.
* [12] Juliana Freire, Norbert Fuhr, and Andreas Rauber. Reproducibility of data-oriented experiments in e-science (dagstuhl seminar 16041). _Dagstuhl Reports_, 6(1):108-159, 2016.
* [13] D. K. Harman and E. M. Voorhees, editors. _TREC. Experiment and Evaluation in Information Retrieval_. MIT Press, Cambridge (MA), USA, 2005.
* Lessons Learned from 20 Years of CLEF_, volume 41 of _The Information Retrieval Series_, pages 511-543. Springer, 2019.
* [15] Rolf Jagerman, Krisztian Balog, Philipp Schaer, Johann Schaible, Narges Tavakolpoursaleh, and Maarten de Rijke. Overview of TREC OpenSearch 2017. In Ellen M. Voorhees and Angela Ellis, editors, _Proceedings of The Twenty-Sixth Text REtrieval Conference, TREC 2017, Gaithersburg, Maryland, USA, November 15-17, 2017_, volume 500-324 of _NIST Special Publication_. National Institute of Standards and Technology (NIST), 2017.
* Leibniz-Zentrum für Informatik, 2017.
* [17] Bart P. Knijnenburg, Martijn C. Willemsen, Zeno Gantner, Hakan Soncu, and Chris Newell. Explaining the user experience of recommender systems. _User Model. User Adapt. Interact._, 22(4-5):441-504, 2012.
* [18] Udo Kruschwitz and Charlie Hull. Searching the enterprise. _Found. Trends Inf. Retr._, 11(1):1-142, 2017.
* 5, 2021_, pages 1232-1242. ACM, 2021.
* [20] Periklis Perikleous, Andreas Kafkalias, Zenonas Theodosiou, Pinar Barlas, Evgenia Christoforou, Jahna Otterbacher, Gianluca Demartini, and Andreas Lanitis. How does the crowd impact the model? A tool for raising awareness of social bias in crowdsourced training data. In Mohammad Al Hasan and Li Xiong, editors, _Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA, October 17-21, 2022_, pages 4951-4954. ACM, 2022.
* [21] Kevin Roitero, Michael Soprano, Shaoyang Fan, Damiano Spina, Stefano Mizzaro, and Gianluca Demartini. Can the crowd identify misinformation objectively?: The effects of judgment scale and assessor's background. In Jimmy X. Huang, Yi Chang, Xueqi Cheng, Jaap Kamps, Vanessa Murdock, Ji-Rong Wen, and Yiqun Liu, editors, _Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020_, pages 439-448. ACM, 2020.
* [22] Tetsuya Sakai, Douglas W. Oard, and Noriko Kando, editors. _Evaluating Information Retrieval and Access Tasks - NTCIR's Legacy of Research Impact_, volume 43 of _The Information Retrieval Series_. Springer International Publishing, Germany, 2021.
* [23] David P. Sander and Laura Dietz. EXAM: how to evaluate retrieve-and-generate systems for users who do not (yet) know what they want. In Omar Alonso, Stefano Marchesin, Marc Najork, and Gianmaria Silvello, editors, _Proceedings of the Second International Conference on Design of Experimental Search & Information REtrieval Systems, Padova, Italy, September 15-18, 2021_, volume 2950 of _CEUR Workshop Proceedings_, pages 136-146. CEUR-WS.org, 2021.
* living labs for academic search. In K. Selçuk Candan, Bogdan Ionescu, Lorraine Goeuriot, Birger Larsen, Henning Müller, Alexis Joly, Maria Maistro, Florina Piroi, Guglielmo Faggioli, and Nicola Ferro, editors, _Experimental IR Meets Multilinguality, Multimodality, and Interaction - 12th International Conference of the CLEF Association, CLEF 2021, Virtual Event, September 21-24, 2021, Proceedings_, volume 12880 of _Lecture Notes in Computer Science_, pages 394-418. Springer, 2021.
* [25] M. Schulte-Mecklenbeck, J.G. Johnson, U. Böckenholt, D.G. Goldstein, J.E. Russo, N.J. Sullivan, and M.C. Willemsen. Process-tracing methods in decision making: on growing up in the 70s. _Current Directions in Psychological Science_, 26(5):442-450, October 2017.
* [26] Yifan Wang, Weizhi Ma, Min Zhang, Yiqun Liu, and Shaoping Ma. A survey on the fairness of recommender systems. _ACM Trans. Inf. Syst._, 2022.
* [27] Martijn C. Willemsen, Mark P. Graus, and Bart P. Knijnenburg. Understanding the role of latent feature diversification on choice difficulty and satisfaction. _User Model. User Adapt. Interact._, 26(4):347-389, 2016.
* [28] Martijn C. Willemsen and Eric J. Johnson. _(Re)Visiting the Decision Factory: Observing Cognition with MouselabWEB_, pages 76-95. Taylor and Francis Ltd., United Kingdom, 2nd edition, 2019. Publisher Copyright: (c) 2019 selection and editorial matter, Michael Schulte-Mecklenbeck, Anton Kühberger, and Joseph G. Johnson; individual chapters, the contributors.
* [29] Jiechen Xu, Lei Han, Shazia Sadiq, and Gianluca Demartini. On the role of human and machine metadata in relevance judgment tasks. _Information Processing & Management_, 60(2):103177, 2023.
* [30] Qian Zhao, Martijn C. Willemsen, Gediminas Adomavicius, F. Maxwell Harper, and Joseph A. Konstan. Interpreting user inaction in recommender systems. In Sole Pera, Michael D. Ekstrand, Xavier Amatriain, and John O'Donovan, editors, _Proceedings of the 12th ACM Conference on Recommender Systems, RecSys 2018, Vancouver, BC, Canada, October 2-7, 2018_, pages 40-48. ACM, 2018.
* [31] J. Zobel. When measurement misleads: The limits of batch assessment of retrieval systems. _SIGIR Forum_, 56(1), June 2022.
## 4 Working Groups
### Reality Check - Conducting Real World Studies
_Bruce Ferwerda (Jönköping University, SE, [email protected]) Allan Hanbury (TU Wien, AT, [email protected]) Bart P. Knijnenburg (Clemson University, US, [email protected]) Birger Larsen (Aalborg University Copenhagen, DK, [email protected]) Lien Michiels (University of Antwerp, BE, [email protected]) Andrea Papenmeier (Universität Duisburg-Essen, DE, [email protected]) Alan Said (University of Gothenburg, SE, [email protected]) Philipp Schaer (Technische Hochschule Köln, DE, [email protected]) Martijn Willemsen (Eindhoven University of Technology & JADS, NL, [email protected])_
Information retrieval and recommender systems are deployed in real world environments. Therefore, to understand how these systems actually behave and are experienced, we should study their characteristics in "real world studies". This raises the question: What does it mean for a study to be realistic? Does it mean the user has to be a real user of the system, or can anyone participate in a study of the system? Does it mean the system needs to be perceived as realistic by the user? Does it mean the manipulations need to be perceived as realistic by the user?
#### 4.1.1 Background & Motivation
Arguably, the most realistic users can be found on existing systems, which will typically have a sufficiently large user base. However, this raises some additional questions. Firstly, there is the question of how to sample from this user base to obtain a representative sample. Secondly, these users may have some expectations of the system, which may make them somewhat resistant to (drastic) changes. On the other hand, recruiting new users comes with its own set of challenges, discussed further in Section 4.1.2.
In a similar vein, the largest degree of "system realism" would be achieved by studying real users of an existing system. For example, log-based studies have been considered the best examples of real world studies [26] since they capture behavior in a real-life setting, with little chance of contamination or bias. However, this limits the amount of control we, as researchers, can exert, and thus the research questions we can pose and answer. On the other hand, highly controlled experiments might lack realism in terms of the system, the user experience (users knowing they are being studied) and the generalizability of the study. Realism in a study is a continuum, as illustrated in Figure 1, ranging from highly controlled experiments towards real systems with real users, and researchers need to identify the appropriate experiment type for their purpose [59].
One central question in running real world studies is the influence of measurements on the behavior and experience of users. Following the Heisenberg principle [18], it is impossible to measure without influencing. If we study existing users in an existing system, and only use behavioral measures and logs from the system we will not affect users much but it will be hard to answer our question, as the evaluation of our manipulation will be difficult. On the other hand, when we start collecting additional measures, like intermediate surveys, users will know they are part of a study and modify their behavior because of
that (Hawthorne effect [50]). Also, longer surveys might break the actual flow of system usage and demotivate people. Survey questions might provide the users with insights into the underlying research questions, resulting in unwanted demand characteristics or socially desirable answer patterns.
However, triangulating objective (behavioral) data with subjective measures will be crucial to understand how users experience the system [30], so a careful development and usage of a combination of subjective and objective measures is going to be central to balancing realism with adequate measurement. The challenge of 'How to measure' is further discussed in Section 4.1.3.
Then, we have the realism of the research question and experiment design. In any experiment, we manipulate the system, thus breaking some existing habits or patterns. Especially when studying users of an existing system, the realism of this manipulation is crucial. If users do not experience the manipulation as a realistic feature or implementation, the results may not be representative. Similarly, the degree of information given to the user may also influence the realism of the study. If we provide users with too much information, e.g., a very specific task and scenario to work from, users may perform actions they would not have in a realistic situation. On the other hand, if we provide too little information, e.g., when we introduce a new feature on an existing platform without any instruction, we require users to invest the time and effort to find out how the feature works before they can use it in the way we intended.
Another important consideration regarding experiment design is the assignment of users to different versions of a system. Should the experience of a single user be kept consistent throughout the entire study? Such between-subjects designs have the advantage of preventing any spill-over effects but users working side by side or communicating about the system might discover there are different versions of the system, accidentally revealing the experimental conditions and goals. Within-subject designs allow users to experience all experimental conditions, which increases statistical power (as we can control for participant variance) but ordering and spill-over effects have to be considered. Moreover, to make a real world study sufficiently realistic and also understand how behavior changes over time and how habits are formed, we will need to consider longitudinal studies which come with their own set of challenges discussed in Section 4.1.4.
Even when we carefully design our experiments and research questions and select the appropriate participants, we may arrive at conclusions that do not necessarily generalize beyond the domain. The tension between domain-specific experiments and generalizable findings is further discussed in Section 4.1.5.
Finally, the cost of running a real world study is typically many times higher than performing offline evaluation [59]. Therefore it is important to also consider the available research infrastructure, and promote the development of reusable research infrastructure, as elaborated in Section 4.1.6, and provide datasets in sufficiently general formats to promote reuse, as discussed in Section 4.1.7.
Figure 1: Control versus realism continuum
#### 4.1.2 Recruiting Participants
Real-world user studies require recruiting efforts to find the "right" participants for the research. As a prerequisite, researchers need to have a clear understanding of the target user group and be able to **formalize the target user characteristics**. While some research can be conducted on a user sample with few limitations, other research poses fine-grained requirements on user characteristics. In both cases, the user group needs to be carefully defined and adapted to the research problem at hand so that the user study is conducted on a sample representative of the user base [41].
Although some research communities have a broad consensus on which characteristics of participants should be reported, the RS and IR communities do not yet have a clear checklist for **reporting sample characteristics** and their information needs. Similarly, very few test collections, like the iSearch collection [37], actually report on the context and task users are in. Standardized reporting and metadata would also enable reproducibility [8] and data re-use (see Section 4.1.7). Inviting users that fit the recruitment criteria can be challenging: information about the potential participants must be available in a structured format for filtering. Especially in IR and RS, systems often rely on user profiles [31]. Such profiles would therefore not only facilitate recruitment but also the usage of the system, and would help avoid the "cold-start" problem [35]. With detailed user profiles, adhering to the GDPR and CCPA and formulating appropriate consent forms become additional points on a researcher's checklist.
Moreover, participants must be **recruited at the right moment**: people must be in the right mindset to start the study. For some user groups, e.g., professionals, finding a good timing to ask for participation is crucial. Participants also must stay motivated throughout the session (or possibly even beyond) to deliver complete data. To gather high-quality data from users in real-life, ensuring that users participate for the right reasons is important too, e.g., participants should have an internal motive (that is, an actual information need) rather than generating data for financial compensation. That said, offering appropriate incentives also works towards data quality and participant motivation [14]. For that, a thorough understanding of user needs and motivations is needed. If the task/system provides users with a real benefit and actual value, the payment might not be needed and could even reduce realism and intrinsic motivation. Without such benefits, user behavior might be mostly driven by monetary incentives and divert from user behavior in the wild. However, these aspects are not necessarily in contradiction. Carefully designed, payment combined with benefits might reinforce each other. For example, in a recent longitudinal study on a music genre exploration tool, Liang and Willemsen [34] recruited new users online and paid them per session, with the system providing the additional benefit of exploring new music genres and a personalized playlist. User drop-out was lower than usual and engagement remained high across 6 weeks and 4 sessions, despite users having to respond to a medium-sized survey after every session.
**Recruiting at the right time** can also concern the time of day, week, or season. For example, recruiting during working hours might lead to a lack of users with full-time jobs. Defining filter criteria does not ensure that the diversity of the target user group is covered. Consequently, researchers must monitor the participant group to cover the full bandwidth of the user group under investigation. Neglecting the monitoring of incoming participants could lead to under- or over-representation of certain age, gender, or profession groups [5].
The **recruitment channel** is equally important for IR and RS studies in the wild. Several online recruiting platforms exist and can be used for studies in this field [1], e.g., MTurk or Prolific, each with their own participant characteristics [13, 43]. Other online
recruiting channels include social media [41]. Offline recruiting for online experiments can pose additional challenges for participants. In some cases, IR and RS systems are already used in the wild and provide an established user base to invite for studies.
#### 4.1.3 How to Measure
The abundance of various types of data is both a benefit and a curse of real world studies. Whereas the subsection on data representation (see Section 4.1.7) covers the proper management of this data, the current subsection addresses the measurement of data from the perspective of motivation (why do we measure?), best practices (what should we measure, and how can we make measurement easier?), and issues (what makes measurement difficult in realistic studies?). As real world studies often revolve around specific tasks and use contexts (Section 4.1.5), we also address the (lack of) generalizability of measurement.
##### 4.1.3.1 Why to measure
**Conduct theory-driven research** Real-world studies allow us to go beyond optimization of offline algorithmic performance in terms of performance metrics such as Mean Reciprocal Rank (MRR), normalized Discounted Cumulative Gain (nDCG) and recall, to a fine-grained analysis of how different system parameters can influence the system's performance at a given task.
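To make these metrics concrete, the following minimal Python sketch computes MRR over a set of queries and nDCG for a single ranked list; the relevance labels are invented for illustration, and the log-based discount follows the common DCG formulation.

```python
import math

def mrr(ranked_relevance):
    """Mean Reciprocal Rank over several queries.

    Each inner list holds binary relevance labels in the order
    the system returned the results."""
    total = 0.0
    for labels in ranked_relevance:
        for rank, rel in enumerate(labels, start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)

def ndcg(labels, k=None):
    """nDCG@k for one ranked list of graded relevance labels."""
    k = k or len(labels)
    dcg = sum(rel / math.log2(rank + 1)
              for rank, rel in enumerate(labels[:k], start=1))
    ideal = sorted(labels, reverse=True)
    idcg = sum(rel / math.log2(rank + 1)
               for rank, rel in enumerate(ideal[:k], start=1))
    return dcg / idcg if idcg > 0 else 0.0

# Two hypothetical queries: the first relevant result sits at rank 2 and rank 1.
print(mrr([[0, 1, 0], [1, 0, 0]]))   # (1/2 + 1) / 2 = 0.75
print(ndcg([3, 2, 0, 1], k=4))       # graded relevance, roughly 0.99
```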
Running a real world study requires researchers to think carefully about this "task", the right way of measuring how well the system performs at this task, and how the performance is impacted by the different system parameters. Tasks may range from highly domain-specific to more general, as discussed in Section 4.1.5. This domain-specificity means that if such studies aim to make generalizable contributions to an existing body of scientific knowledge, they should aim to explain why certain system parameters lead to higher performance.
Conducting theory-driven research requires additional measurement of intermediate (or mediating) variables that provide an explanation for the variance in performance indicators caused by system manipulations. Such mediating variables are often inherently user-centred; they can be characterized as subjective system aspects (users' perceptions of the manipulations) and user experience variables (users' self-relevant evaluation of the user experience) [30]. These can be measured with questionnaires, but there may exist behavioral proxies.
**Define an evaluation target** In realistic studies, the evaluation target must shift from system performance to a multi-faceted consideration of stakeholder satisfaction [59].
As the main goal--and hence the standard metrics--of traditional IR and RS research is to optimize system performance, it avoids the question of who these metrics are optimized for. In realistic studies, metrics must be optimized to satisfy the stakeholders of the system, and the goals of these stakeholders--and hence the metrics to measure these goals--may not always align. Most prominently, measuring the satisfaction of the end-users of a system has traditionally involved user experience metrics like satisfaction, decision confidence, and self-actualization [30], while system owners tend to be interested in metrics related to conversions, such as click-through rate, session length and basket value [22, 21].
##### 4.1.3.2 What to measure
**Carefully determine what to measure** Realistic studies must capture a variety of measures that are closely related to the evaluation target and/or can explain how/why certain system aspects influence the evaluation target.
Realistic studies tend to support a variety of user behaviors, and researchers are encouraged to instrument their research systems to capture these behaviors, such as page visits, ratings, and purchases. At the same time, though, considerations of end-user privacy may prescribe that measurement be limited to the metrics that are essential to answer the research questions. It is important to acknowledge here that a user's behavior is not always an accurate representation of their own longer-term goals (let alone the goals of the system owner). As the "true" evaluation target may be difficult to measure (i.e., "user satisfaction" is an inherently latent variable, and "company profit" is an aggregate measure that depends on many other variables), researchers must decide which of the measurable behaviors are most closely related to the evaluation target (see also Section 4.1.3.3).
An important consideration here is that certain implicit behaviors may also provide valuable insights--especially when taking the importance of explanation into consideration. Users who are ignoring a recommendation, quickly navigating away from a page, or abandoning a shopping cart are providing important insights into their experience.
Users' subjective evaluations may also be important to measure: such measures may be a more accurate representation of their goals than behaviors, and even in cases where the value of behavioral metrics is clear, subjective evaluations can be used to explain the occurrence of certain behaviors. Subjective evaluations are inherently latent and must be measured using "indicator variables" [11]. The best practice to measure such evaluations is to use multi-item measurement scales, but administering such scales may be considered an intrusive practice (more suggestions on how to best do this are provided below).
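As an illustration of working with multi-item scales, the sketch below computes Cronbach's alpha, a standard internal-consistency check for such scales; the Likert responses are invented, and the 0.7 threshold is a common rule of thumb rather than a hard rule.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of scale items."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses to a three-item "satisfaction" scale.
responses = np.array([
    [5, 4, 5],
    [3, 3, 4],
    [4, 4, 4],
    [2, 1, 2],
    [5, 5, 4],
])
print(round(cronbach_alpha(responses), 2))  # about 0.94; values above ~0.7 suggest internal consistency
```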
Process data can also be used to explain how an evaluation target is or is not met. Process data consists of particularly granular navigational data--usually at the level of mouse-overs, intermediate clicks, or mouse movements--that can be used as evidence of a user's decision processes (e.g., which search result to visit, which product to buy, which movie to watch) [58, 49].
**Make things more measurable** Realistic studies must trade off depth of measurement with user burden: more insightful measures are often more obtrusive, thereby reducing realism and participation. Below we provide suggestions on how to reduce the obtrusiveness of measurement.
While process measures are very useful to explain users' decision processes, precise process measures tend to require a certain system structure. For example, users' attention is easier to measure if certain information is hidden behind a click or a mouse-over, so that the user must perform a measurable action to acquire said information. More generally, behavioral data tend to be noisy due to the influence of external factors and system factors. The latter can be attenuated by reducing the number of available features and/or the amount of system personalization. Conversely, one can boost the "signal" to be measured by making the manipulated system aspect (e.g., a list of recommendations from a variety of different algorithms) more prominent in the system. Importantly, though, all of these practices may reduce the realism of the study.
Moreover, while subjective measures and process measures are invaluable in realistic studies--especially when it comes to explanation--subjective measurement is also more intrusive. Interrupting the user to fill out a questionnaire makes the interaction less realistic, and may cause asymmetric drop-outs from the study. An important consideration in this regard is when to measure users' subjective experience. The ideal but most intrusive timing is during the interaction; if the measurement occurs after the experience, it will be a retrospective and aggregate account of their experience. Aggregate retrospective evaluations of experiences have been shown to be unduly influenced by strong negative events (peaks), and
events that occurred at the end of the experience [24]. Finally, if the measurement occurs too long after the experience, it may no longer accurately reflect the experience, as the user may simply no longer remember the experience. Similarly, in certain contexts users' subjective evaluations and even their interaction patterns may be inaccurate representations of their true interests--people's responses may fall prey to desirability bias, framing and default effects, or other heuristic influences that must be accounted for in measurement.
As a final consideration, one could suggest that rather than minimizing (the obtrusiveness of) measurement, one could attempt to promote measurement, e.g., by providing easily accessible and/or gamified feedback elements. Evidently, this may reduce the realism of the study.
**Provide qualitative insights** Realistic studies benefit from qualitative evaluations that can be triangulated with quantitative metrics.
The metrics discussed above are well-suited for statistical evaluation--either in a correlational study, an intervention study, or a controlled experiment. When studies are sufficiently large, statistical significance may not be a suitable guideline to decide on the relevance of a finding, as even very small effects become significant when the sample size is large. In this case, researchers should focus on whether the size of the effect constitutes a meaningful contribution. Conversely, some real world studies may not attain the precision or sample size needed for statistical significance. Such studies may still provide valuable insights by treating them as pilot studies for more concerted (but perhaps less realistic) evaluation efforts.
If large sample sizes cannot be attained, a better approach may be to conduct a qualitative study. Regardless, there is immense value in deep, qualitative insights that such studies can provide. For example, one can conduct Grounded Theory studies to establish theories of users' psychology [9], or Contextual Design studies to gain a thorough understanding of users' experiences and their system needs [20]. Such studies are particularly useful when investigating evaluation targets that are highly context-dependent and/or not yet very well understood, such as fairness [23], serendipity [6] or surprise [25]. And while statistical methods are often not suitable for qualitative data, established methods exist that allow for systematic comparisons between users and/or systems (cf. "constant comparison" [9]).
Qualitative studies vary from purely observational studies to in-depth user interviews, and from single sessions to long-running studies where the researcher is "embedded" in a team or organization. As realism is often a prime consideration in such studies, other scholars have covered this aspect in much detail [20]. Note, though, that the collection and analysis of qualitative data are particularly labour-intensive, especially when they must integrate into a larger real world research infrastructure. It is also important to carefully report on qualitative procedures (e.g., procedures for "coding" qualitative data) and findings (e.g., by considering the researchers' positionality in conducting the study [9] and by providing ample evidence in the form of user quotes).
##### 4.1.3.3 Towards best practices in measurement
**Standardize measurement practices** To expedite generalizable research with real world systems, the field must adopt a set of theoretically-grounded measurement principles.
While most system-centric evaluation metrics in RS and IR have relatively standardized definitions that enjoy mostly universal adoption, this is not true for user behavior and experience metrics. While this is partially due to the highly contextual nature of relevant metrics in such studies, it may still be beneficial to identify a set of standardized metrics--or, at the very least, measurement principles that can improve the robustness of our evaluations
and expedite comparisons between studies.
On the subjective side, the field could create a repository of validated measurement scales that have been proven useful in past studies. Care must be taken, though, that such a repository does not become an exclusive source of measurement instruments--there are usually limits to the applicability of existing scales. Researchers could be encouraged to particularly study the measurement principles of existing scales, such as how well they generalize to new tasks, contexts, and user groups (this can be done through the statistical process of "measurement invariance testing" [56]). Another way to address the context-specificity of measurement is to provide guidelines for researchers to adapt existing scales to their particular context, as well as guidelines for the development of completely new scales [11].
Finally, it is best if the selection, adaptation and development of scales are rooted in a theoretical framework, such as the Knijnenburg et al. [29] framework for the user-centric evaluation of recommender systems. This framework should be extended beyond recommender systems and augmented with theoretical considerations regarding users' long-term behaviors and goals.
**Triangulate measures across multiple studies** To develop a set of robust and relevant metrics, IR and RS researchers should conduct a variety of studies--offline evaluations, controlled experiments, and A/B tests and observational studies with real world systems--and triangulate the data collected across these evaluation efforts.
Replication is a fundamental principle of robust scientific progress. Researchers who conduct realistic studies have an opportunity to conduct "conceptual replications" [10], where they try to replicate the findings from one domain (or one type of study) in their specific real world context. Such conceptual replications can particularly benefit from a theoretical framework like the Knijnenburg et al. framework [29], which can provide a high-level understanding of how the user experience of systems comes about (supporting the goal of explanation), provide guidance for the generation of measurement instruments and hypotheses for in-depth empirical research, and serve as a common frame of reference to compare and integrate findings across studies in different real world contexts. Furthermore, the Knijnenburg et al. framework specifically encourages the triangulation of user behaviors with their subjective evaluations--this grounds the subjective evaluations in observable actions, and in turn, explains the observable actions with subjective evaluations.
Relatedly, an important goal of conducting multi-faceted measurements in realistic studies is to test the validity and universality of the system-centric metrics that are commonly used in IR and RS research. Do these metrics correlate with positive, long-term, real world outcomes? In what contexts do they fail, and are there better system-centric metrics to optimize in these settings? As offline studies are likely not going away anytime soon, realistic studies can provide the all-important "reality check" that such studies need to validate their approach. Conversely, real world studies could provide a platform for researchers to test whether the offline performance of their solutions generalizes to a real world context. One could even create leaderboard-style challenges for each real world system to standardize this approach.
**Measure unobtrusively, where possible** To maintain realism, researchers should aim to measure things unobtrusively wherever possible.
As mentioned in our introductory subsection (Section 4.1.1), it is impossible to measure users without influencing them. So while subjective evaluations are invaluable to better understand users' experiences, it would be better for the realism of our studies if such obtrusive measures could eventually be avoided. This could be supported by a concerted effort to establish behavioral proxies for subjective measures: which user behaviors best correlate with, e.g., user satisfaction? For example, Ekstrand et al. [12] showed that objective measures of diversity, novelty and accuracy correlated strongly with subjective measures based on items from a survey. In commercial systems, item ratings may--or may not--be a good proxy for user interests [38]. In dialogue-based systems, users' phrasing or tone of voice may be an indicator of their satisfaction or frustration. The answer to this question is likely highly context-dependent, so each real world study should identify its own best behavioral proxy metrics.
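As a minimal sketch of how such a proxy candidate might be screened, the following snippet correlates a behavioral metric with a survey-based satisfaction score; both variables and the per-user aggregation are hypothetical.

```python
from scipy.stats import spearmanr

# Hypothetical per-user aggregates: mean dwell time (seconds) on recommended
# items, and the same users' survey-based satisfaction scores (1-7 scale).
dwell_time   = [12.1, 45.3, 30.2, 8.7, 60.5, 22.4, 15.9, 51.0]
satisfaction = [2,    6,    5,    1,   7,    4,    3,    6]

# A strong rank correlation would make dwell time a candidate proxy metric.
rho, p_value = spearmanr(dwell_time, satisfaction)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```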
Similarly, researchers could benefit from easily measurable proxy metrics for longer-term (behavioral) outcomes. As outlined in Section 4.1.4, conducting longitudinal studies is a complicated affair, so the establishment of good proxy metrics could help set realistic long-term evaluation goals in studies that run over a shorter time span. Again, the best proxies for longer-term outcomes are likely context-dependent, so each real world study should aim to identify its own best proxies before reverting to shorter studies.
**Conduct appropriate statistical evaluations** As real world data is messy and complex, researchers must take care to conduct the appropriate statistical evaluations of their study data.
Using the guidelines for measurement outlined above, researchers conducting realistic studies will likely collect datasets that are complex and longitudinal: users may have multiple sessions, may interact in groups, are tracked over time, and can drop out of and into studies at any given moment. Conducting statistical evaluations on such data is not straightforward--aggregating data to a point where simple statistics apply likely wastes much of the benefit of conducting realistic studies, so complex statistical methods are likely required to carefully analyze the data. Calculating the required sample size (both in terms of the number of users and the number of measures per user) is also not straightforward [7].
A potential benefit of longitudinal data is that such data can be used to analyze "cross-lagged panel models" [51], where metric A at timestep n is regressed on metric B at timestep n-1 and vice versa. This allows researchers to establish the causal order between metrics.
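A minimal sketch of this idea, using two hypothetical metrics (engagement and satisfaction) measured in two waves, with plain OLS regressions standing in for a fully specified panel model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
# Hypothetical two-wave panel: engagement and satisfaction per user.
engagement_t1   = rng.normal(size=n)
satisfaction_t1 = 0.3 * engagement_t1 + rng.normal(size=n)
engagement_t2   = 0.5 * engagement_t1 + 0.2 * satisfaction_t1 + rng.normal(size=n)
satisfaction_t2 = 0.5 * satisfaction_t1 + rng.normal(size=n)

df = pd.DataFrame(dict(e1=engagement_t1, s1=satisfaction_t1,
                       e2=engagement_t2, s2=satisfaction_t2))

# Each wave-2 metric is regressed on both wave-1 metrics; comparing the
# cross-lagged coefficients (s1 -> e2 vs. e1 -> s2) hints at the causal order.
print(smf.ols("e2 ~ e1 + s1", data=df).fit().params)
print(smf.ols("s2 ~ s1 + e1", data=df).fit().params)
```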
If studies are conducted on a real world system, then it is important to establish a baseline measurement of user behavior and subjective evaluation. Moreover, if this system is continuously updated, this baseline metric must be continuously updated as well.
Subsequently, researchers must aim to detect trends in the data that are caused by their interventions. Such trends may be difficult to detect, as external factors (e.g., seasonal patterns) and the effects of multiple overlapping studies influence the study data simultaneously. This means that the data must be "de-biased" to isolate the effect of the intended study. Another consideration is that study samples may not be representative (see Section 4.1.2), which may introduce bias in the statistical results. Stratified sampling and weighting may be used to avoid such biases.
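As a minimal illustration of such de-biasing, the sketch below post-stratifies a skewed sample by age group against assumed population shares; all numbers are invented.

```python
import pandas as pd

# Assumed shares in the platform's real user base.
population_share = {"18-34": 0.50, "35-54": 0.35, "55+": 0.15}

# Hypothetical study sample that over-represents the youngest group.
sample = pd.DataFrame({
    "age_group": ["18-34"] * 70 + ["35-54"] * 20 + ["55+"] * 10,
    "clicks":    [3] * 70 + [5] * 20 + [8] * 10,
})

# Weight each participant by population share / sample share of their stratum.
sample_share = sample["age_group"].value_counts(normalize=True)
weights = sample["age_group"].map(lambda g: population_share[g] / sample_share[g])

naive    = sample["clicks"].mean()
weighted = (sample["clicks"] * weights).sum() / weights.sum()
print(f"naive mean = {naive:.2f}, post-stratified mean = {weighted:.2f}")
```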
A final statistical consideration in real world studies is that most study participants will have an established interaction history with the system before the study starts. Their past experiences may "spill over" into subsequent evaluations. It is thus possible that they may be biased against (or in favour of) changes made to the system as part of the experimental study. Ideally, such systems would have a steady stream of new users that can be used to avoid such effects.
#### 4.1.4 Longitudinal Studies
Longitudinal studies conduct continuous measurements on their test subjects over a prolonged period of time. This temporal aspect provides opportunities to increase our understanding of the evolution of user experiences and behaviors over time in a way that does not only capture factors related to users' initial acceptance of a system or technology but also what influences their prolonged usage. Although longitudinal studies provide extended insights on experiences and behaviors and therefore contribute to a more realistic understanding of users, they are often considered too time-consuming and cumbersome to conduct [32]. We have defined several challenges and opportunities for longitudinal studies.
**Types of longitudinal studies** The strength of longitudinal designs lies in revealing behavioral and attitudinal changes of users over time. In the most traditional form, longitudinal studies use the same participants over the course of the study (so-called panel studies). However, measuring temporal changes within panel studies comes with its own challenges. For example, researchers must keep participants motivated to continue their participation in the study. These types of longitudinal studies are particularly susceptible to attrition (e.g., missing data due to non-returning dropouts) [42]. Attrition becomes a problem when complete data is systematically different from missing data, as the impact of missing data can accumulate over time.
Time is an important factor when addressing attrition. Dropouts during a longitudinal study typically occur when the study is too long, or the sampling rate is too high (in particular for non-behavioral studies). Hence, careful consideration of temporal aspects within longitudinal studies is crucial to keep participants motivated. Besides time aspects, there are several alternative types of longitudinal studies [39] that can help to circumvent the negative effects of panel studies:
1. A cohort study: participants are drawn from a sample consisting of people sharing the same characteristics and events of interest
2. A retrospective study: analyzing historical data (e.g., offline data)
A cohort study allows for flexibility in which participants are used at a certain point in time, as long as the participants overlap in the characteristics of interest. This lightens the load on participants who would otherwise be participating continuously throughout the study. Alternatively, a retrospective study would make inferences based on historical data instead of collecting new data. Existing datasets such as the LastFM14 and MovieLens15 datasets could be used to analyze longitudinal behaviors in retrospect; a minimal sketch of such a retrospective analysis is given after the footnotes below.
Footnote 14: E.g., http://www.cp.jku.at/datasets/LFM-2b/
Footnote 15: https://grouplens.org/datasets/movielens/
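As a minimal sketch of such a retrospective analysis, the snippet below derives per-user monthly activity and mean rating from a MovieLens ratings file, assuming the standard userId/movieId/rating/timestamp columns and a local ratings.csv:

```python
import pandas as pd

# ratings.csv from a MovieLens release: userId, movieId, rating, timestamp.
ratings = pd.read_csv("ratings.csv")
ratings["month"] = pd.to_datetime(ratings["timestamp"], unit="s").dt.to_period("M")

# Per-user monthly activity and mean rating: a simple retrospective view of
# how engagement and feedback evolve over a user's lifetime on the platform.
activity = (ratings.groupby(["userId", "month"])
                   .agg(n_ratings=("rating", "size"),
                        mean_rating=("rating", "mean"))
                   .reset_index())
print(activity.head())
```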
**Confounding factors** Considering the reliability and robustness of the collected data, not only the study design but also user and platform aspects play a role. Particularly in paid studies, participants could participate multiple times by creating multiple accounts, or could influence one another when they are acquainted and discuss the study. Such activities are difficult to detect and create potential confounds in the collected data. There are also several challenges on the platform side, for example, adapting and changing the experimental platform based on interactions observed during the longitudinal study. Adapting platform aspects based on participant interactions may contribute to the realism of the study (compared to a static platform) but can also confound how the data should be interpreted.
**Analysis** A challenge with longitudinal studies is how to analyze the data meaningfully. Although behavioral data collection might be continuous (unobtrusive), attitudinal data is collected less frequently as this often involves questionnaires (obtrusive). Hence, the challenge in the analysis is how to distinguish correlation from causation within the collected data. A potential way to address the aforementioned issue is to triangulate the analysis between unobtrusively collected data and obtrusively collected data.
#### 4.1.5 Domain-specific vs. General
In both RS and IR, real world experiments are often done in specific domains, for example, IR in the patent [44] and medical [40, 53] domains and recommender systems in the fashion [33] and travel [28] domains. The domains are specified by the data used, users, tasks, etc. These domains can be defined at varying levels of granularity, e.g., scientific paper search or recommendation as a domain, vs. a more specific sub-domain such as physics paper search or recommendation. Another example would be medical search as a domain, with medical search for dentists and for radiologists as sub-domains. While classification systems for research areas like DFG Subject Areas16 or the Common European Research Classification Scheme (CERIF)17 exist and might be a starting point, they do not capture all definitions of domains.
Footnote 16: https://www.dfg.de/en/dfg_profile/statutory_bodies/review_boards/subject_areas/index.jsp
Footnote 17: https://www.arrs.si/en/gradivo/sifranti/sif-cerif-cerfs.asp
There is much value in small, in-depth studies, but the results from such studies are hard to generalize. With respect to research infrastructures (see Section 4.1.6), evaluation platforms should be customizable for different applications and domains but are most likely only one-shot implementations that cannot be used in different contexts. The challenge is therefore that domains tend to be treated as silos and there are few attempts to learn general principles that apply across multiple domains. Since the results of domain-specific studies cannot be compared at a numerical level, they must be compared at a conceptual level to allow for generalization. This can be seen as a continuum from general widely-applicable knowledge at one end to domain-specific knowledge at the other end, and the aim would be to shift knowledge from domain-specific to general. The widely applicable knowledge should then also allow theory to be developed--this theory would then allow researchers to make predictions about new domains, which aids the process of building tailored solutions and platforms for specific needs. This is illustrated in Figure 2.
An approach adopted in the DoSSIER project in the area of Professional Search18 is to classify domains by knowledge task types [55], as shown in Figure 3. This would allow similarities between different domains to be more easily identified, which would assist in the generalization of results. Evaluations of approaches could then be done over similar tasks in different domains, rather than within specific domains, referred to as (semi-)replication19, conceptual replication, or transitivity. Given the specifications of a new domain, the generalized knowledge and theory could be used to make predictions about how various approaches would work in the domains before any implementation or experiments are done. The ability to make predictions is also important for domains and tasks for which ethics and privacy concerns prevent large-scale experiments from being carried out.
Footnote 18: https://dossier-project.eu/
Footnote 19: In the sense of the ACM's definition of reproducibility: "Different team, different experimental setup", see https://www.acm.org/publications/policies/artifact-review-and-badging-current
Such a classification would also assist in systematic reviews and meta-analyses across domains. Meta-analysis is a powerful tool to accumulate and summarize the knowledge in a research field [15]. While meta-analyses are very common in the medical area, they are more challenging in IR and RS as experiments tend to be less comparable and hence less amenable to statistical meta-analysis. A challenge here would be the different types of studies done, e.g., a controlled randomized trial is likely more easily generalizable than a large search log study. The classification should also facilitate a move toward more task-specific workshops (e.g., ALTARS 2022) as a complement to domain-specific workshops (e.g., academic search in medicine or the social sciences [48] and legal IR workshops). The classification could also assist in identifying domains or task types for which too little experimental work has been done, especially to include domains that are most relevant for communities that are outside the commonly considered WEIRD (Western, educated, industrialized, rich, democratic) communities [19]. It could also assist in identifying important theoretical questions and planning experiments that should be conducted to answer them (divide and conquer).
Footnote 22: https://altars2022.dei.unipd.it/
Challenges foreseen for this approach are:
* How should domains be differentiated? Medical search for dentists might be different from medical search for radiologists, or they may be considered as part of the broader domain of medical search. Where are the lines between different domains?
* What are the incentives for researchers to work on generalized insights? Solutions to domain-specific problems are likely more publishable.
* It is unlikely that we can find generalizable knowledge or theory for every aspect under evaluation. How can such limits be recognized?
* It makes sense to start this approach at a smaller scale as a proof-of-concept. How do we identify which domains and tasks to start from?
* Generalizable theory is also about people/users, not only about the systems. What does it mean for users to behave differently in some domains, and how can we generalize knowledge about user behaviors across domains?
#### 4.1.6 Research Infrastructure
A well-functioning research infrastructure can significantly speed up and improve research in several ways, e.g., by lowering entry requirements, reducing the cost of conducting research, and making it possible to work on common goals from common standards while also increasing comparability between results [57].
Figure 2: Theory development on a continuum from domain-specific to more general knowledge.
**task name:** _the unique name assigned to the task, e.g., Pre-filing Patentability Search_
**definition:** _a brief definition of the task_
**rationale:** _why is the task carried out? what should carrying out the task achieve? e.g., the task should lead to the identification of one or more patents that invalidate the query patent._
**initial information available:** _what information is available at the beginning to start the search? e.g., a patent application document_
**information source:** _what information must be searched? e.g., all patent and non-patent information published prior to today._
**searcher:** _who usually performs the search? e.g., subject expert or librarian_
**query formulation methodology:** _how are the queries formulated? e.g., extraction of keywords from the query document and formulation of a Boolean query using synonym expansion lists_
**types of tools used:** _what tools are commonly used in this type of search? e.g., clustering results, merging results, Boolean search, ..._
**search stopping criteria:** _what criteria are used to decide when the search process must be stopped? e.g., a reasonable number of documents returned by a Boolean query_
**output of the search:** _what does the result list look like? e.g., a list of patents matching the Boolean query in reverse chronological order._
**how/if the search is documented:** _is the search documented in some standard way? e.g., queries are placed into a search report along with the number of documents retrieved per query._
**post-processing, interpretation, and analysis of search results:** _what is done with the result list once it is obtained? e.g., every patent is checked for relevance by an expert; if relevant, it is marked as X or Y..._
**any caveats to consider in the analysis or its interpretation:** _e.g., the searcher needs to have a good understanding of what the requester is looking for to enable a quick review of the answers for relevance._
Figure 3: Task definition template for professional search developed in the DoSSIER Project [55].
Here, we consider challenges when using existing infrastructures and give overall recommendations for creating new research infrastructures that can facilitate real world studies.
##### 4.1.6.1 Challenges of using existing infrastructure
We distinguish three types of research infrastructure used for real world studies. First, we have frameworks that can be (re)used to conduct small-scale user studies. Examples are the 3bij3 framework by [36], the Experiment Support System (ESS) and the Python Interactive Information Retrieval Evaluation (PyIRE) [16]. We will refer to these as "frameworks". Secondly, there is research infrastructure that is kept continuously running for longer periods of time. Examples are the MovieLens movie recommendation platform [17] and the Plista Open Recommendations Platform [54, 27], which has since been discontinued. We will refer to these as "live platforms". Finally, CLEF includes several labs that address challenges in both the IR and RS fields with offline datasets collected from real world systems for a specific task [3]; similarly, the ACM Conference on Recommender Systems (RecSys) has run challenges since 2010 [33, 2, 45]. We will refer to these as "real-world task datasets". Below we discuss the key aspects to consider when deciding to reuse existing research infrastructure.
##### 4.1.6.2 Recruiting participants
A clear advantage to reusing existing live platforms is that there is often no need to recruit new participants, which comes with its own set of challenges, as discussed in Section 4.1.2. The platform provides either access to real users on a real product, e.g., Plista, or may have obtained sufficient traction because of its value to the community, e.g., MovieLens. Similarly, real world task datasets are usually collected from live platforms, and therefore do not require the recruitment of participants. Frameworks, then, do not share this advantage.
##### 4.1.6.3 Customizability/Flexibility
Frameworks allow for the most flexibility of all the available options. Provided sufficient knowledge of the tool or some programming experience, frameworks can be customized to evaluate a task of choice and to create different experimental conditions at will. At the other end of the spectrum, we find real-world task datasets, where the task is set up front and there is no flexibility to change the data collection protocol or decide on experimental conditions. In between, we find the live platforms, which may have different degrees of flexibility. Flexibility is often in tension with the openness of the platform to the broader research community. On live platforms, users may have some expectations of the system; they may therefore be somewhat resistant to change, which limits flexibility. This could be overcome with a steady stream of new users who do not yet have these expectations of the system; however, on all platforms, only a few users will be converted to loyal users who use the platform over longer periods of time.
Examples of this tension between flexibility and openness can be found in the RS community. While the NewsREEL challenge allowed researchers to directly test algorithms with real users on their platforms, the task was set up front, i.e., to obtain the best possible click-through rate, and the data collection protocol was fixed. Here, flexibility was limited in favor of broad community access. On the other hand, the MovieLens movie recommendation platform regularly releases new offline datasets but has thus far restricted access to
its live platform to researchers within the GroupLens organization. However, research coming out of GroupLens is much more varied: it includes a larger variety of tasks, changes experimental conditions and uses a variety of data collection protocols. Here, flexibility is preferred over broad access.
##### 4.1.6.4 Rich data
When an infrastructure draws on data from running systems with many active users, realistic behavioral data can be collected. Collecting additional rich data, which can be of pivotal importance for research, can be a challenge, though, as system owners may be reluctant to, e.g., allow pop-up questionnaires that might annoy or drive users away. Even when these are allowed, the risk of self-selection bias is high. User behavior in a running system can also appear messy and non-targeted, and display many confounding properties not related to the overall research goals. System updates can change the system properties and affect user behavior, especially in longitudinal studies [48].
##### 4.1.6.5 Recommendations for creating new infrastructure
When existing research infrastructure is unable to support the researcher's needs, new research infrastructure has to be built.
Here we put forward some recommendations for building new research infrastructure so that it can benefit the entire research community, as building new infrastructure can be a lengthy and costly process.
The first challenge lies in obtaining sufficiently large content corpora, e.g., movies, articles or texts. An important consideration here is that after some amount of time, data will inevitably become stale. Therefore, whenever possible, we propose to integrate with APIs that give access to live content corpora that can be kept up-to-date over longer periods of time. The MovieLens platform, for example, integrates with TMDb, and as such has remained relevant for over a decade [17].
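A minimal sketch of such an integration, assuming TMDb's public v3 REST API and its `/movie/popular` endpoint; the API key is a placeholder, and the number of pages pulled per refresh is an arbitrary choice:

```python
import requests

TMDB_API_KEY = "YOUR_API_KEY"  # placeholder: obtain a key from themoviedb.org
BASE_URL = "https://api.themoviedb.org/3"

def fetch_popular_movies(page=1):
    """Pull one page of currently popular movies from the live corpus."""
    resp = requests.get(
        f"{BASE_URL}/movie/popular",
        params={"api_key": TMDB_API_KEY, "page": page},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

def refresh_corpus(corpus, pages=5):
    """Merge freshly fetched items into the local corpus, keyed by the provider's item id."""
    for page in range(1, pages + 1):
        for movie in fetch_popular_movies(page):
            corpus[movie["id"]] = movie  # overwrite stale entries with fresh metadata
```

Scheduling such a refresh periodically (e.g., nightly) keeps the corpus from going stale without re-collecting it from scratch.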
Another challenge lies in developing the system, getting the infrastructure up and running, maintaining it, and providing support both for users of the system and for researchers who wish to use it. Here, we recommend sufficient 'realism': funding applications should allocate sufficient funds towards software and infrastructure development, as well as the costs of running and supporting research infrastructure over prolonged periods of time. Conversely, funding institutions that wish to support reusable research infrastructure should allow for larger budget applications to cover the cost of developing and running research infrastructure. An interesting paradox is revealed here: the more successful the platform is with users, the more interesting it becomes for researchers, but also the higher the costs to keep it up and running.
Finally, researchers who wish to create reusable research infrastructure should dedicate significant time and effort towards documenting the system.
#### 4.1.7 Data Representation
Information retrieval and recommender systems are critical components of modern information technology, as they allow for the efficient retrieval and recommendation of relevant information. However, for these systems to function effectively, they require underlying data to be present. This is true both in the real world, where these systems are used to process vast amounts of information, as well as in research, where the systems are being developed and tested. Without access to data sets, the research communities would not be
able to perform the necessary studies and experiments to further our understanding of these systems.
Given the importance of data in information retrieval and recommender systems research, data representation is one of the cornerstones of this field. In order for datasets to be usable by the research communities, we should strive for a common understanding of what we mean by data, how we represent data, and what we communicate by (and in) data. This includes not only the format of the data but also the semantics and meaning behind the data, as well as the methods used to collect and pre-process the data [47].
Furthermore, data representation also includes the way data is organized, indexed, and stored, as well as how it can be queried and analyzed. By focusing on data representation, we can ensure that the datasets used in information retrieval and recommender systems research are of high quality and that they are accessible and usable by the entire research community. This in turn will facilitate the progress of research in our fields, and ultimately lead to the development of more effective information retrieval and recommender systems.
When sharing data, it is important to communicate the necessary details for understanding the context, use cases, and utility of the data. This includes providing detailed data descriptions, as well as data insights, which can be used by potential data users to understand the utility of the data for the intended research purposes. This information can help users to determine whether the data is appropriate for their research needs, and can also help to facilitate collaboration and sharing of data within the research community.
To ensure the reproducibility of research and to promote a deeper understanding of the data used, it is essential that researchers provide detailed information about the origin, version, and processing of the data. This includes information about the source of the data, any pre-processing or cleaning that was done, and any specific versions or updates of the data that were used in the research [4].
One way to achieve this is by adopting the practice of versioning data sets, similar to how software is versioned. This would facilitate easy identification of the specifics of the data set used in a particular study, making it simpler for others to replicate or build upon previous work. Furthermore, it would also allow researchers to clearly communicate which version of the data was used, in turn making it easier for others to access the same data set.
It is also important to remember that data processing is a crucial step in adapting certain datasets to specific use cases. Therefore, introducing the possibility of easily creating and keeping track of unique identifiers for the specific processed data sets used in research studies would facilitate reproducibility of studies. By doing so, researchers can clearly identify the specific processed data set that was used in a particular study, allowing others to easily access and use the same data set for replication or follow-up studies [46].
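A minimal sketch of what such versioning could look like, using content-addressable identifiers; the file name and processing parameters are illustrative only:

```python
import hashlib
import json
from pathlib import Path

def raw_version_id(path: Path) -> str:
    """Content hash of the raw dump: identical bytes always yield the same version id."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()[:12]

def processed_version_id(raw_id: str, processing_params: dict) -> str:
    """Derive an id for a processed variant from the raw id plus the exact
    pre-processing configuration (e.g., filtering thresholds, split seed)."""
    canonical = json.dumps(processing_params, sort_keys=True)
    return hashlib.sha256(f"{raw_id}:{canonical}".encode()).hexdigest()[:12]

# Reporting both identifiers in a paper lets others verify they are replicating
# on exactly the same dataset variant.
raw_id = raw_version_id(Path("ratings.csv"))  # illustrative file name
proc_id = processed_version_id(raw_id, {"min_ratings_per_user": 5, "split_seed": 42})
```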
While keeping track of specific data versions, we also need to adopt practices compatible with regulations such as the General Data Protection Regulation (GDPR), making sure that users represented in data sets are sufficiently anonymized and given the opportunity to retrospectively have their data deleted. This may create problematic scenarios if the original data is not sufficiently anonymized. However, this can in turn be used as a motivation for clear and concise privacy policies on how to generalize, perturb, or, as a last resort, censor data in order for it to be released to a wider community [52].
We should remember that data representation within systems may differ immensely between systems. However, when sharing data externally, it is important to ensure that the data representation is realistic in terms of what the data actually express and how. This includes aligning the data types used with what the values actually are, for example, using integers for positive whole numbers and floats for non-integer decimal numbers. Additionally, it is important to
convey the quality of data realistically and to clearly communicate the purposes for which the shared data is created. This can help potential users to understand the limitations and potential biases of the data and can help to ensure that the data is used appropriately.
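One lightweight way to make the representation explicit is to ship a declared schema alongside the data and validate against it on load; the column names below are purely illustrative:

```python
import pandas as pd

# Illustrative schema for a shared interaction log: the declared types mirror
# what the values actually are (whole-number ids as integers, ratings as floats).
SCHEMA = {
    "user_id": "int64",
    "item_id": "int64",
    "rating": "float64",   # non-integer decimal values
    "timestamp": "int64",  # Unix epoch seconds
}

def load_interactions(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    missing = set(SCHEMA) - set(df.columns)
    if missing:  # fail fast if the file does not match the documented representation
        raise ValueError(f"columns missing from shared dataset: {missing}")
    return df.astype(SCHEMA)
```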
We generalize data into two specific data types commonly used in information retrieval and recommender systems, namely, **living** data, and **archival** data.
Living data refers to continuously updated data. Living data can be made available in various different formats, including continuous and uniquely identifiable downloadable snapshots, or through a so-called firehose where data is continuously delivered through an API endpoint or similar. While snapshots can provide a unique identifier making it easy to trace back to the exact version of the data, a firehose instead provides an easier way to maintain local data repositories containing up-to-date versions of the source data.
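The sketch below illustrates the firehose variant, assuming a hypothetical endpoint that streams one JSON document per line; real providers will differ in protocol and authentication:

```python
import json
import requests

FIREHOSE_URL = "https://example.org/firehose"  # hypothetical streaming endpoint

def consume_firehose(local_store: dict) -> None:
    """Keep a local repository in sync with a newline-delimited JSON stream."""
    with requests.get(FIREHOSE_URL, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue  # skip keep-alive blank lines
            doc = json.loads(line)
            local_store[doc["id"]] = doc  # the newest version of each item wins
```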
Furthermore, keeping in mind the data representation, it is important to keep the data in a format which is easily understandable, processable and accessible. This includes but is not limited to the type of format (text, image, audio, video etc.), the language of the data, the structure of the data, the size of the data, etc.
Overall, paying attention to data representation and sharing it in a clear and informative manner is crucial for the advancement of research in information retrieval and recommender systems. It can help to ensure that data is used appropriately, and can help to facilitate collaboration and sharing of data among members of the research community.
#### 4.1.8 Next Steps
The following steps should be taken to carefully determine the **goals** of conducting real world studies:
* Classify domains by knowledge task types
* Establish context-specific evaluation targets
* Carefully consider users' information needs when conducting studies
* Develop a checklist of sample characteristics and user task details that should be collected and reported for each study
The following **resources** would expedite the design, execution and evaluation of real world studies:
* Provide researchers with access to flexible real world research infrastructure
* Obtain sufficiently large and rich content corpora that can be used in real world studies
* Create a repository of validated measurement scales
* Standardize practices for scale development
* Establish effective recruitment methods to find the "right" participants for a study
* Develop metrics that are as unobtrusive as possible to measure
* Design standardized but flexible ways to represent the data and meta-data collected in real world studies
* Study effective ways to limit attrition in longitudinal studies
* Produce best-practices guidelines for developing real world systems, getting infrastructures up and running, maintaining them and providing support for both users and researchers
* Establish guidelines to protect the privacy of research participants
The following steps must be taken to allow researchers to **integrate the findings of real world studies into generalizable knowledge**:
* Use theory to integrate domain-specific knowledge into a generalized knowledge
* Define a theoretical framework for measurement
* Develop an infrastructure for researchers to contribute analyses of and insights about real world datasets in a centralized manner
* Integrate research within specific domains as well as at the generalized knowledge level using systematic reviews, meta-analyses, task-specific workshops and domain-specific workshops
* Conduct studies to triangulate qualitative and quantitative insights, behavioral and subjective metrics, and short-term and long-term metrics
## References
* [1] Omar Alonso and Stefano Mizzaro. Can we get rid of trec assessors? using mechanical turk for relevance assessment. In _Proceedings of the SIGIR 2009 Workshop on the Future of IR Evaluation_, volume 15, page 16, 2009.
* [2] - 1 October 2021_, pages 819-824. ACM, 2021.
* [3] _Experimental IR Meets Multilinguality, Multimodality, and Interaction - 13th International Conference of the CLEF Association, CLEF 2022, Bologna, Italy, September 5-8, 2022, Proceedings_, volume 13390 of _Lecture Notes in Computer Science_. Springer, 2022.
* [4] Alejandro Bellogin and Alan Said. Improving accountability in recommender systems research through reproducibility. _User Model. User Adapt. Interact._, 31(5):941-977, 2021.
* [5] Mindy E Bergman and Vanessa A Jean. Where have all the "workers" gone? a critical analysis of the unrepresentativeness of our samples relative to the labor market in the industrial-organizational psychology literature. _Industrial and Organizational Psychology_, 9(1):84-113, 2016.
* [6] Lennart Bjorneborn. Three key affordances for serendipity: Toward a framework connecting environmental and personal factors in serendipitous encounters. _J. Documentation_, 73(5):1053-1081, 2017.
* [7] Niall Bolger, Gertraud Stadler, and Jean-Philippe Laurenceau. _Power analysis for intensive longitudinal studies._, pages 285-301. Handbook of research methods for studying daily life. The Guilford Press, New York, NY, US, 2012.
* [8] - 15, 2022_, pages 3078-3089. ACM, 2022.
* [9] Kathy Charmaz. _Constructing grounded theory : a practical guide through qualitative analysis_. Sage Publications, London; Thousand Oaks, Calif., 2006.
* [10] Maarten Derksen and Jill Morawski. Kinds of replication: Examining the meanings of "conceptual replication" and "direct replication". _Perspectives on Psychological Science_, 17(5):1490-1505, 2022. PMID: 35245130.
* [11] Robert F. DeVellis. _Scale development: theory and applications_. Number v. 26 in Applied social research methods series. Sage, Newbury Park, Calif, 1991.
* [12] - October 06-10, 2014_, pages 161-168. ACM, 2014.
* [13] D Jake Follmer, Rayne A Sperling, and Hoi K Suen. The role of mturk in education research: Advantages, issues, and future directions. _Educational Researcher_, 46(6):329-334, 2017.
* [14] Anja S Goritz. Incentives in web studies: Methodological issues and a review. _International Journal of Internet Science_, 1(1):58-70, 2006.
* [15] T. Greco, A. Zangrillo, G. Biondi-Zoccai, and G. Landoni. Meta-analysis: pitfalls and hints. _Heart, lung and vessels_, 5:219-225, 2013.
* [16] Mark M. Hall. To re-use is to re-write: Experiences with re-using IIR experiment software. In Toine Bogers, Samuel Dodson, Maria Gade, Luanne Freund, Mark M. Hall, Marijn Koolen, Vivien Petras, Nils Pharo, and Mette Skov, editors, _Proceedings of the CHIIR 2019 Workshop on Barriers to Interactive IR Resources Re-use co-located with the ACM SIGIR Conference on Human Information Interaction and Retrieval, BIIRRR@CHIIR 2019, Glasgow, UK, March 14, 2019_, volume 2337 of _CEUR Workshop Proceedings_, pages 19-23. CEUR-WS.org, 2019.
* [17] F. Maxwell Harper and Joseph A. Konstan. The movielens datasets: History and context. _ACM Trans. Interact. Intell. Syst._, 5(4):19:1-19:19, 2016.
* [18] W. Heisenberg. Uber den anschaulichen inhalt der quantentheoretischen kinematik und mechanik. _Zeitschrift fur Physik_, 43(3-4):172-198, 1927.
* [19] J. Henrich, S. Heine, and A. Norenzayan. Most people are not WEIRD. _Nature_, 466, 2010.
* [20] Karen Holtzblatt and Hugh R. Beyer. Contextual design. In Mads Soegaard and Rikke Friis Dam, editors, _Encyclopedia of Human-Computer Interaction_. The Interaction Design Foundation., Aarhus, Denmark, 2011.
* [21] Dietmar Jannach and Gediminas Adomavicius. Recommendations with a purpose. In Shilad Sen, Werner Geyer, Jill Freyne, and Pablo Castells, editors, _Proceedings of the 10th ACM Conference on Recommender Systems, Boston, MA, USA, September 15-19, 2016_, pages 7-10. ACM, 2016.
* [22] Dietmar Jannach and Michael Jugovac. Measuring the business value of recommender systems. _ACM Trans. Manag. Inf. Syst._, 10(4):16:1-16:23, 2019.
* [23] Jean-Marie John-Mathews, Dominique Cardon, and Christine Balague. From reality to world. a critical perspective on AI fairness. _Journal of Business Ethics_, 178(4):945-959, 2022.
* [24] Daniel Kahneman, Barbara L. Fredrickson, Charles A. Schreiber, and Donald A. Redlemier. When more pain is preferred to less: Adding a better end. _Psychological Science_, 4(6):401-405, 1993.
* [25] Marius Kaminskas. Measuring surprise in recommender systems. 2014.
* [26] Diane Kelly. Methods for evaluating interactive information retrieval systems with users. _Found. Trends Inf. Retr._, 3(1-2):1-224, 2009.
* [27] Benjamin Kille, Frank Hopfgartner, Torben Brodt, and Tobias Heintz. The plista dataset. In _NRS'13: Proceedings of the International Workshop and Challenge on News Recommender Systems_, ICPS, page 14-22. ACM, 2013.
* [28] Peter Knees, Yashar Deldjoo, Farshad Bakhshandegan Moghaddam, Jens Adamczak, Gerard Paul Leyson, and Philipp Monreal. Recsys challenge 2019: session-based hotel recommendations. In Toine Bogers, Alan Said, Peter Brusilovsky, and Domonkos Tikk, editors,
Proceedings of the 13th ACM Conference on Recommender Systems, RecSys 2019, Copenhagen, Denmark, September 16-20, 2019_, pages 570-571. ACM, 2019.
* [29] _User Centric Media - First International Conference, UCMedia 2009, Venice, Italy, December 9-11, 2009, Revised Selected Papers_, volume 40 of _Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering_, pages 366-369. Springer, 2009.
* [30] Bart P. Knijnenburg, Martijn C. Willemsen, Zeno Gantner, Hakan Soncu, and Chris Newell. Explaining the user experience of recommender systems. _User Model. User Adapt. Interact._, 22(4-5):441-504, 2012.
* [31] Alfred Kobsa. User modeling: Recent work, prospects and hazards. _Human Factors in Information Technology_, 10:111-111, 1993.
* [32] Sari Kujala, Talya Miron-Shatz, and Jussi J Jokinen. The cross-sequential approach: A short-term method for studying long-term user experience. _Journal of Usability Studies_, 14(2), 2019.
* [33] - 23, 2022_, pages 694-697. ACM, 2022.
* [34] - 23, 2022_, pages 3-13. ACM, 2022.
* [35] Blerina Lika, Kostas Kolomvatsos, and Stathes Hadjiefthymiades. Facing the cold start problem in recommender systems. _Expert Syst. Appl._, 41(4):2065-2073, 2014.
* [36] Felicia Loecherbach and Damian Trilling. 3bij3-developing a framework for researching recommender systems and their effects. _Computational Communication Research_, 2(1):53-79, 2020.
* [37] Marianne Lykke, Birger Larsen, Haakon Lund, and Peter Ingwersen. Developing a test collection for the evaluation of integrated search. In Cathal Gurrin, Yulan He, Gabriella Kazai, Udo Kruschwitz, Suzanne Little, Thomas Roelleke, Stefan M. Ruger, and Keith van Rijsbergen, editors, _Advances in Information Retrieval, 32nd European Conference on IR Research, ECIR 2010, Milton Keynes, UK, March 28-31, 2010. Proceedings_, volume 5993 of _Lecture Notes in Computer Science_, pages 627-630. Springer, 2010.
* [38] Sean M. McNee, Istvan Albert, Dan Cosley, Prateep Gopalkrishnan, Shyong K. Lam, Al Mamunur Rashid, Joseph A. Konstan, and John Riedl. On the recommending of citations for research papers. In Elizabeth F. Churchill, Joseph F. McCarthy, Christine Neuwirth, and Tom Rodden, editors, _CSCW 2002, Proceeding on the ACM 2002 Conference on Computer Supported Cooperative Work, New Orleans, Louisiana, USA, November 16-20, 2002_, pages 116-125. ACM, 2002.
* [39] Bianca Melo, Rossana M. de Castro Andrade, and Ticianne Darin. Longitudinal user experience studies in the iot domain: a brief panorama and challenges to overcome. In Caroline Queiroz Santos, Maria Lucia Bento Villela, Kamila Rios da Hora Rodrigues, and Ticianne de Gois Ribeiro Darin, editors, _Proceedings of the 21st Brazilian Symposium on
Human Factors in Computing Systems, IHC 2022, Diamantina, Brazil, October 17-21, 2022_, pages 23:1-23:13. ACM, 2022.
* [40] Henning Muller, Jayashree Kalpathy-Cramer, and Alba Garcia Seco de Herrera. _Experiences from the ImageCLEF Medical Retrieval and Annotation Tasks_, volume 41 of _The Information Retrieval Series_, pages 231-250. Springer, 2019.
* [41] Alexander Newman, Yuen Lam Bavik, Matthew Mount, and Bo Shao. Data collection via online platforms: Challenges and recommendations for future research. _Applied Psychology_, 70(3):1380-1402, 2021.
* [42] Yanfang Pan and Peida Zhan. The impact of sample attrition on longitudinal learning diagnosis: A prolog. _Frontiers in psychology_, 11:1051, 2020.
* [43] Eyal Peer, Laura Brandimarte, Sonam Samet, and Alessandro Acquisti. Beyond the turk: Alternative platforms for crowdsourcing behavioral research. _Journal of Experimental Social Psychology_, 70:153-163, 2017.
* [44] _Information Retrieval Evaluation in a Changing World - Lessons Learned from 20 Years of CLEF_, volume 41 of _The Information Retrieval Series_, pages 365-387. Springer, 2019.
* [45] Alan Said. A short history of the recsys challenge. _AI Mag._, 37(4):102-104, 2016.
* [46] Alan Said and Alejandro Bellogin. Replicable evaluation of recommender systems. In Hannes Werthner, Markus Zanker, Jennifer Golbeck, and Giovanni Semeraro, editors, _Proceedings of the 9th ACM Conference on Recommender Systems, RecSys 2015, Vienna, Austria, September 16-20, 2015_, pages 363-364. ACM, 2015.
* [47] Alan Said, Babak Loni, Roberto Turrin, and Andreas Lommatzsch. An extended data model format for composite recommendation. In Li Chen and Jalal Mahmud, editors, _Poster Proceedings of the 8th ACM Conference on Recommender Systems, RecSys 2014, Foster City, Silicon Valley, CA, USA, October 6-10, 2014_, volume 1247 of _CEUR Workshop Proceedings_. CEUR-WS.org, 2014.
* [48] Overview of LiLAS 2021 - living labs for academic search (extended overview). In Guglielmo Faggioli, Nicola Ferro, Alexis Joly, Maria Maistro, and Florina Piroi, editors, _Proceedings of the Working Notes of CLEF 2021 - Conference and Labs of the Evaluation Forum, Bucharest, Romania, September 21st-24th, 2021_, volume 2936 of _CEUR Workshop Proceedings_, pages 1668-1699. CEUR-WS.org, 2021.
* [49] M. Schulte-Mecklenbeck, J.G. Johnson, U. Bockenholt, D.G. Goldstein, J.E. Russo, N.J. Sullivan, and M.C. Willemsen. Process-tracing methods in decision making: on growing up in the 70s. _Current Directions in Psychological Science_, 26(5):442-450, 2017.
* [50] _Proceedings of the National Academy of Sciences_, pages 15242-15246, 2013.
* [51] James P. Selig and Todd D. Little. _Autoregressive and cross-lagged panel analysis for longitudinal data._, pages 265-278. Handbook of developmental research methods. The Guilford Press, New York, NY, US, 2012.
* [52] Divesh Srivastava, Monica Scannapieco, and Thomas C. Redman. Ensuring high-quality private data for responsible data science: Vision and challenges. _ACM J. Data Inf. Qual._, 11(1):1:1-1:9, 2019.
* [53] Hanna Suominen, Lorraine Goeuriot, Liadh Kelly, Laura Alonso Alemany, Elias Bassani, Nicola Brew-Sam, Viviana Cotik, Dario Filippo, Gabriela Gonzalez Saez, Franco Luque, Philippe Mulhem, Gabriella Pasi, Roland Roller, Sandaru Seneviratne, Rishabh Upadhyay, Jorge Vivaldi, Marco Viviani, and Chenchen Xu. Overview of the CLEF ehealth evaluation lab 2021. In K. Selcuk Candan, Bogdan Ionescu, Lorraine Goeuriot, Birger Larsen,
Henning Muller, Alexis Joly, Maria Maistro, Florina Piroi, Guglielmo Faggioli, and Nicola Ferro, editors, _Experimental IR Meets Multilinguality, Multimodality, and Interaction - 12th International Conference of the CLEF Association, CLEF 2021, Virtual Event, September 21-24, 2021, Proceedings_, volume 12880 of _Lecture Notes in Computer Science_, pages 308-323. Springer, 2021.
* [54] Mozhgan Tavakolifard, Jon Atle Gulla, Kevin C. Almeroth, Frank Hopfgartner, Benjamin Kille, Till Plumbaum, Andreas Lommatzsch, Torben Brodt, Arthur Bucko, and Tobias Heintz. Workshop and challenge on news recommender systems. In Qiang Yang, Irwin King, Qing Li, Pearl Pu, and George Karypis, editors, _Seventh ACM Conference on Recommender Systems, RecSys '13, Hong Kong, China, October 12-16, 2013_, pages 481-482. ACM, 2013.
* [55] Elaine Toms, Sophia Althammer, Allan Hanbury, Wojciech Kusa, Ginar Santika Niwanputri, Ian Ruthven, Ayah Soufan, and Vasileios Stamatis. Knowledge task survey. Technical Report D1.1, DoSSIER EU Project, 2021.
* [56] Rens van de Schoot, Peter Lugtig, and Joop Hox. A checklist for testing measurement invariance. _European Journal of Developmental Psychology_, 9(4):486-492, 2012.
* [57] E. Voorhees, D. K. Harman, and National Institute of Standards and Technology (US). _TREC: Experiment and evaluation in information retrieval_, volume 63. MIT Press, Cambridge, MA, 2005.
* [58] Martijn C. Willemsen and Eric J. Johnson. _(Re)Visiting the Decision Factory: Observing Cognition with MouselabWEB_, pages 76-95. Taylor and Francis Ltd., United Kingdom, 2nd edition, 2019. Publisher Copyright: © 2019 selection and editorial matter, Michael Schulte-Mecklenbeck, Anton Kuhberger, and Joseph G. Johnson; individual chapters, the contributors.
* [59] Eva Zangerle and Christine Bauer. Evaluating recommender systems: Survey and framework. _ACM Comput. Surv._, 55(8):170:1-170:38, 2023.
### HMC: A Spectrum of Human-Machine-Collaborative Relevance Judgment Frameworks
_Charles L. A. Clarke (University of Waterloo, CA, [email protected]) Gianluca Demartini (University of Queensland, AU, [email protected]) Laura Dietz (University of New Hampshire, US, [email protected]) Guglielmo Faggioli (University of Padua, IT, [email protected]) Matthias Hagen (Friedrich-Schiller-Universitat Jena, DE, [email protected]) Claudia Hauff (Spotify, NL, [email protected]) Noriko Kando (National Institute of Informatics (NII), JP, [email protected]) Evangelos Kanoulas (University of Amsterdam, NL, [email protected]) Martin Potthast (Leipzig University and ScaDS.AI, DE, [email protected]) Ian Soboroff (National Institute of Standards and Technology (NIST), US, [email protected]) Benno Stein (Bauhaus-Universitat Weimar, DE, [email protected]) Henning Wachsmuth (Leibniz Universitat Hannover, DE, [email protected])_
#### 4.2.1 Motivation
IR evaluation traditionally needs human assessors to generate relevance judgements. Typically, human assessors are asked to judge the relevance of a document with respect to a topic [3]. Recent work on preference judgements [2, 4] has investigated how best to evaluate IR systems by asking human assessors which of two results is the better one given an information need. The recent availability of LLMs has opened up the possibility of using them to automatically generate relevance assessments in the form of preference judgements. While the idea of automatically generated judgements has been looked at before [1], new-generation LLMs drive us to re-ask the question of whether human assessors are still necessary.
New models tend to fail in different and more diverse ways compared to traditional approaches. Failure points for old models were more uniform and clear; with new systems, it is harder to predict in which ways the model will fail. In most cases, LLMs (especially as concerns their generative aspects) focus on entertainment tasks. Models tend to report false facts in such a convincing way that their output needs to be carefully read by an expert to identify the lack of factuality (e.g., the Michel Foucault simulation21).
Footnote 21: [https://www.youtube.com/watch?v=L6c0xeAqEz4E](https://www.youtube.com/watch?v=L6c0xeAqEz4E)
Our motivation to investigate the possibility of using LLMs in order to provide automatic annotations stems from some fundamental research questions that can be summarized as follows.
* **RQ1**: In which way can automatic approaches, and in particular LLMs, help assessors with the assessment task to yield the most reliable annotations while improving the efficiency of the annotation process? This question raises other interesting related inquiries. For example, if we were to build such a mixed human-machine annotation paradigm, which held-out supporting information about the topic (i.e., information not provided to the IR system) would yield the best and fastest annotations? What weighting between human, LLM, and AI-assisted annotations is ideal?
* **RQ2**: Can machines (either in the form of LLMs or in general as Artificial Intelligence (AI) models) replace humans in assessing and annotating? This question also raises the issue of which annotation target (e.g., relevance labeling, summarization, paragraph highlighting, exam questions [5]) would yield the best and fastest annotations.
* **RQ3**: What are the conditions under which human assessors cannot be replaced by machines? Alternatively, in which role can the human assessor most productively provide relevance assessments?

Answering the questions mentioned above would also require finding viable solutions for a set of additional questions and open issues that touch a number of IR evaluation process steps.
* Assessors And Collections:
* How to use LLMs to help assessors: some examples of possible usages include summarising text, associating keywords, and identifying the content of long podcasts to help assessors annotate the documents, for example by highlighting relevant fragments of text/podcast or segments with correct answers.
* What is the effective role of the human assessor in annotating material for generative models? Should the annotator provide input at the beginning of the pipeline, by annotating the original documents, or are they more useful downstream, after the task has been carried out?
* Generative models can be used to create new collections: corpora, conversations, queries, abstracts and so on.
* LLM and generative models to retrieve information in a broader sense:
* IR tasks that employ LLMs have the means to provide more details: often a single answer is not satisfactory for the user. How can we support the user in exploring the results further (for example via links and connected pages)? Generative models can help, but is this helpful when the model simply generates the response without knowing where it comes from? In many cases, the user is not interested in receiving only the direct/short answer, but rather in seeing which documents contain it and related pieces of information to expand their knowledge.
* LLMs as an evaluation tool:
* The model is biased: how can we use it to evaluate itself? If a model has been trained on biased data, then the evaluation is also prone to the same biases. How can we detect and account for such biases?
Figure 4: The three most relevant components in our system: the human assessor, the Large Language Model (LLM) that can help humans or replace them in annotating documents for relevance, and the system that we want to evaluate using the newly produced relevance judgements.
* Evaluating LLMs and their trustworthiness:
* Can we find a way to understand and measure to what level we can trust the results of a generative model?
* How to carry out fact-checking, for example by identifying the source of information of a generative model and verifying that it is presented accurately.
* Distinguish between human and machine-generated data: important for many tasks, such as journalism, where it is of utmost importance to verify the information. Human-generated data is more trusted.
We argue that the collaboration between humans and ML, especially in the form of LLMs, can be abstracted as a spectrum. At the two extremes of this spectrum, either the human or the machine is entirely tasked with annotating documents for relevance with respect to a query. Within the spectrum, humans and LLMs interact to different extents. Theoretically, moving along this spectrum corresponds to moving from annotations that are highly expensive in terms of human effort, cost, and time but of high quality, to a much less expensive annotation procedure with decreased annotation quality. We also argue that something exists beyond the spectrum: the scenario in which the machine surpasses the human by producing relevance judgments without any form of bias. We have already observed this phenomenon in several tasks and scenarios, and we can therefore expect it to happen for the construction of relevance judgments as well.
The remainder of this chapter is organized as follows: Subsection 4.2.2 reports details on the current state of the art and the limitations associated with the current usage of LLMs and AI in annotating documents. Subsection 4.2.3 illustrates our proposal of a spectrum of possible interactions between the human and the machine to provide more efficient and effective annotations and relevance judgments. Subsection 4.2.4 outlines a possible experimental protocol that would allow us to verify whether, and to what extent, modern LLMs can be used to automatically produce relevance judgements.
#### 4.2.2 State of the Art, Idea, and Gaps
##### 4.2.2.1 Using LLMs to Generate Annotations and Label Automatically
LLMs can potentially be used to annotate documents, extract snippets, summarize, and, ultimately, judge documents for relevance. If this can be made to work reliably, it opens up many opportunities for evaluation. For example, an LLM can be used directly to evaluate the output of other large language models (for example, in summarization).
Assessments can arise from different sources, with different levels of quality and collection costs as follows.
* Human assessors or, in the enterprise scenario, final users. This scenario is, at the current time, the most expensive, but also the most likely to produce high-quality annotations.
* Human assessors aided by mild automatic support systems (e.g. remove redundancy, encourage consistency)
* Half of the judgments are produced by human assessors and half of the judgments are produced automatically.
* Automatic annotation of a collection, which is verified and corrected by human intervention.
* At some point even a fully automatic assessment.
An additional axis describes the type of annotations. Typically an annotation is a graded relevance judgment, but for example in EXAM [5], humans are used for generating questions instead. This can be generalized by asking human assessors for something different than traditional annotation while some Machine Learning (ML) converts the human responses into relevance assessments. This follows the paradigm of Competence Partitioning of Human-Machine-Collaboration where humans and machines are performing tasks they are best at (not vice versa).
One concern is that fully automatic assessment with LLMs can be very expensive, which is also the reason why we consider the application of LLMs as part of the retrieval process. In such a case, we could reduce the cost by considering a teacher-student training paradigm (knowledge distillation) in which a large and expensive LLM is used to train a smaller model that is less expensive to run.
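A minimal sketch of this teacher-student idea, assuming a placeholder `llm_judge` callable that maps a text to a 0/1 relevance label (the expensive teacher) and a simple TF-IDF classifier as the student:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def distil_relevance_judge(sample_texts, llm_judge):
    """Train a cheap student on labels produced by an expensive LLM teacher."""
    teacher_labels = [llm_judge(t) for t in sample_texts]  # costly, done once on a sample
    student = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    student.fit(sample_texts, teacher_labels)
    return student  # cheap to apply to the remaining, much larger pool
```

The student can then label the rest of the collection at negligible cost, with the teacher consulted only for auditing a small held-out sample.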
Not all IR tasks focus on topics. For example, one may want to search for podcasts where two or more people interact or with a particular style. Another issue is regarding truth. For example, finding a podcast for the query "does lemon cure cancer?" that talks about healing cancer with lemon might be on topic. Nevertheless, it is unlikely to be factually correct, and therefore not relevant to correctly answering the information needs. To overcome this issue, assessors have to access external information to determine the trustworthiness of a source, or the truthfulness of a document. In a similar way, we can assume our LLM is used as an oracle that accesses external facts, verified by humans. To properly support different tasks, human intervention can be plugged into the collection and annotation of additional facts, to define relevance.
There are open questions for the special case of 100% machine / 0% human. How is this ranking evaluation different from an approach that produces a ranking (the circularity problem)? We can use multiple LLMs, possibly based on different rationales, to define an inter-system annotation agreement, in which the different systems are used to verify whether they agree with one another. An alternative approach is to endow the evaluation with additional information about relevant facts/questions/nuggets that the system under evaluation does not have access to.
It is yet to be understood what the risks associated with such technology are: it is likely that in the next few years we will witness a substantial increase in the usage of LLMs to replace human annotators. Nevertheless, a similar change in the data collection paradigm was observed with the increased use of crowd assessors. Up to that moment, annotations were typically made by in-house experts. Then, such annotation tasks were delegated to crowd workers, with a substantial decrease in the quality of the annotations, compensated by a huge increase in annotated data. It is a concern that machine-annotated assessments might similarly degrade quality while dramatically increasing the number of annotations available.
The Cranfield paradigm [6] is based on simplifying assumptions that make manual evaluation feasible: _1)_ independence of queries; _2)_ independence of the relevance of documents; _3)_ relevance is static (it does not change over time). Recently, the field has been diverging from this paradigm, for example with TREC CAR and TREC CAsT/iKAT, where the information needs develop as the user learns more about the domain. The TREC evaluation of CAsT describes a tree of connected information needs, where one conversation takes a path through the tree. The human-machine evaluation paradigm might make it feasible to assess more connected (and hence more realistic) definitions of relevance.
#### 4.2.3 Collaborative Human-Machine Relevance Judgments
We can describe a spectrum of Collaborative-Human-Machine paradigms to create relevance judgments, where the weighting of human contributions vs machine contributions changes along the spectrum.
* **Only Human (100%H / 0%M)**: On one extreme, the human will do all assessments manually without any kind of support.
* **Human with assessment system (99%H / 1%M)**: This is a more realistic account of how TREC assessment is conducted, where humans have full control of what is relevant but are supported in the following ways: humans can define "scan terms" that will be highlighted in the text, can filter their view of the pool by what has already been judged, can order documents so that similar documents are near one another, and can work with readable presentations of retrieved content.
* **Human with document summaries (80%H / 20%M)**: A text summarization model produces a generative summary representation of the document to be judged. The human assessor judges the representation, which is more efficient to do.
* **EXAM (60%H / 30%M)**: For each query, the human defines information nuggets that are relevant (e.g., exam questions). The machine is trained to automatically determine how many test nuggets are contained in the retrieved results (e.g., via a Q/A system).
* **Equal contribution (50%H / 50%M)**: A theoretic midpoint in the collaborative spectrum. Humans perform tasks that humans are good at. Machines perform the tasks that machines are good at. It is yet to be concretely defined what this might be.
* **3-Brain Setup (32%H / 58%M)**: Two machines each generate an assessment, and a human will select the best of the two assessments (+verification). Human decision trumps machines'.
* **LLM for first pass + human verification (30%H / 60%M)**: A first-pass assessment of the LLM is automatically produced as a suggestion. This can also be an assessment-supporting surrogate prediction like a rationale. The human assessment is based on this suggestion, but the human will have the final say (a minimal code sketch of this flow follows the list).
* **LLM replaces humans completely (0%H / 100%M)**: We explore the possibility that a fully automatic assessment system might be as good as a human in producing high-quality relevance judgments.
Figure 5: A spectrum of Collaborative-Human-Machine paradigms to create relevance judgments.
* **LLM is beyond human (0%H / 100%M)**: Given known biases in human assessments, we contemplate the possibility that the automatic assessments might even surpass the human in terms of quality. While not feasible at the current time, this is an important case to consider when we evaluate the HMC evaluation.
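To make the first-pass-plus-verification point on the spectrum concrete, the following is a minimal sketch; `llm_label` and `ask_human` are hypothetical callables standing in for an LLM client and an assessment interface, not existing APIs:

```python
def llm_first_pass_with_verification(query: str, doc: str, llm_label, ask_human) -> int:
    """The LLM proposes a relevance label plus a rationale; the human sees both
    and has the final say. `llm_label` and `ask_human` are placeholder callables."""
    suggestion, rationale = llm_label(query, doc)  # e.g., (1, "directly defines the term")
    verdict = ask_human(
        query=query,
        doc=doc,
        suggested_label=suggestion,
        rationale=rationale,  # surrogate prediction shown to speed up the human's decision
    )
    return verdict  # the human decision trumps the machine's suggestion
```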
##### 4.2.3.1 Use LLMs to Help Humans in Annotating Documents
LLMs could be successfully applied in helping human assessors with annotating data. For example, LLMs might be particularly useful in recognizing near duplicates and using them to verify if the two near duplicates share the same relevance annotation - with the human entering the loop only in those cases where the system has a high degree of uncertainty.
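A minimal sketch of such an uncertainty-gated near-duplicate check; the character-level similarity and the two thresholds are illustrative stand-ins for whatever similarity model and calibration a real system would use:

```python
from difflib import SequenceMatcher

def propagate_or_escalate(new_doc, judged, high=0.95, low=0.80):
    """Reuse the label of a judged near-duplicate when similarity is clearly high;
    route borderline cases (the uncertain band) to a human assessor.
    `judged` maps already-judged document texts to their relevance labels."""
    best_doc, best_sim = None, 0.0
    for doc in judged:
        sim = SequenceMatcher(None, new_doc, doc).ratio()
        if sim > best_sim:
            best_doc, best_sim = doc, sim
    if best_sim >= high:
        return judged[best_doc]   # confidently copy the existing judgment
    if best_sim >= low:
        return "ASK_HUMAN"        # uncertain band: the human enters the loop
    return None                   # genuinely new document: full assessment needed
```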
Related to the case of (100%H / 0%M), we have the _human-in-the-loop_, helping the system realize its annotation goal. Such help might include providing annotated facts or verifying the annotation after a first pass by the system. In the 50%/50% case, equal contributions, we have a substantial equilibrium between the human and the machine. We refer to this scenario as _competence partitioning_: the task is assigned to either the human or the machine, depending on who is better at it at the moment. On the other side of the spectrum (%M \(>\) %H), the scenario is called _model-in-the-loop_: the model offers its contribution in organizing the data, and the human is used as a verification step. The concern is that any bias in the LLM might affect the relevance assessments, as the human will not be able to correct for information they do not see.
An alternative approach to the collaborative one is a complementary one, where the human and the machine both produce judgments, but different ones. This then becomes a task allocation problem where the aim is to predict who among the human and the machine assessor is best suited for any given judgment.
##### 4.2.3.2 Beyond Human Performance
We can expect that, at a certain point in the future, LLMs will overcome humans in a number of tasks that reduce to annotating documents. Humans are likely to make mistakes when annotating documents and are limited in the time they can dedicate to the annotation. In contrast, LLMs are likely to be more self-consistent and potentially capable of annotating all the documents perfectly. Machines can also annotate a much larger number of data points.
Furthermore, we have a series of assumptions, such as the fact that relevance does not change through time, that are enforced to make evaluation tractable. These assumptions can be relaxed if the machine annotates automatically.
Recognizing when the human is failing remains an open issue. All the above strategies assume that human annotations are an error-free gold standard. This assumption is strong: the LLM, having access to more information, might be able to correct human mistakes.
We are likely to reach the limit of measurement: we will not be able to use differences between the current evaluation paradigms to evaluate such models. A problem is that if we surpass the quality of purely human-annotated data, we will not be able to detect this as long as we use only human-annotated data as a gold standard; human annotations alone will not suffice and will fail to provide a gold standard.
Another research question is to identify optimal competence partitioning. One idea is to use the LLM to generate rationales explaining the relevance. While humans are often considered the experts for rationale generation, recent advancements, including chatGPT, suggest that we are on the verge of a paradigm shift, with LLMs constantly improving in identifying why a document is (non-)relevant, considering either information within the document or other relevant external pieces of information.
##### 4.2.3.3 Trust, Correctness, and Inter-annotator Agreement
One important difference between humans and automatic assessors concerns the assessment sample size. While it is possible to hire multiple assessors to annotate the documents and, possibly, resolve disagreements between annotators, this is not as trivial in the automatic assessor case. We can expect that LLMs trained on similar corpora will likely produce correlated answers, but we do not know whether these are correct. A possible solution would be to use different subcorpora based on different sets of documents. This, in turn, could lead to personalized LLMs, fine-tuned on data from different types of users, which would make it possible to auto-annotate documents according to the user's subjective point of view, while also increasing the pool of annotations collected. While this technology is not available yet, mostly for computational reasons, we expect it to be available in a few years.
A related idea that can be implemented today is to let LLMs learn by observing human annotators performing the task, following an active learning paradigm. The LLM starts with mild suggestions to the user on how to annotate the documents, then continues to learn from the actual decisions made by the annotator, and finally improves the quality of the suggestions it provides.
#### 4.2.4 Next Steps
Tables 1 and 2 report two examples of document annotation done with two well-known LLMs: YouChat22 and ChatGPT23. It is interesting to note that both models provided the correct answer, correctly identifying the passage which was annotated as more relevant. While ChatGPT simply repeats the relevant passage, YouChat is capable of correctly identifying the reason why one passage is more relevant than the other.
Footnote 22: [https://you.com/](https://you.com/)
Footnote 23: [https://chat.openai.com/](https://chat.openai.com/)
To assess the feasibility of the proposed approaches, the next steps would include an experimental comparison of the different Collaborative-Human-Machine paradigms. This should include multiple test collections (e.g., TREC-8 and TREC Deep Learning), multiple types of judgments (e.g., binary, graded, preference), and multiple models (e.g., GPT-2, GPT-3, chatGPT, etc.). Comparison between human-generated judgments and machine-generated judgments may be performed using both inter-assessor agreement metrics and IR system ranking correlation methods.
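As a sketch of how such a comparison could be computed, with hypothetical judgment vectors and per-system effectiveness scores standing in for real experimental data:

```python
from scipy.stats import kendalltau
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary judgments by a human and an LLM on the same (topic, document) pairs.
human_judgments = [1, 0, 1, 1, 0, 0, 1, 0]
llm_judgments   = [1, 0, 1, 0, 0, 0, 1, 1]
print("inter-assessor agreement (Cohen's kappa):",
      cohen_kappa_score(human_judgments, llm_judgments))

# Hypothetical mean effectiveness scores that the two sets of qrels assign to five
# systems; what matters for evaluation is whether the induced system *ranking* agrees.
scores_with_human_qrels = [0.41, 0.38, 0.35, 0.29, 0.22]
scores_with_llm_qrels   = [0.44, 0.36, 0.37, 0.27, 0.21]
tau, _ = kendalltau(scores_with_human_qrels, scores_with_llm_qrels)
print("system ranking correlation (Kendall's tau):", tau)
```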
**Table 2** Response of two well-known LLMs in comparing two passages as answer for the query 67_8 from CAsT '19: "What is a DC half smoke?"

**Prompt:** Select the passage that answers the question better.
- If both answers are similar, select the one with the least extraneous information.
- If both answers are still similar, select the one with the best formatting.

Question: What is a DC half smoke?

Passage 1: However, celebrities at Ben's are not limited to American citizens. In 1998, former DC mayor Marion Barry described having traveled to Ghana and meeting the Mayor of Accra, an alumnus of Howard University, whose greeting was, "Glad to have you in Accra. Is Ben's Chili Bowl still there?" And when French President Nicolas Sarkozy and his wife Carla Bruni-Sarkozy visited Washington, D.C. in March 2010, they reportedly each had two of Ben's half smokes during their visit to the restaurant.

Passage 2: A half-smoke is a "local sausage delicacy" found in Washington, D.C. and the surrounding region. Similar to a hot dog, but usually larger, spicier, and with more coarsely-ground meat, the sausage is often half-pork and half-beef, smoked, and served with herbs, onion, and chili sauce.

**ChatGPT:** Passage 2: A half-smoke is a local sausage delicacy found in Washington, D.C. and the surrounding region. Similar to a hot dog, but usually larger, spicier, and with more coarsely-ground meat, the sausage is often half-pork and half-beef, smoked, and served with herbs, onion, and chili sauce.

**YouChat:** Passage 2 is the better answer as it more directly answers the question and does not include any extraneous information.
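For completeness, a small sketch of how the prompt shown in Table 2 could be generated and the model's preference parsed programmatically; the parsing heuristic is our own illustration, not part of either model's API:

```python
from typing import Optional

PROMPT_TEMPLATE = """Select the passage that answers the question better.
- If both answers are similar, select the one with the least extraneous information.
- If both answers are still similar, select the one with the best formatting.

Question:
{question}

Passage 1:
{passage_1}

Passage 2:
{passage_2}"""

def build_preference_prompt(question: str, passage_1: str, passage_2: str) -> str:
    return PROMPT_TEMPLATE.format(
        question=question, passage_1=passage_1, passage_2=passage_2)

def parse_preference(model_response: str) -> Optional[int]:
    """Extract which passage the model preferred; None if the answer is ambiguous."""
    text = model_response.lower()
    mentions_1, mentions_2 = "passage 1" in text, "passage 2" in text
    if mentions_1 == mentions_2:  # both or neither mentioned: cannot decide automatically
        return None
    return 1 if mentions_1 else 2
```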
## References
* [1] Stefan Buttcher, Charles L. A. Clarke, Peter C. K. Yeung, and Ian Soboroff. Reliable information retrieval evaluation with incomplete and biased judgements. In Wessel Kraaij, Arjen P. de Vries, Charles L. A. Clarke, Norbert Fuhr, and Noriko Kando, editors, _SIGIR 2007: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Amsterdam, The Netherlands, July 23-27, 2007_, pages 63-70. ACM, 2007.
* [2] Charles L. A. Clarke, Alexandra Vtyurina, and Mark D. Smucker. Assessing top-k preferences. _ACM Trans. Inf. Syst._, 39(3):33:1-33:21, 2021.
* [3] Donna Harman. _Information Retrieval Evaluation_. Synthesis Lectures on Information Concepts, Retrieval, and Services. Morgan & Claypool Publishers, 2011.
* [4] Martin Potthast, Lukas Gienapp, Florian Euchner, Nick Heilenkotter, Nico Weidmann, Henning Wachsmuth, Benno Stein, and Matthias Hagen. Argument search: Assessing argument relevance. In Benjamin Piwowarski, Max Chevalier, Eric Gaussier, Yoelle Maarek, Jian-Yun Nie, and Falk Scholer, editors, _Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019, Paris, France, July 21-25, 2019_, pages 1117-1120. ACM, 2019.
* [5] David P. Sander and Laura Dietz. EXAM: how to evaluate retrieve-and-generate systems for users who do not (yet) know what they want. In Omar Alonso, Stefano Marchesin, Marc Najork, and Gianmaria Silvello, editors, _Proceedings of the Second International Conference on Design of Experimental Search & Information REtrieval Systems, Padova, Italy, September 15-18, 2021_, volume 2950 of _CEUR Workshop Proceedings_, pages 136-146. CEUR-WS.org, 2021.
* [6] Ellen M. Voorhees. The philosophy of information retrieval evaluation. In Carol Peters, Martin Braschler, Julio Gonzalo, and Michael Kluck, editors, _Evaluation of Cross-Language Information Retrieval Systems, Second Workshop of the Cross-Language Evaluation Forum, CLEF 2001, Darmstadt, Germany, September 3-4, 2001, Revised Papers_, volume 2406 of _Lecture Notes in Computer Science_, pages 355-370. Springer, 2001.
### Overcoming Methodological Challenges in Information Retrieval and Recommender Systems through Awareness and Education
_Christine Bauer (Utrecht University, NL, [email protected]), Maik Frobe (Friedrich-Schiller-Universitat Jena, DE, [email protected]), Dietmar Jannach (University of Klagenfurt, AT, [email protected]), Udo Kruschwitz (University of Regensburg, DE, [email protected]), Paolo Rosso (Technical University of Valencia, ES, [email protected]), Damiano Spina (RMIT University, AU, [email protected]), Nava Tintarev (Maastricht University, NL, [email protected])_

License: Creative Commons BY 4.0 International license. © Christine Bauer, Maik Frobe, Dietmar Jannach, Udo Kruschwitz, Paolo Rosso, Damiano Spina, Nava Tintarev
#### 4.3.1 Background & Motivation
In recent years, we have observed a substantial increase in research in IR and RS. To a large extent, this increase is fueled by progress in ML (deep learning) technology. As a result, countless papers are nowadays published each year which report that they improved the state-of-the-art when adopting common experimental procedures to evaluate ML based systems. However, a number of issues were identified in the past few years regarding these reported findings and their interpretation. For example, both in IR and RS, studies point to methodological issues in _offline_ experiments, where researchers for example compare their models against weak or non-optimized baselines or where researchers optimize their models on test data rather than on held-out validation data [4, 13, 48, 53].
Besides these issues in offline experiments, questions concerning the _ecological validity_ of the reported findings are raised increasingly. Ecological validity measures how generalizable experimental findings are to the real world. An example of this problem in information retrieval is the well-known mismatch between offline effectiveness measurement and user satisfaction measured with online experimentation [10, 5, 40, 46, 56], or when the definition of relevance does not consider the effect on a searcher and their decision-making. For example, the order of search results, and the viewpoints represented therein, can shift undecided voters toward a particular candidate if high-ranking search results support that candidate [19]. This phenomenon, often referred to as the _Search Engine Manipulation Effect (SEME)_, has been demonstrated for both politics [19, 20] and health [2, 43]. With awareness of the phenomenon, methods have been adapted to measure its presence [14, 15], and studies have evaluated when and how it affects human decision-makers [16]. Similar questions of ecological validity have also been raised in the RS field regarding the suitability of commonly used computational accuracy metrics as predictors of the impact and value such systems have on users in the real world. Several studies indeed indicate that the outcomes of offline experiments are often _not_ good proxies of real-world performance indicators such as user satisfaction, engagement, or revenue [7, 25, 30].
Overall, these observations point to a number of open challenges in how experimentation is predominantly done in the field of information access systems. Ultimately, this leads to the questions of _(i)_ how much progress we really make despite the large number of research works that are published every year [4, 35, 57] and _(ii)_ how effective we are in sharing and translating the knowledge we currently have for doing IR and RS experimentation [23, 45]. One major cause for the mentioned issues, for example, seems to lie in the somewhat narrow way we tend to evaluate information retrieval and recommender systems: primarily based on various computational effectiveness measures. In reality, information access systems are interactive systems used over longer periods of time, i.e., they may only be assessed holistically if the user's perspective (task and context) is taken into account, cf. [36, 51, 55]. Studies on long-term impact furthermore need to consider the wider scope of stakeholders [6, 30]. Moreover, for several types of information access systems, the specific and potentially competing interests of multiple stakeholders have to be taken into account [6]. Typical stakeholders in a recommendation scenario include not only the consumers who receive recommendations but also recommendation service providers who, for example, want to maximize their revenue through the recommendations [29, 30].
Various factors contribute to our somewhat limited view of such systems, e.g., the difficulties of getting access to real systems and real-world data for evaluation purposes. Unfortunately, the IR and RS research communities to a certain extent seem to have resigned themselves to living with the limitations of the predominant evaluation practices of today. Even more worryingly, the described narrow evaluation approach has more or less become a standard in the scientific literature; there is not much debate and, as we believe, sometimes even limited awareness of the various limitations of our evaluation practices.
There seems to be no easy and quick way out of this situation, even though some of the problems have been known for many years now [17, 5, 32, 46]. However, we argue that improved _education_ of the various actors in the research ecosystem (including students, educators, and scholars) is one key approach to improving our experimentation practices and ensuring real-world impact in the future. As will be discussed in the next sections, better training in experimentation practices is important not only for students, but also for academic teachers, research scholars, practitioners, and different types of decision-makers in academia, business, and other organizations. This will, in fact, help address the much broader problem of reproducibility24 and replicability25 that we face in Computer Science [12, 1] in general and in AI in particular [26].
Footnote 24: [https://www.wired.com/story/machine-learning-reproducibility-crisis/](https://www.wired.com/story/machine-learning-reproducibility-crisis/)
Footnote 25: [https://cacm.acm.org/magazines/2020/8/246369-threats-of-a-replication-crisis-in-empirical-computer-science/abstract](https://cacm.acm.org/magazines/2020/8/246369-threats-of-a-replication-crisis-in-empirical-computer-science/abstract)
This chapter is organized as follows: Next, in Section 4.3.2 we briefly review which kinds of actors may benefit from better education in information access system experimentation. Afterwards, in Section 4.3.3, we provide concrete examples of what we can do in terms of concrete resources and initiatives to increase the awareness and knowledge level of the different actors. Finally, in Section 4.3.4, we sketch the main challenges that we may need to be aware of when implementing some of the described educational initiatives.
#### 4.3.2 Actors
As in any process related to the advancement, communication, and sharing of knowledge, knowing how to properly design and carry out correct and robust experimentation concerns people in many different roles. This covers a broad spectrum across academia, industry, and public organizations, ranging from a lecturer in IR and RS introducing evaluation paradigms to undergraduate students, to a data scientist (not necessarily experienced in IR and RS) choosing metrics aligned with business Key Performance Indicators (KPIs) based on textbooks and Wikipedia pages. We have identified a number of actors involved in education on experimentation in information access: students, educators, scholars, practitioners, and decision-makers (see Figure 6 and Table 3). Note that this categorization is neither exhaustive nor exclusive, as actors may have multiple roles.
#### 4.3.3 Resources and Initiatives

##### 4.3.3.1 Resources

As they can be widely used in education, research (experimentation, etc.), and even production systems, resources have great potential to continuously grow the knowledge of future generations of scholars, practitioners, and decision-makers.
**General Teaching Material**. Textbooks may quickly become outdated,26 but they have the advantage of typically reaching a wide audience, whereas slides and tutorials that cover evaluation methodology in more depth might only reach smaller audiences. Often, today's online lectures primarily report on 'mainstream' information retrieval (e.g., offline studies, common metrics), but foster reflection and discussion only to a very limited extent. More comprehensive resources should be made publicly available and shared across universities, summer schools, and meetups.27 Finally, it is advisable for the IR and RS community to actively contribute to the curation of material in sources that are widely used by the general public, and thus also by students, as a starting point to get a basic understanding of a topic (e.g., Wikipedia). Further, contributing to the documentation of software that is widely used in practice, such as Apache Solr,28 Elasticsearch,29 Surprise,30 or Implicit31 (see the report by Ferro et al. [22] for more), can help make non-experts more aware of best practices in IR and RS experimentation.
Footnote 26: In contrast to that, the main textbook in the area of natural language processing has for years only been available as an online draft and is continuously being updated: [https://web.stanford.edu/~jurafsky/slp3/](https://web.stanford.edu/~jurafsky/slp3/)
Footnote 27: For instance, Sebastian Hofstätter released Open-Source Information Retrieval Courses: [https://github.com/sebastian-hofstaetter/teaching](https://github.com/sebastian-hofstaetter/teaching).
Footnote 28: [https://solr.apache.org/](https://solr.apache.org/)
Footnote 29: [https://www.elastic.co/es/elasticsearch/](https://www.elastic.co/es/elasticsearch/)
Footnote 30: [https://surpriselib.com/](https://surpriselib.com/)
Footnote 31: [https://github.com/benfred/implicit](https://github.com/benfred/implicit)
Figure 6: Interaction among actors involved in IR and RS experimental education.
Apart from introducing modern information retrieval systems, **teaching material** should give more attention to a wider set of application fields of IR, including recommender systems, topics related to query and interaction mining and understanding, and online learning to rank [41]. To date, online evaluation is also underrepresented in such resources, although it is essential within the spectrum of evaluation types [41]. Students need to be introduced to concepts such as reproducibility and replicability, and it is essential that they understand what makes a research work impactful in practice. To lower the entry barrier to the field, students should be taught how to use available tools and environments that enable quick prototyping and that have real-world relevance. Teaching fairness, privacy, and ethical aspects, both in designing experiments and in evaluating them, is also important.32
Footnote 32: Cyprus Center for Algorithmic Transparency (CyCAT) project: [https://sites.google.com/view/biasvisualizationactivity/home](https://sites.google.com/view/biasvisualizationactivity/home)
Moreover, participation in **shared tasks (challenges or competitions)** of evaluation campaigns in IR (e.g., TREC,33 CLEF,34 NTCIR,35 or FIRE36) and RecSys (e.g., the yearly ACM RecSys challenges37) should be fostered. To facilitate the participation of students, it is worthwhile to make the timelines of such challenges and competitions compatible with academic (teaching) schedules (e.g., in terms of semesters). Students are then provided with the datasets used in the benchmarks and can learn more about evaluation methodologies (for instance, students from Padua, Leipzig, and Halle participated in Touché [8, 9], hosted at CLEF). At the same time, it is important to critically reflect with students on the limitations and dangers of competitions [11] and to encourage them to go beyond the leaderboard State Of The Art (SOTA) chasing culture, e.g., optimizing only one metric or a limited set of metrics without reflecting on the suitability of these metrics in a given application context [30, 50]. Hence, it is important that a student's (or student group's) grade does not depend on their rank in the leaderboard but to a large degree on their approach, reasoning, and reflection, to counteract SOTA chasing and to help students focus on insights. Inspired by results-blind reviewing in Section 4.4, we might refer to this as 'results-blind grading'.
Footnote 33: [https://trec.nist.gov/](https://trec.nist.gov/)
Footnote 34: [https://www.clef-initiative.eu/](https://www.clef-initiative.eu/)
Footnote 35: [https://research.nii.ac.jp/ntcir/](https://research.nii.ac.jp/ntcir/)
**Test collections38** and **runs/submissions**, typically combined with novel evaluation methodologies, are the main resources resulting from shared tasks and evaluation campaigns. Integrating the resulting test collections into tools such as Hugging Face datasets [34], ir_datasets [38], or EvALL [3] allows for unified access to a wide range of datasets. Furthermore, some **software components** such as Anserini [52], Capreolus [54], PyTerrier [39], and OpenNIR [37] can directly load test collections integrated into ir_datasets, which substantially simplifies data wrangling for scholars of all levels. For instance, PyTerrier allows for defining end-to-end experiments, including significance tests and multiple-test correction, using a declarative pipeline (a minimal example is sketched below) and is already used in research and teaching alike (e.g., in a master course with 240 students [39]). Other resources for performance modeling and prediction in RS, IR, and NLP can be found in the manifesto of a previous Dagstuhl Perspectives Workshop [22]. The broad availability of such resources makes it tremendously easier to replicate and reproduce approaches that were previously submitted to a shared task (challenge). Further, it lowers the entry barrier to experimenting with a wider set of datasets and approaches across domains, as switching between collections becomes easy, and new test collections can be added with limited effort. Still, further promoting the practice of sharing code and documentation,39 and using software submissions with tools such as TIRA [24, 44] in shared tasks, is important.
Footnote 39: [https://www.go-fair.org/fair-principles/](https://www.go-fair.org/fair-principles/)
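As a concrete illustration, the following minimal sketch shows what such a declarative PyTerrier experiment can look like. It is a sketch under stated assumptions (a recent PyTerrier version; the small 'vaswani' test collection, for which PyTerrier provides topics, qrels, and a pre-built index), not a definitive recipe:

```python
# A minimal sketch of a declarative end-to-end experiment in PyTerrier,
# assuming a recent PyTerrier version; the "vaswani" collection ships with
# topics, qrels, and a downloadable pre-built Terrier index.
import pyterrier as pt

if not pt.started():
    pt.init()  # start the underlying Terrier JVM

dataset = pt.get_dataset("vaswani")
index = dataset.get_index()

bm25 = pt.BatchRetrieve(index, wmodel="BM25")
tf_idf = pt.BatchRetrieve(index, wmodel="TF_IDF")

# One declarative call runs both systems over all topics, evaluates them,
# performs per-topic significance tests against the baseline (system 0),
# and applies a multiple-test correction.
results = pt.Experiment(
    [bm25, tf_idf],
    dataset.get_topics(),
    dataset.get_qrels(),
    eval_metrics=["map", "ndcg"],
    names=["BM25", "TF-IDF"],
    baseline=0,
    correction="bonferroni",
)
print(results)
```

Test collections integrated into ir_datasets can be loaded analogously through identifiers of the form `pt.get_dataset("irds:...")`, so switching between collections typically only requires changing the dataset identifier.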
**Combining and integrating the resources** listed above in novel ways has the potential to reduce or even remove barriers between research and education, ultimately enabling Humboldt's ideal of combining teaching and research. Students who participate in shared tasks as part of their curriculum already go in this direction [18]. Continuously maintaining and promoting the integration of test collections and up-to-date best practices for shared tasks into a shared resource might further foster student participation, because it becomes easier to "stand on the shoulders of giants", contributing to the cycle of education, research, and evaluation that is streamlined by ECIR, CLEF, and ESSIR (see Section 3.14).
##### 4.3.3.2 Initiatives
We have identified a range of actors, and we argue that addressing the problems around education requires a number of different initiatives, some of which target one particular type of actor but which more commonly offer benefits for several groups. These initiatives should not be seen in isolation, as our vision is in line with what has been proposed in Section 3.14, which calls for coordinated action around education, evaluation, and research. Here we discuss the instruments we consider essential on that path, in no particular order other than starting with well-established, popular concepts.
**Summer schools** are a key instrument primarily aimed at graduate students. ESSIR40 is a prime example of a summer school focusing on delivering up-to-date educational content in the field of IR; the Recommender Systems Summer School is organized in a similar manner, focusing on RS. Beyond the technical content, summer schools also serve the purpose of community-building, involving different actors, namely students and scholars. Annually organized summer schools appear most effective, as a fixed place in the annual timeline of IR- and RS-related events makes planning easier. This is in line with the _flow-wise_ vision discussed earlier in Section 3.14.
Footnote 40: [https://www.essir.eu](https://www.essir.eu)
Summer schools also provide a good setting to embed (research-focused) **Mentoring** programs and **Doctoral Consortia**. These allow PhD students as well as early-career researchers to learn from experts in the field outside their own institutions. Both instruments are well established in the field. However, even though the established summer schools are repeatedly organized, they often happen on an irregular basis (sometimes yearly, sometimes with longer breaks) and using different formats. This irregularity makes it difficult to integrate them into a PhD student's journey from the outset. Currently, mentoring is often merely a by-product of other initiatives such as summer schools and doctoral consortia. It may be a fruitful path to see mentoring programs as an independent (yet not isolated) initiative. For instance, the "Women in Music Information Retrieval (WiMIR) Mentoring program"41 sets an example of a sustainable initiative that is organized independently of other initiatives and on a yearly basis. A similar format seems a fruitful path to follow in the IR and RS communities, where it is advisable to facilitate exchange across (sub-)disciplines and open up the initiative to the entire community. We note that, similar to WiMIR, mentoring may not only address PhD students but is also well suited for later career stages.
While the IR and RS communities have a tradition of research-topic-driven **Tutorials** as part of the main conferences, **Courses** that address skills and practices beyond research topics (similar to the courses hosted by the CHI conference42) would be an additional fruitful path to follow. Such courses may, for instance, address specific research and evaluation methods on an operational level43 or how to write better research papers for a specific outlet or community44. With regard to support in writing better papers, see also Section 4.5. In Bachelor and Master education, more resources in the form of formal educational materials could be developed. For example, students could benefit from the Black Mirror Writers' Room exercise,45 which helps convey ethical thinking around the use of technology. Participants choose current technologies that they find ethically troubling and speculate about what the next stage of that technology might be. They work collaboratively as if they were science fiction writers, and use a combination of creative writing and ethical speculation to consider which protagonist and plot would be best suited to showcase the potential negative consequences of this technology. They plot episodes, but then also consider what steps they might take now (in regulation, technology design, social change) that might result in _not_ getting to this negative future. More experienced Bachelor and Master students could have assessments similar to paper reviews as part of their curriculum to practice critical thinking.
Footnote 42: [https://chi2023.acm.org/for-authors/courses/accepted-courses/](https://chi2023.acm.org/for-authors/courses/accepted-courses/)
Footnote 43: See, e.g., CHI 2023's C12: Empirical Research Methods for Human-Computer Interaction [https://chi2023.acm.org/for-authors/courses/accepted-courses/#C12](https://chi2023.acm.org/for-authors/courses/accepted-courses/#C12), C18: Statistics for CHI [https://chi2023.acm.org/for-authors/courses/accepted-courses/#C18](https://chi2023.acm.org/for-authors/courses/accepted-courses/#C18)
Footnote 44: See, e.g., CHI 2021’s C02: How to Write CHI Papers [42]
Footnote 45: [https://discourse.mozilla.org/t/the-black-mirror-writers-room/46666](https://discourse.mozilla.org/t/the-black-mirror-writers-room/46666)
Topically relevant **Meetups**, ranging from informal one-off meetings to more regular, thematically structured events, offer a much more flexible and informal way to learn about the field. Unlike summer schools, they bring together the community for an evening and cater to a much more diverse audience involving _all_ actors, with speakers as well as attendees from industry, academia, and beyond. Talks range from specific use cases of IR in industry (e.g., search at Bloomberg), to the latest developments in well-established tools (such as Elasticsearch), to user studies in realistic settings. There is a growing number of information-retrieval-related and recommender-systems-related Meetups,46 many of which have become more accessible recently as they offer virtual or hybrid events. Meetups offer a low entry barrier, in particular for students at all levels of education, and they help participants obtain a more holistic view of the challenges of building and evaluating IR and RS applications. Loosely incorporating Meetups into the curriculum, in particular when there is alignment with teaching content (e.g., **joint seminars**), has proven effective in our own experience. These joint initiatives may go beyond the dissemination of content and also involve practitioners as well as decision-makers in terms of facilitating (or hindering) strategic alliances or setting strategic themes.
Footnote 46: See, e.g., [https://opensourceconnections.com/search-meetups-map/](https://opensourceconnections.com/search-meetups-map/), [https://recommender-systems.com/community/meetups/](https://recommender-systems.com/community/meetups/)
Knowledge transfer through **collaboration between industry and academia** is another instrument, offering mutually beneficial exchange between three key actors: PhD students, academic scholars, and practitioners in industry. By tackling real-world problems (as defined by the industrial partner) using state-of-the-art research approaches in the fields of IR and RS (as provided by the academic partner), knowledge does not just flow in one direction but both ways. In the context of our discussion, this is an opportunity to gain insights into evaluation methods and concerns in industry. There are well-established
frameworks to foster knowledge transfer such as Knowledge Transfer Partnerships47 in the UK with demonstrated impact in IR48 and beyond.
Footnote 47: [http://ktp.innovateuk.org](http://ktp.innovateuk.org)
Footnote 48: [https://www.gov.uk/government/news/media-tracking-firm-wins-knowledge-transfer-partnership-2015](https://www.gov.uk/government/news/media-tracking-firm-wins-knowledge-transfer-partnership-2015)
Knowledge transfer should also be facilitated and supported at a higher level, at conferences and workshops. Here, the RS community is particularly successful in attracting industry contributions to the RecSys conference series. In IR, there is still an observable gap between key academic conferences such as SIGIR and practitioners' events like Haystack (_"the conference for improving search relevance"49_). The annual Search Solutions conference is an example of a successful forum for exchanging ideas between all the different actors.50
Footnote 49: [https://haystackconf.com](https://haystackconf.com)
Footnote 50: [https://www.bcs.org/membership-and-registrations/member-communities/information-retrieval-specialist-group/conferences-and-events/search-solutions/](https://www.bcs.org/membership-and-registrations/member-communities/information-retrieval-specialist-group/conferences-and-events/search-solutions/)
Footnote 51: ACM CHI Conference on Human Factors in Computing Systems
With a view to improving evaluation practices in the long term, the reviewing process and practices play an important role. Hence, **addressing reviewers and editors** is essential. Reviewers are important actors in shaping which papers get published and which do not, and it is essential that good evaluation is acknowledged and understood, while poorly evaluated papers are not let through. Similarly, it is crucial to have reviewers who acknowledge and understand information retrieval and recommendation problems in their broader context (e.g., tasks, users, organizational value, user interface, societal impact) and review papers accordingly. Hence, it is essential to develop educational initiatives concerning evaluation that address current and future reviewers (and editors) accordingly. Promising initiatives include the following:
* Clear reviewer guidelines acknowledging the wide spectrum of evaluation methodology and the holistic view on information retrieval and recommendation problems. For example, CHI51 and the Association for Computational Linguistics (ACL)52 provide detailed descriptions of what needs to be addressed and considered in a review and what steps to take.53 Care has to be taken, though, that such guidelines are kept concise so as not to overwhelm people before they even start reading. Further suggestions on results-blind reviewing and guidance for authors can be found in Sections 4.4 and 4.5, respectively.
Footnote 52: Association for Computational Linguistics
* Next to reviewers, meta-reviewers and editors are further actors to address, which can be done in a similar manner as addressing reviewers. These senior roles can create strong momentum for inducing change, but they also hold a powerful position for preventing it; stronger resistance might be expected on that (hierarchical) level. Only a few conferences and journals (for instance, ACL54) seem to offer clear guidelines for the meta-reviewing activity.
Footnote 53: CHI 2023 Guide to Reviewing Papers [https://chi2023.acm.org/submission-guides/guide-to-reviewing-papers/](https://chi2023.acm.org/submission-guides/guide-to-reviewing-papers/); ACL's How to Review for ACL Rolling Review [https://aclrollingreview.org/reviewertutorial](https://aclrollingreview.org/reviewertutorial); see also Ken Hinckley's comment on what excellent reviewing is [28].
Footnote 54: ACL's Action Editor Guide to Meta-Reviewing [https://aclrollingreview.org/aetutorial](https://aclrollingreview.org/aetutorial)
* Similar to courses on research methods or paper-writing skills, it is advisable to provide courses that specifically address how to peer review.55
Footnote 55: [https://chi2023.acm.org/for-authors/courses/accepted-courses/#C16](https://chi2023.acm.org/for-authors/courses/accepted-courses/#C16)
* Mentoring programs for reviewing could be established, similar to those established, for instance, in Psychology56. The MIR community57 has a New-to-ISMIR mentoring program58 that mainly addresses paper-writing for people who are new to the community but will likely also have an impact on reviewing practices. Similar programs could be established in the IR and RS communities with a particular focus on evaluation aspects. It is worthwhile to note that a recent study (in ML and AI) indicates that novice reviewers provide valuable contributions to the reviewing process [47].
Footnote 56: [https://www.apa.org/pubs/journals/cpp/reviewer-mentoring-program](https://www.apa.org/pubs/journals/cpp/reviewer-mentoring-program)
* Summer schools mainly address (advanced) students and are also a good opportunity to include initiatives addressing reviewing.
**General Public Dissemination** is another important aspect that needs to be addressed, as communication about our field in lay language is very important. Editing and curating the relevant Wikipedia pages on evaluation measures for information retrieval59 and recommender systems60 increases the potential of reaching a wider audience, including potential future students. Other actions concern publishing papers in magazines with a wider and more diverse audience, such as _Communications of the ACM_,61 _ACM Inroads_,62 _ACM XRDS: Crossroads_,63 or _IEEE Spectrum_.64 One of the final goals is to make IR and RS more popular, both to attract students to the field and to grow a healthy ecosystem of professionals at various levels.
Footnote 57: [https://www.ismir.net](https://www.ismir.net)
Footnote 58: [https://ismir2022.ismir.net/diversity/mentoring](https://ismir2022.ismir.net/diversity/mentoring)
Footnote 59: [https://en.wikipedia.org/wiki/Evaluation_measures_](https://en.wikipedia.org/wiki/Evaluation_measures_)(information_retrieval) [Accessed: 20-Jan-2023]
Footnote 60: [https://en.wikipedia.org/wiki/Recommender_systems#Evaluation](https://en.wikipedia.org/wiki/Recommender_systems#Evaluation) [Accessed: 20-Jan-2023]
Footnote 61: [https://cacm.acm.org/](https://cacm.acm.org/)
Footnote 62: [https://inroads.acm.org/](https://inroads.acm.org/)
Footnote 63: [https://xrds.acm.org/](https://xrds.acm.org/)
Footnote 64: [https://spectrum.ieee.org/](https://spectrum.ieee.org/)
We have described actors, resources, and initiatives that we think are worth considering in moving forward as a community towards creating more awareness, as well as sharing and transferring knowledge on experimental evaluation for IR and RS. Table 3 summarizes which actors are involved, either as primary or secondary actors, in generating and consuming these resources and initiatives. This is not intended as a definitive list but rather represents the primary and secondary actors involved.
#### 4.3.4 Challenges & Outlook
Given the importance of reliable and ecologically valid results, one may ask which obstacles stand in the way of developing better education for experimentation and evaluation of information access systems. We see different potential barriers (and possibilities) for the different actors: students, educators, scholars, practitioners, and decision-makers. We will consider each actor in turn.
**Scholars.** As has also been identified in a previous Dagstuhl seminar [22], it is significantly harder to test the importance of assumptions in user-facing aspects of a system, such as the presentation of results or the task model, as it is prohibitively expensive to simulate arbitrarily many versions of a system and put them before users. User studies are therefore also at higher risk of yielding hypotheses that cannot be clearly rejected (non-significant results), leading to fear of criticism and rejection from paper reviewers. There are some proponents of Equivalence Testing [33]65 and Bayesian Analysis [49] in Psychology, which may also be useful in Computer Science.
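To illustrate what this could look like in practice, the following sketch applies an equivalence test (the two one-sided tests, TOST, procedure) to paired per-topic scores of two systems using statsmodels. The synthetic data and the equivalence margin of ±0.02 nDCG are purely illustrative assumptions, not community standards:

```python
# A sketch of an equivalence test (TOST) on paired per-topic nDCG scores.
# The data is synthetic, and the +/-0.02 equivalence margin is an
# assumption chosen for illustration only.
import numpy as np
from statsmodels.stats.weightstats import ttost_paired

rng = np.random.default_rng(42)
ndcg_a = rng.uniform(0.2, 0.8, size=50)            # system A on 50 topics
ndcg_b = ndcg_a + rng.normal(0.0, 0.005, size=50)  # system B, nearly identical

# TOST: a small p-value supports the claim that the mean per-topic
# difference lies within [-0.02, +0.02], i.e., that the two systems are
# practically equivalent with respect to nDCG.
pvalue, lower_test, upper_test = ttost_paired(ndcg_a, ndcg_b, low=-0.02, upp=0.02)
print(f"TOST p-value: {pvalue:.4f}")
```

Unlike a conventional significance test, such a test allows reporting evidence _for_ the absence of a meaningful difference, rather than merely failing to reject a null hypothesis.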
As LLMs are becoming a commodity, policies to educate and guide authors and reviewers in how different AI tools can (or cannot) be used for writing assistance should be discussed and defined.66 These guidelines may inspire educators on how to characterize the role of these tools in learning & teaching environments, including assessment design and plagiarism policies67.
Footnote 66: For instance, see the ACL 2023 Policy on AI Writing Assistance: [https://2023.aclweb.org/blog/ACL-2023-policy/](https://2023.aclweb.org/blog/ACL-2023-policy/).
Footnote 67: [https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/](https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/)
Footnote 68: [https://harzing.com/resources/publish-or-perish](https://harzing.com/resources/publish-or-perish)
In addition, the current 'publish or perish' culture incentivizes short-term and incremental findings68 over more holistic thinking and thoughtful comparative analysis. The problem of 'SOTA-chasing' has also been discussed in other research areas, e.g., in NLP [11]. Academic incentive systems, both within institutions and at conferences and journals, change slowly, but they do evolve.
Footnote 68: Further proposals for methodological review are also under discussion in Psychology, but will likely take longer to reach Computer Science: [https://www.nature.com/articles/d41586-022-04504-8](https://www.nature.com/articles/d41586-022-04504-8)
**Students and Educators.** Thankfully, institutions are increasingly recognizing the need for reviewing studies before they are performed, for example through ethics reviews and data management plans69. In Bachelor and Master education in particular, this means that instructors may require training in writing such documents, and institutions need to appreciate and be equipped for timely review. Therefore, the planning of education would benefit from allowing sufficient time for submission, review, and revision.
In that context, teaching evaluation methodologies may require some colleagues to retrain, in which case some resistance can be expected. Improving access to training initiatives
| _Actors:_ | Students | Educators | Scholars | Practitioners | Decision-makers |
|---|---|---|---|---|---|
| _Resources_ | | | | | |
| Teaching materials | ✓ | ✓ | | | (✓) |
| Shared tasks/challenges/competitions | ✓ | ✓ | ✓ | ✓ | |
| Test collections & runs/submissions | ✓ | ✓ | ✓ | | |
| Software (components) | ✓ | ✓ | ✓ | ✓ | |
| _Initiatives_ | | | | | |
| Mentoring: summer schools and doctoral consortia | ✓ | | ✓ | (✓) | |
| Tutorials and courses | ✓ | | ✓ | ✓ | |
| Meetups | (✓) | (✓) | ✓ | ✓ | ✓ |
| Joint seminars | ✓ | ✓ | | ✓ | (✓) |
| Collaboration between industry and academia | ✓ | | ✓ | ✓ | |
| Reviewing | (✓) | | ✓ | | |
| General public dissemination | (✓) | (✓) | ✓ | ✓ | ✓ |

Table 3: Actors generating or consuming resources and initiatives related to education in evaluation for IR and RS. ✓ and (✓) indicate primary and secondary actors, respectively.
and materials at post-graduate level can support colleagues who are willing but need additional support. Various forms of informal or even organized exchange between teachers may be a helpful instrument to grow the competency of educators.
Furthermore, certain evaluation concepts and methodologies cannot be taught before certain topics are covered in the curriculum. A student in recommender systems may need to understand the difference between a classification and a regression problem, or the difference between precision and recall (for a given task and user, it may be more important to retrieve accurate results or to retrieve a wider range of results), before they can start thinking about the social implications.
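For reference, the standard set-based definitions underlying this trade-off can be stated compactly (standard textbook notation, given a set of retrieved and a set of relevant documents):

$$\mathrm{Precision} = \frac{|\,\mathrm{relevant} \cap \mathrm{retrieved}\,|}{|\,\mathrm{retrieved}\,|}, \qquad \mathrm{Recall} = \frac{|\,\mathrm{relevant} \cap \mathrm{retrieved}\,|}{|\,\mathrm{relevant}\,|}$$

Retrieving more results can only maintain or increase recall but typically lowers precision, which is exactly the task- and user-dependent trade-off students need to appreciate.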
Moreover, some students are prone to satisfice, thinking that "good enough is good enough": there are many methodologies available for evaluation, and the options are difficult to digest in a cost-effective way at entry level, highlighting the need for tutorials and low-entry-level materials as indicated earlier in Section 4.3.3. Embedding participation in shared tasks and competitions (e.g., CLEF labs or TREC tracks), which provide a common framework for robust experimentation, may help overcome this challenge, although synchronizing the semester with participation timelines may not be straightforward.
Finally, there is a growing number of experiments in developing multi-disciplinary curricula, with an appreciation of what different disciplines bring to such a program. Successful initiatives include group projects consisting of students in both Social Sciences and Humanities (SSH) and Computer Science. In fact, one of the underlying principles of the continuously growing _iSchools consortium_70 is to foster such interdisciplinarity. The challenge here is not only the design of the content but also accreditation and support from the strategic level of institutions.
Footnote 70: [https://www.ischools.org](https://www.ischools.org)
**Practitioners.** Maintaining the resources used to translate knowledge about models and methodologies for evaluation is challenging given the fast pace of the field. This can make it hard to compare results across studies and to keep up with the state of the art of best practices in experimentation. In this regard, lowering the entry barrier to participating in initiatives such as shared tasks/challenges [21, 27] and maintaining documentation of resources commonly used by non-experts are increasingly helpful.
Another issue is the homogeneity of actors. Often there is no active involvement of actors outside a narrow academic Computer Science sphere, who otherwise might have pointed out assumptions or limitations early on. It can be challenging to set up productive collaborations between industry and academia, as well as across disciplines. Typical issues include, for instance, common terminology being used in different ways, or different levels of knowledge of key performance indicators. Co-design in labs has set a good precedent in this regard. Examples are ICAI in the Netherlands71, its extension in the new 10-year ROBUST initiative72, and the Australian Centre of Excellence for Automated Decision-Making and Society (ADM+S)73, where PhD students in multiple disciplines (Social Sciences & Humanities, Computer Science, Law, etc.) are jointly trained in shared projects.
Footnote 72: [https://icai.ai/ltp-robust/](https://icai.ai/ltp-robust/)
Footnote 73: [https://www.admscentre.org.au/](https://www.admscentre.org.au/)
Research Advisory Boards are another effective instrument to draw in practitioners but here the challenge is to make the most of the little time that is usually available for the exchange of ideas between practitioners and academics.
**Decision-makers.** The output of evaluation and experimentation in IR and RS may be used to inform decision-making on the societal level. Consequently, if the evaluation is poorly done, or the results incorrectly generalized, the implications may also be poor decision-making with far-reaching impacts on society, e.g. [31, Ch. 10].
The ability of the other actors to support education on evaluation is constrained and shaped by decision-makers. Policy-makers in public organizations and program managers or deans in academia play a crucial role in curriculum design. Scholars and educators will have to communicate the importance of experimental evaluation in information access effectively in order to inform the decision-making process. The challenge here is to initiate change in the first place and then to drive it. Any new initiative will necessarily involve not just a single decision-maker but multiple stakeholders and committees, making this a more effortful but possibly also more impactful process than many of the other initiatives we have identified.
Additionally, decision-makers within academic institutions, namely in libraries and career development centres, can play an important role in developing the competency of students and educators. Making best practices in evaluation available as a commodity through these channels will require making resources more accessible for non-experts in IR and RS.
#### 4.3.5 Concluding Remarks
Education and dissemination represent key pillars for overcoming methodological challenges in Information Retrieval and Recommender Systems. What we have sketched here can be interpreted as a general roadmap to create more awareness among and beyond the IR and RS communities. We hope that these recommendations, and the identified challenges to consider, will help to support education for better evaluation in the different stages of the lifelong learning journey. We acknowledge that facets such as incentive mechanisms and processes in institutions are often slow-moving. The vision proposed in this section is therefore also aimed at a longer-term (5-10 years) perspective.
## References
* [1]_Reproducibility of Data-Oriented Experiments in e-Science (Dagstuhl Seminar 16041)_, volume 6, 2016.
* [2] Ahmed Allam, Peter Johannes Schulz, and Kent Nakamoto. The impact of search engine selection and sorting criteria on vaccination beliefs and attitudes: Two experiments manipulating google output. _Journal of Medical Internet Research_, 16(4):e100, 2014.
* [3] Enrique Amigó, Jorge Carrillo de Albornoz, Mario Almagro-Cádiz, Julio Gonzalo, Javier Rodríguez-Vidal, and Felisa Verdejo. EvALL: Open access evaluation for information access systems. In Noriko Kando, Tetsuya Sakai, Hideo Joho, Hang Li, Arjen P. de Vries, and Ryen W. White, editors, _Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Shinjuku, Tokyo, Japan, August 7-11, 2017_, pages 1301-1304. ACM, 2017.
* [4] Timothy G. Armstrong, Alistair Moffat, William Webber, and Justin Zobel. Improvements that don't add up: ad-hoc retrieval results since 1998. In David Wai-Lok Cheung, Il-Yeol Song, Wesley W. Chu, Xiaohua Hu, and Jimmy Lin, editors, _Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM 2009, Hong Kong, China, November 2-6, 2009_, pages 601-610. ACM, 2009.
* [5] Ahmed Hassan Awadallah, Rosie Jones, and Kristina Lisa Klinkner. Beyond DCG: user behavior as a predictor of a successful search. In Brian D. Davison, Torsten Suel, Nick Craswell, and Bing Liu, editors, _Proceedings of the Third International Conference on Web
Search and Web Data Mining, WSDM 2010, New York, NY, USA, February 4-6, 2010_, pages 221-230. ACM, 2010.
* [6] Christine Bauer and Eva Zangerle. Leveraging multi-method evaluation for multi-stakeholder settings. In Oren Sar Shalom, Dietmar Jannach, and Ido Guy, editors, _Proceedings of the 1st Workshop on the Impact of Recommender Systems co-located with 13th ACM Conference on Recommender Systems, ImpactRS@RecSys 2019), Copenhagen, Denmark, September 19, 2019_, volume 2462 of _CEUR Workshop Proceedings_. CEUR-WS.org, 2019.
* [7] Joeran Beel and Stefan Langer. A comparison of offline evaluations, online evaluations, and user studies in the context of research-paper recommender systems. In _Research and Advanced Technology for Digital Libraries - 19th International Conference on Theory and Practice of Digital Libraries, TPDL 2015, Poznan, Poland, September 14-18, 2015. Proceedings_, volume 9316 of _Lecture Notes in Computer Science_, pages 153-168. Springer, 2015.
* [8] Alexander Bondarenko et al. Overview of Touché 2022: Argument retrieval. In _Experimental IR Meets Multilinguality, Multimodality, and Interaction - 13th International Conference of the CLEF Association, CLEF 2022, Bologna, Italy, September 5-8, 2022, Proceedings_, volume 13390 of _Lecture Notes in Computer Science_, pages 311-336. Springer, 2022.
* [9] Alexander Bondarenko et al. Overview of Touché 2021: Argument retrieval. In _Experimental IR Meets Multilinguality, Multimodality, and Interaction - 12th International Conference of the CLEF Association, CLEF 2021, Virtual Event, September 21-24, 2021, Proceedings_, volume 12880 of _Lecture Notes in Computer Science_, pages 450-467. Springer, 2021.
* [10] Ye Chen, Ke Zhou, Yiqun Liu, Min Zhang, and Shaoping Ma. Meta-evaluation of online and offline web search evaluation metrics. In Noriko Kando, Tetsuya Sakai, Hideo Joho, Hang Li, Arjen P. de Vries, and Ryen W. White, editors, _Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Shinjuku, Tokyo, Japan, August 7-11, 2017_, pages 15-24. ACM, 2017.
* [11] Kenneth Ward Church and Valia Kordoni. Emerging trends: Sota-chasing. _Nat. Lang. Eng._, 28(2):249-269, 2022.
* [12] Andy Cockburn, Pierre Dragicevic, Lonni Besancon, and Carl Gutwin. Threats of a replication crisis in empirical computer science. _Commun. ACM_, 63(8):70-79, 2020.
* [13] Maurizio Ferrari Dacrema, Paolo Cremonesi, and Dietmar Jannach. Are we really making much progress? A worrying analysis of recent neural recommendation approaches. In Toine Bogers, Alan Said, Peter Brusilovsky, and Domonkos Tikk, editors, _Proceedings of the 13th ACM Conference on Recommender Systems, RecSys 2019, Copenhagen, Denmark, September 16-20, 2019_, pages 101-109. ACM, 2019.
* [14] Tim Draws, Nirmal Roy, Oana Inel, Alisa Rieger, Rishav Hada, Mehmet Orcun Yalcin, Benjamin Timmermans, and Nava Tintarev. Viewpoint diversity in search results. In _ECIR_, 2023.
* [15] Tim Draws, Nava Tintarev, and Ujwal Gadiraju. Assessing viewpoint diversity in search results using ranking fairness metrics. _SIGKDD Explor._, 23(1):50-58, 2021.
* [16] Tim Draws, Nava Tintarev, Ujwal Gadiraju, Alessandro Bozzon, and Benjamin Timmermans. This is not what we ordered: Exploring why biased search result rankings affect user attitudes on debated topics. In Fernando Diaz, Chirag Shah, Torsten Suel, Pablo Castells, Rosie Jones, and Tetsuya Sakai, editors, _SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021_, pages 295-305. ACM, 2021.
* [17] Michael D. Ekstrand, Michael Ludwig, Joseph A. Konstan, and John Riedl. Rethinking the recommender research ecosystem: reproducibility, openness, and lenskit. In Bamshad Mobasher, Robin D. Burke, Dietmar Jannach, and Gediminas Adomavicius, editors, _Proceedings of the 2011 ACM Conference on Recommender Systems, RecSys 2011, Chicago, IL, USA, October 23-27, 2011_, pages 133-140. ACM, 2011.
* [18] Theresa Elstner, Frank Loebe, Yamen Ajjour, Christopher Akiki, Alexander Bondarenko, Maik Fröbe, Lukas Gienapp, Nikolay Kolyada, Janis Mohr, Stephan Sandfuchs, Matti Wiegmann, Jörg Frochte, Nicola Ferro, Sven Hofmann, Benno Stein, Matthias Hagen, and Martin Potthast. Shared tasks as tutorials: A methodical approach. In _37th AAAI Conference on Artificial Intelligence (AAAI 2023)_. AAAI, 2023.
* [19] Robert Epstein and Ronald E. Robertson. The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections. _Proceedings of the National Academy of Sciences_, 112(33):E4512-E4521, 2015.
* [20] Robert Epstein, Ronald E. Robertson, David Lazer, and Christo Wilson. Suppressing the search engine manipulation effect (SEME). _Proc. ACM Hum. Comput. Interact._, 1(CSCW):42:1-42:22, 2017.
* [21] Nicola Ferro. What happened in CLEF... for a while? In _Experimental IR Meets Multilinguality, Multimodality, and Interaction - 10th International Conference of the CLEF Association, CLEF 2019, Lugano, Switzerland, September 9-12, 2019, Proceedings_, volume 11696 of _Lecture Notes in Computer Science_, pages 3-45. Springer, 2019.
* [22] Nicola Ferro, Norbert Fuhr, Gregory Grefenstette, Joseph A. Konstan, Pablo Castells, Elizabeth M. Daly, Thierry Declerck, Michael D. Ekstrand, Werner Geyer, Julio Gonzalo, Tsvi Kuflik, Krister Linden, Bernardo Magnini, Jian-Yun Nie, Raffaele Perego, Bracha Shapira, Ian Soboroff, Nava Tintarev, Karin Verspoor, Martijn C. Willemsen, and Justin Zobel. From evaluating to forecasting performance: How to turn information retrieval, natural language processing and recommender systems into predictive sciences (dagstuhl perspectives workshop 17442). _Dagstuhl Manifestos_, 7(1):96-139, 2018.
* 25, 2022_, pages 280-288. ACM, 2022.
* [24] Maik Frobe, Matti Wiegmann, Nikolay Kolyada, Bastian Grahm, Theresa Elstner, Frank Loebe, Matthias Hagen, Benno Stein, and Martin Potthast. Continuous Integration for Reproducible Shared Tasks with TIRA.io. In _Advances in Information Retrieval. 45th European Conference on IR Research (ECIR 2023)_, Lecture Notes in Computer Science, Berlin Heidelberg New York, 2023. Springer.
* [25] Carlos Alberto Gomez-Uribe and Neil Hunt. The netflix recommender system: Algorithms, business value, and innovation. _ACM Trans. Manag. Inf. Syst._, 6(4):13:1-13:19, 2016.
* [26] Odd Erik Gundersen and Sigbjorn Kjensmo. State of the art: Reproducibility in artificial intelligence. In Sheila A. McIlraith and Kilian Q. Weinberger, editors, _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018_, pages 1644-1651. AAAI Press, 2018.
* [27] D. K. Harman and E. M. Voorhees, editors. _TREC. Experiment and Evaluation in Information Retrieval_. MIT Press, Cambridge (MA), USA, 2005.
* [28] Ken Hinckley. So you're a program committee member now: On excellence in reviews and meta-reviews and championing submitted work that has merit, 2016.
* [29] Dietmar Jannach and Gediminas Adomavicius. Price and profit awareness in recommender systems. In _Proceedings of the ACM RecSys 2017 Workshop on Value-Aware and Multi-Stakeholder Recommendation_, Como, Italy, 2017.
* [30] Dietmar Jannach and Christine Bauer. Escaping the mcnamara fallacy: Towards more impactful recommender systems research. _AI Mag._, 41(4):79-95, 2020.
* [31] Daniel Kahneman. _Thinking, fast and slow_. Penguin, 2011.
* [32] Joseph A. Konstan and Gediminas Adomavicius. Toward identification and adoption of best practices in algorithmic recommender systems research. In Alejandro Bellogin, Pablo Castells, Alan Said, and Domonkos Tikk, editors, _Proceedings of the International Workshop on Reproducibility and Replication in Recommender Systems Evaluation, RepSys 2013, Hong Kong, China, October 12, 2013_, pages 23-28. ACM, 2013.
* [33] Daniel Lakens. Equivalence tests: A practical primer for t tests, correlations, and meta-analyses. _Social psychological and personality science_, 8(4):355-362, 2017.
* [34] Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Sasko, Gunjan Chhablani, Bhavittya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clement Delangue, Theo Matussiere, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, Francois Lagunas, Alexander M. Rush, and Thomas Wolf. Datasets: A community library for natural language processing. In Heike Adel and Shuming Shi, editors, _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2021, Online and Punta Cana, Dominican Republic, 7-11 November, 2021_, pages 175-184. Association for Computational Linguistics, 2021.
* [35] Jimmy Lin, Daniel Campos, Nick Craswell, Bhaskar Mitra, and Emine Yilmaz. Significant improvements over the state of the art? A case study of the MS MARCO document ranking leaderboard. In Fernando Diaz, Chirag Shah, Torsten Suel, Pablo Castells, Rosie Jones, and Tetsuya Sakai, editors, _SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021_, pages 2283-2287. ACM, 2021.
* [36] Marianne Lykke, Ann Bygholm, Louise Bak Sondergaard, and Katriina Bystrom. The role of historical and contextual knowledge in enterprise search. _J. Documentation_, 78(5):1053-1074, 2022.
* [37] Sean MacAvaney. Opennir: A complete neural ad-hoc ranking pipeline. In James Caverlee, Xia (Ben) Hu, Mounia Lalmas, and Wei Wang, editors, _WSDM '20: The Thirteenth ACM International Conference on Web Search and Data Mining, Houston, TX, USA, February 3-7, 2020_, pages 845-848. ACM, 2020.
* [38] Sean MacAvaney, Andrew Yates, Sergey Feldman, Doug Downey, Arman Cohan, and Nazli Goharian. Simplified data wrangling with ir_datasets. In Fernando Diaz, Chirag Shah, Torsten Suel, Pablo Castells, Rosie Jones, and Tetsuya Sakai, editors, _SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021_, pages 2429-2436. ACM, 2021.
- 5, 2021_, pages 4526-4533. ACM, 2021.
* [40] Jiaxin Mao, Yiqun Liu, Ke Zhou, Jian-Yun Nie, Jingtao Song, Min Zhang, Shaoping Ma, Jiashen Sun, and Hengliang Luo. When does relevance mean usefulness and user satisfaction in web search? In Raffaele Perego, Fabrizio Sebastiani, Javed A. Aslam, Ian Ruthven, and Justin Zobel, editors, _Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, SIGIR 2016, Pisa, Italy, July 17-21, 2016_, pages 463-472. ACM, 2016.
* [41] Ilya Markov and Maarten de Rijke. What should we teach in information retrieval? _SIGIR Forum_, 52(2):19-39, 2018.
* [42] Lennart E. Nacke. How to write CHI papers, online edition. In Yoshifumi Kitamura, Aaron Quigley, Katherine Isbister, and Takeo Igarashi, editors, _CHI '21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama Japan, May 8-13, 2021, Extended Abstracts_, pages 126:1-126:3. ACM, 2021.
* [43] Frances A. Pogacar, Amira Ghenai, Mark D. Smucker, and Charles L. A. Clarke. The positive and negative influence of search results on people's decisions about the efficacy of medical treatments. In Jaap Kamps, Evangelos Kanoulas, Maarten de Rijke, Hui Fang, and Emine Yilmaz, editors, _Proceedings of the ACM SIGIR International Conference on Theory of Information Retrieval, ICTIR 2017, Amsterdam, The Netherlands, October 1-4, 2017_, pages 209-216. ACM, 2017.
* [44] Martin Potthast, Tim Gollub, Matti Wiegmann, and Benno Stein. TIRA integrated research architecture. In Nicola Ferro and Carol Peters, editors, _Information Retrieval Evaluation in a Changing World - Lessons Learned from 20 Years of CLEF_, volume 41 of _The Information Retrieval Series_, pages 123-160. Springer, 2019.
* [45] Tetsuya Sakai. _Laboratory Experiments in Information Retrieval: Sample Sizes, Effect Sizes, and Statistical Power_, volume 40 of _The Information Retrieval Series_. Springer, 2018.
* [46] Mark Sanderson, Monica Lestari Paramita, Paul D. Clough, and Evangelos Kanoulas. Do user preferences and evaluation measures line up? In Fabio Crestani, Stephane Marchand-Maillet, Hsin-Hsi Chen, Efthimis N. Efthimidis, and Jacques Savoy, editors, _Proceeding of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2010, Geneva, Switzerland, July 19-23, 2010_, pages 555-562. ACM, 2010.
* [47] Ivan Stelmakh, Nihar B. Shah, Aarti Singh, and Hal Daume III. Prior and prejudice: The novice reviewers' bias against resubmissions in conference peer review. _CoRR_, abs/2011.14646, 2020.
* [48] Zhu Sun, Di Yu, Hui Fang, Jie Yang, Xinghua Qu, Jie Zhang, and Cong Geng. Are we evaluating rigorously? benchmarking recommendation for reproducible evaluation and fair comparison. In Rodrygo L. T. Santos, Leandro Balby Marinho, Elizabeth M. Daly, Li Chen, Kim Falk, Noam Koenigstein, and Edleno Silva de Moura, editors, _RecSys 2020: Fourteenth ACM Conference on Recommender Systems, Virtual Event, Brazil, September 22-26, 2020_, pages 23-32. ACM, 2020.
* [49] Johnny van Doorn, Don van den Bergh, Udo Bohm, Fabian Dablander, Koen Derks, Tim Draws, Alexander Etz, Nathan J Evans, Quentin F Gronau, Julia M Haaf, et al. The jasp guidelines for conducting and reporting a bayesian analysis. _Psychonomic Bulletin & Review_, 28(3):813-826, 2021.
* [50] Ellen M. Voorhees. Coopetition in IR research. In Jimmy X. Huang, Yi Chang, Xueqi Cheng, Jaap Kamps, Vanessa Murdock, Ji-Rong Wen, and Yiqun Liu, editors, _Proceedings
of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020_, page 3. ACM, 2020.
* [51] Ryen W. White. _Interactions with Search Systems_. Cambridge University Press, 2016.
* [52] Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Enabling the use of lucene for information retrieval research. In Noriko Kando, Tetsuya Sakai, Hideo Joho, Hang Li, Arjen P. de Vries, and Ryen W. White, editors, _Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Shinjuku, Tokyo, Japan, August 7-11, 2017_, pages 1253-1256. ACM, 2017.
* [53] Wei Yang, Kuang Lu, Peilin Yang, and Jimmy Lin. Critically examining the "neural hype": Weak baselines and the additivity of effectiveness gains from neural ranking models. In Benjamin Piwowarski, Max Chevalier, Eric Gaussier, Yoelle Maarek, Jian-Yun Nie, and Falk Scholer, editors, _Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019, Paris, France, July 21-25, 2019_, pages 1129-1132. ACM, 2019.
* [54] Andrew Yates, Siddhant Arora, Xinyu Zhang, Wei Yang, Kevin Martin Jose, and Jimmy Lin. Capreolus: A toolkit for end-to-end neural ad hoc retrieval. In James Caverlee, Xia (Ben) Hu, Mounia Lalmas, and Wei Wang, editors, _WSDM '20: The Thirteenth ACM International Conference on Web Search and Data Mining, Houston, TX, USA, February 3-7, 2020_, pages 861-864. ACM, 2020.
* [55] Eva Zangerle and Christine Bauer. Evaluating recommender systems: Survey and framework. _ACM Comput. Surv._, 55(8):170:1-170:38, 2023.
* [56] Fan Zhang, Jiaxin Mao, Yiqun Liu, Xiaohui Xie, Weizhi Ma, Min Zhang, and Shaoping Ma. Models versus satisfaction: Towards a better understanding of evaluation metrics. In Jimmy X. Huang, Yi Chang, Xueqi Cheng, Jaap Kamps, Vanessa Murdock, Ji-Rong Wen, and Yiqun Liu, editors, _Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020_, pages 379-388. ACM, 2020.
* [57] J. Zobel. When measurement misleads: The limits of batch assessment of retrieval systems. _SIGIR Forum_, 56(1), 2022.
### Results-blind Reviewing
_Joeran Beel (University of Siegen, DE, [email protected])_
_Timo Breuer (Technische Hochschule Koln, DE, [email protected])_
_Anita Crescenzi (University of North Carolina at Chapel Hill, US, [email protected])_
_Norbert Fuhr (University of Duisburg-Essen, DE, [email protected])_
_Meijie Li (University of Duisburg-Essen, DE, [email protected])_
#### 4.4.1 Motivation
Campbell and Stanley defined experiments as "that portion of research in which variables are manipulated and their effects upon other variables observed" (p. 1 in [1]). Scientific experiments are used in confirmatory research to test a priori hypotheses as well as in exploratory research to gain new insights and to help generate hypotheses for future research [7]. In information access research, the ultimate goal is to gain insights into cause and effect. Unfortunately, many reviewers of information access experiments place undue emphasis on performance, rejecting papers that contain insights if they fail to show improvements
in performance. The focus on performance numbers not only leads to publication bias; it also puts additional pressure on early-career researchers who must publish or perish and may thus be tempted to cheat if their proposed method does not yield the desired results. Moreover, reviewers pay little attention to the experimental methodology and analysis [4] when the results are impressive. Focusing primarily on performance (and in particular aggregated performance) can lead to a neglect of insights; gaining insights is critical to move the information access field forward and essential for making performance predictions [2].
We think that one important step toward changing this situation is to alter the review process such that there is more emphasis on the theoretical background, the hypotheses, the methodological plan, and the analysis plan of an experiment, while improvement or decline in performance should play less of a role when deciding about the quality of a paper. It is hoped that this will lead to a higher scientific quality of publications, more insights, and improved reproducibility (as there is less incentive for beautifying results). As Woznyj et al. [8] note in their survey of editorial board members, attitudes towards results-blind reviewing are overall positive, and the advantages for the scientific community outweigh the concerns.
In order to move the review focus away from performance improvement, appealing to reviewers alone will not be sufficient. A more drastic measure is the change of the review process such that reviewers decide about acceptance vs. rejection of a paper without knowing the outcome of the experiments described.
#### 4.4.2 Current Situation and Gaps
As part of IR or RS conferences, the peer-reviewing process usually involves the review of the full paper using double-blinded reviewing, i.e., both authors and reviewers remain anonymous to each other. Before submission, authors are informed about possible reviewing criteria and areas of interest in the Call for Papers (CfP) that can be found on the conference website. Upon submission, the paper should contain all of the relevant information regarding the motivation, the research methodology or study design, the experimental results, and finally, a discussion that puts the results into context.
For each submission, usually, a group of three reviewers is assigned. All of them should align their reviews to those criteria mentioned in the CfP and, depending on the submission system, express their opinion in written text or by pre-defined answers regarding particular aspects. In addition, they can assign (overall) scores. The final decision is based on a discussion among reviewers, which is governed by an additional meta-reviewer, and consolidation with the program chairs.
Even though this traditional review model has been established for several years, it can have negative impacts on the stakeholders and the scientific community as a whole. Under the assumption that reviewers overemphasize positive outcomes, authors might be inclined to "search for" performance gains in system-oriented experiments at the cost of scientific rigor and reasoning. Even worse, there is the danger of fraud or of selectively reporting positive outcomes, given the need to publish in order to advance an academic career.
Alternatives to the traditional review process have emerged that involve an initial round of peer review of either a manuscript with the results blinded or a study protocol, and a subsequent round of peer review of the full paper including the results. Table 4 shows the traditional peer review model alongside our recommended results-blind reviewing and two other variants, each of which we describe below. The Center for Open Science notes that, as of January
2023, over 300 journals have adopted one or more variants of this approach.74 In addition, several preliminary analyses of their implementation have been conducted and published (e.g., [3, 5, 8]).
Footnote 74: [https://www.cos.io/initiatives/registered-reports](https://www.cos.io/initiatives/registered-reports)
A results-blind review involves an in-principle acceptance or rejection decision based on peer review of the paper _with the results blinded_ from the reviewers (see the third column of Table 4). The reviewers can put more emphasis on judging the merits of the general motivation, the study design, and the kinds of scientific insights that could be gained from the experiments. If the paper is accepted in principle, it proceeds to a second stage of peer review of the _paper with the results_ included for reviewers. The final decision about acceptance is based on the second stage of the review, in which the reviewers have access to the experimental outcomes.
Other peer-reviewing models have emerged in recent years as part of the growing awareness of preregistration75,76 and its adoption [6]. One such approach to peer review involves the review and in-principle acceptance of the study protocol, including the methods and analysis plan, before data is collected or analysis begins. Variants of this approach include preregistered research articles and registered reports for confirmatory research77. Although preregistered articles and registered reports are typically used for confirmatory research, there are variants for exploratory research, and some journals use a separate approach for exploratory research projects that do not have a confirmatory component (e.g., an Exploratory Report article type in the journal _Cortex_).
Footnote 75: [https://www.cos.io/initiatives/preregistry/](https://www.cos.io/initiatives/preregistry/)
Preregistered research articles involve researchers submitting a research study protocol including the rationale and hypotheses, methodology including analysis plan, and materials
\begin{table}
\begin{tabular}{l l l l l} \hline
 & Traditional & Results-Blind & Preregistered & Registered Report \\ \hline
protocol preregistration & optional & optional & yes (in journal repository) & no \\
protocol publication (separate from research article) & no & no & no & yes \\
peer review of research protocol before data collection & no & no & yes & yes \\
peer review of paper with blinded results & no & yes & no & no \\
peer review of full paper & yes & yes (if in-principle acceptance) & yes, with focus on results (if in-principle acceptance) & yes (if in-principle acceptance) \\
Example publication(s) & ACM SIGIR, ACM CHIIR & BMC Psychology & PLOS Biology & PLOS ONE \\ \hline
\end{tabular}
\end{table}
Table 4: Comparison of traditional and emerging approaches to peer review: results-blind, preregistered reports, and registered reports.
to a journal for review and simultaneous depositing into a repository often associated with the journal (see the fourth column of Table 4). The preregistered protocol is peer-reviewed with a focus on methods and the analytic approach, and a provisional in-principle acceptance conditional upon the execution of the study as designed. The researchers execute the study, analyze the results, and submit a full manuscript. After peer review of the new sections, the completed manuscript is published.
Registered Reports also involve submission and peer review of a study protocol (see the fifth column of Table 4). A key difference from preregistered articles is that accepted protocols are published immediately and a future article with the results of the study is given an in-principle acceptance. After the study execution, the full manuscript is submitted and reviewed.
#### 4.4.3 Next Steps
We propose several changes to the reviewing processes for information access papers to reduce publication biases. Our recommendations are that the information access scholarly community:
1. adopts a pilot test of results-blind reviewing for a conference or journal,
2. considers starting from our initial process recommendation for results-blind reviewing,
3. asks authors, conference organizers, and reviewers to place more emphasis within papers on the insights that can be gained from their research,
4. considers allowing extra space for details about study methodology, and
5. considers whether to implement a two-stage review process in which research proposals and/or preregistered research reports are reviewed with a tentative acceptance decision before data collection and analysis are conducted.
Each of these is described in more detail below.
##### Recommendation 1: Pilot test of results-blind reviewing in conference(s) or journal(s)
Our first and most important recommendation is that the information access research communities (i.e., IR and RS communities) adopt a results-blind approach to peer reviewing for conference(s) and/or journal(s). We recommend that the community start with a pilot test of results-blind reviewing in an established conference track, perhaps with a new paper track with an earlier deadline to allow for a two-stage review process. In results-blind reviewing, the authors submit two versions of their manuscript: one version of the paper with the full results, and one version with the results blinded. The two submitted versions are the basis of a results-blind reviewing process with two major stages (see Figure 7).
Stage 1 consists of the Results-Blind Review. The results-blind version of the manuscript is reviewed and an in-principle acceptance (or rejection) is made. During Stage 1, as in the traditional reviewing process, the paper is reviewed by multiple reviewers who also make acceptance recommendations. In the case of conferences, the in-principle acceptance (or rejection) decision is made after discussion with the Senior Program Committee (SPC)/meta reviewer and in the Program Committee (PC) meeting. Papers that receive an in-principle acceptance proceed to Stage 2.
Stage 2 consists of the Results Review. The paper containing the results is reviewed by the same set of reviewers with a focus on the results. In the case of a conference, the final acceptance (or rejection) decision is made after a discussion period with the SPC and in the PC meeting.
##### Recommendation 2: Initial process recommendation for a results-blind reviewing pilot
Below, we recommend a high-level process for how a results-blind reviewing process pilot might be implemented and important considerations for conference organizers and reviewers as well as authors.
**Conference organizers.** Once the decision for results-blind reviewing has been made, conference organizers would have to take the following steps:
* First, the CfP for the new track should be written. As the proposed results-blind reviewing process with two stages of review will take longer to complete, an earlier deadline for this track should be set.
* Criteria for both stages of the review (blinded and with results) should be defined. Special attention should be given to the criteria for changing an initial acceptance recommendation into a rejection.
* Author instructions for the results-blind reviewing track have to be formulated, describing not only the new reviewing criteria and process but also specific instructions on how to prepare the blinded version of an article. For the results-blind version of the paper, the authors will need to blind all mentions of the results (e.g., in the abstract, introduction, discussion, and conclusion in addition to in a results section) in a way that it is not technically possible to recover the blinded text. There should be a way for reviewers to easily determine the differences between the results-blind version of the paper and the one with the results.
* Reviewers for the results-blind reviewing track have to be recruited. In the beginning, additional or different expertise will be required for this track. A special introduction or training for the reviewers might be necessary in order to make them familiar with the new process and criteria.
* The reviewing software will need to be configured for multiple stages of review for the results-blind reviewing. In the first stage of reviewing, only the blinded version of the papers should be distributed to reviewers (see below for the process for reviewers).
* After the final decision by the PC, the authors will be provided with the reviews and informed about the final accept or reject decision. In the case of a rejection decision, authors should also be notified at which stage the paper was rejected.
* The organizers should give special recognition to the PC members of the track (on the conference Web site and in the proceedings).
* The success of the new track and the process should be evaluated.
**Reviewers.** Once the reviewers have been provided with instructions about the general process and have received additional training, we recommend the following process:
* In the first stage, the reviewers are provided with the results-blind version of the submission and complete their review including a recommendation about the in-principle acceptance.
Figure 7: Proposed two-stage process for results-blind reviewing (figure adapted from BMC [78])
* Once the reviews are complete, a discussion phase with the SPC follows, leading to a recommendation for each paper.
* The PC for the track meets and makes an initial decision (in-principle acceptance or rejection) for each paper.
* For the second reviewing stage, only in-principle accepted papers are considered. Reviewers get the full versions of the papers they reviewed before. They add an additional part to their review focusing on the results which were previously blinded. Also, they make a second recommendation about acceptance.
* As for the first phase, a discussion phase with the SPC follows leading to a recommendation for each paper.
* The track PC meets for the second time and makes the final decision for each paper.
**Authors.** Authors will have to understand the new reviewing scheme, and possibly be trained in preparing manuscripts that satisfy the new reviewing criteria. They will have to prepare and submit two versions of a paper: a version with the results, as in the traditional model, as well as one in which the results are blinded.
##### Recommendation 3: Emphasize insights in papers
We recommend that authors, conference organizers, and reviewers place additional emphasis on communicating expected insights to be gained from experiments. Guidelines (and review forms) should ask the reviewers to comment on the theoretical background, the hypotheses, the methodological plan and the analysis plan of the experiment(s) described. Special attention should be given to the expected insights to be gained from experiments, i.e. regarding cause and effect.
##### Recommendation 4: Extra space for methods information
Another recommendation is for the community to consider explicitly allowing methodological appendices in which authors provide additional methodological details outside of page and/or word limits, and to include these appendices with the text of the paper rather than as supplementary materials. While not needed for all publications, this would be very beneficial for some types of studies so that the authors can include all study materials. For example, in user studies, researchers may administer multiple questionnaires, conduct a semi-structured interview, and read from a script.
This would be especially important if adopting a results-blind reviewing process, as careful scrutiny of the study design and all study materials is needed to ascertain whether the authors will be able to answer the research questions. For example, due to page limits, it is common for authors to describe the topics of an interview but uncommon for them to include the full text of an interview guide.
This would also benefit other researchers who wish to replicate the study. While, for example, authors can currently make supplementary materials available in the ACM Digital Library (ACM DL), these materials are not included in the downloadable version of the article or when reading online in the ACM DL in the eReader or HTML formats.
##### Recommendation 5: Consider a two-stage review process adapted from preregistered or registered reports
Although our primary recommendation is for conference organizers or journal editors to embrace a results-blind reviewing approach, we also recommend that they consider piloting a conference track or article type in which the study protocol undergoes peer review and is accepted in-principle before data collection or analysis begins. This may be more appropriate for certain types of research (e.g., user studies).
#### 4.4.4 Conclusion
At first glance, the new results-blind reviewing scheme might seem to be attractive only for papers describing failed experiments, while authors with successful results would go to the established tracks. In order to avoid this impression, it is essential that the new scheme is piloted as a highly visible and prestigious track in an established conference. Furthermore, it should be clearly communicated that the results-blind reviewing scheme aims at establishing high standards for the design, execution and analysis of experiments while shielding the reviewers from being blinded by shiny experimental results. Thus, it is our hope that papers published in this track will be regarded as high-quality publications which thoroughly address research questions and clearly demonstrate the insights that may be gained from the research.
## References
* [1] Donald T. Campbell and Julian C. Stanley. _Experimental and quasi-experimental designs for research_. Houghton Mifflin Company, Boston, 1963.
* [2] Nicola Ferro, Norbert Fuhr, Gregory Grefenstette, Joseph A. Konstan, Pablo Castells, Elizabeth M. Daly, Thierry Declerck, Michael D. Ekstrand, Werner Geyer, Julio Gonzalo, Tsvi Kuflik, Krister Linden, Bernardo Magnini, Jian-Yun Nie, Raffaele Perego, Bracha Shapira, Ian Soboroff, Nava Tintarev, Karin Verspoor, Martijn C. Willemsen, and Justin Zobel. From evaluating to forecasting performance: How to turn information retrieval, natural language processing and recommender systems into predictive sciences (Dagstuhl Perspectives Workshop 17442). _Dagstuhl Manifestos_, 7(1):96-139, 2018.
* [3] Michael G. Findley, Nathan M. Jensen, Edmund J. Malesky, and Thomas B. Pepinsky. Can Results-Free Review Reduce Publication Bias? The Results and Implications of a Pilot Study. _Comparative Political Studies_, 49(13):1667-1703, 2016. Publisher: SAGE Publications Inc.
* [4] Norbert Fuhr. Some common mistakes in IR evaluation, and how they can be avoided. _SIGIR Forum_, 51(3):32-41, 2017.
* [5] Daniel M. Maggin, Rachel E. Robertson, and Bryan G. Cook. Introduction to the special series on results-blind peer review: An experimental analysis on editorial recommendations and manuscript evaluations. _Behavioral Disorders_, 45(4):195-206, 2020.
* [6] Brian A. Nosek, Charles R. Ebersole, Alexander C. DeHaven, and David T. Mellor. The preregistration revolution. _Proceedings of the National Academy of Sciences_, 115(11):2600-2606, 2018.
* [7] William R Shadish, Thomas D Cook, and Donald T. Campbell. _Experimental and quasi-experimental designs for generalized causal inference_. Houghton, Mifflin and Company, New York, 2002.
* [8] Haley M. Woznyj, Kelcie Grenier, Roxanne Ross, George C. Banks, and Steven G. Rogelberg. Results-blind review: a masked crusader for science. _European Journal of Work and Organizational Psychology_, 27(5):561-576, 2018.
### Guidance for Authors
_Giorgio Maria Di Nunzio (University of Padova, IT, [email protected])_
_Maria Maistro (University of Copenhagen, DK, [email protected])_
_Christin Seifert (University of Duisburg-Essen, DE, [email protected])_
_Julian Urbano (Delft University of Technology, NL, [email protected])_
_Justin Zobel (University of Melbourne, AU, [email protected])_
#### Motivation
The IR community has over time developed a strong shared culture of expectations of published papers, particularly in our leading venues. However, these expectations are not explicit and the evidence of submitted papers is that many authors are not aware of what elements, or omissions, are likely to be of concern to reviewers. While accepted papers do provide an indication of what an author should do, they are, of course, uneven, and the small set of papers that an author is consulting in their new work could easily be unrepresentative of the best IR work as a whole.
In this section, our aim is to provide a basis for general guidance for authors and reviewers, with a focus on people who are new to the community. It should communicate to authors and reviewers a range of factors that the community regards as significant. Such guidance, if well designed, should help authors to lift the standard of their work and provide context should it not be accepted; for reviewers, especially those new to the task, it can provide checklists and (at a high level) advice about the field from beyond their immediate research environment.
Some elements in papers have attracted specific criticism in publications; this is particularly true of effectiveness measurement, where a long history of research on method has argued for and against a range of measures, forms of evidence for statistical validity, treatment of test collections, and so on. Such literature is critical to improving the quality of our research but does not necessarily represent a settled, shared view of best practice.
In our view, it is essential that general advice be constructive, readily understandable by new IR authors and reviewers, and--to the extent that is possible--not the subject of active debate. In the following, we have sought to follow this principle. We first explain the basis of the draft guidance for authors that we have developed and then present that guidance. How this work might develop over time is considered under "next steps".
#### Flaws in Submitted IR Papers
In developing draft guidelines for authors for the community, we have multiple sources of inspiration. As a first step, it is valuable to understand and list the kinds of issues that lead experienced reviewers to criticize papers, that is, to collect opinions from the community based on their experience in different roles as scientists: authors, readers, reviewers and meta-reviewers. Another valuable source of information consists of existing guidelines in adjacent research fields, as they reflect a common agreement on what constitutes a good scientific paper in that community and point out commonly agreed issues that may lead to rejection.
By collecting, consolidating, and harmonising the collected information, we aim to establish a strong foundation for the synthesis of a new set of draft guidelines that comprehensively capture the community-agreed strengths of good scientific papers as well as issues
that commonly lead to rejection; and separately to identify significant emerging aspects that are not yet captured by existing guidelines.79 To obtain concise, comprehensive, understandable, and actionable guidelines for early-career researchers, we translated the identified issues, points of criticism, and guideline items, which have been described at varying levels of detail, into observations on elements that papers should include and on elements that can lead to rejection.
Footnote 79: As an example, ACL 2023 includes a “Policy on AI Writing Assistance” in their call for papers [https://2023.aclweb.org/blog/ACL-2023-policy/](https://2023.aclweb.org/blog/ACL-2023-policy/).
We designed the following approach to create our guidelines: (1) search of existing guidelines; (2) brainstorming to identify common pitfalls; (3) categorization of the outcomes from the brainstorming exercise and comparison of these with existing guidelines; and (4) consolidation and integration with existing SIGIR guidelines.80 Throughout each step of the process, we adhere to the principle of keeping only issues that we believe to be widely agreed upon within the community.
Footnote 80: [https://sigir.org/sigir2023/submit/call-for-full-papers/checklist-to-strengthen-an-ir-paper/](https://sigir.org/sigir2023/submit/call-for-full-papers/checklist-to-strengthen-an-ir-paper/)
We now describe our approach.
#### Identifying existing guidelines
We started by searching for existing guidelines for authors and reviewers that have been proposed in adjacent research communities. In our search for existing guidelines, we considered the following sources.
* The ACM Special Interest Group on Information Retrieval (SIGIR) developed recommendations to strengthen IR papers. These are rather general suggestions concerning presentation and experimentation. We used them as the initial stage and extended them to design our list of recommendations for authors (see Section 4.5.3).
* Empirical Evaluation Guidelines from the ACM Special Interest Group on Programming Languages (SIGPLAN).81 This is a checklist that presents best practices meant to support both authors and reviewers within the community. The checklist includes some broad categories (e.g., appropriate presentation of results) and examples of violations for each subcategory (e.g., a misleading summary of results). These are reported in Appendix 7.1. Footnote 81: [https://www.sigplan.org/Resources/EmpiricalEvaluation/](https://www.sigplan.org/Resources/EmpiricalEvaluation/)
* The ACM Special Interest Group on Computer-Human Interaction (SIGCHI)82 published a guide for reviewing papers submitted to the CHI conference. This is a general overview of both quality considerations (e.g., whether the paper contribution is sufficiently original) and more practical considerations related to the paper length and the review process. SIGCHI also suggested the Equitable Reviewing Guide,83 which is a list of recommendations to help reviewers write fair reviews. Some of their points include reflecting on personal bias or considering that many authors are not native English speakers, thus being lenient on writing style and typos. Footnote 82: [https://chi2022.acm.org/for-authors/presenting/papers/guide-to-reviewing-papers/](https://chi2022.acm.org/for-authors/presenting/papers/guide-to-reviewing-papers/)
* The ACL presented an online tutorial to instruct reviewers on the ACL Rolling Review process.84 This tutorial presents some practical suggestions (e.g., planning the reading and reviewing time to avoid rushed reviews), as well as suggestions to evaluate the
quality of the paper, and a list of common but questionable reasons for rejection, which often lead to author complaints because such reasons are not actual weaknesses but rather easy, unreasonable grounds for rejection.
* Ulmer et al. [1] present a list of best practices and guidelines for experimental standards within NLP. These guidelines contain some broad categories, (e.g., data), and minimal requirements and recommendations for each category (e.g., publish the dataset accessibly and indicate changes). These are reported in Appendix 7.2.
#### Brainstorming to identify common issues
After our search for guidelines, we ran a brainstorming exercise among contributors of the working group. The goal of this exercise was to identify concerns and flaws that we, as reviewers, would not want to find in IR papers and can very likely lead to rejection. This list of reflections is included in Appendix 7.3.
We extended the brainstorming exercise to all participants in the Dagstuhl seminar through an online survey. We asked participants to list "things we don't like to see in papers", and provided some examples for guidance and the full list of SIGPLAN categories for inspiration. We received 35 items. Comments concerning strategic issues, such as "I prefer to have a new paper category", were omitted from further analysis; others were integrated into our findings. As mentioned above, we adhere to the principle of keeping only issues that we did not regard as controversial or the subject of debate, with the aim of omitting points that might lead to disagreement in the community.
#### Integration and categorization
Inspired by the SIGPLAN and NLP guidelines, we developed an initial set of broad categories to organize the issues we identified above. We then mapped each item in our list of reflections to the corresponding category. We did the same for the suggestions collected from the participant survey, as well as for the pertinent points identified in the SIGPLAN and NLP guidelines and the SIGIR guidance. In this process, we focused on issues that specifically relate to IR papers and set aside more general issues such as "captions of tables should be clear".
There were several rounds of review to clarify and consolidate similar items, with minor re-categorizations when needed. The final result of this process is a list of what we believe are recognised as common flaws in IR papers. The final list consists of 57 items organized in the following 9 categories (see Appendix 7.4): (1) Design, motivation and hypothesis; (2) Literature; (3) Model and method; (4) Data, data gathering and datasets; (5) Metrics; (6) Experiments; (7) Analysis of results and presentation; (8) Repeatability, reproducibility, and replicability; and (9) Conclusions and claims.
Finally, we used this list of concerns to propose an update to the existing SIGIR guidelines. This is described in the next section.
#### Draft Guidance for Authors
Some years ago, SIGIR introduced brief guidance for authors as "Things that strengthen an IR paper".85 One of us (Zobel) recently updated this guidance for SIGIR-AP'23, in
consultation with the other Program Chairs, but we note that it represented the views of just a couple of individuals. The SIGIR guidance proposed, at a high level, aspects to consider in presentation and experiments. The SIGIR-AP revision primarily addressed some aspects--omissions, oversights, and shortcomings--that are offered as grounds for rejection.
Here, we took the SIGIR-AP draft guidance as a starting point and reviewed it against the list of concerns that we set out in Section 4.5.1. We also took note of generic writing advice that is widely available and decided to omit elements that we regarded as pertinent to computer science research in general. This led to the following, which we propose as a basis for the advice provided by venues that publish IR work.
We have sought to make the advice broad, understandable, and constructive; but it is of necessity brief and some readers may seek more detail. For that reason, when the advice (or a revision of it) is used, it might also be helpful to link to a version of the lists of concerns in Appendix 7.4.
Our proposed draft guidance is as follows.
**Motivation and claims**
* The problem is well characterised and motivated, and the potential impact is discussed.
* The proposed application of the work is contextualised by pertinent knowledge from that domain, including potential ethical, social, or environmental impacts.
* The research goals and original contributions (that is, the elements that are a contrast to the prior art) are stated and are clearly distinguished from prior work.
* The claims are properly scoped and supported.
* There are explicit statements of what was done and what was not.
**Presentation**
* The literature review considers competitive previous solutions for the problem, that is, it is not limited to consideration of other work on the same technology as that explored in the submission.
* There is a reasoned justification for each of the choices made in each step of the research and each element of the method.
* Results are presented in keeping with the norms in the field as exemplified in strong prior work.
* A substantive, focused, and insightful discussion accompanies the results taking into account limitations and scope of the work.
**Experiments**
* The experimental design and its scale are appropriate to the problem.
* In comparative studies, appropriate baselines are used; they are deployed and optimized in ways comparable to those used for the proposed method.
* The experimental results are reliable and generalizable, and preferably show illustrative individual cases as well as aggregated results.
* Where appropriate, a diversity of data sets are used, including public-domain data sets used in prior work.
* Sufficient details (with data and code where appropriate) are provided to enable other researchers to assess and reproduce the experiments; this includes the nature, source, and collection process for the data, and the data preparation steps.
**Results and analysis**
* The evaluation methods and measures address the research questions; the use of redundant or highly correlated measures should be avoided.
* Statistical analysis is used and reported appropriately.
* Development data, training data, and test data are distinguished from each other.
* User studies are based on adequately sized, representative cohorts; data is gathered in ways that meet ethical norms, or where appropriate in keeping with prescribed ethics practices.
* Final results were obtained after all development was complete, that is, not selected because they are the best outcomes amongst a larger set of experiments or hand-fitted to the data.
**Common problems that lead to rejection**
Issues with papers in relation to the recommendations above can lead to rejection. Other problems that can lead to rejection are as follows.
* Literature reviews that lack critical analysis of prior work or that largely consist of lists of papers, that is, do not have an insightful discussion.
* Contributions that consist of small modifications to established techniques, particularly where the contribution is a straightforward variation of the established technique or where there are numerous prior papers exploring similar variations.
* Methods that appear to be developed and hand-tuned on a specific data set without discussion or demonstration of their lessons for future work or of how the methods would be more generally applicable.
* Justification of a method solely by its score in experiments, lacking an a priori rationale for why the method is worth exploring.
* Experiments where the data volumes are too small to support the conclusions.
* Any form of academic fraud, misrepresentation, or dishonesty.
#### 4.5.4 Next Steps
Guidance and lists of issues should be living documents that reflect current and uncontroversial agreement in the community. They should therefore remain open to change, because some disagreement is inevitable and expectations of authors can change over time, in some cases quite quickly, especially as the subjects of research shift to new topics. For that reason, no set of advice should be regarded as fixed, but revision should be undertaken consultatively and with a broad spectrum of colleagues.
We suggest that the detailed list of issues of concern in Appendix 7.4 be made available in some form as educational material for reviewers. We stress here that it is not our intention that reviewers simply reject papers because of these issues. It could also provide a resource at forums such as doctoral consortia.
We thus believe that it would be valuable for the community to:
* Ensure that the guidelines are prominent in the calls-for-papers at our major conferences and journals, or otherwise disseminated.
* Encourage the SIGIR executive committee to take ownership of the guidelines and to occasionally convene a panel to produce an update.
* Use these resources educatively for new members of the community and for new reviewers.
In this exercise, we have not produced guidance for reviewers, which in other disciplines tends to consist of two parts: general advice on how to approach the task and specifics for the field. An example that we found was produced by the ACL, as discussed above; a particular strength of these guidelines in our view is the enumeration of unfair grounds for rejection. We believe that such guidance would be of value to our community, and could make use of the materials we have presented here.
|
2308.07184 | Auditory cueing strategy for stride length and cadence modification: a
feasibility study with healthy adults | People with Parkinson's Disease experience gait impairments that
significantly impact their quality of life. Visual, auditory, and tactile cues
can alleviate gait impairments, but they can become less effective due to the
progressive nature of the disease and changes in people's motor capability. In
this study, we develop a human-in-the-loop (HIL) framework that monitors two
key gait parameters, stride length and cadence, and continuously learns a
person-specific model of how the parameters change in response to the feedback.
The model is then used in an optimization algorithm to improve the gait
parameters. This feasibility study examines whether auditory cues can be used
to influence stride length in people without gait impairments. The results
demonstrate the benefits of the HIL framework in maintaining people's stride
length in the presence of a secondary task. | Tina LY Wu, Anna Murphy, Chao Chen, Dana Kulic | 2023-08-14T14:44:25Z | http://arxiv.org/abs/2308.07184v1 | Auditory cueing strategy for stride length and cadence modification: a feasibility study with healthy adults
###### Abstract
People with Parkinson's Disease experience gait impairments that significantly impact their quality of life. Visual, auditory, and tactile cues can alleviate gait impairments, but they can become less effective due to the progressive nature of the disease and changes in people's motor capability. In this study, we develop a human-in-the-loop (HIL) framework that monitors two key gait parameters, stride length and cadence, and continuously learns a person-specific model of how the parameters change in response to the feedback. The model is then used in an optimization algorithm to improve the gait parameters. This feasibility study examines whether auditory cues can be used to influence stride length in people without gait impairments. The results demonstrate the benefits of the HIL framework in maintaining people's stride length in the presence of a secondary task.
_Clinical relevance--_ This paper proposes a gait rehabilitation framework that provides a personalized cueing strategy based on the person's real-time response to cues. The proposed approach has potential application to people with Parkinson's Disease.
## I Introduction
Parkinson's Disease (PD) is a progressive neurological disorder that affects movement. In advanced stages of the disease, a common lower body symptom is Freezing of Gait (FoG), where a breakdown of the person's existing stride-length-cadence relationship (SLCrel) occurs. In unimpaired gait, an increase in cadence (i.e. steps per minute) is typically accompanied by an increase in stride length (i.e. the distance travelled by the same foot) [1]. SLCrel can exhibit a positive linear, negative linear, or negative quadratic relationship (ibid.). When participants walk at different self-selected speeds, 90-100% of the participants across different age groups exhibit a SLCrel (ibid.). A freezing episode manifests as an abnormal increase in cadence with a noticeable decrease in step length, i.e. the SLCrel breaks down [2].
The use of wearable devices to monitor the person's gait, combined with feedback mechanisms in the form of visual, auditory, or tactile cues, has been shown to help people alleviate FoG [2]. Existing research on cueing mechanisms focuses on providing on-demand cues, where visual cues at a fixed distance or auditory/tactile cues at a fixed pace are provided at the onset of freezing [2, 3, 4, 5]. However, few studies have focused on adapting the provision of cues. Currently, adjustments to cues are performed by therapists during clinic visits. The lack of cue adaptation can decrease the cue effectiveness, given the day-to-day symptom variability and longitudinal disease progression. For instance, people can respond to cues differently due to changes in motor capability as part of the medication cycle [2] or may experience a decrease in responsiveness due to habituation [6].
In this study, we extend our previous work in [7, 8] that proposes a cue-adaptation method based on the individual's real-time response to cues to address symptom variability and habituation. Previous cue-adaptation methods (e.g. [9, 10]) focus on increasing gait speed (m/s), which is influenced by both stride length and cadence. However, the studies assume the change in gait speed will positively influence step length and cadence. The assumption can be detrimental in Parkinson's as a faster gait speed can also be achieved by increasing cadence, which could trigger FoG [11]. Our current work explicitly models the SLCrel and provides cues that account for both gait parameters. In our previous work, we demonstrated that auditory cues can influence cadence in healthy adults [7] and people with PD [8]. This feasibility study focuses on the stride length aspect of the SLCrel. The evaluation is important as previous studies have suggested that auditory cues are not always effective in changing stride length [12, 11]. Results from this study will inform whether a change in cueing modality is needed and provide an initial evaluation of the proposed HIL framework.
## II Methods
### _Proposed Framework_
We extend our previous work in [7] to incorporate monitoring of the SLCrel. The HIL framework consists of three sub-components: an online gait parameter estimation algorithm, a model of cue influence on gait, and an optimization function for cue provision. The framework is illustrated in Figure 1.
#### II-A1 Estimate gait parameters online
A key requirement of the HIL framework is to measure stride length in real-time. The system needs to have fast computation and be low-cost, portable, and easy to set up. Based on these requirements, a solution using two IMU sensors, one secured onto each foot, is implemented. Previously, the zero-velocity update (ZVU) algorithm [13] had been developed to correct for sensor drift during the stance phase of the gait cycle. To apply ZVU,
we extend the method from [14], where a dynamic threshold is computed from the accelerometer and gyroscope signals to distinguish between the stance and swing phases. The method is person-invariant and speed-invariant, and reduces experiment complexity (i.e. no need to tune for each participant) and computation requirements (i.e. no need to train on a large amount of data or use expensive hardware). One benefit of real-time stance/swing detection is being able to synchronize cues to a gait phase, similar to others such as [15]. In addition, the detection can be used to estimate cadence (the time of the step from the start of one swing phase to the next). However, [14] did not perform sufficient validation with participants. We found the algorithm was not robust against interpersonal variations and would often produce false positives. Therefore, we augmented the method and summarize it in Algorithm 1. The algorithm takes each IMU sample (\(\mathbf{a}^{t+1},\boldsymbol{\omega}^{t+1},\mathbf{q}^{t+1}\) for accelerometer, gyroscope, and quaternion) at the current time step, \(t+1\), and returns true if the aggregated feature is greater than the dynamic threshold (\(th_{dynamic}\)). \(th_{dynamic}\) is a function of the aggregated features computed from the IMU samples and is the key to the person- and speed-invariant algorithm. The swing classification result is stored in the array called \(is\_swing\). Two additional lists, called \(rising\_edge\_list\) and \(falling\_edge\_list\), are added as secondary filters to reject false positives.
Examples of the algorithm performance are shown in Figure 2. Once the stance/swing phases are detected, we applied the algorithm described in [16] by adapting the code from xioTechnologies/Gait-Tracking-With-x-IMU. The full implementation is provided in the Appendix. We conducted a validation study for the stride length estimation algorithm with two healthy adult participants using Cometa System IMUs and the Vicon motion capture system. The validation study is divided into both straight-line walking and circle walking due to the limited size of the Vicon rig. Both walking patterns are important as our planned path for the experiment requires both motions, with the circle walking representing turns around the corners of the room. The percent error and standard deviation (STD) for straight-line walking is -0.09\(\pm\)0.03 and the error for circle-walking is -0.024\(\pm\)0.19 meters.
```
Function DetectSwing(\(\mathbf{a}^{t+1},\boldsymbol{\omega}^{t+1},\mathbf{q}^{t+1}\))
    Update rolling window
    Compute features from the window
    Aggregate features from above
    if Aggregated feature \(>th_{dynamic}\) then
        \(is\_swing^{t+1}\gets True\)
        Initialize debounce counter
    else
        \(is\_swing^{t+1}\gets False\)
    end if
    if Aggregated feature \(<th_{static}\) then
        \(is\_swing^{t+1}\gets False\)
        Increment debounce counter
    end if
    if \(is\_swing^{t+1}\) and waiting for a rising edge then
        Append \(t+1\) to \(rising\_edge\_list\)
    end if
    if not \(is\_swing^{t+1}\) and waiting for a falling edge and is debounced then
        Update step-specific features
        Update \(th_{dynamic}\) & \(th_{static}\)
        Append \(t+1\) to \(falling\_edge\_list\)
    end if
```
**Algorithm 1** Swing detection algorithm
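To make the control flow of Algorithm 1 concrete, below is a minimal Python sketch of a dynamic-threshold swing detector. It is not the authors' released implementation: the aggregated feature (windowed gyroscope energy), the initial threshold values, and the per-step adaptation rule for \(th_{dynamic}\) are illustrative placeholders; the actual feature set and update rules are in the code linked in the Appendix.

```python
import numpy as np
from collections import deque

class SwingDetector:
    """Simplified dynamic-threshold swing detector in the spirit of Algorithm 1."""

    def __init__(self, fs=142, window_s=0.1, debounce_s=0.1):
        self.buf = deque(maxlen=max(1, int(window_s * fs)))  # rolling window
        self.th_dynamic = 1.0   # placeholder initial thresholds
        self.th_static = 0.2
        self.debounce_n = max(1, int(debounce_s * fs))
        self.quiet = 0          # debounce counter
        self.peak = 0.0         # per-step peak feature
        self.is_swing = False
        self.rising_edges, self.falling_edges = [], []

    def update(self, t, gyro):
        """Process one gyroscope sample; return the current swing flag."""
        self.buf.append(np.linalg.norm(gyro))
        feature = float(np.mean(np.square(self.buf)))  # aggregated window feature
        if feature > self.th_dynamic:
            if not self.is_swing:
                self.rising_edges.append(t)            # rising edge: start of swing
            self.is_swing, self.quiet = True, 0
            self.peak = max(self.peak, feature)
        elif feature < self.th_static and self.is_swing:
            self.quiet += 1
            if self.quiet >= self.debounce_n:          # debounced falling edge
                self.is_swing = False
                self.falling_edges.append(t)           # end of swing
                self.th_dynamic = 0.3 * self.peak      # placeholder per-step adaptation
                self.peak = 0.0
        return self.is_swing
```

Cadence can then be read off from consecutive entries of `rising_edges`, and the stance intervals between a falling and the next rising edge are where the ZVU correction is applied.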
#### II-A2 Model cue influence on gait
A sparse multi-output Gaussian Process (MOGP) is used to model the change in gait parameters (i.e. cadence and stride length) as a result of a given auditory cue. The model takes the form of \(f(x):\mathds{R}^{D}\rightarrow\mathds{R}^{P}\), where D is the dimension of the input and P is the dimension of the output.
\[\mathbf{Y}=f(\mathbf{x})=\mathrm{W}\mathbf{g}(\mathbf{x}), \tag{1}\]
Fig. 1: The block diagram of the HIL framework with its user.
Fig. 2: The gyroscope data in the x-axis from the sensor located on the dominant leg of 3 participants during the experiment. The walking path contains both straight lines and turns. The orange dot indicates the start of the swing phase, and the green x indicates the end. The start detection is consistent, but the end-of-swing can sometimes be premature (as shown in the third sub-figure, where x is labelled when the foot is yet to become stationary). This is consistent with our Vicon validation described in Section II-A1, where the STD is larger during turns compared to straight-line walking.
\(\mathbf{g(x)}\) is a collection of Q independent GPs, where Q is the number of latent GPs:
\[\mathbf{g(x)}=\{g_{q}(\mathbf{x})\}_{q=1}^{Q},g_{q}(\cdot)\sim\mathcal{GP}(0,k_{q }(\cdot,\cdot^{\prime})) \tag{2}\]
The outputs are assumed to be linearly correlated through W, known as the Linear Model of Coregionalization (LMC) following the implementation specified in [17]. The outputs of the MOGP are the estimated cadences and stride lengths, while the inputs are the specified cues at the preceding time step:
\[\mathbf{Y}=\begin{bmatrix}\hat{f}_{1},\hat{\ell}_{1}\\ \hat{f}_{2},\hat{\ell}_{2}\\ \vdots\\ \hat{f}_{N},\hat{\ell}_{N}\end{bmatrix}\qquad\qquad\mathbf{x}=\begin{bmatrix} 0\\ c_{1}\\ \vdots\\ c_{N-1}\end{bmatrix}\]
\(\hat{f}_{n}\) is the estimated cadence and \(\hat{\ell}_{n}\) is the estimated stride length at the \(n^{th}\) step from the gait measurement sub-system. \(c_{n-1}\) is the cue given at the previous step that results in the \(n^{th}\) cadence/stride length. \(n\) is incremented at every footstep and \(n=[1,2,\ldots,N]\). \(n=1\) represents the baseline cadence and stride length when no cue is given.
A challenge with MOGPs is the computational complexity of the covariance matrix manipulation, which grows cubically with the amount of data. [18] has proposed using variational free energy approximation combined with inducing points to construct a sparse approximation that reduces the computation cost. The result is implemented in a Python library (GPflow) [17], which is utilized in this study. Specifically, sparsity is introduced to the MOGP through inducing points, \(\mathbf{Z}=[z_{1},z_{2},...z_{M}]\). Then, the MOGP prior, \(p_{0}\), can be written in terms of \(\mathbf{Z}\), where
\[p_{0}(\mathbf{g}_{q})=\mathcal{N}(m_{q}(\mathbf{Z}),k_{q}(\mathbf{Z},\mathbf{ Z}^{\prime})) \tag{3}\]
The key model parameters are chosen as the following: \(P=Q=2,D=1,M=20\). The covariance, \(k_{q}(\cdot,\cdot^{\prime})\), is chosen to be the sum of a squared exponential kernel and a constant kernel, which allows possible higher order SLCrel to be captured. To use the model to predict change in stride length and cadence as a result of the input cues, the model is evaluated at the new input location following Eq 4.
\[Y^{\star}=Wg(x^{\star}),\quad x^{\star}=[c_{n}^{\star}],\quad Y^{\star}=[\hat {f}_{n+1}^{\star},\hat{\ell}_{n+1}^{\star}] \tag{4}\]
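As a concrete illustration, the following is a minimal GPflow sketch of a sparse LMC model with \(P=Q=2\), \(D=1\), and \(M=20\), using the squared-exponential-plus-constant latent kernels described above. The inducing-point range and the synthetic training data are placeholders, and details such as initialization may differ from the study code.

```python
import numpy as np
import gpflow

P, Q, M = 2, 2, 20  # outputs (cadence, stride length), latent GPs, inducing points

# One latent kernel per latent GP: squared exponential plus constant.
kernels = [gpflow.kernels.SquaredExponential() + gpflow.kernels.Constant()
           for _ in range(Q)]
# Linear Model of Coregionalization: outputs are W g(x).
kernel = gpflow.kernels.LinearCoregionalization(kernels, W=np.random.randn(P, Q))

Z = np.linspace(0.8, 2.2, M)[:, None]  # inducing cue frequencies (illustrative range, Hz)
iv = gpflow.inducing_variables.SharedIndependentInducingVariables(
    gpflow.inducing_variables.InducingPoints(Z))

model = gpflow.models.SVGP(kernel, gpflow.likelihoods.Gaussian(),
                           inducing_variable=iv, num_latent_gps=Q)

# X: cue given at the preceding step (N x 1); Y: measured [cadence, stride] (N x 2).
X = np.random.uniform(0.8, 2.2, (50, 1))
Y = np.hstack([1.7 + 0.2 * np.tanh(X - 1.5),      # synthetic cadence response
               1.2 + 0.1 * np.tanh(X - 1.5)])     # synthetic stride response

gpflow.optimizers.Scipy().minimize(model.training_loss_closure((X, Y)),
                                   model.trainable_variables)

mean, var = model.predict_f(np.array([[1.6]]))  # predicted (f, l) for a candidate cue
```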
#### II-A3 Cue Optimization
The cue-optimizing sub-system aims to provide cues that minimize the difference between the predicted and desired gait states. The cost function penalizes the squared difference between target cadence/stride length and predicted cadence/stride length, as well as rapid cue changes.
\[c_{opt}=\operatorname*{arg\,min}_{c_{n}^{\star}}J,\quad\text{subject to }c_{min}\leq c_{n}^{\star}\leq c_{max}\] \[J(c_{n}^{\star})=\alpha_{f}(f_{target}-\hat{f}_{n+1}^{\star})^{2}+\alpha_{l}(\ell_{target}-\hat{\ell}_{n+1}^{\star})^{2}+\alpha_{e}(c_{n}^{\star}-c_{n-1})^{2} \tag{5}\]
In Eq 5, \(c_{opt}\) is the cue to provide at the \(n+1^{th}\) step constrained between \(c_{min}=0.65f_{baseline}\) and \(c_{max}=1.35f_{baseline}\). The range is determined empirically based on a previous study [7]. \(\alpha_{f},\alpha_{l},\text{and }\alpha_{e}\) are three scaling factors that weigh the relative importance of each cost term, which are initialized to 1.5, 10, and 0.05 respectively. \(\hat{f}_{n+1}^{\star}\) is the predicted (i.e. \(n+1\) step) cadence and \(\hat{\ell}_{n+1}^{\star}\) is the predicted stride length estimated from the MOGP given the cue at the current step, \(c_{n}^{\star}\), using Eq 4. The cost function is solved using the Nelder-Mead method in SciPy [19].
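The optimization step itself is a one-dimensional minimization; a minimal SciPy sketch is shown below. The `predict` callable stands in for the MOGP mean prediction of Eq 4, and clipping the result to \([c_{min},c_{max}]\) is one simple way to enforce the cue bounds, since the paper does not state how the constraint is handled inside Nelder-Mead.

```python
import numpy as np
from scipy.optimize import minimize

def cue_cost(c, predict, f_target, l_target, c_prev,
             alpha_f=1.5, alpha_l=10.0, alpha_e=0.05):
    """Eq 5: penalize deviation from the gait targets and rapid cue changes."""
    f_hat, l_hat = predict(float(c[0]))  # MOGP mean prediction at the candidate cue
    return (alpha_f * (f_target - f_hat) ** 2
            + alpha_l * (l_target - l_hat) ** 2
            + alpha_e * (c[0] - c_prev) ** 2)

def next_cue(predict, f_target, l_target, c_prev, f_baseline):
    c_min, c_max = 0.65 * f_baseline, 1.35 * f_baseline
    res = minimize(cue_cost, x0=[c_prev], method="Nelder-Mead",
                   args=(predict, f_target, l_target, c_prev))
    return float(np.clip(res.x[0], c_min, c_max))  # enforce the cue bounds
```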
### _Target Selection_
\(f_{target}\) and \(\ell_{target}\) in Eq 5 are selected based on the participant's baseline cadence (\(f_{baseline}\)) and initial SLCrel measured at the start of the experiment. \(f_{baseline}\) is measured in a 6-Minute Walk Test (6MWT) and the SLCrel is measured by providing the participants with 5 training beats at 1.16, 1.41, 1.58, 1.75, 1.91 Hz in a random order (values adopted from [1]). Fifty beats are provided for each frequency. A quadratic and a linear polynomial are fitted to the training data using NumPy [20] and the polynomial with the lower residual becomes the SLCrel. An example of SLCrel is shown in Figure 3. \(\ell_{target}\) is selected to be a 0.1 m increase above the SLCrel, an offset based on the error of the gait estimation algorithm. Two candidate targets are computed as the y-values at \(\pm 10\%f_{baseline}\) on the SLCrel plus the 0.1 m offset; the higher of the two candidates becomes \(\ell_{target}\), and \(f_{target}\) is then the corresponding x value on the SLCrel.
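A minimal NumPy sketch of the fit and target selection follows; the variable names (`cad_train`, `stride_train`, `f_baseline`) are placeholders for the training-beat data and the 6MWT baseline.

```python
import numpy as np

def fit_slcrel(cadence, stride):
    """Fit linear and quadratic polynomials; keep the one with the lower residual."""
    best = None
    for deg in (1, 2):
        coeffs, residuals, *_ = np.polyfit(cadence, stride, deg, full=True)
        res = residuals[0] if residuals.size else 0.0
        if best is None or res < best[0]:
            best = (res, np.poly1d(coeffs))
    return best[1]

slcrel = fit_slcrel(cad_train, stride_train)     # data from the 5 training beats
f_cand = np.array([0.9, 1.1]) * f_baseline       # candidate cadences at +/-10%
l_cand = slcrel(f_cand) + 0.1                    # add the 0.1 m offset
i = int(np.argmax(l_cand))                       # keep the higher stride target
l_target, f_target = float(l_cand[i]), float(f_cand[i])
```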
### _Experimental Conditions_
We compare the performance of two cueing strategies in two scenarios, leading to 4 experimental conditions. Each condition lasts for 4 minutes. Since healthy participants do not experience gait impairments, we evaluate the performance of the framework in terms of its ability to change people's stride length. The two cueing strategies are the fixed and the adaptive strategy. The fixed strategy delivers cues directly at \(f_{target}\) and the adaptive strategy provides cues using the HIL framework. We initialize the MOGP using the data from the initial SLCrel. The two scenarios are either with or without a secondary task. For the secondary task, participants perform a word reciting task (i.e. reciting as many words as possible beginning with a given letter). A set of letters (S, P, C, and A) is randomly selected before the experiment and randomly assigned. Conditions with the secondary task are designed to emulate natural daily living, where people can be preoccupied with other tasks while walking.
### _Participants_
We recruited 6 healthy adults (5M/1F; Age \(27.5\pm 3.78\) years; Height \(172.17\pm 6.4\) cm; Weight \(71.67\pm 8.11\) kg; mean\(\pm\)standard deviation). We started the experiment with three participants (Group 1) who received only one training session (described below); for them, the cue-triggering conditions, evaluated over a 5-step window (\(n_{window}\)), are described in Eq 6 & 7:
\[\frac{\sum_{n=N-4}^{N}\hat{\ell}_{n}}{n_{window}}<\ell_{target} \tag{6}\]
\[\frac{\left|\frac{\sum_{n=N-4}^{N}\hat{f}_{n}}{n_{window}}-f_{target}\right|}{f_{target}}\leq 5\% \tag{7}\]
We relaxed the cue-triggering condition for the next three participants (Group 2) to use only Eq 6 (see Section IV for relevant discussion).
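Read together, and with Eq 7 interpreted as a relative cadence deviation as reconstructed above, the Group-1 triggering rule can be sketched as follows; the exact windowing and hysteresis of the study code may differ.

```python
import numpy as np

def metronome_on(stride_hist, cad_hist, l_target, f_target,
                 n_window=5, tol=0.05):
    """Keep the metronome on until both windowed conditions are met."""
    stride_met = np.mean(stride_hist[-n_window:]) >= l_target        # Eq 6 satisfied
    cadence_met = (abs(np.mean(cad_hist[-n_window:]) - f_target)
                   / f_target <= tol)                                # Eq 7 satisfied
    return not (stride_met and cadence_met)
```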
### _Protocol_
Participants watched an introduction video and signed the consent form at the start of the experiment. After putting on the IMUs, participants were given the first training session where a metronome beat is randomly selected by the experimenter and participants practice syncing their walking to the metronome beat. The training takes less than one minute. Participants walked around a room, with the longest straight edge being 15 meters. The walking path contained both straight-line walking and 4 corner turns. Participants were free to choose the direction (clockwise or counter-clockwise) of their walk and kept to the same direction throughout. After the training session, participants were told to walk at their natural pace for the 6MWT. After the 6MWT, participants filled out a demographic survey. The initial SLCrel is then constructed (see Section II-B). Participants were told to sync their walking to the metronome beat for each condition.
For the first three participants (Group 1), the next part of the experiment involved the 4 experimental conditions described in Section II-C, which were selected randomly and blinded from the participants. Participants filled out a survey plus the NASA Task Load Index (TLX) after each condition.
For the next three participants (Group 2), a second training session, which takes approximately one minute, was provided before the experimental conditions. In the training, the experimenter first played another randomly selected metronome beat and walked with the participant. After a few steps of syncing to the beat, the experimenter asked the participant to "take bigger steps" while keeping to the same beat. The experimenter then asked the participants to "take smaller steps" while keeping to the same beat. The participants were told that the training aims to demonstrate how various step lengths can be associated with a beat. Participants were then asked to try and figure out the intention of the framework during the experiment in terms of how fast and how far the metronome wanted them to step. They would know they have it correct when the beats turn off, and the goal was to keep the beats off.
Participants in both groups were told they would be evaluated based on their gait performance and, if the secondary task was present, the number of words they could recite. All participants concluded the experiment with an interview during which they reviewed their own data and the experimenter informed them of the cueing strategies. The study (ID 34903) was approved by the Monash University Human Research Ethics Committee.
### _Materials_
The IMU sensors from the WaveTrack Inertial System are sampled at 142 Hz (Cometa Systems, Milan, IT) and streamed wirelessly into a custom Python program. The program runs on a laptop (Windows 10, i7 core with no GPU), which plays auditory cues from a speaker (Phillips BT50A). Two sensors are used; one on each foot. The sensor is fixed to the flat part of the foot, which is identified by asking participants to lift their heels; the sensor is then placed on top of the folding crease and secured using duct tape. The sensor is oriented such that the x-y plane of the sensor is parallel to the transverse plane of the body. The sensor's y-axis points in the sagittal plane facing forward and z-axis points towards the head.
### _Analysis_
No statistical analysis was conducted due to the small sample size. Here we focus on the main gait metric (stride length) and participants' subjective ratings.
## III Results
This feasibility study aims to determine whether participants are able to increase their stride length using auditory cues. We calculate the delta stride length, which is the mean difference between the participant's stride length during a
Fig. 3: Example of the initial SLCrel. The dots represent the stride length/cadence data collected during the training beats (labelled from T-1 to T-5 from slow to fast). The participant exhibits a negative parabolic SLCrel and \(f_{target}\&\ell_{target}\) are chosen to be the location of the lighter orange star labelled “Target Up”.
condition compared to the mean of their baseline stride length during the 6MWT. The result is shown in Figure 4. Data in Figure 4(I)-A indicate that participants from Group 1 are not changing their strides even during the condition without the secondary task. Without sufficient information, the cue is delivered inefficiently as it is played almost 100% of the time without being able to influence the participant's gait, as shown in Figure 4(II)-A&C. Despite the lack of instructions, the adaptive condition (A-1) naturally encourages participants to explore a variety of step lengths through the change in cues (as evidenced by the larger variance in the data) as seen in Figure 4(I)-A. With more exploration, participants had a higher chance of meeting the cue-triggering conditions described in Eq 6 & Eq 7, thereby turning the metronome off (i.e. Figure 4(II)-A, where the lines between F1 and A1 trend downwards, favouring the adaptive approach). When the secondary task is added without sufficient instruction, further reduction in step length is observed (Figure 4(I)-C\(<\)A). However, the adaptive condition still encouraged more variations in step length, leading to a closer-to-baseline median (seen in Figure 4(I)-C, where two participants trend upwards between F-1 and A-1, favouring the adaptive approach; one participant had a slight reduction).
Once the instructions were modified for Group 2, participants started changing their stride length, as evidenced by the positive increase in step length in Figure 4(I)-B. A larger change in step length in the fixed approach than in the adaptive approach is observed in the conditions without secondary task (lines trend downward in Figure 4(I)-B between F-2 and A-2, favouring the fixed condition). This could be due to participants varying their step lengths when beats at a new pace are provided, which lowers the overall change in step length. Overall, the stride length decreases in the presence of the secondary task (Figure 4(I)-B\(>\)D). The median percent on time is lower without secondary task (Figure 4(II)-B\(<\)D). When the secondary task is added, two participants experienced a decrease in percent on time compared to the fixed approach and one increased drastically (Figure 4(II)-D).
The initial results first suggest that without explicit association between stride length and cues (Group 1), the adaptive approach is more effective in changing people's stride length (as most lines trend upwards between fixed and adaptive in Figure 4(I)-A&C) and in reducing the percent on time (as most lines trend downwards between fixed and adaptive in Figure 4(II)-A&C). When the second training session is added (Group 2), the results suggest there could be two ways of responding to the adaptive approach. For some participants, the adaptive approach continues to improve stride length and reduces percent on time, while others are confused by the non-static nature of the cue. This is supported by the post-condition questionnaires. When asked if and how the participants felt their gait changing, Group 2 answered that their gait is "random" more often than Group 1. This suggests that participants in Group 2 are consciously exploring the variations in stride length, but it may still have been difficult for them to "figure out" the intention of the framework as instructed. This is also supported by the Task Load Index (TLX) score questions, where the largest and second largest change is seen in the participant's stress/irritation level (i.e. frustrated about not being able to turn off the metronome) followed by the physical demand of the task (i.e. exploring more step lengths). Overall, TLX increased between Group 1 and Group 2 (16.58\(\pm\)7.12 for Group 1 and 18.18\(\pm\)6.22 for Group 2; mean\(\pm\)standard deviation).
## IV Discussion
In this study, we demonstrate that it is possible to change people's stride length using auditory cues given sufficient instructions. This aligns with how cues are used during rehabilitation by bringing attention to the walking task. For Parkinson's, the attentional mechanism helps bypass the defective automatic control due to the disease, thereby improving their walking [2]. In the modified instruction (i.e. with Group 2), we emphasize attention control, which resulted in a greater change in stride length and higher TLX scores. In addition to the instruction, the cue-triggering conditions described in Eq 6 and 7 could also contribute to participants' lack of stride changes. This is because the \(\ell_{target}\) that is selected is not part of the natural SLCrel, and therefore satisfying both conditions may not be feasible. Finally, since \(f_{target}\) can easily be satisfied (as a participant put it: "[I] simply match the pace with the metronome"), a change in stride length in order to keep the metronome off was difficult to realize due to the natural association between walking frequency and metronome frequency. Therefore, the demonstration session was necessary to highlight the connection between metronome frequency and stride length.
The initial results for Group 2 suggest that the fixed approach performs better without secondary task, but the adaptive approach is better when the secondary task is added, given the reduction in percent on time. In addition, participants mentioned that the adaptive approach in general is more attention-demanding compared to the fixed approach. From these results, we plan to expand the study to include people with Parkinson's as well as a control group of older adults to evaluate the performance of the cueing strategies in relation to the SLCrel.
## Appendix
The full implementation of the gait detection algorithm and instructions on parameter tuning can be found here: [https://doi.org/10.26180/c.6619669.v3](https://doi.org/10.26180/c.6619669.v3).
2305.19346 | Dynamics and Statistics of Weak Chaos in a 4-D Symplectic Map | Tassos Bountis, Konstantinos Kaloudis, Helen Christodoulidi | 2023-05-30T18:14:46Z | http://arxiv.org/abs/2305.19346v2

# Dynamics and Statistics of Weak Chaos in a 4-D Symplectic Map
###### Abstract
The important phenomenon of "stickiness" of chaotic orbits in low dimensional dynamical systems has been investigated for several decades, in view of its applications to various areas of physics, such as classical and statistical mechanics, celestial mechanics and accelerator dynamics. Most of the work to date has focused on two-degree of freedom Hamiltonian models often represented by two-dimensional (2D) area preserving maps. In this paper, we extend earlier results using a 4-dimensional extension of the 2D McMillan map, and show that a symplectic model of two coupled McMillan maps also exhibits stickiness phenomena in limited regions of phase space. To this end, we employ probability distributions in the sense of the Central Limit Theorem to demonstrate that, as in the 2D case, sticky regions near the origin are also characterized by "weak" chaos and Tsallis entropy, in sharp contrast to the "strong" chaos that extends over much wider domains and is described by Boltzmann Gibbs statistics. Remarkably, similar stickiness phenomena have been observed in higher dimensional Hamiltonian systems around unstable simple periodic orbits at various values of the total energy of the system.
Keywords: Coupled McMillan maps, Boltzmann Gibbs and Tsallis entropies, weak and strong chaos
## 1 Introduction
The behavior of nonlinear dynamical systems described by differential and difference equations has been a topic of intense interest for several decades [12, 29, 17, 16, 23]. As is well-known, one of the most important questions in this field concerns the distinction between solutions of the equations that are called "regular", since their evolution can be predicted for long times, and those termed "chaotic", whose time evolution becomes unpredictable after relatively short times. This is typically decided by calculating the Lyapunov exponents, which measure the divergence of two nearby solutions, represented by trajectories (or orbits) in the \(2N\)-dimensional phase space of the system [22], with \(N\) position and \(N\) momentum variables and time as the single independent variable. If none of the Lyapunov exponents is positive we call the orbit _regular_, while if at least one exponent is positive we call it _chaotic_.
But is this "duality" between order and chaos all there is? While there is no uncertainty about regular orbits, it has been realized that "chaos" is a lot more subtle to describe by a simple definition. One possibility is to study chaotic phase space domains from a statistical point of view, in terms of correlations and probability distributions. If these correlations decay exponentially away from a chaotic orbit, one might adopt a Boltzmann Gibbs (BG) thermodynamic description of the dynamics (as in the case of an ideal gas) and look for
Gaussian probability density functions (pdfs) to describe the associated statistics. What happens, however, if the correlations decay as power laws and the pdfs of positions and/or momenta are no longer Gaussian? What would that imply about the corresponding chaotic behavior?
One such widely known example occurs in cases of "stickiness", where chaotic orbits of generally low-dimensional dynamical systems tend to remain confined for very long times trapped within thin chaotic layers surrounding regions of regular motion [16, 10, 14, 11, 15, 4]. Remarkably, this phenomenon does not occur only in low dimensions. It has also been observed in multidimensional Hamiltonian lattices [6, 2, 1, 9, 8, 7], often in cases where chaotic regions arise around simple periodic orbits, when they have just turned unstable, as the total energy of the system is increased.
Regarding dynamical systems in discrete time, it is well-known that 2D Poincare maps describe intersections of the orbits of a 2-degree of freedom continuous dynamical system with a 2D surface of section [16]. Thus, one may consider directly area preserving transformations of a plane onto itself to study the qualitative features of such maps [24].
One famous model in this regard is the 2D McMillan (2DMM) area-preserving, non-integrable map [20]. It may be interpreted as describing the repeated passage of a "flat" proton beam through a periodic sequence of thin nonlinear focusing lenses in a circular particle accelerator [26]:
\[x_{n+1} = y_{n}\] \[y_{n+1} = -x_{n}+\frac{2Ky_{n}}{1+y_{n}^{2}}+\mu y_{n}, \tag{1}\]
where \(x_{n}\) and \(y_{n}\) represent a particle's position and momentum at the nth crossing of a focusing element, while \(\mu\), and \(K\) are physically important parameters. Note that the Jacobian of the transformation is unity, so that (1) is area-preserving and thus may represent the conservative (Hamiltonian) dynamics of proton beams whose radiation effects are considered negligible [26]. If \(\mu=0\) the map is integrable, as it possesses a constant of the motion given by the one parameter family of curves [13]:
\[x_{n}^{2}+y_{n}^{2}+x_{n}^{2}y_{n}^{2}-2Kx_{n}y_{n}=const.\]
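As a concrete illustration, the map (1) and its integrable limit are easy to explore numerically. The minimal Python sketch below (the parameter and initial-condition values are illustrative choices of ours, not taken from [20]) iterates the map and checks conservation of the above invariant for \(\mu=0\):

```python
import numpy as np

def mcmillan_2d(x, y, K, mu):
    """One iteration of the 2D McMillan map, Eq. (1)."""
    return y, -x + 2.0 * K * y / (1.0 + y**2) + mu * y

K, mu = 1.6, 0.0            # mu = 0: the integrable case
x, y = 0.5, 0.1             # illustrative initial condition

def invariant(x, y):        # the one-parameter family of curves above
    return x**2 + y**2 + x**2 * y**2 - 2.0 * K * x * y

c0 = invariant(x, y)
for _ in range(10_000):
    x, y = mcmillan_2d(x, y, K, mu)
print(invariant(x, y) - c0)  # stays small (floating-point drift only)
```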
In [20], (1) was studied following a nonextensive statistical mechanics approach, based on the nonadditive Tsallis entropy \(S_{q}\)[25]. According to this approach, the pdfs optimizing \(S_{q}\), under appropriate constraints, are \(q\)-Gaussian distributions that represent quasistationary states (QSS) of the dynamics, with \(1<q<3\) (\(q=1\) being the Gaussian). As was shown in [20], there are several cases of \(K>1\) and \(\mu>0\) parameters, where the chaotic layer around a saddle point at the origin does _not_ satisfy BG statistics associated with "strong chaos", but is well described by a \(q>1\)-Gaussian pdf, associated with "weak chaos".
It is, therefore, natural to ask whether similar phenomena of spatially limited, weakly chaotic dynamics occur in 4D symplectic maps, such as one encounters, e.g., in 3-degree-of-freedom Hamiltonian systems commonly arising in problems of celestial mechanics, see e.g. [10, 14, 11, 15], and in particle accelerator dynamics [3, 5].
In this paper, we extend for the first time the above approach to study 4D McMillan (4DMM) maps of the form
\[x_{n+1} = -x_{n-1}+\frac{2K_{1}x_{n}}{1+x_{n}^{2}}+\mu x_{n}-\epsilon x_{n} y_{n}^{2}\] \[y_{n+1} = -y_{n-1}+\frac{2K_{2}y_{n}}{1+y_{n}^{2}}+\mu y_{n}-\epsilon x_{n} ^{2}y_{n} \tag{2}\]
where \(x_{n},y_{n}\) represent horizontal and vertical deflections of the proton beam as it passes through the \(n\)th focusing element. We study the chaotic domain arising about the origin of (2), using values of \(K_{1}\), \(K_{2}\) and \(\mu\) for which the origin is unstable. Note that (2) is symplectic, as the evolution of \(x_{n}\) and \(y_{n}\) is determined by a potential function \(V(x_{n},y_{n})\), whose partial derivatives with respect to \(x_{n}\) and \(y_{n}\) respectively yield the two equations of (2).
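The 4D map is equally simple to iterate. The sketch below (the function and array names are ours) uses the parameter values \(\mu=0.2\), \(\epsilon=0.01\), the \(2\times 10^{5}\) iterations, and initial conditions in \((0,10^{-6})\) adopted later in Section 3:

```python
import numpy as np

def step_4dmm(xm1, x, ym1, y, K1, K2, mu=0.2, eps=0.01):
    """One iteration of the 4D McMillan map, Eq. (2);
    (xm1, ym1) hold the previous iterates x_{n-1}, y_{n-1}."""
    xp = -xm1 + 2.0 * K1 * x / (1.0 + x**2) + mu * x - eps * x * y**2
    yp = -ym1 + 2.0 * K2 * y / (1.0 + y**2) + mu * y - eps * x**2 * y
    return x, xp, y, yp

# Illustrative HE case (K1 = 1.6, K2 = 0.5), starting near the origin
rng = np.random.default_rng(0)
xm1, x, ym1, y = rng.uniform(0.0, 1e-6, size=4)
orbit = np.empty((200_000, 2))
for i in range(len(orbit)):
    xm1, x, ym1, y = step_4dmm(xm1, x, ym1, y, K1=1.6, K2=0.5)
    orbit[i] = x, y          # the iterates x_n, y_n used for the statistics
```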
We choose suitable \(K_{1}\) and/or \(K_{2}\) values, for fixed \(\mu>0,\epsilon>0\) small, such that the origin is (linearly) unstable and calculate the pdfs of the rescaled sums of \(N\) iterates of the map, in the sense of the Central Limit Theorem, in the large \(N\) limit for large sets of initial conditions. We then relate our results to specific properties of the phase space dynamics of the maps and distinguish cases where the pdfs represent long-lived QSS described by \(q\)-Gaussians.
We begin by describing in Section 2 the statistical methods used in this paper to obtain the pdfs describing our data in all cases of the 4DMM map studied here. Next, in Section 3, we apply this analysis to find weak chaos characterized by \(q\)- Gaussian pdfs, for different parameter values connected with an unstable fixed point at the origin of our 4DMM map. We end with our conclusions in Section 4.
## 2 Statistical analysis of weak chaos
Before turning to the 4DMM mapping studied here, we first carried out the same computations for the 2DMM map (1) and compared them to results depicted in Fig. 3(a) of [20]. Employing the same choices of initial conditions and the same number of iterations, we verified that we obtain practically identical results.
For the benefit of the reader, we state that the approach we follow here is to evaluate the solution \(x_{n},y_{n}\), \(n=0,\ldots,N\) of the 4DMM map (2) and construct probability distributions for \(x_{n}\) (similarly for \(y_{n}\)) of appropriately large rescaled sums \(S_{j}(N)\) obtained by adding the corresponding \(N\) iterates
\[S_{j}(N)=\sum_{n=0}^{N}x_{n}^{(j)}\]
where \(j\) refers to the \(j\)-th realisation, taking values from \(1\) to the total number of initial conditions \(N_{ic}\). As in [20], we generate the centered and rescaled sums
\[s_{j}(N)\equiv\frac{S_{j}(N)-\mu_{j}(N)}{\sigma_{N}}=\left(\sum_{n=0}^{N}x_{n}^{(j)}-\frac{1}{N_{ic}}\sum_{j=1}^{N_{ic}}\sum_{n=0}^{N}x_{n}^{(j)}\right)/\sigma_{N} \tag{3}\]
where \(\mu_{j}(N)\) is the mean value and \(\sigma_{N}\) the standard deviation of the sums \(S_{j}(N)\) over the \(N_{ic}\) realisations
\[\sigma_{N}^{2}=\frac{1}{N_{ic}}\sum_{j=1}^{N_{ic}}\left(S_{j}(N)-\mu_{j}(N)\right)^{2}=\left\langle S_{j}^{2}(N)\right\rangle-\mu_{j}^{2}(N),\]
where \(\langle\cdot\rangle\) denotes averaging over the \(N_{ic}\) realisations. We thus find many cases where the obtained empirical distributions are well described by a \(q\)-Gaussian distribution of the form
\[P\left(s_{j}(N)\right)=\frac{\sqrt{\beta}}{C_{q}}\left[1+\beta(q-1)s_{j}^{2}(N)\right]^{1/(1-q)} \tag{4}\]
where \(q\) is regarded as an indicator measuring the divergence from the classical Gaussian distribution, \(\beta\) is the 'inverse temperature' fitting parameter and \(C_{q}\) is a normalizing constant.
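A compact sketch of how the centred, rescaled sums (3) can be computed from an ensemble of trajectories is given below (the array layout and names are our own):

```python
import numpy as np

def rescaled_sums(x):
    """Centred, rescaled sums s_j(N) of Eq. (3).

    x : array of shape (N_ic, N+1) holding the iterates x_n^{(j)}
        for each realisation j (rows) and n = 0, ..., N (columns)."""
    S = x.sum(axis=1)          # S_j(N), one entry per realisation
    S = S - S.mean()           # subtract the ensemble mean mu_j(N)
    return S / S.std()         # divide by sigma_N
```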
To describe the statistical properties of the above rescaled sums of the system, we employ standard parameter estimation techniques. Specifically, we are interested in identifying the \(q\)-Gaussian distribution that best describes the observed data. One of the most widely used methods for such estimations is the Maximum Likelihood Estimator (MLE) [19]. This is a _parametric_ method typically used for statistical fitting among distributions belonging to the same family, e.g. the family of Gaussians parameterized by their mean and standard deviation or the family of \(q\)-Gaussians parameterized by \((q,\beta)\).
The main idea behind the MLE is that the most suitable distribution (of a given family) describing a given data set, is _the most probable_ to describe the observed data. More formally, we are interested in maximizing the likelihood function, which describes "how likely" it is to observe a certain random sample, for the various values of the unknown parameters of the assumed statistical model.
To determine the likelihood function \(p(\theta|\mathbf{X})\), we first calculate the joint probability function of the observed sample \(\mathbf{X}=(X_{1}=x_{1},\ldots,X_{n}=x_{n})\) as a function of the parameters of the problem, \(\theta=(\beta,q)\in\Theta=\mathbb{R}^{+}\times[1,3)\). Then, the MLE is the value of \(\theta\in\Theta\) that maximizes the likelihood function, i.e. \(\hat{\theta}=\arg\max_{\theta\in\Theta}p(\theta|\mathbf{X})\). For computational purposes, it is convenient to maximize the logarithmic likelihood function, which for a \(q\)-Gaussian statistical model has the form:
\[\ell_{\mathbf{X}}\left(\beta,q\right)=\sum_{i=1}^{n}\log\left\{\frac{\sqrt{\beta}}{C_{q}}\left[1+\beta(q-1)x_{i}^{2}\right]^{1/(1-q)}\right\}.\]
In all simulations that follow, we perform our numerical optimization using the so-called "nlm" (nonlinear minimization) command of the \(R\) software for statistical computing [18].
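While the optimization above is performed with R's "nlm" routine, an equivalent sketch in Python reads as follows; it assumes the standard normalising constant \(C_{q}=\sqrt{\pi}\,\Gamma\big(\frac{3-q}{2(q-1)}\big)/\big[\sqrt{q-1}\,\Gamma\big(\frac{1}{q-1}\big)\big]\) of the \(q\)-Gaussian for \(1<q<3\), and the function names are ours:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_loglik(params, s):
    """Negative log-likelihood of the q-Gaussian model, Eq. (4)."""
    beta, q = params
    if beta <= 0.0 or not (1.0 < q < 3.0):
        return np.inf
    # log C_q for 1 < q < 3, computed via gammaln for numerical stability
    log_Cq = (0.5 * np.log(np.pi) - 0.5 * np.log(q - 1.0)
              + gammaln((3.0 - q) / (2.0 * (q - 1.0)))
              - gammaln(1.0 / (q - 1.0)))
    ll = (len(s) * (0.5 * np.log(beta) - log_Cq)
          + np.sum(np.log1p(beta * (q - 1.0) * s**2)) / (1.0 - q))
    return -ll

# s = rescaled_sums(orbit_ensemble); (1.0, 1.5) is an illustrative start
# res = minimize(neg_loglik, x0=(1.0, 1.5), args=(s,), method="Nelder-Mead")
# beta_hat, q_hat = res.x
```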
An alternative approach to derive optimal \(q\)-Gaussian parameters is to apply nonlinear least-squares fitting to binned estimates of the probability density (via histograms), using methods such as Gauss-Newton (see e.g. [28]). However, from a statistical point of view, it is more accurate to use MLEs instead of curve-fitting estimates, as MLEs are theoretically guaranteed, under general (regularity) conditions, to have desirable properties such as efficiency, consistency and asymptotic normality [27]. For an interesting discussion comparing curve-based estimates and MLEs for the case of \(q\)-Exponential distributions, we refer the reader to Shalizi [21].
## 3 Evidence of weak chaos in 4DMM maps
### Weak chaos in an example of the 4DMM map
We start by fixing the values of \(\mu=0.2\) and \(\epsilon=0.01\), which we will use throughout the paper, as they do not significantly affect the results. Observe now in our Fig. 1(a) a typical example of an optimal pdf of a \(q\)-Gaussian obtained for the choice of parameters \(K_{1}=1.6,K_{2}=0.5\). This is a case we shall call hyperbolic-elliptic (HE), referring to the first 2D map in (2) having a hyperbolic fixed point at the origin, and the second 2D map having an elliptic point. In a later subsection, we also discuss examples of the hyperbolic-hyperbolic (HH) type, where the origin is unstable in both 2D maps of (2). Note that the case EH is entirely analogous to HE due to the symmetrical form of the two 2D maps.
Throughout our study, we use \(10^{6}\) random initial conditions for each of the variables, i.e. \(x_{0},x_{1}\) and \(y_{0},y_{1}\), within the domain \((0,10^{-6})\) close to the origin. To facilitate the visualization of stickiness phenomena, observe the phase plane picture shown in Fig. 1(b). The "warm" colors represent the more dense parts of the plot, where solutions stick around for very long times, whereas "cold" colors depict orbits that scatter diffusively in phase space. We also show in Fig. 1(c) projections of the orbits in the \(y_{n},y_{n+1}\) plane, which rotate around the origin due to our choice of \(K_{2}<1\).

Figure 1: (a) The computation of the pdf for the \(x_{n}\) variable in (2) with parameters \(K_{1}=1.6,K_{2}=0.5\), \(\mu=0.2\), and \(\varepsilon=0.01\). The dashed line represents an optimal fitting of the data by a \(q\)-Gaussian function (4) with \(q=1.38\) and \(\beta=1.19\). (b) The 2D phase space plot of the \(x_{n},x_{n+1}\) projections of the 4D map (2) for the orbits and parameters used in (a), while (c) shows the 2D phase space projection in the \(y_{n},y_{n+1}\) plane variables.
Each of our initial conditions is iterated \(2\cdot 10^{5}\) times, to achieve reliable statistics. To obtain the results shown in Fig. 1, we have employed appropriate statistical techniques (see e.g. [19, 21]) to optimize both the specific class of suitable pdfs and their parameters to obtain the best fit for such large data sets.
Clearly, a crucial role in this study is played by the fixed point at the origin and its stability properties. A simple linearization of the equations of our 4DMM map (2) about \(x_{n}=y_{n}=0\) shows that the conditions for stability of the central fixed point with respect to deviations in \(x_{n}\) and/or \(y_{n}\) are:
\[|K_{i}+\mu/2|<1,\quad i=1,2 \tag{5}\]
Thus, we identify as EE (doubly elliptic) the case when both conditions \(i=1,2\) in eq. (5) hold, EH (elliptic-hyperbolic) if the \(i=2\) inequality is reversed, HE (hyperbolic-elliptic) if the \(i=1\) inequality in (5) is reversed, and HH (doubly hyperbolic) when both inequalities in eq. (5) are reversed. Clearly, if the origin is doubly elliptic (EE), it will be surrounded mostly by quasiperiodic orbits and no large-scale chaos will be present in its vicinity. Hence, in what follows, we will study both "partly" unstable HE and "fully" unstable HH cases. We start with both \(K_{i}\) positive, but will also consider cases with \(K_{i}<0\), for \(i=1,2\).
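This fourfold classification is captured by a few lines of code (a sketch with our own naming; the cubic coupling terms proportional to \(\epsilon\) play no role in the linearization about the origin):

```python
def fixed_point_type(K1, K2, mu=0.2):
    """Classify the origin of map (2) via condition (5): |K_i + mu/2| < 1."""
    label = lambda K: "E" if abs(K + mu / 2.0) < 1.0 else "H"
    return label(K1) + label(K2)

assert fixed_point_type(1.6, 0.5) == "HE"      # the case of Fig. 1
assert fixed_point_type(-1.25, -1.25) == "HH"  # a case of Section 3.3
```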
### HE Cases of the 4DMM map
We begin with a hyperbolic-elliptic (HE) case of the 4DMM map (2), with the main parameters chosen so that the \(x\)-map has \(K_{1}>1\) and the \(y\)-map has \(0<K_{2}<1\), i.e. hyperbolic in the \(x_{n}\) plane and elliptic in the \(y_{n}\) plane.
Setting \(K_{2}=0.5\) and gradually increasing the value of \(K_{1}\) we observe that the thin 'figure-eight' of Fig. 2(a) thickens around the origin as chaos slowly expands, and eventually occupies a wider "cellular" domain in phase space shown in Fig. 2(d).
The pdfs for each of the panels in Fig. 2 are depicted in Fig. 3. We observe that as the trajectory winds around a thin figure-eight in Fig. 2(a) in a nearly organized manner, the corresponding distributions of the sums \(s_{N}^{(j)}\) displayed in Fig. 3(a) follow a \(q\)-Gaussian function for two orders of magnitude, while the tails of the pdf diverge to higher values. The presence of weak chaos, however, for \(K_{1}=1.5,1.7\) in Fig. 2(b) and 2(c) leads to the emergence of optimal \(q\)-Gaussian distributions in 3(b) and 3(c), which, for \(q=1.57,1.67\), respectively, describe well the numerical data for five orders of magnitude!
On the other hand, for a higher \(K_{1}=2\) value (see Fig. 2(d)) where the orbits form complex "cellular" structures, the \(q\)-Gaussian distribution that best describes the data in Fig. 3(d) is successful only over two orders of magnitude and corresponds to \(q=1.87\). It appears, therefore, that with increasing \(K_{1}\) the value of \(q\) increases also.
### HH cases of the 4DMM map
Let us now describe some results obtained when the origin of the map is "fully unstable", i.e. a double saddle point, which we call hyperbolic-hyperbolic (HH). To this end, we will take values of \(K_{1}\) and \(K_{2}\) that violate condition (5) and are either of mixed sign or both negative, as follows:
Figure 3: The pdfs for the sums \(s_{N}^{(j)}\) corresponding to the chaotic domains shown in Fig. 2(a), (b), (c) and (d), respectively. The black dashed line corresponds to the optimal fitting with the \(q\)-Gaussian distribution and the red dashed line is the normal distribution.
Figure 2: 2D phase space plots for the \(x_{n}\) plane for different \(K_{1}\) values. The rest of the parameters, \(K_{2}=0.5,\mu=0.2\) and \(\epsilon=0.01\) remain constant for all panels. (a) \(K_{1}=1.2\), (b) \(K_{1}=1.5\), (c) \(K_{1}=1.7\), and (d) \(K_{1}=2\). The number of iterations is always \(2\times 10^{5}\).
Figure 4: Top row: Phase space plots on the \(x_{n},x_{n+1}\) plane for (a) \(K_{1}=-1.25,K_{2}=1.25\), (b) \(K_{1}=K_{2}=-1.25\). Bottom row: (c) and (d) present the pdf plots corresponding to (a) and (b), respectively.
Figure 5: Top row: Phase space plots on the \(y_{n},y_{n+1}\) plane for (a) \(K_{1}=-1.25,K_{2}=1.25\) and (b) \(K_{1}=K_{2}=-1.25\). In (c) and (d), respectively, we plot the pdfs corresponding to (a) and (b). Note the similarities with Fig. 4 above.
1) \(K_{1}=-1.25,K_{2}=1.25\): The dynamics is close to weak chaos, as the phase space plot in Fig. 4(a) shows, since its pdf in Fig. 4(c) is close to a \(q\)-Gaussian for three orders of magnitude, with \(q=2.97\).

2) \(K_{1}=K_{2}=-1.25\): The phase space plot in Fig. 4(b) corresponds to what we call "strong" chaos, since its pdf, plotted in Fig. 4(d), is very close to a Gaussian, with \(q=1.09\).
Observing Fig. 4 more closely, we suggest that the statistical results may be explained as follows: In the first column, where the orbits form a more "sparse" pattern in Fig. 4(a), the associated \(q\)-Gaussian implies weak chaos, while in the second column, a more uniformly filled pattern in Fig. 4(b) is characterized by a true Gaussian representing strong chaos.
Let us also compare, for these HH cases, the above results, with those corresponding to the \(y_{n},y_{n+1}\) data as plotted in Fig. 5. Clearly, due to the \(x-y\) symmetry of the map, there are strong similarities between Fig. 4 and 5, validating the conclusions of weak chaos on the left column and strong chaos on the right column of the two figures.
### Close to the instability transition
We also examined a case close to the transition of instability for one of the maps. In particular, as shown in Fig. 6 below, we set \(K_{1}=1\) and plot for \(K_{2}=0.9,1.3,1.5\) in Fig. 6(a,c,e) the \(x_{n},x_{n+1}\) projections of the orbits, while in Fig. 6(b,d,f) we present the corresponding statistical analysis. Clearly the pdfs in this case are very well described by a \(q\)-Gaussian with \(q\) increasing from \(1.5\) to \(1.94\) and \(2.04\), close to the value \(q=2\), which is the case of the Cauchy distribution.
## 4 Conclusions
The stickiness of orbits observed in the vicinity of unstable periodic orbits of higher dimensional symplectic maps, or Hamiltonian systems of more than 2 degrees of freedom, is clearly a complex phenomenon. It has been termed "weak chaos" in the literature mainly because its statistical analysis reveals that it is associated with \(q\)-Gaussian probability distributions, as opposed to the simple Gaussians one finds when studying uniformly spread stochasticity associated with Boltzmann Gibbs statistics. This is because the motion in weakly chaotic situations is correlated over long ranges, while in strongly chaotic regions the correlations are short ranged.
In this paper, we attempted to study this phenomenon, for the first time, in a 4-D symplectic map, serving as a paradigm for Hamiltonian systems of 3 degrees of freedom. Our results suggest that "weak chaos" arises typically near unstable fixed points of \(2N\)-dimensional maps and may very well be present also near unstable periodic orbits in higher dimensional settings.
In most examples we considered, chaos tends to form "organized" patterns in phase space, while the pdfs describing their statistics attain \(1<q<2\) values suggesting the presence of strong correlations in the dynamics. However, we have also observed cases where chaos spreads more uniformly in phase space and \(q\) tends to approach the value \(q=1\) yielding purely Gaussian distributions.
We also observed that as the main nonlinear parameters of the model \(K_{i}\), \(i=1,2\), increase, the values of the index \(q\) of the distributions also grow. However, the genericity of these results remains open and needs to be studied further in more general classes of 4-D symplectic maps.

Figure 6: Close to the instability transition: Here we set \(K_{1}=1\) and present in (a)-(c)-(e) the phase space plots for \(x_{n}\) and in (b)-(d)-(f) the corresponding pdf plots. The first row corresponds to \(K_{2}=0.9\), the second row to \(K_{2}=1.3\) and the third row to \(K_{2}=1.5\).
Clearly, every high-dimensional conservative dynamical system will have its own particular features determining the nature of chaos present near its unstable periodic orbits. We believe, however, that the results presented in this paper suggest that weak chaos is generic and may have important implications regarding the dynamics of higher dimensional conservative systems of physical significance.
## Acknowledgements
We happily dedicate the present manuscript to Professor Thanassis Fokas on the occasion of his 70th birthday and wish him many more years of pioneering work in all sciences. We thank the referees for their interesting suggestions and remarks. T. Bountis acknowledges the hospitality of the NCSR "Demokritos" and many discussions with colleagues at the Institute of Nanoscience and Nanotechnology. This work was supported by the Russian Science Foundation (project No. 21-71-30011), [https://rscf.ru/en/project/21-71-30011/](https://rscf.ru/en/project/21-71-30011/). T. Bountis acknowledges financial support for Sections 1, 3.2, 3.3 and 4 of the paper.
2306.00347 | Relational superposition measurements with a material quantum ruler | Hui Wang, Flaminia Giacomini, Franco Nori, Miles P. Blencowe | 2023-06-01T05:03:21Z | http://arxiv.org/abs/2306.00347v4

# Relational superposition measurements with a material quantum ruler
###### Abstract
In physics, it is crucial to identify operational measurement procedures to give physical meaning to abstract quantities. There has been significant effort to define time operationally using quantum systems, but the same has not been achieved for space. Developing an operational procedure to obtain information about the location of a quantum system is particularly important for a theory combining general relativity and quantum theory, which cannot rest on the classical notion of spacetime.
Here, we take a first step towards this goal, and introduce a model to describe an extended material quantum system working as a position measurement device. Such a "quantum ruler" is composed of \(N\) harmonically interacting dipoles and serves as a (quantum) reference system for the position of another quantum system.
We show that we can define a quantum measurement procedure corresponding to the "superposition of positions", and that by performing this measurement we can distinguish when the quantum system is in a coherent or incoherent superposition in the position basis. The model is fully relational, because the only meaningful variables are the relative positions between the ruler and the system, and the measurement is expressed in terms of an interaction between the measurement device and the measured system.
###### Contents
* I Introduction
* II A relational toy model with classical particles
* III Position measurements with a material reference frame
* IV A material quantum ruler as an extended reference frame
* IV.1 Transformation between local and nonlocal ruler bases
* IV.2 Second-quantized, tight binding model of the ion-ruler system
* IV.3 Exact solution to the ion-ruler quantum dynamics
* V The quantum ruler as a position measurement device
* V.1 Local description of the ruler dipoles state
* V.2 Ruler response to the ion
* V.3 Quantum measurement scheme for superpositions of positions
* VI Conclusions
* Acknowledgments
* A Lagrangian formulation of the ion-ruler system
* B The equivalence of constraints
* C Free ruler dynamics
* D Density matrix elements in the local basis
## I Introduction
In physics, operational measurement procedures give physical meaning to observable quantities in terms of laboratory operations. Such procedures require concrete models of measurement devices, and a realistic description of interactions between the device and the measured system. The definition of such procedures is important not only for practical purposes, but also from a fundamental perspective: in experimentally unexplored regimes of physics the correspondence between abstract observables and physically meaningful quantities can be ambiguous. For instance, in special-relativistic quantum physics this is the case for the relativistic spin operator [1; 2]. More strikingly, at the interface between quantum theory and gravity basic notions such as time, space, and causality cannot be kept unchanged. Hence, it is crucial to develop methods to characterise physically meaningful quantities via procedures that can then be employed in more general scenarios than those currently tested experimentally.
An operational definition of time can be obtained using quantum clocks. In quantum information, quantum clocks have been studied, e.g., in relation to thermodynamics [3] and to the possibility of measuring time more accurately than with classical clocks [4]. In gravity, quantum clocks constitute a promising tool to investigate the properties of physics at the interface between quantum theory and gravity [5; 6; 7; 8; 9].
Despite the attention that operational procedures to measure time have received, not much is known about analogous procedures to measure spatial positions. To overcome idealised position measurements, in which the reading in the laboratory frame corresponds to our standard notion of distance, we need a concrete model of a "quantum ruler", namely a quantum system that provides us with information on the position of another quantum system, i.e. the measured system.
Here, we introduce such a quantum ruler (Fig. 1). A quantum ruler has been previously considered in Refs. [10; 11] in a different context and with different goals to the one we have here. To make our quantum formulation amenable to a general relativistic description, the ruler should be an extended quantum system, which serves as the reference system for the measurement of positions. Considering the ruler as a physical system allows us to define local observables that are also background independent. In the quantum gravity literature, the expectation that a theory of quantum gravity should be background independent has been related to the necessity of considering extended material reference frames [12; 13; 14; 15], such as an elastic medium. Physically, this means that the observables (intended in a broad sense as measurable quantities) should be relational. Hence, they should be expressed in terms of the interaction between two physical systems and not rely on any background or absolute structure [16; 17; 18; 19].
Our quantum ruler is composed of \(N\) identical electric dipoles, which are coupled with a harmonic potential to their neighbouring ones, forming a one-dimensional "mass-spring chain". The measured system is an ion, which can be initially prepared in a localised state, in a mixed state of two positions, or in a pure quantum superposition state in the position basis.
While the considered setup is to be viewed as a simplified model of a more realistic, three-dimensional extended material ruler, one-dimensional trapped ion [20; 21] and dipole atom chain [22] systems with phononic modes have been considered in the lab for quantum information processing applications. Our motivation for considering a dipole rather than an ion chain model, is to have a more local interaction between the ion system and ruler.
We show that the ion system-dipole atom ruler can be approximately described by a multi-mechanical mode optomechanical system Hamiltonian, and the resulting ion-ruler quantum dynamics can be solved for exactly [23]. For \(N\gg 1\) dipoles, the ruler behaves as an oscillator "bath" environment for the ion system when the former is traced over (i.e., not measured), resulting in the decoherence [24] of initial ion position superposition states.
Our goal is to construct a measurement scheme to obtain information about the position of the ion without losing the coherence of the quantum state. This measurement is more general than the usual position measurements, in that it measures the system in a "quantum superposition of positions". Crucially, this procedure is expressed solely in terms of relative quantities, and the results do not depend on the state nor on the dynamics of the centre of mass degree of freedom of the ruler.
The generalisation of such a measurement encounters nontrivial challenges, which are due to the complexity of the quantum ruler as a one-dimensional many-body system. In particular, we want to reduce all possible decoherence effects on the state of the ion. We show that an appropriate choice of parameters of the ruler and measurement procedures ensures that measurements via the quantum ruler can distinguish a pure quantum superposition state of the ion from an incoherent
Figure 1: Intuitive illustration of the functioning of the quantum ruler. We introduce a relational position measurement scheme for systems in a spatial quantum superposition, which does not depend on any abstract or absolute quantity and is solely expressed in terms of relations between two physical systems. Central to the idea is to develop a concrete model of a quantum ruler which interacts (red arrows) with a quantum system initially prepared in a quantum superposition state of two different locations. The ruler is distorted (red spots) as a result of the interaction with the system. We show that after the measurement, which involves both the ruler and the quantum system, the coherence of the quantum superposition (the interference pattern) can be preserved.
mixture. We also comment on how future work could develop and enrich the measurement scheme we introduce here.
The paper is organised as follows. In Section II we introduce a relational toy-model of \(N\) non-relativistic particles, which captures the main conceptual features of the ruler, but is technically much simpler. In Section III we develop a method to construct relational position measurements. In Section IV we introduce the quantum ruler, and in Section V we illustrate the measurement procedure involving the quantum ruler and the ion.
## II A relational toy model with classical particles
We consider \(N\) non-relativistic particles with mass \(m_{\alpha}\) and coordinates \(q_{\alpha}\), with \(\alpha=1,\cdots,N\). The dynamics is governed by the Lagrangian
\[\mathcal{L}=\sum_{\alpha=1}^{N}\frac{m_{\alpha}}{2}\dot{q}_{\alpha}^{2}-\sum_ {\alpha=1}^{N-1}V_{\alpha}(q_{\alpha+1}-q_{\alpha})-\frac{1}{2M}\left(\sum_{ \alpha}m_{\alpha}\dot{q}_{\alpha}\right)^{2}, \tag{1}\]
where \(M=\sum_{\alpha}m_{\alpha}\) is the total mass, the first term is the kinetic energy of the \(N\) particles, and the second term is an interacting potential between two neighbouring particles, which scales with the relative distance between the particles \((q_{\alpha+1}-q_{\alpha})\). The last term rescales the total energy by subtracting the energy of the centre of mass. Equivalently, this term makes the system fully translationally invariant, so that the relative velocities between two particles are the only meaningful quantities. The kinetic term can be cast as [25]
\[T=\sum_{\alpha=1}^{N}\frac{m_{\alpha}}{2}\dot{q}_{\alpha}^{2}-\frac{1}{2M} \left(\sum_{\alpha}m_{\alpha}\dot{q}_{\alpha}\right)^{2}=\sum_{\alpha=1}^{N} \frac{m_{\alpha}}{2}\left(\dot{q}_{\alpha}-\dot{q}_{CM}\right)^{2}, \tag{2}\]
where \(q_{CM}=\sum_{\alpha}\frac{m_{\alpha}}{M}q_{\alpha}\). We now make a coordinate transformation to the centre of mass coordinates and the relative coordinates of the \(N\) particles to the centre of mass:
\[x_{CM}=q_{CM}=\sum_{\alpha}\frac{m_{\alpha}}{M}q_{\alpha},\qquad x_{\alpha}=q _{\alpha}-q_{CM}, \tag{3}\]
with \(\alpha=1,\cdots,N\). Notice that in this step we have introduced a redundant coordinate, which we will eliminate later using the identity \(\sum_{\alpha=1}^{N}m_{\alpha}x_{\alpha}=0\). The Lagrangian can then be expressed in this set of coordinates as
\[\mathcal{L}=\sum_{\alpha=1}^{N}\frac{m_{\alpha}}{2}\dot{x}_{\alpha}^{2}-\sum_ {\alpha=1}^{N-1}V_{\alpha}(x_{\alpha+1}-x_{\alpha}). \tag{4}\]
The canonical momenta are
\[\pi_{\alpha}=\frac{\partial\mathcal{L}}{\partial\dot{x}_{\alpha}}=m_{\alpha} \dot{x}_{\alpha},\qquad\pi_{CM}=\frac{\partial\mathcal{L}}{\partial\dot{x}_{ CM}}=0. \tag{5}\]
The Hamiltonian corresponding to the Lagrangian of the system is then
\[H=\sum_{\alpha}\frac{\pi_{\alpha}^{2}}{2m_{\alpha}}+\sum_{\alpha=1}^{N-1}V_{ \alpha}(x_{\alpha+1}-x_{\alpha})+\mu\mathcal{C}, \tag{6}\]
where \(\mu\) is a Lagrange multiplier and \(\mathcal{C}=\pi_{CM}\) is a constraint coming from the equations of motion, i.e. \(\mathcal{C}\approx 0\)1. This constraint is trivially satisfied, as the Hamiltonian does not depend on it. Finally, we need to impose the identity \(\sum_{\alpha=1}^{N}m_{\alpha}x_{\alpha}=0\). This can be easily done by eliminating one of the particles from the description. For instance, we can choose to remove particle 1 and write \(x_{1}=-\sum_{\alpha=2}^{N}\frac{m_{\alpha}}{m_{1}}x_{\alpha}\). Notice that this condition should not be treated as a dynamical constraint, because it is an artefact of our coordinate transformation. Hence, it holds in general, and not only on the space of solutions of the equations of motion.
Footnote 1: The symbol \(\approx\) denotes a “weak equality”, namely an equality that holds on the constraint surface.
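The rewriting of the kinetic term in Eq. (2) is an algebraic identity, which can be verified symbolically, e.g. for \(N=3\) particles (a minimal sketch; the variable names are ours):

```python
import sympy as sp

N = 3
m = sp.symbols(f"m1:{N + 1}", positive=True)
qd = sp.symbols(f"qdot1:{N + 1}")      # the velocities \dot{q}_alpha
M = sum(m)
qd_cm = sum(mi * vi for mi, vi in zip(m, qd)) / M

lhs = sum(mi * vi**2 for mi, vi in zip(m, qd)) / 2 \
    - sum(mi * vi for mi, vi in zip(m, qd))**2 / (2 * M)
rhs = sum(mi * (vi - qd_cm)**2 for mi, vi in zip(m, qd)) / 2
assert sp.simplify(lhs - rhs) == 0     # Eq. (2) holds identically
```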
## III Position measurements with a material reference frame
It is generally believed that diffeomorphism invariance is not compatible with the possibility of defining meaningful local measurements. However, this problem is solved when one abandons the abstract notion of a coordinate system and considers material reference frames, namely reference frames associated to physical (matter) systems. In this case, a measurable quantity \(O\) is not evaluated at an abstract point of a manifold \(\mathcal{M}\), i.e. \(O(x)\), with \(x\in\mathcal{M}\), but should be considered as an "event" arising from the interaction between two physical systems. As a consequence, when a diffeomorphism transformation is performed, both the material reference frame and the observed system are transformed, and diffeomorphism invariance is preserved.
In quantum theory, it is possible to construct local observables as quantum operators that are invariant under a gauge (diffeomorphism) transformation. In quantum gravity, relational observables have been defined as Dirac observables by employing an extended, material coordinate system [26; 27; 28; 29; 30; 31].
Usually, a position measurement is defined by choosing an abstract coordinate system labelled by \(x\) and measuring an observable \(\hat{O}_{S}\) acting on a system \(S\). If such an observable corresponds to the position operator, then the measurement returns some classical value \(x^{*}\), which is the position of the system on the abstract coordinate system. This procedure is represented as a projection
\[\hat{O}_{S}(x^{*})\rightarrow\hat{\Pi}_{x^{*}}=\left|x^{*}\right\rangle_{S} \left\langle x^{*}\right|. \tag{7}\]
If we change the coordinate system, this local measurement is not diffeomorphism invariant. In a relational picture, instead of defining the position using an abstract coordinate system, we localise the quantum system \(S\) relative to another quantum system \(r\). In our case, \(r\) is a ruler, described as an extended quantum system on a lattice, where the \(N\) lattice sites are labelled as \(i=1,\cdots,N\). The ruler \(r\) interacts with the system \(S\) via the unitary operator \(\hat{U}_{Sr}=e^{-i\epsilon\hat{O}_{Sr}}\). The effect of this interaction is to entangle the ruler and the system. With this procedure, the position measurement becomes a joint measurement of the system and the ruler \(\hat{O}_{S}(\hat{x}_{r})\)
\[\hat{O}_{S}(\hat{x}_{r})\rightarrow\hat{\Pi}_{i}\otimes\left|i\right\rangle_{ r}\left\langle i\right|, \tag{8}\]
where \(i\) labels a physical site of the ruler, and not an abstract coordinate system. Notice that the ruler is constructed to be invariant under global translations, so it does not matter2 where the quantum system is relative to the ruler.
Footnote 2: A subtlety is to make sure that the measurement is not affected by boundary size effects, meaning that the system \(S\) should be distant from the edges of the ruler. We discuss this point in Section IV.
Our goal here is to construct a measurement corresponding to the "quantum superposition of positions". A minimal requirement that we impose is that such a measurement is a Positive Operator Valued Measure (POVM). If we were to use an abstract coordinate system, the most natural choice for a POVM would be to divide the spatial extension of the laboratory into \(N\) slots of length \(\Delta m\).
Calling \(R=\Delta m\) the resolution of the measurement apparatus, it is then possible to check in which slot the system is found3. However, this procedure is suitable to measure the system in a well-defined position, but not in a quantum superposition. We then include the ruler in the picture and define a measurement acting on both the system \(S\) and the ruler \(r\) as in Eq. (8). This allows us at once to i) consider more general position measurements corresponding to projective measurements, and ii) establish a relational, local picture in which position measurements can be performed.
Footnote 3: A similar strategy was used in Refs. [32; 33], for finite-dimensional systems, to define the classical limit of a quantum theory.
We here outline an intuitive description of this procedure by providing an ideal model of the ruler. Key to this procedure is the existence of a suitable interaction between the ruler and the system which gives rise to an entangled state of \(S\) and \(r\). We give the explicit form of the interaction in Section IV and of the measurement in Section V for the actual quantum ruler.
In this idealised description of the quantum ruler, each site \(m\) is a two-level system, where the state \(\left|0\right\rangle_{r_{m}}\) corresponds to the ruler not being distorted and the state \(\left|1\right\rangle_{r_{m}}\) corresponds to the ruler being distorted by the interaction with the quantum system \(S\). A general state of the ruler is then
\[\left|\Psi\right\rangle_{r}=\sum_{s_{1},s_{2},\cdots,s_{N}=0,1}c_{s_{1}s_{2} \cdots s_{N}}\left|s_{1}\right\rangle_{r_{1}}\left|s_{2}\right\rangle_{r_{2}} \cdots\left|s_{N}\right\rangle_{r_{N}}. \tag{9}\]
We are interested in the measurements where only one site, say \(j\), is distorted. As we explain in Section IV, this is not realistic for the actual quantum ruler, which responds to the interaction with the system in a non-local way. We define the state in which one site is distorted as \(\left|0\right\rangle_{r_{1}}\left|0\right\rangle_{r_{2}}\cdots\left|1\right\rangle _{r_{j}}\cdots\left|0\right\rangle_{r_{N}}=\left|j=1\right\rangle_{r}\). The new single-position measurement is an operator on the Hilbert space of the ruler and of the quantum system, namely
\[M_{j}=\left|\psi_{j}\right\rangle_{S}\left\langle\psi_{j}\right|\otimes\left| j=1\right\rangle_{r}\left\langle j=1\right|, \tag{10}\]
where \(\left|\psi_{j}\right\rangle_{S}\) is a normalised state on the Hilbert space of the quantum system centred at the \(j\)th site and roughly constant in the corresponding slot, and having typical width \(\sigma\) which is at least of the order of \(R\). For instance, they could be coherent states with very large \(\sigma\). Notice that, in the case of an ideal ruler, we do not need the states \(\left|\psi_{j}\right\rangle_{S}\) to be orthogonal for different values of \(j\), because the measurement acting on the dipoles of the ruler ensures that \(M_{j}M_{k}=M_{j}\delta_{j,k}\). However, they must form a basis of the Hilbert space of the system \(S\). In addition, we also obtain the completeness condition \(\sum_{m}M_{m}+\sum_{\bar{m}}M_{\bar{m}}=\mathbb{1}_{lab}\), where the first term includes the measurements that are physically relevant for us (for instance, those corresponding to the slot where the system is) and \(\bar{m}\) labels the complementary set. Finally, \(\mathbb{1}_{lab}\) is the identity restricted to the laboratory.
This construction can be extended to measure a quantum superposition state of the system \(S\); up to a relative phase, such a measurement is
\[M_{\pm}=\frac{1}{2}\left[\left|\psi_{j}\right\rangle_{S}\left|j=1\right\rangle _{r}\pm\left|\psi_{k}\right\rangle_{S}\left|k=1\right\rangle_{r}\right]\left[ \left.\left.S\left\langle\psi_{j}\right|_{r}\right\langle j=1\right|\pm_{S} \left\langle\psi_{k}\right|_{r}\left\langle k=1\right|\right]. \tag{11}\]
It is then easy to check that the set of measurements \(\{\tilde{M}_{i}\}_{i=1}^{N}=\left\{M_{+},M_{-},\{M_{i}\}_{i\neq j,k}\right\}\) satisfies the same relations \(\tilde{M}_{l}\tilde{M}_{m}=\tilde{M}_{l}\delta_{l,m}\) and \(\sum_{m}\tilde{M}_{m}+\sum_{\bar{m}}\tilde{M}_{\bar{m}}=\mathbb{1}_{lab}\). In what follows, we consider a regime in which the interaction strength \(\epsilon\) between the ruler and the system is small. In this case, the measurement is analogous to a "weak" measurement as defined, e.g., in Ref. [34].
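These algebraic properties are easy to check in a toy numerical model. The sketch below is our own construction, not part of the formalism above: it encodes the \(N\) one-distortion ruler states \(|j=1\rangle_{r}\) as basis vectors of an \(N\)-dimensional space (they span an \(N\)-dimensional subspace of the full \(2^{N}\)-dimensional dipole space) and, for simplicity, takes the states \(|\psi_{j}\rangle_{S}\) to be orthonormal; the sum of all the operators is then the projector onto the correlated subspace, playing the role of \(\mathbb{1}_{lab}\).

```python
import numpy as np

N = 5                                   # toy model with N sites
basis = np.eye(N)

def ket(i):                             # column vector |i>
    return basis[:, [i]]

def M_single(j):
    """Single-position operator of Eq. (10), with orthonormal |psi_j>."""
    P = ket(j) @ ket(j).T
    return np.kron(P, P)                # |psi_j><psi_j| (x) |j=1><j=1|

def M_pm(j, k, sign):
    """Superposition measurement of Eq. (11)."""
    v = (np.kron(ket(j), ket(j)) + sign * np.kron(ket(k), ket(k))) / np.sqrt(2)
    return v @ v.T

j, k = 1, 3
ops = [M_pm(j, k, +1), M_pm(j, k, -1)] \
    + [M_single(i) for i in range(N) if i not in (j, k)]

# Orthogonality M_l M_m = delta_{lm} M_l, as stated in the text:
for a, A in enumerate(ops):
    for b, B in enumerate(ops):
        assert np.allclose(A @ B, A if a == b else np.zeros_like(A))
```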
## IV A material quantum ruler as an extended reference frame
We introduce a relational model, illustrated in Fig. 2, consisting of a one-dimensional quantum ruler \(r\) interacting with a quantum system \(I\). The ruler is composed of \(N\) identical dipoles of mass \(m_{r}\) (with \(N\) odd, so that a dipole is situated at the geometric midpoint of the ruler in classical static equilibrium), which are coupled via a harmonic nearest-neighbour interaction (indicated by the springs connecting the dipoles in the figure) with effective spring constant \(k_{r}\) and classical, static equilibrium separation \(a_{r}\). The quantum system is located at a fixed, vertical distance \(w\) from the ruler, but otherwise free to move parallel to the ruler \(x\)-coordinate axis. We take this quantum system to be an ion (hence the use of the measured system label '\(I\)') of mass \(M_{I}\), which can, for example, be prepared in a quantum superposition of localised position states. Our goal is to introduce a relational measurement between the ruler and the ion which does not localise the quantum state of the ion, i.e., preserving its superposition properties. Here, by relational we imply that the meaningful coordinates are distances along the \(x\)-coordinate axis between the physical systems involved, and that the centre of mass does not play a role in our description. In addition, the observables that we measure are relational, in that they provide information about the position of the ion (the measured system) in terms of correlations between its quantum state and the states of the dipoles composing the ruler. In this sense, our model does not require an abstract, background absolute coordinate system for its definition.
By allowing for the possibility of arbitrarily large (odd) dipole number \(N\), our ruler model captures some of the features of actual, macroscopic material extended ruler systems--a primary
Figure 2: Schema of one-dimensional quantum ruler model. The ruler is composed of \(N\) (odd) electric dipoles that are coupled via harmonic nearest-neighbour interactions (represented by the springs). The dipoles also interact electrostatically with an ion with charge \(-q_{I}\), whose state is in a quantum superposition in the position basis, restricted to a parallel axis a distance \(w\) from the ruler, and assumed to be far away from the edges of the ruler. The ion induces displacements of the ruler dipoles, from which the position of the ion relative to the ruler can be determined.
motivation for the model. The one-dimensional nature of our model is an idealization, however, allowing for exact analytical solutions to the interacting, many-body ion system-ruler dipole quantum dynamics in terms of phonon modes. A price to pay in working with a one-dimensional mass-spring model, however, is that the ruler elastic displacement response to a localised ion state is nonlocal, extending throughout the length of the ruler. A more realistic two or three-dimensional mass-spring lattice model will exhibit more localised distortions opposite the ion localisation, but with the phonon mode dynamics more challenging to analyze.
The total Hamiltonian operator of the ruler and the ion can be decomposed as \(\hat{H}=\hat{H}_{r}+\hat{H}_{I}+\hat{V}_{Ir}\), where \(\hat{H}_{r}\) is the ruler Hamiltonian:
\[\hat{H}_{r}=\frac{\hat{p}_{\text{rCM}}^{2}}{2M_{r}}+\sum_{n=-\frac{N-1}{2}}^{ \frac{N-1}{2}}\frac{\hat{\pi}_{n}^{2}}{2m_{r}}+\frac{1}{2}k_{r}\sum_{n=-\frac{ N-1}{2}}^{\frac{N-3}{2}}(\hat{\phi}_{n+1}-\hat{\phi}_{n})^{2}. \tag{12}\]
This Hamiltonian has a more transparent physical interpretation in the Lagrangian picture; we summarize our derivation of the Lagrangian in Appendix A. The first term in the Hamiltonian (12) is the kinetic energy of the ruler centre of mass, with \(M_{r}=Nm_{r}\) the total ruler mass. The harmonic potential energy of the ruler is in terms of nearest-neighbour dipole coordinate differences, with \(\hat{\phi}_{n}=\hat{x}_{r,n}-na_{r}-\hat{x}_{\text{rCM}}\), canonically conjugate to the momentum \(\hat{\pi}_{n}\), corresponding to the displacement of \(n\)th ruler dipole relative to its classical equilibrium position. We choose the ruler centre-of-mass to be the observer's frame, and impose the constraint
\[x_{\text{rCM}}=\frac{1}{N}\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}x_{r,n}=0. \tag{13}\]
Hence the first term of the ruler Hamiltonian in Eq. (12) disappears. This constraint is effectively the same as the following constraint for the relative coordinates (for the details of this equivalence, see Appendix B):
\[\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\phi_{n}=0. \tag{14}\]
The free Hamiltonian of the ion is
\[\hat{H}_{I}=\frac{\hat{p}_{I}^{2}}{2M_{I}}, \tag{15}\]
and \(\hat{V}_{Ir}\) describes the electromagnetic, Coulomb interaction between an assumed negatively-charged ion (\(-q_{I}\)) and the ruler through its dipoles:
\[\hat{V}_{Ir}=-\frac{q_{I}q}{4\pi\epsilon_{0}}\sum_{n=-\frac{N-1}{2}}^{\frac{N- 1}{2}}\left[\frac{1}{\sqrt{(w-l/2)^{2}+(\hat{x}_{I}-\hat{x}_{r,n})^{2}}}-\frac {1}{\sqrt{(w+l/2)^{2}+(\hat{x}_{I}-\hat{x}_{r,n})^{2}}}\right], \tag{16}\]
where \(l\) is the distance between the opposite charges \(+q\) and \(-q\) of a given ruler dipole (we adopt the convention \(q,q_{I}>0\)), and recall that \(w\) is the fixed, perpendicular distance between the ruler dipole chain and ion; this potential results in distorting displacements of the ruler dipoles from their equilibrium positions in the presence of the ion (see later below).
We suppose that the ion is perpendicularly located sufficiently close to the ruler such that the latter must be analyzed as a discrete mass-spring lattice system (as opposed to being approximated as an elastic continuum field system), i.e., \(w\lesssim a_{r}\). We furthermore assume that \(l,\delta\phi_{n}\ll w\), where \(\delta\phi_{n}\) is the uncertainty in the \(n\)th dipole's displacement:
\[\delta\phi_{n}=\sqrt{\langle\phi_{n}^{2}\rangle-\langle\phi_{n}\rangle^{2}}. \tag{17}\]
Under these conditions, the potential of Eq. (16) can be Taylor expanded in \(\hat{\phi}_{n}\) to give the following simpler, approximate potential (see Appendix A for details):
\[\hat{V}_{Ir}\approx-\frac{q_{I}\mathsf{p}_{r}w}{4\pi\epsilon_{0}}\sum_{n=- \frac{N-1}{2}}^{\frac{N-1}{2}}\left[(\hat{x}_{In}^{2}+w^{2})^{-3/2}+3\hat{ \phi}_{n}\hat{x}_{In}(\hat{x}_{In}^{2}+w^{2})^{-5/2}\right], \tag{18}\]
where \(\mathsf{p}_{r}=ql\) is the ruler atom electric dipole moment and \(\hat{x}_{In}=\hat{x}_{I}-na_{r}-\hat{x}_{\rm{rCM}}\) is the location of the ion relative to the \(n\)th, rigid ruler dipole position. Constraint (13) gives \(\hat{x}_{\rm{rCM}}=0\); from now on, \(\hat{x}_{\rm{rCM}}\) will no longer appear in the equations. The first term in Eq. (18) describes the potential experienced by the ion due to the ruler atom dipoles in their classical equilibrium lattice positions; we henceforth define \(\hat{V}_{I}=-\frac{q_{I}\mathsf{p}_{r}w}{4\pi\epsilon_{0}}\sum_{n=-\frac{N-1}{ 2}}^{\frac{N-1}{2}}(\hat{x}_{In}^{2}+w^{2})^{-3/2}\) and combine it with the free ion Hamiltonian. The total, approximate ion-ruler Hamiltonian can then be re-expressed as \(\hat{H}=\hat{H}_{r}+\hat{H}_{I}^{\rm eff}+\hat{V}_{Ir}^{\rm eff}\), where \(\hat{H}_{r}\) is given in Eq. (12) (without the centre of mass kinetic energy term), and the effective ion Hamiltonian and ion-ruler dipole elastic displacement interaction are, respectively,
\[\hat{H}_{I}^{\rm eff} = \frac{\hat{p}_{I}^{2}}{2M_{I}}-\frac{q_{I}\mathsf{p}_{r}w}{4\pi \epsilon_{0}}\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}(\hat{x}_{In}^{2}+w^{2})^ {-3/2}, \tag{19}\] \[\hat{V}_{Ir}^{\rm eff} = -\frac{3q_{I}\mathsf{p}_{r}w}{4\pi\epsilon_{0}}\sum_{n=-\frac{N-1 }{2}}^{\frac{N-1}{2}}\hat{\phi}_{n}\hat{x}_{In}(\hat{x}_{In}^{2}+w^{2})^{-5/2}, \tag{20}\]
with \(\hat{\phi}_{n}\) satisfying the constraint (14). Note that the ion-ruler Hamiltonian now depends only on the relational ion \(\hat{x}_{In}\) and ruler atom \(\hat{\phi}_{n}\) dipole coordinates.
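As a numerical sanity check of the Taylor expansion leading from (16) to (18), one may compare the two potentials directly when \(l,|\phi_{n}|\ll w\). The sketch below uses arbitrary units and illustrative values of our own choosing, with the physical constants absorbed into a single prefactor:

```python
import numpy as np

prefac = 1.0                  # stands for q_I * q / (4 * pi * eps0)
w, l, a_r = 0.5, 1e-3, 1.0    # illustrative values with l << w
p_over_q = l                  # so prefac * p_over_q = q_I p_r / (4 pi eps0)

def V_exact(xI, phi, n):
    """Per-dipole Coulomb interaction of Eq. (16)."""
    x = xI - n * a_r - phi
    return -prefac * (((w - l / 2)**2 + x**2)**-0.5
                      - ((w + l / 2)**2 + x**2)**-0.5)

def V_approx(xI, phi, n):
    """Per-dipole expanded interaction of Eq. (18)."""
    x = xI - n * a_r          # rigid dipole position
    return -prefac * p_over_q * w * ((x**2 + w**2)**-1.5
                                     + 3.0 * phi * x * (x**2 + w**2)**-2.5)

xI, phi, n = 0.3, 1e-3, 0
print(V_exact(xI, phi, n), V_approx(xI, phi, n))  # agree to O(l^2, phi^2)
```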
### Transformation between local and nonlocal ruler bases
In Appendix C, we solve for the classical and quantum (Heisenberg picture) free ruler dynamics, utilizing the common approach of working in terms of the nonlocal normal mode "position" operator solutions:
\[\hat{x}_{\alpha}(t)=x_{\alpha,0}\left[\hat{a}_{\alpha}(0)e^{-i\Omega_{\alpha} t}+\hat{a}_{\alpha}^{\dagger}(0)e^{i\Omega_{\alpha}t}\right], \tag{21}\]
where \(x_{\alpha,0}=(\hbar/2m_{\tau}\Omega_{\alpha})^{1/2}\) is the \(\alpha\) normal mode "displacement" uncertainty, and the normal mode frequencies are
\[\Omega_{\alpha}=2\omega_{r}\sin\left(\frac{\alpha\pi}{2N}\right),\,\alpha=1,2,\ldots,N-1, \tag{22}\]
with \(\omega_{r}=\sqrt{k_{r}/m_{r}}\), and \(\hat{a}_{\alpha}(0)\), \(\hat{a}_{\alpha}^{\dagger}(0)\) are the "phonon" annihilation and creation operators, respectively, which satisfy the commutation relation \([\hat{a}_{\alpha}(0),\hat{a}_{\alpha^{\prime}}^{\dagger}(0)]=\delta_{\alpha, \alpha^{\prime}}\) (with all other commutation relations vanishing).
However, in order to understand how this system functions as a ruler, it is necessary to also work in terms of the local, dipole atom position observables \(\phi_{n}\) that correlate with the position observable \(x_{In}\) of the ion system. To achieve this, we need to perform a change of basis from the normal mode position operator eigenstates \(|\{x_{\alpha}\}\rangle_{r}\) to the local dipole displacement operator eigenstates \(|\{\phi_{n}\}\rangle_{r}\) of the ruler. This is implemented through the following linear transformation between the mode position and dipole position coordinates:
\[\phi_{n}(t)=\sum_{\alpha=1}^{N-1}u_{\alpha,n}x_{\alpha}(t), \tag{23}\]
where the \(u_{\alpha,n}\) are the orthonormal mode eigenfunctions of the ruler:
\[u_{\alpha,n}=\sqrt{\frac{2}{N}}\cos\left[\frac{\alpha\pi}{N}\left(n+\frac{N}{ 2}\right)\right],\,\alpha=1,2,\ldots,N-1. \tag{24}\]
Note that we have one fewer mode \((N-1)\) than the total number of ruler atom dipoles \((N)\). This is because we do not include the zero frequency, \(\alpha=0\) mode in the sum, which results in Eq. (23) solving the constraint of Eq. (14). In other words, imposing the constraint is equivalent to simply removing the zero frequency, centre of mass mode of the ruler (hence showing a key advantage of first working in terms of the nonlocal normal mode solutions to the free ruler equations of motion). The inverse transformation is
\[x_{\alpha}(t)=\sum_{n=-\frac{N-3}{2}}^{\frac{N-1}{2}}\tilde{u}_{\alpha,n}\phi _{n}(t), \tag{25}\]
where4\(\tilde{u}_{\alpha,n}=u_{\alpha,n}-u_{\alpha,-\frac{N-1}{2}}\).
Footnote 4: We originally have \(x_{\alpha}(t)=\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}u_{\alpha,n}\phi_{n}(t)\), and a technical subtlety is that we need to eliminate one of the atom dipole sites in the sum in order to enforce the relational constraint equation (14). Here, it does not matter which coordinate we eliminate, as long as it is not one of the ruler coordinates opposite the ion location. We choose to eliminate the left-most ruler edge coordinate \(\phi_{-\frac{N-1}{2}}=-\sum_{n=-\frac{N-3}{2}}^{\frac{N-1}{2}}\phi_{n}\).
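The transformation between the local and nonlocal bases is easy to verify numerically. The sketch below is a minimal check, assuming only NumPy (the values of \(N\) and \(\omega_{r}\) are arbitrary illustrative choices): it builds the eigenfunctions of Eq. (24), confirms their orthonormality together with the frequencies of Eq. (22), and round-trips a constraint-satisfying displacement pattern through Eqs. (25) and (23).

```python
# Minimal numerical check (NumPy assumed; N and omega_r illustrative) of the
# ruler mode machinery: frequencies of Eq. (22), eigenfunctions of Eq. (24),
# their orthonormality, and the transforms of Eqs. (23) and (25).
import numpy as np

N = 41                                  # number of ruler dipoles (odd)
omega_r = 1.0                           # sqrt(k_r / m_r) in chosen units
n = np.arange(N) - (N - 1) / 2          # site labels -(N-1)/2 .. (N-1)/2
alpha = np.arange(1, N)                 # mode labels 1 .. N-1

Omega = 2 * omega_r * np.sin(alpha * np.pi / (2 * N))                # Eq. (22)
u = np.sqrt(2 / N) * np.cos(np.outer(alpha, n + N / 2) * np.pi / N)  # Eq. (24)

# Orthonormality: sum_n u_{alpha,n} u_{beta,n} = delta_{alpha,beta}
assert np.allclose(u @ u.T, np.eye(N - 1), atol=1e-10)

# A random dipole displacement obeying the constraint sum_n phi_n = 0
rng = np.random.default_rng(0)
phi = rng.normal(size=N)
phi -= phi.mean()

# Eq. (25): x_alpha from the N-1 sites with the left edge eliminated
u_tilde = u[:, 1:] - u[:, [0]]          # u_{alpha,n} - u_{alpha,-(N-1)/2}
x_mode = u_tilde @ phi[1:]

# Eq. (23): phi_n = sum_alpha u_{alpha,n} x_alpha recovers the original
assert np.allclose(u.T @ x_mode, phi, atol=1e-10)
print("first three mode frequencies:", Omega[:3])
```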
### Second-quantized, tight binding model of the ion-ruler system
In this section, we will first derive an approximate, second-quantized tight binding model description of the ion-ruler system, and then obtain analytical solutions for the resulting ion-ruler quantum dynamics. From Eqs. (21) and (23), the free ruler dipole position and momentum operators are defined respectively as (see Appendix C for the derivation details):
\[\hat{\phi}_{n}(t)=\sum_{\alpha=1}^{N-1}x_{\alpha,0}u_{\alpha,n}\left[\hat{a} _{\alpha}(0)e^{-i\Omega_{\alpha}t}+\hat{a}_{\alpha}^{\dagger}(0)e^{i\Omega_{ \alpha}t}\right], \tag{26}\]
\[\hat{\pi}_{n}(t)=-i\sum_{\alpha=1}^{N-1}\sqrt{\frac{m_{r}\Omega_{\alpha}\hbar }{2}}u_{\alpha,n}\left[\hat{a}_{\alpha}(0)e^{-i\Omega_{\alpha}t}-\hat{a}_{ \alpha}^{\dagger}(0)e^{i\Omega_{\alpha}t}\right], \tag{27}\]
where the normal mode ruler frequencies are given by Eq. (22). Expressed in terms of the normal mode annihilation and creation operators, the ruler Hamiltonian \(\hat{H}_{r}\) takes the standard form of a sum over decoupled harmonic oscillator Hamiltonians, one for each mode:
\[\hat{H}_{r}=\frac{1}{2}\sum_{\alpha=1}^{N-1}\hbar\Omega_{\alpha}\left(\hat{a}_{ \alpha}^{\dagger}\hat{a}_{\alpha}+\hat{a}_{\alpha}\hat{a}_{\alpha}^{\dagger} \right). \tag{28}\]
Throughout this work, we restrict to situations where the ion is distant from the ruler edges labelled by \(n=\pm(N-1)/2\) (with ruler dipole number \(N\gg 1\)). Let us therefore first assume that we initially prepare the ion in a localised state opposite the \(i\)th ruler dipole, where \(|i|\ll(N-1)/2\). While the ion will classically remain trapped by the potential \(V_{I}\) in the ion Hamiltonian \(H_{I}^{\text{eff}}\), quantum mechanically the ion can "hop" from one dipole site to the next by tunnelling through the potential barriers between the sites, leading to delocalisation of the ion\(^{5}\). However, we shall work in a parameter regime where the ion hopping timescale between nearest neighbour ruler dipoles is long compared to the timescale over which the ruler responds to the initial presence of the ion, i.e., the timescale for the ruler to measure the position of the ion. Restricting to \(w\ll a_{r}\), i.e., the perpendicular distance between the ion and ruler is much smaller than the ruler dipoles' classical, static equilibrium separation \(a_{r}\), we can then approximate \(V_{I}\) as a delta function potential: \(w^{2}(x^{2}+w^{2})^{-3/2}/2\to\delta(x)\), giving \(V_{I}(x)\approx-q_{I}\mathsf{p}_{r}\delta(x-x_{i})/(2\pi\epsilon_{0}w)\), with \(x_{i}=ia_{r}\). Neglecting for now tunnelling to the neighbouring sites, the (degenerate) ground states of the ion Hamiltonian \(\hat{H}_{I}^{\text{eff}}\) are given approximately by the bound states
Footnote 5: Taking into account the quantum dynamical response of the ruler dipoles (i.e., phonons) to the ion may in fact serve to localise the ion under certain conditions [35; 36; 37]; we do not consider such a possibility in the present work.
\[|i\rangle_{I}=\sqrt{\kappa}\int dx\,e^{-\kappa|x-x_{i}|}|x\rangle_{I}, \tag{29}\]
where \(\kappa=M_{I}q_{I}\mathsf{p}_{r}/(2\pi\hbar^{2}\epsilon_{0}w)\). Here, \(|i\rangle_{I}\) denotes that the ion is localised at site \(i\) when the position uncertainty of the ion satisfies \(\Delta x=1/(\sqrt{2}\kappa)<a_{r}\).
In order to account for the ion being localised at different sites, as well as account for tunnelling between neighbouring sites, it is convenient to adopt a second-quantized tight-binding description, where the position of the ion is expressed in terms of the number of ions at each site \(n\), forming a Fock space spanned by the following Fock state basis:
\[|\psi\rangle_{I}=|s_{-\frac{N-1}{2}}\rangle_{I}\otimes\cdots|s_{n}\rangle_{I} \otimes\cdots|s_{\frac{N-1}{2}}\rangle_{I}\,. \tag{30}\]
If the ion is localised at site \(i\), the Fock state corresponding to the single ion state \(|i\rangle_{I}\) satisfies \(s_{n}=\delta_{i,n}\). We then introduce creation and annihilation operators, \(\hat{c}_{n}^{\dagger}\) and \(\hat{c}_{n}\) respectively for an ion at the \(n\)th site, and suppose that these operators satisfy anticommutation relations (i.e., the ions are treated as Fermions), so that not more than one ion can occupy a given site. However, since we will be considering below only initial states that consist of a single ion, the quantum dynamics ensures that the ion number is conserved and always equal to one, and whether we treat the ions as Fermions or Bosons is then immaterial\(^{6}\). Note that we do not in fact require the use of the full \(N\)-fold tensor product in Eq. (30), since we will only consider ion states that correspond to the ion being distant from the ruler edges (\(|i|\ll N\)), as mentioned above. In particular, including the \(|s_{-\frac{N-1}{2}}\rangle_{I}\) state space in Eq. (30) will not give rise to any inconsistencies in the following.
Footnote 6: In principle, initial states consisting of more than one ion could also be considered by working with this second quantized model. In this case, it would be necessary to also take into account the ion-ion repulsive interaction, and different quantum dynamics would result depending on whether the ions are Bosons or Fermions.
The effective ion Hamiltonian (19) with rigid ruler potential \(\hat{V}_{I}\) then takes the following second-quantized, tight-binding formulation:
\[\hat{H}_{I}^{\text{eff}}=\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\left[\nu\hat{c}_{ n}^{\dagger}\hat{c}_{n}+\gamma\left(\hat{c}_{n+1}^{\dagger}\hat{c}_{n}+\hat{c}_{n}^{ \dagger}\hat{c}_{n+1}\right)\right], \tag{31}\]
where the on-site binding energy \(\nu\) is given by
\[\nu={}_{I}\langle n|\hat{H}_{I}^{\text{eff}}|n\rangle_{I}=-\frac{\hbar^{2} \kappa^{2}}{2M_{I}}, \tag{32}\]
and the hopping strength \(\gamma\) is given by
\[\gamma={}_{I}\langle n+1|\hat{H}_{I}^{\text{eff}}|n\rangle_{I}=-\frac{\hbar^{ 2}\kappa^{2}}{M_{I}}e^{-\kappa a_{r}}\left(\kappa a_{r}+1\right). \tag{33}\]
With the ion wavefunction localisation condition giving \(\kappa a_{r}>1\) (see above), we have that \(e^{-\kappa a_{r}}\ll 1\), so that the overlap integral and hence hopping strength is only significant for nearest-neighbour sites. From Eqs. (32) and (33), we also have that \(\gamma\ll\nu\) and from now on we neglect the hopping terms in \(\hat{H}_{I}^{\text{eff}}\).
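For concreteness, the short sketch below evaluates Eqs. (32) and (33) in dimensionless units (\(\hbar=M_{I}=a_{r}=1\); the \(\kappa\) values are illustrative assumptions) and prints the ratio \(\gamma/\nu=2e^{-\kappa a_{r}}(\kappa a_{r}+1)\), which drops rapidly as \(\kappa a_{r}\) grows.

```python
# Illustrative evaluation (dimensionless units; kappa values assumed) of the
# on-site energy nu [Eq. (32)] and hopping strength gamma [Eq. (33)]; the
# ratio gamma/nu = 2 exp(-kappa a_r)(kappa a_r + 1) shrinks with kappa a_r.
import numpy as np

hbar = M_I = a_r = 1.0
for kappa in [2.0, 5.0, 10.0]:
    nu = -hbar**2 * kappa**2 / (2 * M_I)
    gamma = -(hbar**2 * kappa**2 / M_I) * np.exp(-kappa * a_r) * (kappa * a_r + 1)
    print(f"kappa*a_r = {kappa * a_r:4.1f}: gamma/nu = {gamma / nu:.2e}")
```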
The ion-ruler dipole elastic displacement interaction \(\hat{V}_{Ir}^{\text{eff}}\) defined in Eq. (20) takes the following second-quantized, tight binding form:
\[\hat{V}_{Ir}^{\text{eff}}=-\frac{3q_{I}\mathsf{p}_{r}w}{4\pi\epsilon_{0}}\sum _{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\hat{\phi}_{n}\hat{x}_{In}\left(\hat{x}_{In }^{2}+w^{2}\right)^{-5/2}|{}_{\hat{x}_{In}=\hat{x}_{I}-na_{r}}=-\lambda\sum_{ n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\hat{c}_{n}^{\dagger}\hat{c}_{n}\hat{\phi}_{n}, \tag{34}\]
where the coupling strength \(\lambda\) is given by
\[\lambda =-\frac{\langle n|\hat{V}_{Ir}^{\text{eff}}|n\rangle}{\hat{\phi} _{n}}=\frac{3q_{I}\mathsf{p}_{r}w\kappa}{4\pi\epsilon_{0}}\int_{0}^{\infty}dx \,e^{-2\kappa x}x(x^{2}+w^{2})^{-5/2} \tag{35}\] \[=\frac{3q_{I}\mathsf{p}_{r}w\kappa^{4}}{4\pi\epsilon_{0}}\xi( \kappa w),\]
with \(\xi(z)=\int_{0}^{\infty}d\tilde{x}\,e^{-2\tilde{x}}\tilde{x}(\tilde{x}^{2}+z^ {2})^{-5/2}\), \(\tilde{x}=\kappa x\).
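Since \(\xi(z)\) has no elementary closed form, it is convenient to evaluate it by quadrature. The sketch below assumes SciPy is available; the \(z\) values are arbitrary examples.

```python
# Numerical evaluation (SciPy assumed; z values illustrative) of the
# dimensionless integral xi(z) entering the coupling strength of Eq. (35),
# where z = kappa * w.
import numpy as np
from scipy.integrate import quad

def xi(z):
    integrand = lambda x: np.exp(-2 * x) * x * (x**2 + z**2) ** (-2.5)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

for z in [0.1, 0.5, 1.0, 2.0]:
    print(f"xi({z:3.1f}) = {xi(z):.4g}")
```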
The full, second quantized tight-binding form of the ion-ruler Hamiltonian is then
\[\hat{H}=\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\nu\hat{c}_{n}^{\dagger}\hat{c} _{n}+\sum_{\alpha=1}^{N-1}\hbar\Omega_{\alpha}\hat{a}_{\alpha}^{\dagger}\hat{ a}_{\alpha}-\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\sum_{\alpha=1}^{N-1}\hbar \Omega_{\alpha}\lambda_{\alpha,n}\,\hat{c}_{n}^{\dagger}\hat{c}_{n}\left(\hat {a}_{\alpha}+\hat{a}_{\alpha}^{\dagger}\right), \tag{36}\]
where the dimensionless ion-ruler mode coupling strength is defined as follows:
\[\lambda_{\alpha,n}=\frac{\lambda x_{\alpha,0}u_{\alpha,n}}{\hbar\Omega_{\alpha }}, \tag{37}\]
with \(x_{\alpha,0}=(\hbar/2m_{r}\Omega_{\alpha})^{1/2}\), and \(\Omega_{\alpha}\), \(u_{\alpha,n}\) defined in Eqs. (22) and (24) respectively.
### Exact solution to the ion-ruler quantum dynamics
We shall focus on the quantum dynamics resulting from the following example initial ion-ruler product state:
\[|\Psi(0)\rangle=\frac{1}{\sqrt{2}}\left(|i_{1}\rangle_{I}+|i_{2} \rangle_{I}\right)\otimes|\psi(0)\rangle_{r}, \tag{38}\]
where the ion is in an equal amplitude quantum superposition of stationary wavepackets centred at distinct sites \(i_{1}\) and \(i_{2}\), and the ruler is in the ground state of its free Hamiltonian, expressed in the non-local normal mode basis as
\[|\psi(0)\rangle_{r}=|0\rangle_{1}\otimes|0\rangle_{2}\otimes\cdots|0\rangle_{\alpha}\otimes\cdots|0\rangle_{N-1}. \tag{39}\]
In particular, the ruler normal mode phonon occupation numbers are all initially zero.
Assuming such an ion-ruler initial product state (38) is equivalent to "plucking" (i.e., suddenly switching on) the ion-ruler interaction at time \(t=0\). Therefore, the ruler will behave as a wavelike medium with the formation of "ripples" that reflect from the ruler ends and do not dissipate away; the ruler would then function very poorly in terms of measuring the ion position. This problem can be addressed by inserting a switching function \(S(t)\) in the interaction part of Hamiltonian (36), which ensures that the interaction between the ion and the ruler is turned on sufficiently slowly:
\[\hat{H}=\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\nu\hat{c}_{n}^{\dagger}\hat{c}_{n}+\sum_{\alpha=1}^{N-1}\hbar\Omega_{\alpha}\hat{a}_{\alpha}^{\dagger}\hat{a}_{\alpha}-\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\sum_{\alpha=1}^{N-1}S(t)\hbar\Omega_{\alpha}\lambda_{\alpha,n}\hat{c}_{n}^{\dagger}\hat{c}_{n}\left(\hat{a}_{\alpha}+\hat{a}_{\alpha}^{\dagger}\right). \tag{40}\]
Here, we model the switching function as follows:
\[S(t)=\begin{cases}1-e^{-t/\Delta t}&t>0\\ 0&t<0\end{cases}. \tag{41}\]
This switching function equals zero for \(t<0\) and approaches one as \(t\rightarrow\infty\), switching on at around \(t=0\) over the duration \(\Delta t\); in the limit \(\Delta t\to 0\), the switching function (41) coincides with the step function \(\Theta(t)\).
Hamiltonian (40), which neglects the tunnelling of the ion between neighbouring sites, resembles that for an optomechanical many-body system comprising multiple cavity modes and multiple mechanical modes [23]. The fact that the ion-ruler interaction Hamiltonian commutes with the on-site ion Hamiltonian allows for the quantum dynamics to be solved analytically in closed form; we apply the method of analysis given in Ref. [23] to express the unitary time-evolution operator \(\hat{U}(t)\) as follows:
\[\hat{U}(t)=e^{-i\frac{\nu}{\hbar}\sum_{n}\hat{c}_{n}^{\dagger}\hat{c}_{n}t}e^{-i\sum_{\alpha,n}f(\alpha,n,t)(\hat{c}_{n}^{\dagger}\hat{c}_{n})^{2}}\,e^{\sum_{\alpha,n}\left[g(\alpha,n,t)\hat{a}_{\alpha}^{\dagger}-g^{*}(\alpha,n,t)\hat{a}_{\alpha}\right]\hat{c}_{n}^{\dagger}\hat{c}_{n}}\,e^{-i\sum_{\alpha}\Omega_{\alpha}t\hat{a}_{\alpha}^{\dagger}\hat{a}_{\alpha}}, \tag{42}\]
where \(f=(F_{1}+F_{2}F_{3})\), \(g=(F_{3}-iF_{2})e^{-i\Omega_{\alpha}t}\), and \(F_{1}\), \(F_{2}\), and \(F_{3}\) are time-dependent functions
defined respectively as
\[\begin{split} F_{1}(\alpha,n,t)&=-2\lambda_{\alpha,n}^ {2}\int_{0}^{\Omega_{\alpha}t}d\tau\left(1-e^{-\frac{\tau}{\Omega_{\alpha} \Delta t}}\right)\sin\left(\tau\right)\int_{0}^{\tau}d\tau^{\prime}\left(1-e^{ -\frac{\tau^{\prime}}{\Omega_{\alpha}\Delta t}}\right)\cos\left(\tau^{\prime} \right),\\ F_{2}(\alpha,n,t)&=-\lambda_{\alpha,n}\int_{0}^{ \Omega_{\alpha}t}d\tau\left(1-e^{-\frac{\tau}{\Omega_{\alpha}\Delta t}}\right) \cos\left(\tau\right),\\ F_{3}(\alpha,n,t)&=-\lambda_{\alpha,n}\int_{0}^{ \Omega_{\alpha}t}d\tau\left(1-e^{-\frac{\tau}{\Omega_{\alpha}\Delta t}}\right) \sin\left(\tau\right).\end{split} \tag{43}\]
The unitary operator (42) neglects cross term contributions of the form \(e^{-i\sum_{\alpha,n\neq m}f(\alpha,n,m,t)(\hat{c}_{n}^{\dagger}\hat{c}_{m}^{ \dagger}\hat{c}_{m})}\), which are only relevant for states describing more than one ion occupying different sites; here we restrict ourselves to single ion states.
The initial state \(\ket{\Psi(0)}\) given by Eq. (38) evolves into an entangled state between the ion and the ruler:
\[\ket{\Psi(t)}=\hat{U}(t)\ket{\Psi(0)}=\frac{1}{\sqrt{2}}e^{-i\frac{\nu}{\hbar}t}\left[\ket{i_{1}}_{I}\ket{\Phi_{1}(t)}_{r}+\ket{i_{2}}_{I}\ket{\Phi_{2}(t)}_{r}\right], \tag{44}\]
where \(\ket{\Phi_{m}(t)}_{r}=\prod_{\alpha=1}^{N-1}e^{-if(\alpha,i_{m},t)}|g(\alpha, i_{m},t)\rangle_{r}\), \(m=1,2\), with \(|g(\alpha,i_{m},t)\rangle_{r}\) a coherent state of normal mode \(\alpha\). From Eq. (44), the density matrix of the ion-ruler system is
\[\hat{\rho}(t)=\ket{\Psi(t)}\bra{\Psi(t)}=\frac{1}{2}\sum_{m,m^{\prime}=1}^{2} \ket{i_{m}}_{I}\bra{i_{m^{\prime}}}\otimes\ket{\Phi_{m}}_{r}\bra{\Phi_{m^{ \prime}}}. \tag{45}\]
A necessary condition for the ruler to measure the position of the ion is that the off-diagonal terms of the reduced density matrix of the ion subsystem in its site position basis become suppressed over time, i.e., the ruler decoheres the initial ion superposition state. We obtain for the off-diagonal term of the ion reduced density matrix
\[\begin{split}\rho_{I}^{i_{1}i_{2}}(t)&={}_{I}\bra{ i_{1}}\operatorname{Tr}_{r}\left[\hat{\rho}(t)\right]\ket{i_{2}}_{I}=\\ &=\frac{1}{2}\operatorname{Tr}_{r}\left\{\prod_{\alpha,\beta=1}^{ N-1}e^{-i[f(\alpha,i_{1},t)-f(\beta,i_{2},t)]}\ket{g(\alpha,i_{1},t)}_{r}\bra{g( \beta,i_{2},t)}\right\}.\end{split} \tag{46}\]
Recalling the formula for the inner product between two coherent states \(\ket{a}\) and \(\ket{b}\), \(\langle b|a\rangle=\exp\left[-\frac{1}{2}\left(|b|^{2}+|a|^{2}-2b^{*}a\right)\right]\), from Eqs. (43) and (46) we obtain for the ion coherence
\[\begin{split} C_{I}(t)&=2|\rho_{I}^{i_{1}i_{2}}(t)|=\prod_{\alpha=1}^{N-1}\left|\langle g(\alpha,i_{1},t)|g(\alpha,i_{2},t)\rangle\right|=\\ &=\prod_{\alpha=1}^{N-1}\exp\left\{-\frac{1}{2}\text{Re}\left[|g(\alpha,i_{1},t)|^{2}+|g(\alpha,i_{2},t)|^{2}-2g^{*}(\alpha,i_{1},t)g(\alpha,i_{2},t)\right]\right\}=\\ &=\exp\left\{-\frac{1}{2}\sum_{\alpha=1}^{N-1}\left[(F_{3}(\alpha,i_{1},t)-F_{3}(\alpha,i_{2},t))^{2}+(F_{2}(\alpha,i_{1},t)-F_{2}(\alpha,i_{2},t))^{2}\right]\right\}.\end{split} \tag{47}\]
For times longer than the switch-on duration \(\Delta t\), the functions \(F_{2}\) and \(F_{3}\) become approximately
\[F_{2}(\alpha,n,t) \approx-\lambda_{\alpha,n}\left(\sin(\Omega_{\alpha}t)-\frac{\Omega _{\alpha}\Delta t}{1+\Omega_{\alpha}^{2}\Delta t^{2}}\right), \tag{48}\] \[F_{3}(\alpha,n,t) \approx\lambda_{\alpha,n}\left(\cos(\Omega_{\alpha}t)-\frac{1}{1 +\Omega_{\alpha}^{2}\Delta t^{2}}\right), \tag{49}\]
where \(\lambda_{\alpha,n}\) is defined in Eq. (37). Provided the switch-on duration satisfies \(\Delta t\gg 1/\Omega_{\alpha=1}\), expressions (48) and (49) can then be further simplified to
\[F_{2}(\alpha,n,t)\approx-\lambda_{\alpha,n}\sin(\Omega_{\alpha}t),\qquad F_{3 }(\alpha,n,t)\approx\lambda_{\alpha,n}\cos(\Omega_{\alpha}t). \tag{50}\]
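The relaxation onto Eq. (50) can be confirmed directly. The sketch below (NumPy only; the values of \(\lambda_{\alpha,n}\), \(\Omega_{\alpha}\) and \(\Delta t\) are illustrative) evaluates the exact integrals of Eq. (43) on a dense grid; the small residual offsets are the constant terms retained in Eqs. (48) and (49).

```python
# Check (NumPy assumed; parameters illustrative) that the exact switching
# integrals F2, F3 of Eq. (43) approach the long-time forms of Eq. (50)
# once t >> Delta_t and Omega * Delta_t >> 1.
import numpy as np

lam_an = 1.0                      # lambda_{alpha,n}
Omega, dt = 1.0, 30.0             # mode frequency and switch-on duration
t = 40.0 * dt                     # a time well after the switch-on

tau = np.linspace(0.0, Omega * t, 400001)        # dense scaled-time grid
S = 1.0 - np.exp(-tau / (Omega * dt))            # switching factor
trap = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(tau))

F2_exact = -lam_an * trap(S * np.cos(tau))       # second line of Eq. (43)
F3_exact = -lam_an * trap(S * np.sin(tau))       # third line of Eq. (43)
print(f"F2: exact {F2_exact:+.4f} vs Eq.(50) {-lam_an * np.sin(Omega * t):+.4f}")
print(f"F3: exact {F3_exact:+.4f} vs Eq.(50) {lam_an * np.cos(Omega * t):+.4f}")
```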
For \(N\gg 1\), the above condition on the switch-on duration can be rewritten as \(\Delta t\gg L/(\pi c_{r})\), where \(L=(N-1)a_{r}\approx Na_{r}\) is the classical, equilibrium free ruler length and \(c_{r}=\omega_{r}a_{r}\) is the ruler elastic wave propagation speed in the long wavelength (equivalently low frequency) limit. In particular, the ion-ruler coupling is switched on more slowly than the time for an acoustic wave to propagate the length of the ruler. For \(t>\Delta t\), the ion coherence (47) is then given approximately by the following long time limit expression
\[\lim_{t\to\infty}C_{I}(t)=\exp\left\{-\sum_{\alpha=1}^{N-1}\frac{\lambda^{2}} {2\hbar Nm_{r}\Omega_{\alpha}^{3}}\left[\cos\left(\frac{\alpha\pi}{N}\bigg{(} i_{1}+\frac{N}{2}\bigg{)}\right)-\cos\left(\frac{\alpha\pi}{N}\bigg{(}i_{2}+ \frac{N}{2}\bigg{)}\right)\right]^{2}\right\}. \tag{51}\]
Note that Eq. (51) is time-independent.
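A direct numerical rendering of Eq. (51) reproduces the qualitative trends of Fig. 3. The sketch below (NumPy only; parameters follow the Fig. 3a choices \(m_{r}=k_{r}=\hbar=1\), \(\lambda=0.3\)) prints the long-time coherence versus the superposition separation.

```python
# Sketch (NumPy assumed; Fig. 3a-style parameters) of the long-time ion
# coherence of Eq. (51) versus the separation of the superposition branches.
import numpy as np

def coherence(i1, i2, N, lam=0.3, m_r=1.0, k_r=1.0, hbar=1.0):
    alpha = np.arange(1, N)
    Omega = 2 * np.sqrt(k_r / m_r) * np.sin(alpha * np.pi / (2 * N))
    d = (np.cos(alpha * np.pi / N * (i1 + N / 2))
         - np.cos(alpha * np.pi / N * (i2 + N / 2)))
    return np.exp(-np.sum(lam**2 * d**2 / (2 * hbar * N * m_r * Omega**3)))

N = 41
for sep in [2, 4, 6, 8, 10]:
    print(f"|i1 - i2| = {sep:2d}: C_I = {coherence(-sep // 2, sep - sep // 2, N):.3f}")
```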
Fig. 3 plots the dependence of the long-time limit coherence given by Eq. (51) on both the ion superposition separation \(|i_{1}-i_{2}|\) and the ruler dipole number \(N\), for some example ion-ruler system parameters. As might be expected from environmentally induced decoherence (with the ruler acting as an environment for the ion), the coherence becomes smaller the larger the ion superposition separation \(|i_{1}-i_{2}|\), as can be seen from Fig. 3a by fixing a given value for \(N\) and looking at the dependence on the separation. Less expected in Fig. 3a is a weaker, but still progressive decrease in coherence with increasing ruler dipole number \(N\) (equivalently increasing ruler length) for a fixed given ion superposition separation \(|i_{1}-i_{2}|\); we might have expected that the fixed separation coherence would not depend on ruler length when the latter is much larger than the former (recall we are assuming that the ion is distant from the ruler edges). The resolution lies in the fact that we are considering a one-dimensional mass-spring model of a ruler, which in contrast to a more realistic two or three-dimensional model, gives rise to infra-red type signatures (i.e., boundary size effects) in local properties. In particular, as our one-dimensional ruler becomes longer, it gets more "floppy", and the zero-point fluctuations in the dipole displacements \(\delta\phi_{n}\) grow. Such fluctuations will cause dephasing and hence a progressive decrease in the ion coherence (51) with increasing \(N\) as seen in Fig. 3a. Note that such dephasing due to ruler zero-point fluctuations is not the same as decoherence due to the ruler becoming entangled with the ion (and measuring the position of the latter); both decoherence and dephasing result in a reduction of the ion coherence (51) that cannot be distinguished without measuring the ruler response to the ion as well (as discussed in Sec. V below).
The above-described ruler length signature can be avoided by scaling both the ruler atom mass \(m_{r}\) and spring constant \(k_{r}\) by the factor \(N^{s}\), with \(s>0\) some scaling exponent, i.e., by making the ruler progressively stiffer and correspondingly more massive as its length increases, hence capturing some distinguishing scaling aspects of longitudinal vibrational modes in two and three-dimensional rulers. In Fig. 3b, we show the ion coherence for the scalings \(m_{r}\to Nm_{r}\) and \(k_{r}\to Nk_{r}\) (i.e., \(s=1\)), which effectively corresponds to a two-dimensional model. Since including the scaling makes the
ruler stiffer, we have correspondingly increased the ion-ruler coupling strength from \(\lambda=0.3\) to \(\lambda=2\) in order to increase the local distortion of the ruler opposite to the ion location. Note that the coherence now in fact increases with ruler length and fixed ion superposition separation, in contrast to when there is no scaling; this is because the dipole zero-point fluctuations decrease with increasing dipole mass and spring stiffness, resulting in less dephasing.
## V The quantum ruler as a position measurement device
In the previous section, we examined the reduced ion system state in the long time limit, tracing out the state of the ruler. This gave us some indirect information about the behaviour of the ruler as a quantum, many degree of freedom environment interacting with the ion. In this section, we investigate the behaviour of the ruler as a quantum measuring device for the ion that is initially in a superposition state (38) and with the ruler initially in its ground state (39). We shall first obtain the ion-ruler density matrix in the local position representation \(|\{\phi_{n}\}\rangle_{r}\) of the ruler dipoles, and investigate the ruler response to the ion by tracing out the latter and considering the average \(\langle\phi_{n}\rangle\) and variance \(\delta\phi_{n}\) of the ruler dipole displacements versus dipole site \(n\). We then selectively trace out the ruler dipoles, except for those directly opposite the ion, giving a reduced density matrix for the ion and two nearest dipoles whose displacements \(\phi_{i_{1(2)}}\) respond to the ion's local presence. The resulting reduced density matrix is then compared to that which assumes an initial mixed state for the ions through a certain joint measurement of the ion and ruler dipole displacements, in order to quantify the extent to which the extended material ruler acts as a quantum position measuring device.

Figure 3: Ion coherence (51) versus ruler dipole number \(N\) and separation \(|i_{1}-i_{2}|\) between ion superposition states; the ruler dipole parameter units are \(m_{r}=k_{r}=\hbar=1\). (a) Coupling strength \(\lambda=0.3\). (b) Coupling strength \(\lambda=2\) and scaling \(m_{r}\to Nm_{r}\), \(k_{r}\to Nk_{r}\).
### Local description of the ruler dipoles state
We first express the density operator (45) in the nonlocal mode position representation \(|\{x_{\alpha}\}\rangle_{r}\) as follows:
\[\begin{split}\hat{\rho}(t)=&\prod_{\alpha,\beta=1}^{N -1}\int dx_{\alpha}d\tilde{x}_{\beta}|x_{\alpha}\rangle_{r}\langle x_{\alpha}| \Psi(t)\rangle\langle\Psi(t)|\tilde{x}_{\beta}\rangle_{r}\langle\tilde{x}_{ \beta}|\\ =&\frac{1}{2}\sum_{m,m^{\prime}=1}^{2}\prod_{\alpha, \beta=1}^{N-1}\int dx_{\alpha}d\tilde{x}_{\beta}\psi_{i_{m}}(x_{\alpha},t) \psi_{i_{m^{\prime}}}^{*}(\tilde{x}_{\beta},t)|i_{m}\rangle_{I}\langle i_{m^{ \prime}}|\otimes|x_{\alpha}\rangle_{r}\langle\tilde{x}_{\beta}|,\end{split} \tag{52}\]
where the wave function \(\psi_{i_{m}}(x_{\alpha},t)\) of mode \(\alpha\) is defined as \(\psi_{i_{m}}(x_{\alpha},t)={}_{r}\langle x_{\alpha}|\Phi_{m}(t)\rangle_{r}\). From Eq. (50) and the definition for \(|\Phi_{m}(t)\rangle_{r}\) given just below (44), we have in the long time limit, \(|\Phi_{m}\rangle_{r}=\prod_{\alpha=1}^{N-1}e^{-if(\alpha,i_{m},t)}|\lambda_{ \alpha,i_{m}}\rangle_{r}\), and the mode \(\alpha\) wave function is
\[\psi_{n}(x_{\alpha},t)=\frac{1}{\sqrt{\sqrt{2\pi}x_{\alpha,0}}}e^{-if(\alpha, n,t)}e^{-\frac{1}{4}\left(\frac{x_{\alpha}}{x_{\alpha,0}}-2\lambda_{\alpha,n} \right)^{2}}, \tag{53}\]
where \(f=F_{1}+F_{2}F_{3}\) [with the \(F_{i}\) functions defined in Eq. (50)], \(x_{\alpha,0}=(\hbar/2m_{r}\Omega_{\alpha})^{1/2}\), and \(\lambda_{\alpha,n}=\lambda x_{\alpha,0}u_{\alpha,n}/(\hbar\Omega_{\alpha})\) [see Eq. (37)].
We next express the ion-ruler system density operator (52) in terms of the local, ruler dipole displacement coordinate representation \(|\{\phi_{n}\}\rangle_{r}\), by inserting on the left and right sides the following resolution of the identity\(^{7}\):
Footnote 7: Note that \(n\) ranges from \(-(N-3)/2\) to \((N-1)/2\), since we have eliminated the \(\phi_{-\frac{N-1}{2}}\) coordinate by expressing it in terms of the remaining \(\phi_{n}\)s through the constraint (14).
\[\mathbb{1}=\sqrt{N}\prod_{n=-\frac{N-3}{2}}^{\frac{N-1}{2}}\int d\phi_{n}| \phi_{n}\rangle_{r}\langle\phi_{n}|. \tag{54}\]
The overall \(\sqrt{N}\) normalization factor is the determinant of the \(N-1\) dimensional Jacobian matrix: \(\frac{\partial\left(\{x_{\beta}\}\right)}{\partial(\{\phi_{n}\})}=\sqrt{N}\). Integrating over the nonlocal mode coordinates \(x_{\alpha}\), \(\tilde{x}_{\alpha}\), and using the fact that \({}_{r}\langle\{\phi_{n}\}|x_{\alpha}\rangle_{r}=\delta(x_{\alpha}-\sum_{n} \tilde{u}_{\alpha,n}\phi_{n})\) [see Eq. (25)], the density operator (52) becomes
\[\hat{\rho}(t)=\frac{\sqrt{N}}{2}\sum_{m,m^{\prime}=1}^{2}\prod_{i,j=-\frac{N-3}{2}}^{\frac{N-1}{2}}\int d\phi_{i}d\phi_{j}^{\prime}\prod_{\alpha=1}^{N-1}\tilde{\psi}_{i_{m}}(\{\phi_{n}\},t)\tilde{\psi}_{i_{m^{\prime}}}^{*}(\{\phi_{n}^{\prime}\},t)|i_{m}\rangle_{I}\langle i_{m^{\prime}}|\otimes|\phi_{i}\rangle_{r}\langle\phi_{j}^{\prime}|, \tag{55}\]
where \(\tilde{\psi}_{i_{m}}(\{\phi_{n}\},t)=\psi_{i_{m}}\left(\sum_{n}\tilde{u}_{ \alpha,n}\phi_{n},\,t\right)\).
We now give the expression for the reduced density operator of the ion and the dipoles located at the ruler sites \(i_{1}\), \(i_{2}\). This amounts to tracing the density operator (55) over the dipoles at all of
the other sites \(n\neq i_{1}\), \(i_{2}\). We obtain
\[\begin{split}\hat{\rho}_{\rm Ir}(t)&={\rm Tr}_{\phi_{n\neq i_{1},i_{2}}}\left[\hat{\rho}(t)\right]=\\ &=\int d\phi_{i_{1}}d\phi^{\prime}_{i_{1}}d\phi_{i_{2}}d\phi^{\prime}_{i_{2}}|\phi_{i_{1}}\rangle_{r}\langle\phi^{\prime}_{i_{1}}|\otimes|\phi_{i_{2}}\rangle_{r}\langle\phi^{\prime}_{i_{2}}|\prod_{n\neq i_{1},i_{2}}\int d\phi_{n}\,_{r}\langle\{\phi_{n}\}|\hat{\rho}(t)|\{\phi_{n}\}\rangle_{r},\end{split} \tag{56}\]
where now the reduced state lives on the tensor product of the Hilbert spaces of three subsystems: the ion and the ruler dipoles at sites \(i_{1}\) and \(i_{2}\). The elements of this density operator are given explicitly in Appendix D.
### Ruler response to the ion
In this section, we determine how the ruler responds to the ion in the long time limit. Consider first the situation where the ion is localised at a single site \(i\) [i.e., in a bound state (29)] and the ruler initially in its ground state (39); using the long time solution \(|\Phi(t)\rangle_{r}=\prod_{\alpha=1}^{N-1}e^{-if(\alpha,i,t)}|\lambda_{\alpha,i}\rangle_{r}\), with \(\phi_{n}=\sum_{\alpha=1}^{N-1}x_{\alpha}u_{\alpha,n}\), we obtain respectively for the average ruler dipole \(n\) coordinate displacement and the uncertainty in the latter:
\[\langle\phi_{n}\rangle=\lim_{t\to\infty}{\rm Tr}\left[\hat{\phi}_{n}\hat{\rho }(t)\right]=2\sum_{\alpha=1}^{N-1}u_{\alpha,n}x_{\alpha,0}\lambda_{\alpha,i} \tag{57}\]
and
\[\delta\phi_{n}=\sqrt{\langle\phi_{n}^{2}\rangle-\langle\phi_{n}\rangle^{2}}= \sqrt{\sum_{\alpha=1}^{N-1}u_{\alpha,n}^{2}x_{\alpha,0}^{2}}, \tag{58}\]
where \(u_{\alpha,n}\) is defined in Eq. (24), \(x_{\alpha,0}=(\hbar/2m_{r}\Omega_{\alpha})^{1/2}\), and \(\lambda_{\alpha,i}=\lambda x_{\alpha,0}u_{\alpha,i}/(\hbar\Omega_{\alpha})\), with \(\Omega_{\alpha}\) defined in Eq. (22). Note that the ruler dipole displacement uncertainty (58) does not depend on the interaction with the ion (i.e., no dependence on the coupling strength \(\lambda\)), coinciding with the free ruler quantum zero-point uncertainty in its ground state.
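Equations (57) and (58) are straightforward to evaluate. The sketch below (NumPy assumed; parameters mirror the Fig. 4 choices) computes the full displacement profile for an ion localised at a single site and verifies the constraint \(\sum_{n}\langle\phi_{n}\rangle=0\).

```python
# Illustrative evaluation (NumPy assumed; Fig. 4-style parameters) of the
# long-time average displacement <phi_n> [Eq. (57)] and the zero-point
# uncertainty delta phi_n [Eq. (58)] for an ion localised at site i.
import numpy as np

N, lam, hbar = 41, 2.0, 1.0
m_r = k_r = float(N)                    # scaled ruler, as in the text
i = 0                                   # ion localisation site
alpha = np.arange(1, N)
n = np.arange(N) - (N - 1) / 2

Omega = 2 * np.sqrt(k_r / m_r) * np.sin(alpha * np.pi / (2 * N))
x0 = np.sqrt(hbar / (2 * m_r * Omega))                      # x_{alpha,0}
u = np.sqrt(2 / N) * np.cos(np.outer(alpha, n + N / 2) * np.pi / N)
lam_ai = lam * x0 * u[:, n == i].ravel() / (hbar * Omega)   # lambda_{alpha,i}

phi_avg = 2 * (u * (x0 * lam_ai)[:, None]).sum(axis=0)      # Eq. (57)
dphi = np.sqrt((u**2 * (x0**2)[:, None]).sum(axis=0))       # Eq. (58)
print("<phi> at the ion site :", phi_avg[n == i][0])
print("delta phi at ion site :", dphi[n == i][0])
print("constraint sum <phi_n>:", phi_avg.sum())             # ~ 0
```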
In Fig. 4, we plot the average ruler dipole displacement \(\langle\phi_{n}\rangle\) and uncertainty \(\delta\phi_{n}\) versus dipole label \(n\) for an example ruler length \(N=41\), and two different ion-ruler coupling strengths: \(\lambda=2\) (Fig. 4a) and \(\lambda=25\) (Fig. 4b). The ruler dipole mass and spring constant are scaled respectively as \(m_{r}=Nm_{r0}=41\) and \(k_{r}=Nk_{r0}=41\), with dipole parameter units \(m_{r0}=k_{r0}=\hbar=1\).
From Fig. 4, we see that the ruler dipole displacement \(\langle\phi_{n}\rangle\) is a local maximum where the ion is localised at the example sites \(i=0\) and \(i=-5\). This is as we require in order to have the ruler locate the ion. However, the ruler response is non-local, with the dipoles having non-negligible displacement magnitudes all the way to the edges of the ruler at \(n=\pm(N-1)/2\). As mentioned in the beginning of Sec. IV, this non-local ruler response is a consequence of the one-dimensional mass-spring nature of the ruler model; if we were to instead use a more realistic two or three-dimensional, extended mass-spring model of a material ruler, then the dipoles would be most displaced in the neighbourhood of the ion's location, with the displacement amplitudes decaying away in magnitude as we move away from the ion location, hence giving a more desirable localised response\(^{8}\).
Footnote 8: A price to pay, however, would be a more complicated normal vibrational mode analysis of the extended two or three dimensional mass-spring structures.
Another way to understand the nonlocal ruler response seen in Fig. 4 is as a consequence of the relative coordinate constraint (14). In particular, since there is a segment of the ruler where
\(\langle\phi_{n}\rangle>0\), we must also have complementary segments where \(\langle\phi_{n}\rangle<0\), such that the negative region "areas" of the \(\langle\phi_{n}\rangle\) versus \(n\) curve cancel the positive region "area"; we have verified that \(\sum_{n=-(N-1)/2}^{(N-1)/2}\langle\phi_{n}\rangle=0\), as must follow from Eq. (14).
In Fig. 4, we note that the dipole displacement uncertainties \(\delta\phi_{n}\) increase towards the ruler edges. This is again a feature of the one dimensional nature of the mass-spring ruler model, where the dipoles become progressively more "floppy" with decreasing effective spring constants, the closer they are located to the ruler edges. With our example choice of coupling strength \(\lambda=2\) (Fig. 4a), we have \(\langle\phi_{i}\rangle<\delta\phi_{i}\), i.e., the average local dipole displacement at the ion location \(i\) is smaller than the dipole zero-point uncertainty there; this corresponds to the "weak" measurement regime, where a large (ensemble) number of repeated measurements of the ruler dipole displacements is required in order to accurately determine the ion location. On the other hand, for the example choice of coupling strength \(\lambda=25\) (Fig. 4b), we have \(\langle\phi_{i}\rangle\gg\delta\phi_{i}\), i.e., the average local dipole displacement at the ion location \(i\) is much larger than the dipole zero-point uncertainty there; this corresponds to the "strong" measurement regime, where we can accurately determine the ion location without a large (ensemble) number of repeated measurements on the ruler dipole displacements.
Figure 4: Long time limit average ruler dipole displacement \(\langle\phi_{n}\rangle\) versus dipole site label \(n\) for an ion localised at \(i=0\) (purple stars) and \(i=-5\) (red triangles), and ion-ruler coupling strengths (a) \(\lambda=2\); (b) \(\lambda=25\). The dipole displacement uncertainty \(\delta\phi_{n}\) is also shown for comparison (blue dots). The other parameters used are \(N=41\), \(k_{r}=m_{r}=41\), and \(\hbar=1\).

Returning to the situation where the ion is in a superposition of two states localised at distinct sites \(i_{1}\) and \(i_{2}\), with the ruler initially in the ground state of its free Hamiltonian [Eq. (38)], we obtain respectively for the ruler dipole \(n\) average displacement and displacement uncertainty in the long time limit:
\[\langle\phi_{n}\rangle = \sum_{\alpha=1}^{N-1}u_{\alpha,n}x_{\alpha,0}\left(\lambda_{\alpha,i _{1}}+\lambda_{\alpha,i_{2}}\right), \tag{59}\] \[\delta\phi_{n} = \sqrt{\sum_{\alpha=1}^{N-1}u_{\alpha,n}^{2}x_{\alpha,0}^{2}+\left[ \sum_{\alpha=1}^{N-1}u_{\alpha,n}x_{\alpha,0}\left(\lambda_{\alpha,i_{1}}- \lambda_{\alpha,i_{2}}\right)\right]^{2}}. \tag{60}\]
Note that, in contrast to the ruler dipole displacement uncertainty for the ion localised at a single site considered above, the displacement uncertainty now depends on the interaction between the ion and the ruler (characterised by the coupling strength \(\lambda\)).
Fig. 5 plots the average ruler dipole displacement \(\langle\phi_{n}\rangle\) and uncertainty \(\delta\phi_{n}\) versus dipole site label \(n\) for the same example parameters as in the single site localisation situation considered above, and with superposition sites \(i_{1}=-5\) and \(i_{2}=5\). From Fig. 5, we see that the ruler dipole displacement \(\langle\phi_{n}\rangle\) is a local maximum where the ion is localised at the sites \(i_{1}=-5\) and \(i_{2}=5\) in the superposition. One may also verify that the average dipole displacements satisfy the relative coordinate constraint (14): \(\sum_{n=-(N-1)/2}^{(N-1)/2}\langle\phi_{n}\rangle=0\). However, the ion-ruler \(\lambda\) coupling-dependent contribution to the uncertainty [second term in the square root expression (60)] dominates over the free ruler zero-point uncertainty [first term in the square root expression (60)], and in fact is larger than the local maximum average dipole displacements \(\langle\phi_{i_{1(2)}}\rangle\), independently of the selected coupling strength \(\lambda\). This implies that, even for a large ion-ruler coupling strength, a much larger (ensemble) number of repeated measurements on the ruler dipole displacements are required in order to accurately determine the ion locations in a superposition (or mixture) of localised site states than for the situation where the ion is in a single site localised state. From Fig. 5b, we see that the large, \(\lambda\) coupling-dependent uncertainty magnitude \(\delta\phi_{n}\) extends to the ends of the ruler at an approximately constant value, and dips sharply between \(i_{1}(=-5)<n<i_{2}(=5)\). This non-local, \(\lambda\)-dependent uncertainty is again a consequence of the one-dimensional nature of the ruler model; for a two or three-dimensional mass-spring model of an extended material ruler, we would expect the uncertainty to be more localised in the neighbourhood of the localised ion positions in the considered superposition state.
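The same machinery evaluates the superposition-state response of Eqs. (59) and (60). The sketch below (NumPy assumed; parameters mirror the Fig. 5b choices) separates the free zero-point and \(\lambda\)-dependent contributions to \(\delta\phi_{n}\) at the ion sites.

```python
# Sketch (NumPy assumed; Fig. 5b-style parameters) of the superposition-state
# ruler response of Eqs. (59)-(60): the lambda-dependent part of delta phi_n
# dominates over the free-ruler zero-point part.
import numpy as np

N, lam, hbar = 41, 25.0, 1.0
m_r = k_r = float(N)
i1, i2 = -5, 5
alpha = np.arange(1, N)
n = np.arange(N) - (N - 1) / 2

Omega = 2 * np.sqrt(k_r / m_r) * np.sin(alpha * np.pi / (2 * N))
x0 = np.sqrt(hbar / (2 * m_r * Omega))
u = np.sqrt(2 / N) * np.cos(np.outer(alpha, n + N / 2) * np.pi / N)
lam_a = lambda i: lam * x0 * u[:, n == i].ravel() / (hbar * Omega)

phi_avg = (u * (x0 * (lam_a(i1) + lam_a(i2)))[:, None]).sum(axis=0)  # Eq. (59)
zp = (u**2 * (x0**2)[:, None]).sum(axis=0)                 # zero-point term
cpl = (u * (x0 * (lam_a(i1) - lam_a(i2)))[:, None]).sum(axis=0)  # coupling term
dphi = np.sqrt(zp + cpl**2)                                # Eq. (60)
print("<phi> at i1          :", phi_avg[n == i1][0])
print("delta phi at i1      :", dphi[n == i1][0])
print("free zero-point part :", np.sqrt(zp[n == i1][0]))
```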
### Quantum measurement scheme for superpositions of positions
While the ruler exhibits a large quantum uncertainty in its dipole displacements for a strongly coupled ion that is initially in a superposition of localised site states \(i_{1}\) and \(i_{2}\) (see Fig. 5), it is not possible to distinguish such a state by measurements of the ruler alone from alternatively having the ion in a mixture of the localised site states \(i_{1}\) and \(i_{2}\); as discussed in Sec. III, we necessarily require a joint measurement that acts on both the ion system and ruler. We define such a joint quantum measurement through the projector \(\hat{\Pi}^{*}=\left|\Psi^{*}\right\rangle\langle\Psi^{*}|\), where
\[\left|\Psi^{*}\right\rangle=\frac{1}{\sqrt{2}}\left[\left|i_{1}\right\rangle \left|\bar{\chi}_{1|1}\right\rangle\left|\bar{\chi}_{2|1}\right\rangle+\left| i_{2}\right\rangle\left|\bar{\chi}_{1|2}\right\rangle\left|\bar{\chi}_{2|2} \right\rangle\right], \tag{61}\]
with
\[\left|\bar{\chi}_{l|m}\right\rangle=\frac{1}{\sqrt{2c\,\delta\phi_{i_{l}}}}\int_{\langle\phi_{i_{l}}\rangle_{m}-c\delta\phi_{i_{l}}}^{\langle\phi_{i_{l}}\rangle_{m}+c\delta\phi_{i_{l}}}d\phi_{i_{l}}\left|\phi_{i_{l}}\right\rangle,\ l,m=1,2. \tag{62}\]
Here, \(\langle\phi_{i_{l}}\rangle_{m}\) is the average displacement of the \(i_{l}\)-th ruler site when the ion is localised at \(i_{m}\); for the case \(i_{1}=-i_{2}\), we have from Eq. (57):
\[\langle\phi_{i_{1}}\rangle_{1} = \langle\phi_{i_{2}}\rangle_{2}=2\sum_{\alpha=1}^{N-1}u_{\alpha,i_ {1}}x_{\alpha,0}\lambda_{\alpha,i_{1}},\] \[\langle\phi_{i_{2}}\rangle_{1} = \langle\phi_{i_{1}}\rangle_{2}=2\sum_{\alpha=1}^{N-1}u_{\alpha,i_ {1}}x_{\alpha,0}\lambda_{\alpha,i_{2}}. \tag{63}\]
The integration range in the definition (62) for the state \(|\bar{\chi}_{l|m}\rangle\) reflects the precision of the dipole displacement measurement, with \(c\) an adjustable "precision" parameter and with the scale set by \(\delta\phi_{i_{l}}\), the uncertainty in the free ruler dipole displacement at the ion site \(i_{l}\) [see Eq. (58)]:
\[\delta\phi_{i_{l}}=\sqrt{\sum_{\alpha=1}^{N-1}u_{\alpha,i_{l}}^{2}x_{\alpha,0 }^{2}}. \tag{64}\]
Figure 5: Long time limit average ruler dipole displacement \(\langle\phi_{n}\rangle\) (red squares) and dipole displacement uncertainty \(\delta\phi_{n}\) (blue dots) versus dipole site label \(n\) for an ion in a superposition state with \(i_{1}=-5\) and \(i_{2}=5\). The ion-ruler coupling strengths are (a) \(\lambda=2\); (b) \(\lambda=25\). The other parameters used are \(N=41\), \(k_{r}=m_{r}=41\), and \(\hbar=1\).
We then define the ion-ruler joint measurement coherence as follows:
\[C^{*}=\frac{\mathrm{Tr}\left[\hat{\Pi}^{*}\hat{\rho}_{\mathrm{pure}}\right]- \mathrm{Tr}\left[\hat{\Pi}^{*}\hat{\rho}_{\mathrm{mix}}\right]}{\mathrm{Tr} \left[\hat{\Pi}^{*}\hat{\rho}_{\mathrm{mix}}\right]}, \tag{65}\]
where \(\hat{\rho}_{\mathrm{pure}}\) denotes the ion-ruler density operator in the long time limit, with the ion initially prepared in a pure superposition state and the ruler initially in its free ground state, while \(\hat{\rho}_{\mathrm{mix}}\) denotes the ion-ruler density operator in the long-time limit, with the ion initially prepared in a mixed state and the ruler initially in its free ground state. The coherence \(C^{*}\) quantifies the extent to which the joint measurement that we have defined can distinguish between an initial coherent superposition and an initial incoherent mixture of localised ion positions. Utilizing the properties of the trace operation, we have
\[\mathrm{Tr}\left[\hat{\Pi}^{*}\hat{\rho}\right]=\mathrm{Tr}_{\phi_{n\neq i_{1}, i_{2}}}\left[\langle\Psi^{*}|\hat{\rho}|\Psi^{*}\rangle\right]=\langle\Psi^{*}| \left[\mathrm{Tr}_{\phi_{n\neq i_{1},i_{2}}}\hat{\rho}\right]|\Psi^{*}\rangle, \tag{66}\]
with \(\mathrm{Tr}_{\phi_{n\neq i_{1},i_{2}}}[\hat{\rho}_{\mathrm{pure}}(t)]=\hat{ \rho}_{\mathrm{Ir}}(t)\), where \(\hat{\rho}_{\mathrm{Ir}}(t)\) is given by Eq. (56), while for initially mixed ion states, \(\mathrm{Tr}_{\phi_{n\neq i_{1},i_{2}}}[\hat{\rho}_{\mathrm{mix}}(t)]=\hat{ \rho}_{\mathrm{Ir}}^{i_{1},i_{1}}(t)+\hat{\rho}_{\mathrm{Ir}}^{i_{2},i_{2}}(t)\). (See Appendix D for more details concerning the reduced density matrix \(\hat{\rho}_{\mathrm{Ir}}\).) Using the equalities \(\langle\Psi^{*}|\rho_{\mathrm{Ir}}^{i_{1},i_{1}}|\Psi^{*}\rangle=\langle\Psi^ {*}|\rho_{\mathrm{Ir}}^{i_{2},i_{2}}|\Psi^{*}\rangle\) and \(|\langle\Psi^{*}|\rho_{\mathrm{Ir}}^{i_{1},i_{2}}|\Psi^{*}\rangle|=|\langle \Psi^{*}|\rho_{\mathrm{Ir}}^{i_{2},i_{1}}|\Psi^{*}\rangle|\), the ion-ruler coherence (65) then simplifies to
\[C^{*}=\frac{|\langle\Psi^{*}|\hat{\rho}_{\mathrm{Ir}}^{i_{1},i_{2}}|\Psi^{*} \rangle|}{\langle\Psi^{*}|\hat{\rho}_{\mathrm{Ir}}^{i_{1},i_{1}}|\Psi^{*} \rangle}. \tag{67}\]
Figure 6 shows the dependence of the joint ion-ruler measurement coherence \(C^{*}\) on the ion-superposition separation \(|i_{1}-i_{2}|\) for ion-ruler coupling strength \(\lambda=2\) and example ruler length \(N=41\) (i.e., the same parameters as considered above in Fig. 5a). The precision parameter is chosen to be \(c=0.1\); smaller values of \(c\) do not lead to any significant increases in \(C^{*}\). The ion-ruler
coherence \(C^{*}\) decreases the further apart the localized states are in the superposition. If we were to choose a larger ion-ruler coupling, e.g., \(\lambda=25\), the coherence \(C^{*}\) immediately drops to a negligible value: for \(|i_{1}-i_{2}|=2\), we have \(C^{*}\approx 10^{-6}\). Thus, the ruler must operate in the weak measurement regime in order to be able to distinguish between superpositions and mixtures of localized ion position states for a range of superposition separations. The qualitative trend of decreasing ion-ruler joint coherence with increasing coupling strength is related to the nonlocal entanglement that develops between the ion and all of the ruler dipoles when they interact--a consequence of the one-dimensional ruler model; partially tracing out the other ruler dipoles \(n\neq i_{1},i_{2}\) results in decoherence. For a more realistic two or three dimensional mass-spring model of an extended material ruler, we expect that the ion-ruler entanglement will be more localised to the dipoles in the neighbourhood of the localised ion positions, resulting in a weaker decrease in ion-ruler coherence \(C^{*}\) with increasing superposition separation \(|i_{1}-i_{2}|\) (and perhaps allowing for the quantum ruler to operate in the strong measurement regime with comparatively larger ion-ruler coupling strengths). Another possible way to increase the ion-ruler coherence \(C^{*}\) would be to include more dipole sites in the ion-ruler joint measurement projector construction, although at the expense of having a less accurate measure of the ion's location.
## VI Conclusions
In this work, we have introduced a concrete model of a quantum position measurement device: a quantum ruler, which interacts with an ion, whose position we would like to measure. We then constructed relational observables on the joint system composed of the ruler and the ion, and showed that such a measurement procedure can distinguish between the cases in which the ion is prepared in a mixed state or in a quantum superposition state in the position basis. This generalises the usual position measurement, which localises the measured system around a single position.
This work constitutes the first step towards the long-term goal of bridging the gap between the abstract relational observables defined in quantum gravity approaches and physical quantities that can be measured in the laboratory with concrete operational procedures. Here, we have considered the simplest case, by restricting ourselves to i) a non-relativistic and static measured system and ii) the simplest possible measurement, acting non-trivially only on two sites of the ruler at once.
In the future, it will be important to extend this approach. For what concerns the measured system, one could allow for some non-trivial dynamics of the ion, such as uniform velocity or acceleration. In these scenarios, one might study the emission of, respectively, Cherenkov [38] or Unruh [39] radiation from the ion, and the measurement procedure should be adapted to capture the properties of the radiation.
The quantum ruler model we introduce here can be easily generalised to quantum field theory, by taking the continuum limit in the distance between the sites. From a fundamental perspective, a field-theoretic description of the measurement apparatus is desirable to relate more directly the operational results of this work to quantum gravity approaches.
Another direction is to refine the measurement model to involve a larger number of sites of the ruler, arranged as an extended two- or three-dimensional lattice. We speculate that this could increase the difference between the response of the ruler when the ion is in a mixed state versus in a quantum superposition state. The reason is that the ruler responds to the interaction with the ion more locally, with the nearest-neighbour induced dipole distortions becoming more diluted as we move radially away from the localised ion position.
Conceptually, a motivation for introducing the model of the quantum ruler is previous work on quantum reference frames [40; 41; 42; 43; 44], namely reference frames associated to physical systems, which can be in a quantum superposition or entangled relative to each other. One general goal of
the quantum reference frames programme is to substitute the abstract description of a coordinate system by providing a more physical one, which relies on measurements performed on physical objects. It would be interesting to associate a quantum reference frame to a quantum ruler, and compare the perspectives of different quantum rulers. On the one hand, this would be an important step to achieve a relational perspective on nonclassical spacetime closer to research in quantum gravity, where quantum reference frames are extended material systems. On the other hand, it would also identify a procedure to measure the position of a quantum system relative to a quantum reference frame, whose concrete implementation is still an open question in the field.
###### Acknowledgements.
We thank S.A. Ahmad and A.R.H. Smith for very helpful discussions. F.N. is supported in part by: Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP), and the Moonshot R&D Grant Number JPMJMS2061], the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. FA2386-20-1-4069), and the Foundational Questions Institute Fund (FQXi) via Grant No. FQXi-IAF19-06. F.G. acknowledges support from Perimeter Institute for Theoretical Physics and from the Swiss National Science Foundation via the Ambizione Grant PZ00P2-208885. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities. M.P.B. and H.W. acknowledge support from the U.S. National Science Foundation under grant number PHY-2011382.
## Appendix A Lagrangian formulation of the ion-ruler system
The individual Lagrangians of the ion and the ruler subsystems are
\[L_{I} = \frac{1}{2}M_{I}\dot{x}_{I}^{2}, \tag{10}\]
\[L_{r}=\frac{1}{2}m_{r}\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\dot{x}_{r,n}^{2 }-\frac{1}{2}k_{r}\sum_{n=-\frac{N-1}{2}}^{\frac{N-3}{2}}(x_{r,n+1}-x_{r,n}-a _{r})^{2}. \tag{11}\]
Introducing the ruler centre-of-mass and relative coordinates, we have
\[x_{\rm{rCM}}=\frac{1}{N}\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}x_{r,n}, \tag{12}\]
\[\phi_{n}=x_{r,n}-x_{\rm{rCM}}-na_{r}. \tag{13}\]
The coordinate \(\phi_{n}\) gives the displacement of the \(n\)th dipole relative to its classical equilibrium position. While a more natural ruler coordinate would be \(\tilde{x}_{r,n}=x_{r,n}-x_{\rm{rCM}}\), marking the distance from the ruler's midpoint in equilibrium, the former \(\phi_{n}\) coordinates are more suited for indicating local elastic displacements induced by the nearby ion; we can always easily convert to a ruler length coordinate by considering \(\phi_{n}+na_{r}=\tilde{x}_{r,n}\). In terms of the above coordinates, the ruler Lagrangian
becomes
\[L_{r}=\frac{1}{2}M_{r}\dot{x}_{\rm rCM}^{2}+\frac{1}{2}m_{r}\sum_{n=-\frac{N-1}{2} }^{\frac{N-1}{2}}\dot{\phi}_{n}^{2}-\frac{1}{2}k_{r}\sum_{n=-\frac{N-1}{2}}^{ \frac{N-3}{2}}(\phi_{n+1}-\phi_{n})^{2}, \tag{100}\]
where \(M_{r}=Nm_{r}\) is the ruler's total mass. The Coulomb interaction potential energy between the ruler and ion is
\[V_{Ir} = -\frac{q_{I}q}{4\pi\epsilon_{0}}\sum_{n=-\frac{N-1}{2}}^{\frac{N-1} {2}}\left[\frac{1}{\sqrt{(w-l/2)^{2}+(x_{I}-x_{r,n})^{2}}}-\frac{1}{\sqrt{(w+l /2)^{2}+(x_{I}-x_{r,n})^{2}}}\right] \tag{101}\] \[\approx -\frac{q_{I}\mathsf{p}_{r}w}{4\pi\epsilon_{0}}\sum_{n=-\frac{N-1} {2}}^{\frac{N-1}{2}}\left[w^{2}+(x_{I}-x_{r,n})^{2}\right]^{-3/2},\]
where the approximation is valid under the limit \(l\ll w\), with \(l\) the distance between the ruler dipole charges \(q\) and \(-q\), \(\mathsf{p}_{r}=ql\) is the ruler atom electric dipole moment, and \(x_{r,n}=\phi_{n}+na_{r}+x_{\rm rCM}\). The interaction potential is further approximated as
\[V_{Ir}\approx\left.-\frac{q_{I}\mathsf{p}_{r}w}{4\pi\epsilon_{0}}\sum_{n=- \frac{N-1}{2}}^{\frac{N-1}{2}}\left[(x_{In}^{2}+w^{2})^{-3/2}+3x_{In}\phi_{n}( x_{In}^{2}+w^{2})^{-5/2}\right]\right|_{x_{In}=x_{I}-na_{r}-x_{\rm rCM}}, \tag{102}\]
under the limit \(\phi_{n}\ll w\).
## Appendix B The equivalence of constraints
Starting with the ruler Lagrangian of Eq. (100)
\[L_{r}=\frac{1}{2}M_{r}\dot{x}_{\rm rCM}^{2}+\frac{1}{2}m_{r}\sum_{n=-\frac{N-1 }{2}}^{\frac{N-1}{2}}\dot{\phi}_{n}^{2}-\frac{1}{2}k_{r}\sum_{n=-\frac{N-1}{2} }^{\frac{N-3}{2}}(\phi_{n+1}-\phi_{n})^{2}, \tag{103}\]
and imposing the constraint of Eq. (13)
\[x_{\rm rCM}=\frac{1}{N}\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}x_{r,n}=0, \tag{104}\]
we obtain the corresponding quantized ruler Hamiltonian (28).
On the other hand, starting with Hamiltonian (28) and substituting in
\[\hat{a}_{\alpha}=\sqrt{\frac{m_{r}\Omega_{\alpha}}{2\hbar}}\hat{x}_{\alpha}+ \frac{i}{\sqrt{2\hbar m_{r}\Omega_{\alpha}}}\hat{p}_{\alpha}, \tag{105}\]
we obtain the ruler Hamiltonian in terms of the nonlocal canonical coordinate pairs \(\hat{x}_{\alpha}\) and \(\hat{p}_{\alpha}\):
\[\hat{H}_{r}=\sum_{\alpha=1}^{N-1}\frac{\hat{p}_{\alpha}^{2}}{2m_{r}}+\frac{1}{2 }m_{r}\sum_{\alpha=1}^{N-1}\Omega_{\alpha}^{2}\hat{x}_{\alpha}^{2}, \tag{100}\]
and the corresponding Lagrangian is
\[L_{r}=\sum_{\alpha=1}^{N-1}\frac{m_{r}\dot{x}_{\alpha}^{2}}{2}-\frac{1}{2}m_{r}\sum_{\alpha=1}^{N-1}\Omega_{\alpha}^{2}x_{\alpha}^{2}. \tag{101}\]
Using the relation \(x_{\alpha}=\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}u_{\alpha,n}\phi_{n}\) to express the Lagrangian in terms of the local coordinates \(\phi_{n}\), we obtain
\[L_{r} = \sum_{\alpha=1}^{N-1}\left(\frac{1}{2}m_{r}\sum_{n,n^{\prime}=-\frac{N-1}{2}}^{\frac{N-1}{2}}u_{\alpha,n}u_{\alpha,n^{\prime}}\dot{\phi}_{n}\dot{\phi}_{n^{\prime}}-\frac{1}{2}m_{r}\Omega_{\alpha}^{2}\sum_{n,n^{\prime}=-\frac{N-1}{2}}^{\frac{N-1}{2}}u_{\alpha,n}u_{\alpha,n^{\prime}}\phi_{n}\phi_{n^{\prime}}\right) \tag{102}\] \[= \sum_{n,n^{\prime}=-\frac{N-1}{2}}^{\frac{N-1}{2}}\left(\frac{1}{2}m_{r}\dot{\phi}_{n}\dot{\phi}_{n^{\prime}}\sum_{\alpha=1}^{N-1}u_{\alpha,n}u_{\alpha,n^{\prime}}-\frac{1}{2}m_{r}\phi_{n}\phi_{n^{\prime}}\sum_{\alpha=1}^{N-1}\Omega_{\alpha}^{2}u_{\alpha,n}u_{\alpha,n^{\prime}}\right)\] \[= \sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\frac{1}{2}m_{r}\dot{\phi}_{n}^{2}-\frac{m_{r}}{2N}\left(\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\dot{\phi}_{n}\right)^{2}-\frac{1}{2}k_{r}\sum_{n=-\frac{N-1}{2}}^{\frac{N-3}{2}}(\phi_{n+1}-\phi_{n})^{2}\] \[= \sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\frac{1}{2}m_{r}\dot{\phi}_{n}^{2}-\frac{1}{2}k_{r}\sum_{n=-\frac{N-1}{2}}^{\frac{N-3}{2}}(\phi_{n+1}-\phi_{n})^{2}.\]
The last equality holds due to the constraint on the relative coordinates [Eq. (14)]:
\[\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\hat{\phi}_{n}=0. \tag{103}\]
We then recover the ruler Lagrangian (100) when the constraint \(x_{\rm rCM}=0\) is satisfied. Therefore, the constraint \(\sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}}\phi_{n}=0\) is effectively the same as the constraint \(x_{\rm rCM}=0\).
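The equivalence rests on the spectral identity \(\sum_{\alpha=1}^{N-1}\Omega_{\alpha}^{2}u_{\alpha,n}u_{\alpha,n^{\prime}}=(k_{r}/m_{r})K_{nn^{\prime}}\), where \(K\) is the free-end spring coupling matrix of the chain (the \(\alpha=0\) mode contributes nothing since \(\Omega_{0}=0\)). This can be checked numerically, as in the sketch below (NumPy assumed; \(N\) is an arbitrary example value).

```python
# Numerical cross-check (NumPy assumed; N illustrative) that the spectral
# sum over modes rebuilds the free-end spring coupling matrix (k_r/m_r) K.
import numpy as np

N, m_r, k_r = 11, 1.0, 1.0
alpha = np.arange(1, N)
n = np.arange(N) - (N - 1) / 2

Omega2 = (2 * np.sqrt(k_r / m_r) * np.sin(alpha * np.pi / (2 * N)))**2
u = np.sqrt(2 / N) * np.cos(np.outer(alpha, n + N / 2) * np.pi / N)
spectral = (u * Omega2[:, None]).T @ u     # sum_alpha Omega^2 u u'

K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
K[0, 0] = K[-1, -1] = 1                    # free (open) chain ends
assert np.allclose(spectral, (k_r / m_r) * K, atol=1e-10)
print("spectral sum reproduces the spring coupling matrix: OK")
```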
## Appendix C Free ruler dynamics
The free ruler's dipole displacement equations of motion can be easily solved in terms of normal modes as we now show. From the Lagrangian (100), the equations of motion are
\[\ddot{\phi}_{n}=\omega_{r}^{2}\left(\phi_{n-1}-2\phi_{n}+\phi_{n+1}\right),\, -\frac{1}{2}(N-3)\leq n\leq+\frac{1}{2}(N-3), \tag{104}\]
with boundary conditions
\[\ddot{\phi}_{\frac{N-1}{2}} = -\omega_{r}^{2}\left(\phi_{\frac{N-1}{2}}-\phi_{\frac{N-3}{2}}\right) \tag{100}\] \[\ddot{\phi}_{-\frac{N-1}{2}} = -\omega_{r}^{2}\left(\phi_{-\frac{N-1}{2}}-\phi_{-\frac{N-3}{2}} \right), \tag{101}\]
where \(\omega_{r}=\sqrt{k_{r}/m_{r}}\). It is convenient to introduce symmetric and antisymmetric coordinates:
\[\phi_{n}^{s}=\frac{1}{2}\left(\phi_{n}+\phi_{-n}\right);\,\phi_{n}^{a}=\frac{1 }{2}\left(\phi_{n}-\phi_{-n}\right), \tag{102}\]
where \(\phi^{s}\) and \(\phi^{a}\) still satisfy the equation of motion (100) and the boundary condition (100), but where now we have the following boundary conditions at \(n=0\):
\[\phi_{0}^{a} = 0, \tag{103}\] \[\ddot{\phi}_{0}^{s} = -2\omega_{r}^{2}\left(\phi_{0}^{s}-\phi_{1}^{s}\right). \tag{104}\]
The constraint (14) is now imposed only on the symmetric coordinate:
\[\phi_{0}^{s}+2\sum_{n=1}^{\frac{N-1}{2}}\phi_{n}^{s}=0. \tag{105}\]
Consider a mode solution Ansatz of the form:
\[\phi_{n}(t)=\cos\left(\Omega t+\varphi\right)\left[A\cos\left(kna_{r}\right)+B \sin\left(kna_{r}\right)\right]. \tag{106}\]
Substituting into the equation of motion (100), we obtain after some algebra the following dispersion relation between mode frequency \(\Omega\) and wave number \(k\):
\[\Omega=2\omega_{r}\sin\left(\frac{ka_{r}}{2}\right). \tag{107}\]
Imposing the antisymmetric coordinate boundary condition (103), we have \(A=0\), while for the symmetric coordinate boundary condition (104), we have \(B=0\). Imposing the boundary condition (100) at the free end of the ruler, we obtain \(k_{\alpha}=\frac{2\alpha\pi}{Na_{r}}\), \(\alpha=0,1,2,\ldots,\frac{N-1}{2}\) for the symmetric mode solutions, and \(k_{\alpha}=\frac{(2\alpha+1)\pi}{Na_{r}}\), \(\alpha=0,1,2,\ldots,\frac{N-3}{2}\) for the antisymmetric mode solutions, where here \(\alpha\) denotes the mode label.
Putting everything together so far, the symmetric and antisymmetric normal mode solutions can be written as follows:
\[\phi_{\alpha,n}^{s}(t)=A_{\alpha}\cos\left(\Omega_{\alpha}^{s}t+\varphi_{ \alpha}^{s}\right)\cos\left(\frac{2\alpha n\pi}{N}\right),\,\alpha=1,2,\ldots,\frac{N-1}{2}, \tag{108}\]
\[\phi_{\alpha,n}^{a}(t)=B_{\alpha}\cos\left(\Omega_{\alpha}^{a}t+\varphi_{ \alpha}^{a}\right)\sin\left[\frac{\left(2\alpha+1\right)n\pi}{N}\right],\, \alpha=0,1,2,\ldots,\frac{N-3}{2}, \tag{109}\]
where \(\Omega_{\alpha}^{s}=2\omega_{r}\sin\left(\frac{\alpha\pi}{N}\right)\) and \(\Omega_{\alpha}^{a}=2\omega_{r}\sin\left[\frac{\left(\alpha+\frac{1}{2}\right) \pi}{N}\right]\). Note that we do not include the zero frequency, \(\alpha=0\) mode for the symmetric case, a consequence of the constraint (105); for all symmetric normal mode solutions (108) with \(\alpha\geq 1\), one can verify that the constraint condition is
satisfied. In other words, imposing the constraint removes the zero frequency, centre of mass mode.
The symmetric and antisymmetric normal mode frequencies can be combined as \(\Omega_{\alpha}=2\omega_{r}\sin(\frac{\alpha\pi}{2N})\), \(\alpha=1,2,\ldots,N-1\). In terms of the above-derived normal mode solutions, an arbitrary ruler spatial coordinate solution can be expressed as a linear combination of these modes as follows:
\[\phi_{n}(t)=\sum_{\alpha=1}^{N-1}\left[\cos\left(\Omega_{\alpha}t\right)x_{ \alpha}(0)+\sin\left(\Omega_{\alpha}t\right)\frac{p_{\alpha}(0)}{m_{r}\Omega_ {\alpha}}\right]u_{\alpha,n}, \tag{100}\]
where \((x_{\alpha}(0),p_{\alpha}(0))\) are the initial mode \(\alpha\) canonical position and momentum coordinates, and where
\[u_{\alpha,n}=\sqrt{\frac{2}{N}}\cos\left[\frac{\alpha\pi}{N}\left(n+\frac{N}{2 }\right)\right] \tag{101}\]
are the orthonormal mode eigenfunctions.
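The merging of the symmetric and antisymmetric spectra into the single set of Eq. (22) is easy to verify; a minimal sketch (NumPy assumed) is:

```python
# Quick check (NumPy assumed) that the symmetric and antisymmetric mode
# frequencies above interleave into Omega_alpha = 2 omega_r sin(alpha pi/2N).
import numpy as np

N, omega_r = 41, 1.0
Om_s = 2 * omega_r * np.sin(np.arange(1, (N - 1) // 2 + 1) * np.pi / N)
Om_a = 2 * omega_r * np.sin((np.arange(0, (N - 1) // 2) + 0.5) * np.pi / N)
combined = np.sort(np.concatenate([Om_s, Om_a]))
Om_all = 2 * omega_r * np.sin(np.arange(1, N) * np.pi / (2 * N))
assert np.allclose(combined, Om_all)
print("union of the two spectra matches Eq. (22): OK")
```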
With expression (100), it is straightforward to quantize the ruler coordinates in the Heisenberg picture. In terms of the \(\alpha\) mode lowering operator:
\[\hat{a}_{\alpha}(0)=\sqrt{\frac{m_{r}\Omega_{\alpha}}{2\hbar}}\hat{x}_{\alpha} (0)+\frac{i}{\sqrt{2\hbar m_{r}\Omega_{\alpha}}}\hat{p}_{\alpha}(0), \tag{102}\]
Eq. (100) becomes
\[\hat{\phi}_{n}(t)=\sum_{\alpha=1}^{N-1}\sqrt{\frac{\hbar}{2m_{r}\Omega_{ \alpha}}}\left[\hat{a}_{\alpha}(0)e^{-i\Omega_{\alpha}t}+\hat{a}_{\alpha}^{ \dagger}(0)e^{i\Omega_{\alpha}t}\right]u_{\alpha,n}. \tag{103}\]
The momentum operator \(\hat{\pi}_{n}(t)\) canonically conjugate to \(\hat{\phi}_{n}(t)\) is
\[\hat{\pi}_{n}(t) = -i\sum_{\alpha=1}^{N-1}\sqrt{\frac{\hbar m_{r}\Omega_{\alpha}}{2} }\left[\hat{a}_{\alpha}(0)e^{-i\Omega_{\alpha}t}-\hat{a}_{\alpha}^{\dagger}(0 )e^{i\Omega_{\alpha}t}\right]u_{\alpha,n}. \tag{104}\]
## Appendix D Density matrix elements in the local basis
Take one element of the reduced density matrix Eq. (56), for example
\[\rho_{\rm Ir}^{i_{1},i_{2}}(t) = \frac{\sqrt{N}}{2}|i_{1}\rangle\langle i_{2}|\otimes\int d\phi_{i _{1}}d\phi_{i_{1}}^{\prime}d\phi_{i_{2}}d\phi_{i_{2}}^{\prime}|\phi_{i_{1}} \rangle\langle\phi_{i_{1}}^{\prime}|\otimes|\phi_{i_{2}}\rangle\langle\phi_{i_ {2}}^{\prime}| \tag{105}\] \[\times\prod_{n\neq i_{1},i_{2}}\int d\phi_{n}\prod_{\alpha=1}^{N- 1}\psi_{i_{1}}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n},t\right)\psi_{i_{2}}^ {*}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n}^{\prime},t\right).\]
The second line of Eq. (105) is a function of \(\phi_{i_{1}},\phi_{i_{1}}^{\prime},\phi_{i_{2}}\) and \(\phi_{i_{2}}^{\prime}\). The other reduced density elements are
\[\rho_{\rm Ir}^{i_{1},i_{1}}(t) = \frac{\sqrt{N}}{2}|i_{1}\rangle\langle i_{1}|\otimes\int d\phi_{i _{1}}d\phi_{i_{1}}^{\prime}d\phi_{i_{2}}d\phi_{i_{2}}^{\prime}|\phi_{i_{1}} \rangle\langle\phi_{i_{1}}^{\prime}|\otimes|\phi_{i_{2}}\rangle\langle\phi_{i_ {2}}^{\prime}| \tag{106}\] \[\times\prod_{n\neq i_{1},i_{2}}\int d\phi_{n}\prod_{\alpha=1}^{N- 1}\psi_{i_{1}}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n},t\right)\psi_{i_{1}}^ {*}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n}^{\prime},t\right),\]
\[\rho_{\rm Ir}^{i_{2},i_{2}}(t) = \frac{\sqrt{N}}{2}|i_{2}\rangle\langle i_{2}|\otimes\int d\phi_{i_{1}}d\phi_{i_{1}}^{\prime}d\phi_{i_{2}}d\phi_{i_{2}}^{\prime}|\phi_{i_{1}}\rangle\langle\phi_{i_{1}}^{\prime}|\otimes|\phi_{i_{2}}\rangle\langle\phi_{i_{2}}^{\prime}| \tag{101}\] \[\times\prod_{n\neq i_{1},i_{2}}\int d\phi_{n}\prod_{\alpha=1}^{N-1}\psi_{i_{2}}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n},t\right)\psi_{i_{2}}^{*}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n}^{\prime},t\right),\]
\[\rho_{\rm Ir}^{i_{2},i_{1}}(t)=\rho_{\rm Ir}^{i_{1},i_{2}*}(t). \tag{102}\]
If we trace out the ion site state, we obtain the reduced state of the \((i_{1},i_{2})\) ruler dipoles:
\[\rho_{\rm r}(t) = \frac{\sqrt{N}}{2}\int d\phi_{i_{1}}d\phi_{i_{1}}^{\prime}d\phi_{ i_{2}}d\phi_{i_{2}}^{\prime}|\phi_{i_{1}}\rangle\langle\phi_{i_{1}}^{\prime}| \otimes|\phi_{i_{2}}\rangle\langle\phi_{i_{2}}^{\prime}| \tag{103}\] \[\times\prod_{n\neq i_{1},i_{2}}\int d\phi_{n}\prod_{\alpha=1}^{N- 1}\left[\psi_{i_{1}}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n},t\right)\psi_{ i_{1}}^{*}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n}^{\prime},t\right)\right.\] \[\left.+\psi_{i_{2}}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n},t \right)\psi_{i_{2}}^{*}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n}^{\prime},t \right)\right].\]
The joint probability density of finding the \((i_{1},i_{2})\) ruler dipoles at locations \((\phi_{i_{1}},\phi_{i_{2}})\) is
\[p(\phi_{i_{1}},\phi_{i_{2}},t) = \frac{\sqrt{N}}{2}\prod_{n\neq i_{1},i_{2}}\int d\phi_{n}\prod_{\alpha=1}^{N-1}\left[\left|\psi_{i_{1}}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n},t\right)\right|^{2}+\left|\psi_{i_{2}}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n},t\right)\right|^{2}\right], \tag{110}\]
while the probability density of finding the \(j\)th ruler dipole at location \(\phi_{j}\) has the following simpler form:
\[p(\phi_{j},t) = \frac{\sqrt{N}}{2}\prod_{n\neq j}\int d\phi_{n}\prod_{\alpha=1}^{N-1}\left[\left|\psi_{i_{1}}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n},t\right)\right|^{2}+\left|\psi_{i_{2}}\left(\sum_{n}\tilde{u}_{\alpha,n}\phi_{n},t\right)\right|^{2}\right]. \tag{111}\]
|
2305.06266 | Jet-powered turbulence in common envelope evolution | We conduct a three-dimensional hydrodynamical simulation of a common envelope
evolution (CEE) where a neutron star (NS) spirals-in inside the envelope of a
red supergiant (RSG) star in a predetermined orbit. We find that the jets shed
pairs of vortices in an expanding spiral pattern, inflate two expanding
spirally-shaped low-density bubbles, one above and one below the equatorial
plane, and deposit angular momentum to the envelope. In the simulation we do
not include the gravity of the NS such that all effects we find are solely due
to the jets that the spiralling-in NS launches. The angular momentum that the
jets deposit to the envelope is of the same order of magnitude as the orbital
angular momentum and has the same direction. The turbulence that the jets
induce in the common envelope might play a role in transporting energy and
angular momentum. The jet-deposited energy that is radiated away (a process not
studied here) leads to a transient event that is termed common envelope jets
supernova (CEJSN) and might mimic an energetic core collapse supernova. The
turbulence and the spiral pattern that we explore here might lead to bumps in
the late light curve of the CEJSN when different segments of the ejected
envelope collide with each other. This study emphasizes the roles that jets can
play in CEE (including jets launched by black hole companions) and adds to the
rich variety of processes in CEJSN events. | Shlomi Hillel, Ron Schreier, Noam Soker | 2023-05-10T16:01:17Z | http://arxiv.org/abs/2305.06266v2 | # Jet-powered turbulence in common envelope evolution
###### Abstract
We conduct a three-dimensional hydrodynamical simulation of a common envelope evolution (CEE) where a neutron star (NS) spirals-in inside the envelope of a red supergiant (RSG) star in a predetermined orbit. We find that the jets shed pairs of vortices in an expanding spiral pattern, inflate two expanding spirally-shaped low-density bubbles, one above and one below the equatorial plane, and deposit angular momentum to the envelope. In the simulation we do not include the gravity of the NS such that all effects we find are solely due to the jets that the spiralling-in NS launches. The angular momentum that the jets deposit to the envelope is of the same order of magnitude as the orbital angular momentum and has the same direction. The turbulence that the jets induce in the common envelope might play a role in transporting energy and angular momentum. The jet-deposited energy that is radiated away (a process not studied here) leads to a transient event that is termed common envelope jets supernova (CEJSN) and might mimic an energetic core collapse supernova. The turbulence and the spiral pattern that we explore here might lead to bumps in the late light curve of the CEJSN when different segments of the ejected envelope collide with each other. This study emphasises the roles that jets can play in CEE (including jets launched by black hole companions) and adds to the rich variety of processes in CEJSN events.
(stars:) binaries (including multiple): close; (stars:) supernovae: general; transients: supernovae; stars: jets
## 1 Introduction
Compact objects that spiral-in inside the extended envelope of giant stars, i.e., common envelope evolution (CEE), might accrete mass through an accretion disk and launch jets. This is very likely to be the case when the compact objects are neutron stars (NSs) (e.g., Armitage and Livio, 2000; Chevalier, 2012) and black holes (BHs), and to some degree also main sequence stars, as some planetary nebulae hint at (e.g., Blackman and Lucchini, 2014; for a review see Soker, 2016) and theory supports (e.g., Soker, 2023). A key aspect is that the jets regulate the accretion rate onto the compact object and by that the energising process of the CEE. Namely, the jets operate in a negative feedback mechanism, e.g., Soker (2016) for a review, Grichener, Cohen, and Soker (2021) for one-dimensional (1D) simulations, and Hillel, Schreier, and Soker (2022) for 3D simulations. There is also a positive feedback component where the jets remove energy from the accreting body vicinity, thereby facilitating accretion at a higher rate (e.g., Shiber, Schreier, and Soker, 2016; Chamandy et al., 2018).
Most 3D simulations of the CEE do not include jets (e.g., Passy et al., 2012; Ricker and Taam, 2012; Nandez et al., 2014; Staff et al., 2016; Kuruwita et al., 2016; Ohlmann et al., 2016; Iaconi et al., 2017; Chamandy et al., 2019; Law-Smith et al., 2020; Glanz and Perets, 2021a, b; Gonzalez-Bolivar et al., 2022; Lau et al., 2022a, b; Ondratschek et al., 2022; Chamandy et al., 2023 for a very limited list; for a recent thorough review with many more references see Roepke and De Marco, 2023). The limited number of 3D hydrodynamical simulations of the CEE (and the grazing envelope evolution) that do include jets launched by the companion (e.g., Moreno Mendez et al., 2017; Shiber and Soker, 2018; Lopez-Camara et al., 2019; Schreier et al., 2019; Shiber et al., 2019; Lopez-Camara et al., 2020; Lopez-Camara et al., 2022; Zou et al., 2022; Schreier, Hillel, and Soker, 2023) are far from revealing all aspects of jet-powered CEE (see Soker, 2022 for a review of processes due to jets that NS/BH launch in CEE and possible outcomes). A different class of simulations (e.g., Zou et al. (2020); Moreno et al. (2022)) study collimated outflow from the distorted envelope at the final phases of the CEE (similar to the suggestion by
Soker, 1992), but this setting is not related to the present study.
In this study we extend our exploration of CEE with jets that a NS companion launches as it orbits inside the envelope of a red supergiant (RSG) star. Using the hydrodynamical code flash (section 2) we simulate the effect of the jets that a NS launches as it spirals-in. Before we present the effects of the jets in sections 4 and 5 we discuss the construction of the 3D stellar model (section 3). We summarise our main results in section 6.
## 2 The Numerical Setup
Our numerical procedure is similar to that in our previous paper (Schreier, Hillel, & Soker, 2023), but in some simulations we employ higher resolution and, most importantly, we set the NS to spiral-in into the RSG envelope rather than have a constant orbit. We therefore do not describe all numerical details here, but rather only the essential ingredients.
### The stellar model and the NS orbit
Using the MESA one-dimensional (1D) stellar evolution code (Paxton et al., 2011, 2013, 2015, 2018, 2019) we evolve a zero-age-main-sequence star of metallicity \(Z=0.02\) and mass \(M_{1,\rm ZAMS}=15M_{\odot}\) to the RSG phase. We place the RSG stellar model at the centre of the 3D hydrodynamical numerical grid at an age of \(1.1\times 10^{6}\) yr when its radius is \(R_{\rm RSG}=881\,R_{\odot}\), its mass is \(M_{1}=12.5M_{\odot}\), and its effective temperature is \(T_{\rm eff}=3160\,{\rm K}\). We use the 3D hydrodynamical code flash (Fryxell et al., 2000) with fully ionised pure hydrogen. We set the numerical grid cells outside the stellar model to have a very low density \(\rho_{\rm grid,0}=2.1\times 10^{-13}\) g cm\({}^{-3}\) and have a temperature of \(T_{\rm grid,0}=1100\) K.
To save numerical time we do not calculate the flow in the inner 20% of the stellar radius, \(R_{\rm in}=176\,R_{\odot}\). We rather take this inert core to be a central sphere with constant density, pressure and temperature. We fully take into account the gravity of the inert core in the entire grid. We also include the gravity of the envelope as it is at \(t=0\). Namely, the gravity of the stellar model is constant throughout the evolution and at each radius equals that of the stellar model at \(t=0\). We study the behaviour of this model in the 3D grid in section 3.
We assume a common envelope jet supernova (CEJSN) event where the NS spirals-in inside the RSG, accretes mass, and launches jets. We do not include the gravity of the NS nor the orbital energy or angular momentum. We preset the orbit of the NS as follows. The NS spirals-in from \(a_{\rm i}=850\,R_{\odot}\) to \(a_{\rm SR}=300\,R_{\odot}\) in a time period of 3 years that mimics the plunge-in phase of the CEE, after which it stays in a constant circular orbit which mimics the self-regulated CEE phase (hence the subscript 'SR'). The radial velocity during the plunge-in phase is constant, whereas the orbital velocity is Keplerian.
### The numerical grid
Our simulations are performed on a cubic Cartesian computational grid with a side of \(L_{\rm G}=5\times 10^{14}\) cm. We set outflow conditions on all boundary surfaces of the 3D grid. Adaptive mesh refinement (AMR) is employed with a refinement criterion of a modified Lohner error estimator (with default parameters) on the \(z\)-component of the velocity. The gas in the whole computational domain is an ideal gas with an adiabatic index of \(\gamma=5/3\) including radiation pressure. The centre of the RSG is fixed at the origin. The smallest cell size in the simulation is \(L_{\rm G}/128=3.90625\times 10^{12}\) cm.
In one simulation of the 3D stellar model without jets (section 3) we use higher resolution where all grid cells have the same size of \(L_{\rm G}/512=9.77\times 10^{11}\) cm.
### Jet-launching procedure
Our limited computational resources force us to employ a sub-grid procedure to study the effects of the jets. In the new scheme that we developed in Schreier, Hillel, & Soker (2023) we inject energy and momentum, which serve as the jets' parameters, rather than injecting mass at the jets' velocity. This sub-grid procedure does not change the mass of any computational grid cell. We do not add or remove mass in the launching procedure. We only change the velocity and internal energy of the already existing mass in the volume where we inject the jets' energy.
The power of the two jets should vary with density \(\rho(a)\) at the location of the NS \(a\) and the relative velocity of the NS inside the RSG envelope, \(v_{\rm rel}(a)\), according to
\[\dot{E}_{\rm 2j}=\zeta\frac{GM_{\rm NS}}{R_{\rm NS}}\dot{M}_{\rm BHL,0}, \tag{1}\]
where
\[\dot{M}_{\rm BHL,0}=\pi\rho(a)v_{\rm rel}(a)\left[\frac{2GM_{\rm NS}}{v_{\rm rel }^{2}(a)}\right]^{2} \tag{2}\]
is the Bondi-Hoyle-Lyttleton (BHL) mass accretion rate from the unperturbed envelope and \(\zeta\simeq 0.002-0.005\) (Grichener, Cohen, & Soker, 2021; Hillel, Schreier, & Soker, 2022). For the mass and radius of the NS we take \(M_{\rm NS}=1.4M_{\odot}\) and \(R_{\rm NS}=12\) km, respectively. In the mass accretion rate expression we neglect the sound speed in the envelope, which reduces the accretion rate, and the envelope rotation, which reduces the relative
velocity and therefore increases the accretion rate. Because we do not include the gravity of the NS, the properties of the NS enter through equations (1) and (2). Because of numerical limitations we mostly simulate jets with powers smaller than what equation (1) gives.
We set the NS to spiral-in from \(a=850\,R_{\odot}\) to \(a=300\,R_{\odot}\) in a time span of 3 years. The local unperturbed density increases from \(\rho(850\,R_{\odot})=3.3\times 10^{-9}\) g cm\({}^{-3}\) to \(\rho(300\,R_{\odot})=5.9\times 10^{-8}\) g cm\({}^{-3}\). The power of the two jets changes accordingly from \(\dot{E}_{2j}=3.3\times 10^{40}\) erg s\({}^{-1}\) to \(3\times 10^{41}\) erg s\({}^{-1}\), keeping \(\zeta=2\times 10^{-5}\) constant.
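As an illustrative aside, equations (1) and (2) can be evaluated with a few lines of Python. The function below is our own sketch (constants in cgs), and the sample relative velocity is an assumption, chosen to be roughly Keplerian at \(a_{\rm i}=850\,R_{\odot}\).

```python
import numpy as np

G, Msun = 6.674e-8, 1.989e33            # cgs constants

def jet_power(rho, v_rel, M_NS=1.4 * Msun, R_NS=1.2e6, zeta=2e-5):
    """Two-jet power of equations (1)-(2), in erg/s (all inputs cgs)."""
    mdot_bhl = np.pi * rho * v_rel * (2 * G * M_NS / v_rel**2) ** 2   # eq. (2)
    return zeta * (G * M_NS / R_NS) * mdot_bhl                        # eq. (1)

# with v_rel ~ 51 km/s (our assumption; roughly Keplerian at a_i = 850 Rsun):
print(jet_power(rho=3.3e-9, v_rel=5.1e6))   # ~3.3e40 erg/s, as quoted above
```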
As stated before, we cannot resolve the launching region of the jets. Instead, we insert in the grid the two opposite jet-envelope interaction zones near the NS. We take these zones to be two cylinders touching each other at the orbital plane, i.e., they form one cylinder with the NS at its centre. The jets' axis is the axis of the cylinder. The base of the cylinder has a radius of \(4\times 10^{12}\) cm and the total height is \(14\times 10^{12}\) cm, i.e., \(7\times 10^{12}\) cm on each side of the equatorial plane. The momentum that we deposit inside the cylinder is in the direction of the axis of the cylinder and away from the NS, i.e., two zones perpendicular to the equatorial plane with two opposite outflows away from the equatorial plane.
The total momentum discharge rate is \(\dot{P}_{\rm 2j}\equiv|\dot{P}_{1}|+|\dot{P}_{2}|\), where the indices stand for the two opposite jets. The value of \(\dot{P}_{\rm 2j}\) is related to the power of the two jets by postulating that the jets are launched at a constant speed of \(v_{\rm j}=5\times 10^{4}\) km s\({}^{-1}\), i.e.,
\[\dot{P}_{\rm 2j}=\frac{2\dot{E}_{\rm 2j}}{v_{\rm j}}. \tag{3}\]
In each time step \(\Delta t\) we first change the velocity inside the jet-injection cylinder as a result of the momentum that we add. To each grid cell inside the jet-injection cylinder we add a momentum of
\[\Delta p_{\rm c}=f_{\rm V,c}\dot{P}_{\rm 2j}\Delta t, \tag{4}\]
where \(f_{\rm V,c}\) is the fraction of the cylinder volume that the cell occupies. Using the mass in each cell and its previous velocity we compute the new velocity of the cell. In the second step we compute the new energy of each grid cell inside the jet-injection cylinder
\[E_{\rm new,c}=E_{\rm old,c}+f_{\rm V,c}\dot{E}_{\rm 2j}\Delta t, \tag{5}\]
where \(E_{\rm old,c}\) is the old total energy in the cell, including only the kinetic energy and the internal energy because the gravitational energy does not change during the jet-injection process. Because we know already the new kinetic energy in the cell, \(E_{\rm new,kin,c}\), from the new velocity as we calculate from the new momentum (equation 4), equation (5) serves to calculate the new thermal energy in each cell in the jet-injection cylinder. Namely, the third step is taking \(E_{\rm new,therm,c}=E_{\rm new,c}-E_{\rm new,kin,c}\) in each cell inside the jet-injection cylinder.
This scheme conserves mass, momentum, and energy (see Schreier, Hillel, & Soker 2023 for more numerical details).
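A schematic Python version of this three-step update for the cells inside the jet-injection cylinder may help fix ideas. The array layout and names below are our assumptions for illustration, not the actual flash implementation.

```python
import numpy as np

def inject_jets(m, vel, e_tot, f_vol, sign_z, E2j_dot, v_j, dt):
    """One sub-grid jet-injection step (equations 3-5), in cgs units.

    m      : (k,)   masses of the cells inside the cylinder
    vel    : (k,3)  cell velocities
    e_tot  : (k,)   kinetic + internal energy of each cell
    f_vol  : (k,)   fraction of the cylinder volume each cell occupies
    sign_z : (k,)   +1 above / -1 below the equatorial plane
    """
    P2j_dot = 2.0 * E2j_dot / v_j                    # equation (3)
    vel = vel.copy()
    # step 1: momentum kick along the jets' (z) axis, away from the plane
    vel[:, 2] += sign_z * f_vol * P2j_dot * dt / m   # equation (4)
    # step 2: add the injected energy to the total energy of each cell
    e_new = e_tot + f_vol * E2j_dot * dt             # equation (5)
    # step 3: the new thermal energy is the remainder after the new kinetic energy
    e_kin = 0.5 * m * np.sum(vel**2, axis=1)
    return vel, e_new - e_kin                        # new velocity, new thermal energy
```

Note that, as stated above, the cell masses are untouched, so the update conserves mass by construction and deposits exactly the prescribed momentum and energy.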
## 3 The three-dimensional stellar model
Our goal in this section is to reveal the behaviour of the 3D model that we transported from the 1D model of MESA (section 2.1) and consider the implications for the building of 3D giant models. For that goal we follow the evolution of the 3D stellar model without jets. We recall that in the 3D model we use there is a numerical spherical inert core of radius \(R_{\rm in}=176\,R_{\odot}\) (section 2.1) that saves us expensive computational time. Its gravity is fully included in the simulations. We set \(t=0\) when we start the 3D simulations, at which time the RSG radius is \(R_{\rm RSG}=881R_{\odot}\) and the RSG mass is \(M_{1}=12.5M_{\odot}\).
We present the results of the regular resolution that we use in the simulations with jets (section 4) and of a simulation with higher resolution. The smallest cell size in the regular resolution is \(L_{\rm G}/128=3.90625\times 10^{12}\) cm, while in the high resolution simulation all cells have the same size of \(L_{\rm G}/512=9.77\times 10^{11}\) cm. (We have no computer resources to simulate jets with the high-resolution grid; the simulations without jets are much faster than those with jets, and we can afford the high-resolution grid.)
There are two timescales that we will refer to in the discussion to follow,
\[P_{\rm Kep}=2.35\ {\rm yr},\quad{\rm and}\quad P_{\rm D}\equiv(G\bar{\rho})^{- 1/2}=0.76\ {\rm yr}, \tag{6}\]
where \(P_{\rm Kep}\) is the Keplerian orbital time on the surface of the initial RSG model, \(P_{\rm D}\) is the dynamical time, and \(\bar{\rho}\) is the average density of the initial RSG model.
In Fig. 1 we present density maps in the plane \(z=0\) for the regular (left column) and high (right column) resolution simulations without jets at three times (see caption). We mark the initial surface of the 1D model (where the photosphere is well-defined) with a black circle. The pale-blue and blue colours depict densities below the initial photospheric density, which is \(\rho_{\rm p,0}=2\times 10^{-9}\) g cm\({}^{-3}\). In these regions the results of our simulations are less reliable. The general behaviour of the two resolutions is the same, but there are clear small-scale differences. The density maps reveal an important behaviour of the 3D stellar model, namely, that the star rapidly expands on a dynamical time scale of \(\simeq P_{\rm D}\) and then contracts somewhat. It actually performs two oscillations before it relaxes.
Figure 1: Density maps of the regular-resolution (left column) and of the high-resolution (right column) simulations without any jets in the \(z=0\) plane and at three times, from top row to bottom: \(t=0.7\) yr, \(t=1.6\) yr and \(t=6.4\) yr. The black circle marks the surface of the RSG model at \(t=0\), which is \(R_{\rm RSG}=881R_{\odot}=61.3\times 10^{12}\) cm. The density colour coding is according to the upper colour bar from \(10^{-12}\) g cm\({}^{-3}\) (deep blue) to \(10^{-6}\) g cm\({}^{-3}\) (deep red). Units on axes are in cm.
It is hard to follow the contraction from the density maps because we cannot follow the photosphere as we do not include radiative transfer. Instead, we mark each of two initial spherical shells within the star with 'tracers', two different tracers for the two shells. We assign all cells inside a shell at \(t=0\) a value of tracer \(=1\). As the material inside the initial shell mixes with gas outside the shell the value of the tracer decreases and it represents the fraction of mass in each cell that originated in the shell. The tracer value in each cell is always between zero and one. The two initial shells we follow are \(600R_{\odot}<r<650R_{\odot}\) and \(800R_{\odot}<r<850R_{\odot}\).
In Fig. 2 we present the tracer maps at three times for the regular-resolution (left column) and the high-resolution (right column) simulations. Here we clearly notice the limitation of the regular-resolution simulations. The cells in the regular-resolution grid cannot resolve the shells well, and the shells lose their identity very early in the simulations, i.e., in less than the dynamical time (not shown here), which suggests a large numerical effect. The high-resolution simulation maintains the identity of the shells for longer than the dynamical time. The mixing of the shells with each other and with the rest of the envelope that we see in the last panel on the right column (the high-resolution simulation) is a real physical effect due to the convection that develops in the envelope (see below).
We also calculate at each time step the average radius of the gas that started in each of the initial two shells. We present the average radii of the gas that started in the two shells as a function of time in Fig. 3.
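In practice this is a tracer-mass-weighted mean. A minimal Python sketch (with assumed per-cell arrays and our own function name) is:

```python
import numpy as np

def shell_mean_radius(m, pos, tracer):
    """Tracer-mass-weighted mean radius of the gas that started in a shell.

    m      : (k,)   cell masses
    pos    : (k,3)  cell positions relative to the RSG centre
    tracer : (k,)   fraction of each cell's mass that originated in the shell
    """
    w = m * tracer                        # shell-born mass in each cell
    r = np.linalg.norm(pos, axis=1)
    return np.sum(w * r) / np.sum(w)
```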
Figs. 2 and 3 show two prominent types of behaviour. (1) The two shells together with the entire star perform two oscillations (two maxima) before the star relaxes. (2) The two shells in the high-resolution simulation are mixed with each other on a time scale of \(\simeq 3.4\ \mathrm{yr}\simeq 1.5P_{\mathrm{Kep}}\). The two shells in the regular-resolution simulation maintain their average separation but are mixed as well. We follow only the tracers of the two shells, but the entire envelope is actually mixing with itself. Both of these types of behaviour are physically real, as we now discuss.
Long period variables (LPVs) reach, in the non-linear regime, variations in their radius (maximum radius minus minimum radius in a cycle) that are about equal to their average radius, \(\Delta R\simeq R\) (e.g., Trabucchi et al. 2021 for a recent study). With our code we cannot follow the photosphere as we include no radiative transfer. If we take the average radius of the outer shell that we follow (blue lines in Fig. 3) before the shells are smeared, i.e., \(t<3\ \mathrm{yr}\) (see Fig. 2), we find the average radius to be \(\bar{R}\simeq 850R_{\odot}\) and the variation to be \(\simeq 200R_{\odot}\). The photosphere is somewhat larger than the average radius of the shell. Examining the sharp edge of the models in the first two panels of Fig. 1, we find \(\Delta R/\bar{R}\simeq 0.25-0.3\). Comparing the variations in the RSG radius during the time \(t\lesssim P_{\mathrm{Kep}}\simeq 3P_{\mathrm{D}}\) with the theoretical results of, e.g., Trabucchi et al. (2021), we conclude that the 3D stellar model simply performs the expected oscillations of RSG stars, likely in the non-linear regime.
Consider then the mixing of the two shells, and actually the entire envelope. From the 1D model we know that the envelope of our model is unstable to convection, i.e., entropy decreases outward. What we find here is that the initial static 3D model develops convection that flattens the entropy profile.
We first present the velocity maps in two planes and at two times, separated by about the dynamical time, in Fig. 4 for the regular-resolution simulation and in Fig. 5 for the high-resolution simulation. As said, we do not study here the flow in the very low-density zones (pale-blue and blue regions) because of large numerical uncertainties. The red inner zone is the inert core and we do not consider the flow in its vicinity. We discuss the flow structure in the green and yellow zones.
We note the following flow properties. (1) There are large vortices as obtained in other 3D simulations of convection (e.g., Gilkis and Soker, 2016; Fields and Couch, 2020). (2) The velocity is stochastic. We infer this property by comparing the velocity maps in the same planes at the two times \(t=4\ \mathrm{yr}\) and \(t=4.5\ \mathrm{yr}\) in Figs. 4 and 5. The velocity structure substantially changes between these two times separated by about the dynamical time. We also learn this from the different flow structures in the two perpendicular planes at a given time. (3) We find that the typical maximum convective velocity in our simulations in the shell around \(r\simeq 700R_{\odot}\) is \(v_{\mathrm{con,3D}}\simeq 3\ \mathrm{km}\ \mathrm{s}^{-1}-8\ \mathrm{km}\ \mathrm{s}^{-1}\). In the 1D model that we used to build the 3D model the typical convective velocity, from the mixing length theory, at the same zone is \(v_{\mathrm{con,1D}}=5\ \mathrm{km}\ \mathrm{s}^{-1}\). We find that \(v_{\mathrm{con,3D}}\simeq v_{\mathrm{con,1D}}\). However, we do not include the nuclear energy source, which would have forced the convection speed to be larger. More accurate simulations find the 3D convective velocity to be larger. Fields and Couch (2021) find that the angle-averaged convective speeds in the core of massive stars before core collapse are 3-4 times larger than the values of 1D models by MESA.
In Figs. 6 and 7 we present the evolution of the density profile (in units of g cm\({}^{-3}\)) and of the profile of the quantity \(F_{\mathrm{S}}\equiv\log(P/\rho^{5/3})\) (where \(P/\rho^{5/3}\) is in units of g\({}^{-2/3}\) cm\({}^{4}\) s\({}^{-2}\)), which is about proportional to the entropy (the accurate entropy profiles have very similar slopes, but this function is easy to follow here). We present the profiles at
Figure 2: Maps in the \(z=0\) plane of the tracers of two shells (see text) in the regular resolution (left column) and in the high resolution (right column) of the no-jets simulations. The times from top to bottom are \(t=0\), \(t=1.5\) yr and \(t=3.2\) yr. The value of the tracer is according to the upper colour bar from 0 (deep blue) to 1 in the upper two panels in the right column, and to 0.5 in the left column and in the lower right panel (deep red). The initial value of the tracer is 1, but in the regular-resolution grid the cells do not fully resolve the initial shell. Units on axes are in cm.
four times as indicated. The plots are made of points, each representing one cell in the numerical grid. The initial slope of \(F_{\rm S}\) has large regions of negative gradient \(dF_{\rm S}/dr<0\), which implies convectively unstable zones. And indeed, convection sets in and flattens the entropy profiles (green to black to red colour). The density profile does not change much in most of the envelope. Only in the outer parts does the density increase.
We found that the regular-resolution simulations add mass above the envelope due to the steep density gradient. This makes the results of the no-jets simulation unreliable in those outer regions at times of \(t\gtrsim P_{\rm Kep}\simeq 2.5\) yr. The high-resolution simulation is reliable up to \(t\simeq 3P_{\rm Kep}\) in the outer regions and for much longer times in the inner regions. In the simulation with jets we are forced to use the regular-resolution grid. Because the NS spirals-in within a short time (section 4) and the inner regions are reliable, we find that our regular-resolution grid is adequate for the purposes of the present study.
We summarise this section as follows. The implication of our results for numerical simulations that involve giant stars (RGB, AGB, RSG) undergoing binary interaction is that there is no need to stabilise the giant star to a high degree when transferring a 1D model to a 3D grid. If the 3D stellar model performs oscillations, even non-periodic oscillations in the non-linear regime (\(\Delta R\approx R\)), this is perfectly fine as such stars are expected to have large-amplitude non-regular pulsations. In addition, instabilities as we present in Fig. 2 are expected in the convective envelope of giant stars. Researchers simulating interacting red giant stars should just let the 3D model oscillate in large amplitudes and mix different layers by turbulence. These processes are more realistic than building a completely stable red giant model.
## 4 Jet-powered turbulence
In this section we set jets from an orbiting NS. We do not include the mass or the gravity of the NS, but rather only the jets we assume the NS launches perpendicular to the orbital plane. As we described in section 2.1 we let the NS spiral-in along a predetermined orbit from \(a_{\rm i}=850\,R_{\odot}\) to \(a_{\rm SR}=300\,R_{\odot}\) in a time of 3 years. It then performs a circular orbit at radius \(a_{\rm SR}\). The radial velocity during the rapid in-spiral (plunge-in) phase is constant. The power of the two jets changes according to equation (1) with \(\zeta=2\times 10^{-5}\) along the entire evolution. We find that the power of the jets increases from \(\dot{E}_{2j}(a_{\rm i})=3.3\times 10^{40}\) erg s\({}^{-1}\) at the outer orbital separation to \(3\times 10^{41}\) erg s\({}^{-1}\) at the final orbit \(a_{\rm SR}\).
The jets that the NS launches induce fast flows inside the envelope, including regions of outflows and regions of vortices. The jets inflate two bubbles at mid-latitudes in the expanding envelope. In 3D the morphology of each bubble is a low-density spiral that expands outward along mid-latitude directions. We can see the cross section of these two bubbles, one above and one below the equatorial plane \(z=0\), in the plane of Fig. 8 in four zones, two for each bubble. The two pale-blue regions to the right of the centre that are surrounded by the denser green regions and with a faster outflow velocity than the surroundings show the cross sections of the bubbles' segments (one above and one below the equatorial plane \(z=0\)) that the jets inflated at earlier times. The green zones to the left of the centre that are surrounded by yellow (higher density) regions and which have very high velocities are segments of the bubbles that the NS inflated recently. At the time of this figure (\(t=3.2\) yr) the NS is at \((x,y,z)_{\rm NS}=(-1.05\times 10^{13}\) cm, \(1.8\times 10^{13}\) cm, \(0\)).
The outflow pattern while the NS is still in the outer regions of the envelope is similar to our earlier studies where the NS had a fixed orbital separation in the outer envelope (e.g., Hillel, Schreier, & Soker, 2022; Schreier, Hillel, & Soker, 2023). The morphology of the ejected envelope at these early times is qualitatively similar to that in the GEE (e.g., Shiber, Kashi, & Soker, 2017; Shiber & Soker, 2018). Here we do not study the outflow morphology but rather examine the vortices that the jets induce.
In addition to the large-scale flow that inflates the bubbles and sets up an outflow, we notice the vortices that the jets induce inside the envelope. To better reveal the vortices we present in Fig. 9 the \(z\) component of the curl of the velocity, \((\overrightarrow{\nabla}\times\overrightarrow{v})_{z}\), in the \(z=6\times 10^{12}\) cm plane.
Figure 3: The average radii of the tracers of two initial spherical shells \(800R_{\odot}<r<850R_{\odot}\) (blue lines) and \(600R_{\odot}<r<650R_{\odot}\) (red lines) in the simulations without jets. The solid lines are for the high-resolution simulation and the dashed lines are for the regular-resolution simulation. For reference, the Keplerian orbital period on the surface of the unperturbed RSG is \(P_{\rm Kep}=2.35\) yr and \(P_{\rm D}\equiv(G\bar{\rho})^{-1/2}=0.76\) yr where \(\bar{\rho}\) is the average density of the initial RSG model.
Figure 4: Velocity vectors on top of density maps in part of the \(x=0\) plane (left) and in part of the \(z=0\) plane (right), at \(t=4\) yr (top), and \(t=4.5\) yr (bottom) for the regular-resolution no-jets simulation. The density scale and units of axes are as in Fig. 1. The flow speed at each point is according to the length of the arrow. The maximum velocities in the vortices at \(700R_{\odot}\simeq 50\times 10^{12}\) cm are \(v_{\rm con,3D}\simeq 3\) km s\({}^{-1}-8\) km s\({}^{-1}\).
Figure 5: Similar to Fig. 4 but for the high-resolution simulation.
This figure shows that the jets shed pairs of vortices with opposite signs as the NS spirals-in inside the RSG envelope. The typical width of each vortex with the typical value of \(|(\overrightarrow{\nabla}\times\overrightarrow{v})_{z}|\simeq 10^{-7}\) s\({}^{-1}\) is \(d_{\rm v}\simeq 10^{13}\) cm. The corresponding typical velocity of gas in the vortices is \(v_{\rm v}\simeq 10\) km s\({}^{-1}\). Some smaller regions within the above regions have higher values of \(|(\overrightarrow{\nabla}\times\overrightarrow{v})_{z}|\simeq 3\times 10^{-7}\) s\({}^{-1}\), and in these smaller regions \(v_{\rm v}\simeq 20-30\) km s\({}^{-1}\).
To further explore the properties of the vortices we present in Fig. 10 the vortices in the inner region of our numerical grid. In the upper-left panel we present \((\overrightarrow{\nabla}\times\overrightarrow{v})_{z}\) in the \(z=6\times 10^{12}\) cm plane and in the upper-right panel we present \((\overrightarrow{\nabla}\times\overrightarrow{v})_{y}\) in the plane \(y=1.8\times 10^{13}\) cm. The vortices have larger values of \((\overrightarrow{\nabla}\times\overrightarrow{v})_{y}\) than of \((\overrightarrow{\nabla}\times\overrightarrow{v})_{z}\). This is because the jets are injected along the \(z\) direction. In the lower two panels we present the velocity directions by arrows on top of the velocity magnitude coded by the colour. The time and planes are as in the upper panels. The outflow velocities reach values of up to \(v_{\rm out}\simeq 100\) km s\({}^{-1}\). Overall, the outflow velocities are higher than the circularisation velocity in the vortices. Nonetheless, there are small regions where there is an inflow. As we discuss in section 6, these inflowing regions might lead to fall back material at late phases of the evolution.
## 5 Deposition of angular momentum
Figure 8: Velocity arrows on top of the density maps of the spiralling-in simulation with jets in the \(y=0\) plane at \(t=3.2\) yr, when the NS orbits at \(a_{\rm SR}=300R_{\odot}\). This plane cuts the low-density and high-outflow-velocity bubbles that the jets inflate in two regions above and two regions below the equatorial plane. The older two regions (above and below the equatorial plane) are to the right of the centre and appear as pale-blue zones surrounded by green regions. The two newer regions are to the left of the centre and appear as two green zones surrounded by yellow (higher density) regions. Velocity magnitude is proportional to the length of the arrows. Maximum velocity inside the old bubbles is 76 km s\({}^{-1}\).
Figure 6: Radial profiles of the density (top; density in \({\rm g\ cm^{-3}}\)) and of \(F_{\rm S}=\log(P/\rho^{5/3})\), which is about proportional to the entropy (bottom; \(P/\rho^{5/3}\) in units of \({\rm g^{-2/3}\ cm^{4}\ s^{-2}}\)) at four times as indicated in the insets and for the regular-resolution no-jets simulation. Each scatterplot is the collection of the values in all cells at the given time (a total of about \(43,000\) points).
Figure 7: Similar to Fig. 6 but for the high-resolution simulation where we have \(1.1\times 10^{6}\) points in each scatterplot.
We calculate the angular momentum that the jets deposit to the envelope as in Schreier, Hillel, & Soker (2023). We take the envelope to be the gas between the inert core at \(r_{\rm inert}=0.2R_{\rm RSG}=1.23\times 10^{13}\) cm and an outer radius of \(r=1.25\times 10^{14}\) cm which is about twice the initial radius of the RSG star. Namely, in calculating the envelope angular momentum we sum the quantity \(m_{i}\vec{r_{i}}\times\vec{v_{i}}\) over all cells \(i\) with \(r_{\rm inert}<r_{i}<1.25\times 10^{14}\) cm, where \(m_{i}\), \(\vec{r_{i}}\) and \(\vec{v_{i}}\) are the mass, the location with respect to the centre of the RSG, and the velocity of cell \(i\). We recall that we do not include the orbital angular momentum, nor the gravity of the NS. Therefore, only jets influence the envelope as our goal is to explore the role of jets.
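A compact Python version of this sum over cells (array names are our own assumptions) is:

```python
import numpy as np

def envelope_Jz(m, pos, vel, r_in=1.23e13, r_out=1.25e14):
    """J_z = sum_i m_i (r_i x v_i)_z over cells with r_in < |r_i| < r_out (cgs).

    m   : (k,)   cell masses
    pos : (k,3)  cell positions relative to the RSG centre
    vel : (k,3)  cell velocities
    """
    r = np.linalg.norm(pos, axis=1)
    sel = (r > r_in) & (r < r_out)                               # envelope cells
    lz = pos[sel, 0] * vel[sel, 1] - pos[sel, 1] * vel[sel, 0]   # (r x v)_z
    Jz = np.sum(m[sel] * lz)
    jz = Jz / np.sum(m[sel])          # specific angular momentum j_z = J_z / M_env
    return Jz, jz
```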
In Fig. 11 we present the angular momentum that the jets deposit to the envelope, \(J_{z}\), and the specific angular momentum in the envelope, \(j_{z}=J_{z}/M_{\rm env}\), as a function of time. The increase in angular momentum occurs while the NS, which is the source of the jets, spirals-in. At \(t>3\) yr the NS continues to orbit the core of the RSG star at a constant radius of \(a_{\rm SR}=300R_{\odot}\). The increase in \(J_{z}\) (and in \(j_{z}\)) is not monotonic. There is a time period when the angular momentum stays almost constant for about half a year, \(t\simeq 1.4-1.9\) yr, before the NS reaches its final orbit. This time corresponds more or less to the time when the NS completes an orbital angle of \(\simeq 360^{\circ}\).
In our previous study (Schreier, Hillel, & Soker, 2023) we simulated jets with a constant orbital radius of \(a=700R_{\odot}=4.9\times 10^{13}\) cm and followed the angular momentum that the jets deposit to the envelope. We found that when the jets are perpendicular to the orbital plane the jets deposit a substantial amount of angular momentum to the envelope only in the first two orbits. Based on that we suggested that jets deposit angular momentum to the envelope only during the plunge-in phase, when the NS rapidly spirals-in, but only small amounts during the self-regulated phase when the spiralling-in is very slow or does not take place at all. Our results here, where we let the NS plunge-in over three years, confirm our expectation.
The total angular momentum that the jets deposit to the envelope at the end of the plunge-in phase here (\(t=3\) yr) is \(J_{\rm jets}\simeq 1.8\times 10^{53}\ {\rm g\ cm^{2}\ s^{-1}}\). In addition, the spiralling-in NS deposits orbital angular momentum to the envelope, a process we do not simulate here. In spiralling-in from \(a_{0}=850R_{\odot}\) to the final orbit during the self-regulated phase at \(a_{\rm SR}=300R_{\odot}\) the decrease in the orbital angular momentum is \(J_{\rm orb}=4.5\times 10^{53}\ {\rm g\ cm^{2}\ s^{-1}}\). Considering that the jets in reality might be more powerful than what we simulate (which we cannot numerically simulate with our resources due
Figure 9: The quantity \((\vec{\nabla}\times\overrightarrow{v})_{z}\) in the \(z=6\times 10^{12}\) cm plane for the simulation with jets. Values according to the colour bar in units of \({\rm s^{-1}}\). From top to bottom are plots at \(t=2\) yr just as the NS reaches an orbital separation of \(a=500R_{\odot}\), \(t=3\) yr when the NS reaches \(a_{\rm SR}=300R_{\odot}\), and \(t=4\) yr as the NS continues to orbit at \(a_{\rm SR}=300R_{\odot}\). The black spiral in the centre indicates the trajectory of the spiralling-in NS.
to too-small time steps), we conclude that the contribution of jets to spinning-up the envelope is substantial, and might even dominate.
## 6 Summary
This study continues our exploration of the roles that the jets a NS launches play in CEE. Our numerical resources do not allow for high-resolution simulations with jets. In section 3 we examined the behaviour of our 3D RSG stellar model without any jets, which does allow for high resolution. We found that the stellar model, in both the regular and the high-resolution simulations, performs two non-linear oscillations before it relaxes in a time of about 3 years (Figs. 1 and 3). Additionally, turbulence develops in the envelope (Figs. 2, 4 and 5).
Both the non-linear oscillations and the turbulence are processes that take place in giant stars (RGB, AGB, RSG). Because we have no source of nuclear burning in the centre, the oscillations decay. We concluded in section 3 that in 3D hydrodynamical simulations of the CEE with giant stars there is no need to stabilise the giant star to a high degree when transferring a 1D model to a 3D grid. One can start the binary simulations immediately, even if the giant model is not completely
static. Just let it oscillate and develop turbulence, as real giants actually do.
We then presented the results of a regular-resolution simulation where the NS, which serves here only as the source of the jets since we do not include its gravity (section 2), spirals-in along a predetermined orbit (spiral line in the three panels of Fig. 9). The exclusion of the NS gravity as in our earlier studies (e.g., Hillel, Schreier, & Soker, 2022; Schreier, Hillel, & Soker, 2023) allows us to identify the role of jets and to perform the simulations on our computer.
The new ingredient of this study is the spiralling-in of the NS. The jets inflate two expanding spiral-shaped low-density bubbles, one above and one below the equatorial plane. The two cross sections of each bubble with the \(y=0\) plane are seen in Fig. 8 as two low-density high-velocity regions above the equatorial plane \(z=0\) (one bubble) and two below it (the other bubble).
We also find that the NS sheds pairs of opposite-sign vortices as it spirals-in, best seen as blue-red pairs in Figs. 9 and 10. The pairs of vortices form an expanding large-scale spiral pattern. Overall, the jets substantially increase the turbulence in the common envelope, both as random motion on small scales and as a global pattern that substantially deviates from a spherical structure. We emphasise that the spiral structure seen in and near the equatorial plane as pairs of vortices (Fig. 9 and upper-left panel of Fig. 10) and in the density (lower-left panel of Fig. 10) is formed solely by the jets, as we do not include the NS gravity.
Convection in the envelope of red giants can efficiently transport angular momentum (e.g., Gagnier & Pejcha, 2023) and energy (e.g., Grichener, Sabach, & Soker, 2018; Wilson & Nordhaus, 2019, 2020, 2022). We showed here that the jets can substantially increase the convection (turbulence) strength. This makes energy transport more efficient. Namely, a fraction of the energy that the jets deposit to the envelope is carried away and radiated. The transient event, termed CEJSN, is very bright, lasts for months to a few years, and might mimic a very energetic core collapse supernova.
The jet-induced non-spherical morphology of the ejected envelope influences the late light curve. This occurs if at later times inner envelope gas is ejected at higher velocities and collides with early ejecta. This collision converts kinetic energy to thermal energy and then radiation. The non-spherical structure leads to bumps in the light curve and to polarised emission.
We also confirm our suggestion from Schreier, Hillel, & Soker (2023) that the jets deposit a non-negligible amount of angular momentum to the envelope during the plunge-in phase, when the spiralling-in is on a dynamical timescale, but not much during the self-regulated phase when the spiralling-in is very slow. The direction of the angular momentum that the jets deposit is the same as that of the orbital angular momentum. The deposition of positive angular momentum results from the fact that the jets eject envelope mass with negative angular momentum.
This study supports the general claim that jets that NSs (and BHs) launch during CEE, namely during a CEJSN event, cannot be neglected. Jets power the CEJSN event, influence the morphology of the ejected envelope, induce vortices that strengthen the convection, and deposit angular momentum to the common envelope.
## Acknowledgments
This research was supported by the Amnon Pazy Research Foundation.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2306.10154 | Seaweed algebras and the unimodal spectrum property | If $\mathfrak{g}$ is a Frobenius Lie algebra, then the spectrum of
$\mathfrak{g}$ is an algebraic invariant equal to the multiset of eigenvalues
corresponding to a particular operator acting on $\mathfrak{g}$. In the case of
Frobenius seaweed subalgebras of $A_{n-1}=\mathfrak{sl}(n)$, or type-A seaweeds
for short, it has been shown that the spectrum can be computed combinatorially
using an attendant graph. With the aid of such graphs, it was further shown
that the spectrum of a type-A seaweed consists of an unbroken sequence of
integers centered at $\frac{1}{2}$. It has been conjectured that if the
eigenvalues are arranged in increasing order, then the sequence of
multiplicities forms a unimodal sequence about $\frac{1}{2}$. Here, we
establish this conjecture for certain families of Frobenius type-A seaweeds by
finding explicit formulas for their spectra; in fact, for some families we are
able to show that the corresponding sequences of multiplicities form
log-concave sequences. All arguments are combinatorial. | Nicholas Mayers, Nicholas Russoniello | 2023-06-16T19:51:44Z | http://arxiv.org/abs/2306.10154v1 | # Seaweed algebras and the unimodal spectrum property
###### Abstract
If \(\mathfrak{g}\) is a Frobenius Lie algebra, then the spectrum of \(\mathfrak{g}\) is an algebraic invariant equal to the multiset of eigenvalues corresponding to a particular operator acting on \(\mathfrak{g}\). In the case of Frobenius seaweed subalgebras of \(A_{n-1}=\mathfrak{sl}(n)\), or type-A seaweeds for short, it has been shown that the spectrum can be computed combinatorially using an attendant graph. With the aid of such graphs, it was further shown that the spectrum of a type-A seaweed consists of an unbroken sequence of integers centered at \(\frac{1}{2}\). It has been conjectured that if the eigenvalues are arranged in increasing order, then the sequence of multiplicities forms a unimodal sequence about \(\frac{1}{2}\). Here, we establish this conjecture for certain families of Frobenius type-A seaweeds by finding explicit formulas for their spectra; in fact, for some families we are able to show that the corresponding sequences of multiplicities form log-concave sequences. All arguments are combinatorial.
_Mathematics Subject Classification 2020:_ 05E16, 05C25, 17B45
_Key Words and Phrases:_ unimodal, log-concave, spectrum, meander, seaweed, Frobenius Lie algebra
## 1 Introduction
A biparabolic (seaweed) subalgebra of a complex reductive Lie algebra \(\mathfrak{r}\) is the intersection of two parabolic subalgebras whose sum is \(\mathfrak{r}\). They - along with certain associated planar graphs called "meanders" - were first introduced in the case \(\mathfrak{r}=\mathfrak{gl}(n)\) by Dergachev and Kirillov ([6], 2000). One of the main results of [6] is that the algebra's "index," a notoriously difficult-to-compute Lie-algebraic invariant of recent interest (see [3, 5, 15, 17]), can be found by counting the number of paths and cycles in its associated meander. Of particular significance are those seaweed subalgebras of \(\mathfrak{gl}(n)\) whose meanders consist of a single path and no cycles. For such algebras, imposing a vanishing trace condition results in a seaweed subalgebra of \(\mathfrak{sl}(n)\) with index zero. In general, algebras with index zero are called "Frobenius" and have been studied extensively in the context of invariant theory (see [13]) and are connected to the classical Yang-Baxter equation (see [10] and [11]). Here, we are concerned with the "spectrum" of a Frobenius Lie algebra, which is an invariant multiset of eigenvalues arising from a certain operator's action on the algebra. In the case of a Frobenius seaweed subalgebra of \(\mathfrak{sl}(n)\), it is conjectured that the multiplicities of eigenvalues in the spectrum form a unimodal sequence. We establish this conjecture for particular families of Frobenius seaweed subalgebras of \(\mathfrak{sl}(n)\) by utilizing the seaweeds' meanders to find explicit formulas for their spectra.
To fix the notation, let \(\mathfrak{g}\) be a Lie algebra over \(\mathbb{C}\). From any linear form \(F\in\mathfrak{g}^{*}\) arises a skew-symmetric, bilinear two-form \(B_{F}(-,-)=F([-,-])\), called the _Kirillov form_. The index of \(\mathfrak{g}\) is then given by
\[\text{ind }\mathfrak{g}=\min_{F\in\mathfrak{g}^{*}}\dim\ker(B_{F})\]
(see [7]). The Lie algebra \(\mathfrak{g}\) is called _Frobenius_ if its index is zero, or equivalently, if there exists a linear form \(F\in\mathfrak{g}^{*}\) such that \(B_{F}\) is non-degenerate. We call such an \(F\) a _Frobenius (linear) form_, and the natural
map \(\mathfrak{g}\to\mathfrak{g}^{*}\) defined by \(x\mapsto F[x,-]\) is an isomorphism. The inverse image of \(F\) under this map is called a _principal element_ of \(\mathfrak{g}\) and will be denoted \(\widehat{F};\) that is, \(\widehat{F}\) is the unique element of \(\mathfrak{g}\) such that
\[F\circ\text{ad }\widehat{F}=F([\widehat{F},-])=F.\]
Ooms ([14], 1980) showed that the spectrum of the adjoint action of a principal element acting on its Frobenius Lie algebra is independent of the choice of principal element (see also [12], Theorem 3). Consequently, if \(\mathfrak{g}\) is Frobenius and \(\widehat{F}\in\mathfrak{g}\) is any principal element, then the multiset of eigenvalues of \(ad_{\widehat{F}}\ :\ \mathfrak{g}\to\mathfrak{g}\) is an invariant of \(\mathfrak{g},\) and so we call such a multiset the _spectrum of_ \(\mathfrak{g}.\) In the case that \(\mathfrak{g}\) is a seaweed subalgebra of \(\mathfrak{sl}(n),\) Gerstenhaber and Giaquinto ([12], 2009) asserted that the spectrum consists of an unbroken set of integers. Coll et al. ([1], 2016) were able to prove the unbrokenness property initially stated in [12], showing that the distinct eigenvalues in the spectrum of a Frobenius seaweed subalgebra of \(\mathfrak{sl}(n)\) form an interval of integers centered at \(\frac{1}{2}\). Moreover, the authors of [1] conjectured that if the eigenvalues are arranged in increasing order, then the corresponding sequence of multiplicities forms a unimodal sequence centered at \(\frac{1}{2}\). The primary goal of this article is to establish this unimodality conjecture for certain families of seaweed subalgebras of \(\mathfrak{sl}(n);\) we do so by finding explicit formulas for the spectra by combinatorial means.
The methods used in this paper make use of a family of graphs, called "meanders," which can be associated to seaweed subalgebras of \(\mathfrak{sl}(n)\) via the seaweed's defining compositions. See Section 2 for the relationship between seaweed subalgebras of \(\mathfrak{sl}(n)\) and compositions, as well as a detailed construction of a meander. The raison d'être for the introduction of meanders into the theory of seaweed subalgebras was the ability to implement them in the computation of algebraic invariants, including index and spectrum. With this in mind, we are able to translate questions of spectrum to questions about graphs.
The outline of the paper is as follows. In Section 2, we outline the necessary preliminaries concerning seaweed subalgebras of \(\mathfrak{sl}(n)\) and meanders. In Section 3, we develop lemmas concerning the spectra of Frobenius seaweed subalgebras of \(\mathfrak{sl}(n)\) associated with ordered pairs of compositions containing three parts in total; these lemmas are then used to find explicit formulas for the spectra of such algebras. In Section 4, we consider the spectra of families of Frobenius seaweed subalgebras of \(\mathfrak{sl}(n)\) parametrized by the number of occurrences of a fixed value in the defining compositions; for such families, we observe new behavior concerning the stability of the set of distinct eigenvalues. Section 5 consists of a discussion of our results, a number of questions arising from our work, and directions for further research.
## 2 Preliminaries
Given a reductive Lie algebra \(\mathfrak{r},\) a _seaweed subalgebra of_\(\mathfrak{r}\) is the intersection of two parabolic subalgebras \(\mathfrak{p},\mathfrak{p}^{\prime}\subset\mathfrak{r}\) satisfying \(\mathfrak{p}+\mathfrak{p}^{\prime}=\mathfrak{r}.\) While seaweed subalgebras are defined generally as above, when restricting to seaweed subalgebras of \(\mathfrak{gl}(n,\mathbb{C})=\mathfrak{gl}(n)\) and \(\mathfrak{sl}(n,\mathbb{C})=\mathfrak{sl}(n),\) an equivalent definition in terms of compositions is available. In particular, we define a _seaweed subalgebra of_\(\mathfrak{gl}(n),\) or simply a _seaweed_, to be any matrix Lie algebra constructed from two compositions of \(n\) as follows. Let \(V\) be a complex \(n\)-dimensional vector space with basis \(\{e_{1},\ldots,e_{n}\},\) let \(\underline{a}=(a_{1},\ldots,a_{m})\) and \(\underline{b}=(b_{1},\ldots,b_{t})\) be two compositions of \(n,\) and consider the flags
\[\mathscr{V}=\big{\{}\{0\}\subset V_{1}\subset\cdots\subset V_{m-1}\subset V_ {m}=V\big{\}}\text{ and }\mathscr{W}=\big{\{}V=W_{0}\supset W_{1}\supset\cdots\supset W_{t}=\{0\} \big{\}},\]
where \(V_{i}=span\{e_{1},\ldots,e_{a_{1}+\cdots+a_{i}}\}\) and \(W_{j}=span\{e_{b_{1}+\cdots+b_{j}+1},\ldots,e_{n}\}.\) Then the seaweed \(\mathfrak{p}(\underline{a}|\underline{b})=\mathfrak{p}\frac{a_{1}|\ldots|a_{m }}{b_{1}|\ldots|b_{t}}\) is the subalgebra of \(\mathfrak{gl}(n)\) that preserves the flags \(\mathscr{V}\) and \(\mathscr{W}.\) Similarly, the seaweed subalgebra of \(A_{n-1}=\mathfrak{sl}(n),\) or _type-A seaweed_, \(\mathfrak{p}^{A}(\underline{a}|\underline{b})=\mathfrak{p}^{A}\frac{a_{1}| \ldots|a_{m}}{b_{1}|\ldots|b_{t}}\) is the subalgebra of \(\mathfrak{sl}(n)\) that preserves the flags \(\mathscr{V}\) and \(\mathscr{W}\). One special case is of note: if \(\underline{a}=(n)\) and \(\underline{b}=(b_{1},b_{2}),\) then the type-A seaweeds \(\mathfrak{p}^{A}(\underline{a}|\underline{b})\) and \(\mathfrak{p}^{A}(\underline{b}|\underline{a})\) are called _maximal parabolic_.
**Remark 1**.: _Typically, one includes \(n\) in the notation for seaweed subalgebras of \(\mathfrak{gl}(n)\) and \(\mathfrak{sl}(n)\), writing \(\mathfrak{p}_{n}(\underline{a}|\underline{b})=\mathfrak{p}_{n}\frac{a_{1}| \ldots|a_{m}}{b_{1}|\ldots|b_{t}}\) and \(\mathfrak{p}_{n}^{A}(\underline{a}|\underline{b})=\mathfrak{p}_{n}^{A}\frac{a_{ 1}|\ldots|a_{m}}{b_{1}|\ldots|b_{t}}\) for the corresponding (type-\(A\)) seaweeds; however, since \(n\) is encoded in the compositions \(\underline{a}\) and \(\underline{b},\) we omit it for ease of notation._
The evocative "seaweed" is descriptive of the shape of the algebra when exhibited in matrix form. For example, the seaweed algebra \(\mathfrak{p}^{A}\frac{2|4}{1|2|3}\) consists of traceless matrices of the form depicted in Figure 1 (left), where \(*\)'s indicate potential non-zero entries.
In [6], Dergachev and Kirillov assign to each seaweed \(\mathfrak{g}=\mathfrak{p}\frac{a_{1}|\cdots|a_{m}}{b_{1}|\cdots|b_{t}}\) with \(\sum_{i=1}^{m}a_{i}=\sum_{j=1}^{t}b_{j}=n\) a planar graph \(M(\mathfrak{g})\) on \(n\) vertices, called a _meander_, as follows. Begin by placing \(n\) labeled vertices \(v_{1},v_{2},\ldots,v_{n}\) from left to right along a horizontal line. Next, partition the vertices by grouping together the first \(a_{1}\) vertices, then the next \(a_{2}\) vertices, and so on, lastly grouping together the final \(a_{m}\) vertices. We call each set of vertices formed a _block_. For each block in the prescribed set partition, add a concave-down edge, called a _top edge_, from the first vertex of the block to the last vertex of the block, then add a top edge between the second vertex of the block and the second-to-last vertex of the block, and so on within each block, assuming that the vertices being connected are distinct. In a similar way, partition the vertices according to the composition \(\underline{b}\), and then place _bottom edges_, i.e., concave-up edges, between vertices in each block. See Figure 1 (right).
The main theorem of [6] establishes the combinatorial formula for the index of a seaweed given in Theorem 2 below.
**Theorem 2**.: _If \(\mathfrak{g}\) is a seaweed, then_
\[\operatorname{ind}\mathfrak{g}=2C+P,\]
_where \(C\) is the number of cycles and \(P\) is the number of paths in \(M(\mathfrak{g})\)._
Here, a path is any acyclic connected component of a meander; that is, in addition to traditional path graphs, connected components consisting of a single vertex are considered paths. Utilizing Theorem 2, the corollary below follows immediately.
**Corollary 3**.: _If \(\mathfrak{g}\) is a type-A seaweed, then_
\[\operatorname{ind}\mathfrak{g}=2C+P-1,\]
_where \(C\) is the number of cycles and \(P\) is the number of paths in \(M(\mathfrak{g})\)._
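To make the construction and Theorem 2 concrete, the following Python sketch (our own, with assumed function names) builds the meander of \(\mathfrak{p}(\underline{a}|\underline{b})\) and counts its paths and cycles. Since every meander vertex meets at most one top and one bottom edge, a connected component is a cycle exactly when all of its vertices have degree two.

```python
from collections import defaultdict

def meander_edges(comp):
    """Nested (undirected) edges within each block of a composition."""
    edges, start = [], 1
    for part in comp:
        lo, hi = start, start + part - 1
        while lo < hi:
            edges.append((lo, hi))
            lo, hi = lo + 1, hi - 1
        start += part
    return edges

def index_gl(a, b):
    """ind p(a|b) = 2C + P (Theorem 2); the type-A index is 2C + P - 1."""
    n = sum(a)
    assert n == sum(b)
    adj = defaultdict(list)
    for u, v in meander_edges(a) + meander_edges(b):
        adj[u].append(v)
        adj[v].append(u)
    seen, C, P = set(), 0, 0
    for s in range(1, n + 1):
        if s in seen:
            continue
        component, stack = [], [s]
        seen.add(s)
        while stack:                        # depth-first search of one component
            u = stack.pop()
            component.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if all(len(adj[u]) == 2 for u in component):
            C += 1                          # all degrees 2: the component is a cycle
        else:
            P += 1                          # otherwise a path (possibly a single vertex)
    return 2 * C + P

print(index_gl((2, 4), (1, 2, 3)))          # 1, so ind p^A(2|4 over 1|2|3) = 0: Frobenius
```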
Interestingly, in ([11], 2008) and [12], Gerstenhaber and Giaquinto show, in particular, that one can also determine the spectrum of a Frobenius, type-A seaweed \(\mathfrak{g}\) using \(M(\mathfrak{g})\). The result is based on the construction of a principal element \(\widehat{F}\in\mathfrak{g}\) for which each matrix unit \(e_{i,j}\in\mathfrak{g}\) and each \(e_{i,i}-e_{i+1,i+1}\in\mathfrak{g}\) is an eigenvector of \(\operatorname{ad}\widehat{F}\). To describe their choice of \(\widehat{F}\), we first need to decorate \(M(\mathfrak{g})\) by adding an orientation to its edges: top edges are oriented from right to left, while bottom edges are oriented from left to right. We refer to a meander with such an orientation as the _oriented meander_, denoted \(\overrightarrow{M}(\mathfrak{g}).\) Considering Corollary 3,
\(M(\mathfrak{g})\) must consist of a single path. Thus, for all \(1\leq i,j\leq n,\) there is a unique path from vertex \(v_{i}\) to vertex \(v_{j}\) in \(M(\mathfrak{g}),\) denoted \(P_{i,j}(\mathfrak{g}).\) Define the _weight_\(w(P_{i,j}(\mathfrak{g}))\) of the path \(P_{i,j}(\mathfrak{g})\) in \(M(\mathfrak{g})\) from \(v_{i}\) to \(v_{j}\) to be the number of forward edges minus the number of backward edges encountered when moving along \(P_{i,j}(\mathfrak{g})\) from \(v_{i}\) to \(v_{j}\) in \(\overrightarrow{M}(\mathfrak{g}).\)
The following results appear in [1] as a consequence of a result from Gerstenhaber and Giaquinto [11]; however, we include proofs for completeness.
**Lemma 4**.: _Let \(\mathfrak{g}\subset\mathfrak{gl}(n)\) be a seaweed with oriented meander \(\overrightarrow{M}(\mathfrak{g})=(V,E)\) consisting of one path and no cycles, where \(V=\{v_{1},\ldots,v_{n}\}\) is the vertex set and \(E\) is the edge set of \(\overrightarrow{M}(\mathfrak{g}).\) If_
\[F=\sum_{(v_{i},v_{j})\in E}e_{i,j}^{*}\in\mathfrak{g}^{*}\]
_and_
\[\widehat{F}=\sum_{i=1}^{n}w(P_{i,n}(\mathfrak{g}))e_{i,i}\in\mathfrak{g},\]
_then \(F\left(\left[\widehat{F},x\right]\right)=F(x),\) for all \(x\in\mathfrak{g}.\)_
Proof.: If \(P_{i,j}(\mathfrak{g})\) is the unique path from \(v_{i}\) to \(v_{j}\) in \(\overrightarrow{M}(\mathfrak{g}),\) then it immediately follows that \(w(P_{i,j}(\mathfrak{g}))=-w(P_{j,i}(\mathfrak{g}))\) and \(w(P_{i,j}(\mathfrak{g}))+w(P_{j,k}(\mathfrak{g}))=w(P_{i,k}(\mathfrak{g})),\) for all \(1\leq i,j,k\leq n.\) Now, if \(\widehat{F}=\sum_{i=1}^{n}w(P_{i,n}(\mathfrak{g}))e_{i,i},\) then we have that
\[\left[\widehat{F},e_{i,j}\right]=\big{(}w(P_{i,n}(\mathfrak{g}))-w(P_{j,n}( \mathfrak{g}))\big{)}e_{i,j}=\big{(}w(P_{i,n}(\mathfrak{g}))+w(P_{n,j}( \mathfrak{g}))\big{)}e_{i,j},\]
for all \(1\leq i,j\leq n\). Note that if \(i=j,\) then \(\left[\widehat{F},e_{i,j}\right]=0,\) and hence \(F\left(\left[\widehat{F},e_{i,j}\right]\right)=0=F(e_{i,j}).\) On the other hand, if \(i\neq j,\) then
\[\left[\widehat{F},e_{i,j}\right]=\big{(}w(P_{i,n}(\mathfrak{g}))+w(P_{n,j}( \mathfrak{g}))\big{)}e_{i,j}=\big{(}w(P_{i,j}(\mathfrak{g}))\big{)}e_{i,j}.\]
Therefore, since \(w(P_{i,j}(\mathfrak{g}))=1\) for all \((v_{i},v_{j})\in E,\) we have that
\[F\left(\left[\widehat{F},e_{i,j}\right]\right)=\begin{cases}1,&\text{if }(v_{i},v_{j})\in E;\\ 0,&\text{otherwise}.\end{cases}\]
The result follows from the linearity of \(F\) and the fact that the set of all \(e_{i,j}\) such that \((i,j)\) is a potentially nonzero entry in the matrix form of \(\mathfrak{g}\) is a basis for \(\mathfrak{g}.\)
**Theorem 5**.: _Let \(\mathfrak{g}\subset\mathfrak{sl}(n)\) be a Frobenius, type-\(A\) seaweed with oriented meander \(\overrightarrow{M}(\mathfrak{g})=(V,E),\) where \(V=\{v_{1},\ldots,v_{n}\}\) is the vertex set and \(E\) is the edge set of \(\overrightarrow{M}(\mathfrak{g}).\) If_
\[F=\sum_{(v_{i},v_{j})\in E}e_{i,j}^{*}\in\mathfrak{g}^{*},\]
_is a Frobenius form on \(\mathfrak{g},\) then the principal element corresponding to \(F\) is given by_
\[\widehat{F}=\sum_{i=1}^{n}\left(w(P_{i,n}(\mathfrak{g}))-\frac{\sum_{j=1}^{n} w(P_{j,n}(\mathfrak{g}))}{n}\right)e_{i,i}\in\mathfrak{g}.\]
Proof.: First, note that since \(\mathfrak{g}\) is a Frobenius, type-\(A\) seaweed, the oriented meander \(\overrightarrow{M}(\mathfrak{g})\) consists of one path and no cycles. Now, if we view \(F=\sum_{(v_{i},v_{j})\in E}e_{i,j}^{*}\in\mathfrak{g}^{*}\) as an element of \((\mathfrak{gl}(n))^{*},\) then we may apply
Lemma 4 to conclude that
\[F\left(\left[\widehat{F},e_{i,j}\right]\right) =F\left(\left[\sum_{k=1}^{n}\left(w(P_{k,n}(\mathfrak{g}))-\frac{ \sum_{\ell=1}^{n}w(P_{\ell,n}(\mathfrak{g}))}{n}\right)e_{k,k},e_{i,j}\right]\right)\] \[=F\left(\left[\sum_{k=1}^{n}w(P_{k,n}(\mathfrak{g}))e_{k,k},e_{i,j}\right]\right)\] \[=F(e_{i,j}),\]
for all \(e_{i,j}\) such that \((i,j)\) is a potentially nonzero entry in the matrix form of \(\mathfrak{g}.\) Therefore, \(\widehat{F}\) is an element of \(\mathfrak{g}\) - in particular, \(\widehat{F}\) has trace zero - with the property that \(F\left(\left[\widehat{F},x\right]\right)=F(x),\) for all \(x\in\mathfrak{g}.\) Since \(F\) is Frobenius, we have that \(\widehat{F}\) is a principal element of \(F.\) The result follows from uniqueness of the principal element.
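As an illustrative sketch (ours; it reuses `weight_table` from the sketch above), the diagonal of the principal element of Theorem 5 can be read off directly:

```python
from fractions import Fraction

def principal_diagonal(top, bottom):
    """Diagonal entries of F-hat in Theorem 5: w(P_{i,n}) shifted to have trace zero."""
    n = sum(top)
    w = weight_table(top, bottom)
    raw = [w[(i, n)] for i in range(1, n + 1)]
    shift = Fraction(sum(raw), n)
    return [x - shift for x in raw]

print(principal_diagonal([2, 4], [1, 2, 3]))
# [Fraction(-7, 6), Fraction(-1, 6), Fraction(-7, 6), Fraction(5, 6),
#  Fraction(11, 6), Fraction(-1, 6)] -- the entries sum to 0, as required in sl(6)
```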
In (Dougherty [8], 2019), it is shown that the \(F\) in Theorem 5 is Frobenius. Consequently, to determine the spectrum of a Frobenius, type-A seaweed \(\mathfrak{g},\) it suffices to compute the eigenvalues of \(\operatorname{ad}\widehat{F},\) for \(\widehat{F}\) as described in Theorem 5. Since \(\widehat{F}\) is diagonal, the eigenvalues corresponding to \(e_{i,i}-e_{i+1,i+1}\in\mathfrak{g}\) are \(0.\) As for basis elements of the form \(e_{i,j}\in\mathfrak{g},\) the proof of Theorem 5 shows that the eigenvalue is given by \(w(P_{i,j}(\mathfrak{g}))\). The collection of such values can be nicely organized to form the _spectrum matrix_ of a Frobenius, type-A seaweed \(\mathfrak{g},\) denoted \(\Sigma(\mathfrak{g})\). The matrix \(\Sigma(\mathfrak{g})\) is the element of \(\mathfrak{g}\) whose \(i,j\)-entry is equal to \(w(P_{i,j}(\mathfrak{g})).\) See Example 6.
**Example 6**.: _In Figure 2, we illustrate the spectrum matrix \((\)left\()\) and oriented meander \((\)right\()\) corresponding to the Frobenius, type-A seaweed \(\mathfrak{g}=\mathfrak{p}^{A}\frac{2|4}{1|2|3}\). Using \(\Sigma(\mathfrak{g})\), we find that the spectrum of \(\mathfrak{g}\) is equal to \(\{-2,-1^{2},0^{5},1^{5},2^{2},3\}\)._
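The spectrum can now be computed mechanically. In the sketch below (ours), the matrix form of \(\mathfrak{p}^{A}\frac{a_{1}|\cdots|a_{m}}{b_{1}|\cdots|b_{t}}\) is read with the below-diagonal blocks determined by the numerator composition and the above-diagonal blocks by the denominator composition; this is the reading that reproduces Example 6 and the matrix \(\Sigma(\mathfrak{g}_{1})\) displayed in the proof of Theorem 34 below. The code reuses `weight_table` from above.

```python
from collections import Counter

def same_block(comp, i, j):
    """True when vertices i and j lie in a common block of the composition."""
    start = 1
    for part in comp:
        end = start + part - 1
        if start <= min(i, j) and max(i, j) <= end:
            return True
        start += part
    return False

def spectrum(top, bottom):
    """Spectrum of a Frobenius, type-A seaweed: the multiset of values of
    Sigma(g), with one diagonal 0 discarded (see Remark 7 below)."""
    n = sum(top)
    w = weight_table(top, bottom)
    vals = [0] * (n - 1)                       # n diagonal zeros, one removed
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i < j and same_block(bottom, i, j):
                vals.append(w[(i, j)])         # above-diagonal entry of g
            elif i > j and same_block(top, i, j):
                vals.append(w[(i, j)])         # below-diagonal entry of g
    return Counter(vals)

print(sorted(spectrum([2, 4], [1, 2, 3]).items()))
# [(-2, 1), (-1, 2), (0, 5), (1, 5), (2, 2), (3, 1)] -- the multiset of Example 6
```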
Given a Frobenius, type-A seaweed \(\mathfrak{g},\) care must be taken when using \(\Sigma(\mathfrak{g})\) to compute its spectrum. Since we are working with seaweed subalgebras of \(\mathfrak{sl}(n),\) we do not have basis elements of the form \(e_{i,i}.\) Instead, basis elements corresponding to diagonal matrices are of the form \(e_{i,i}-e_{i+1,i+1}\). Consequently, the multiset of values contained in \(\Sigma(\mathfrak{g})\) is equal to the spectrum of \(\mathfrak{g}\) with exactly one additional \(0\). It would be more precise to exclude the \(0\) in the \(n,n\)-location of \(\Sigma(\mathfrak{g})\) and interpret a \(0\) in the \(i,i\)-location of \(\Sigma(\mathfrak{g})\) as the eigenvalue corresponding to the basis element \(e_{i,i}-e_{i+1,i+1},\) for \(1\leq i<n\). However, for the sake of arguments in the following sections, we leave the \(0\) in the bottom right corner. So that we may recall this subtlety later, we include the remark below.
**Remark 7**.: _Let \(\mathfrak{g}\) be a Frobenius, type-A seaweed. Then the multiset of values contained in \(\Sigma(\mathfrak{g})\) is equal to the spectrum of \(\mathfrak{g}\) with exactly one additional value of 0._
Utilizing the relationship between the spectrum of a Frobenius, type-A seaweed \(\mathfrak{g}\) and its oriented meander, the authors of [1] were able to show that the set of distinct eigenvalues in the spectrum of \(\mathfrak{g}\) consists of an unbroken sequence of integers centered at one-half. Moreover, the authors supplied the following conjecture.
**Conjecture 8** (Coll et al. [1], 2016).: _Let \(\mathfrak{g}\) be a Frobenius, type-A seaweed. If the eigenvalues in the spectrum of \(\mathfrak{g}\) are written in increasing order, then the corresponding sequence of multiplicities forms a unimodal sequence about one-half._
Recall that a sequence \(a_{0},\ldots,a_{n}\) of positive integers is called _unimodal_ if there exists \(0\leq i\leq n\) such that \(a_{0}\leq a_{1}\leq\ldots\leq a_{i}\geq\ldots\geq a_{n-1}\geq a_{n}\) and is called _log-concave_ if \(a_{i}^{2}\geq a_{i+1}a_{i-1}\), for \(0<i<n\). Ongoing, we will say that a Frobenius, type-A seaweed \(\mathfrak{g}\) satisfying Conjecture 8 has the _unimodal spectrum property_. Similarly, if the sequence of multiplicities, ordered as in Conjecture 8, forms a log-concave sequence, then we say that \(\mathfrak{g}\) has the _log-concave spectrum property_. Note that the log-concave spectrum property implies the unimodal spectrum property.
**Example 9**.: _Taking \(\mathfrak{g}\) to be the type-A seaweed of Example 6, the sequence of multiplicities associated with the spectrum of \(\mathfrak{g}\) is equal to \(1,2,5,5,2,1\). Clearly, \(\mathfrak{g}\) has the unimodal spectrum property. Note, however, that \(\mathfrak{g}\) does not have the log-concave spectrum property, since \(2^{2}<1\cdot 5\)._
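Both properties are easy to test mechanically; a small sketch (ours), used again in the numerical checks below:

```python
def is_unimodal(seq):
    """True when seq weakly increases to a peak and then weakly decreases."""
    i, n = 0, len(seq)
    while i + 1 < n and seq[i] <= seq[i + 1]:
        i += 1
    while i + 1 < n and seq[i] >= seq[i + 1]:
        i += 1
    return i == n - 1

def is_log_concave(seq):
    """True when a_i^2 >= a_{i-1} * a_{i+1} for all interior indices i."""
    return all(seq[i] ** 2 >= seq[i - 1] * seq[i + 1]
               for i in range(1, len(seq) - 1))

mults = [1, 2, 5, 5, 2, 1]                 # multiplicity sequence of Example 9
print(is_unimodal(mults))                  # True
print(is_log_concave(mults))               # False, since 2**2 < 1 * 5
```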
One of the main goals of this paper is to show that certain families of Frobenius, type-A seaweeds have the unimodal spectrum property. In this pursuit, an extended notion of spectrum will prove helpful. Extending the spectrum matrix to contain all weights of paths between vertices \(v_{i}\) and \(v_{j}\) of \(M(\mathfrak{g})\) results in what we will call the _extended spectrum matrix_ of \(\mathfrak{g}\), denoted \(\widehat{\Sigma}(\mathfrak{g})\). Analogous to spectrum, excluding a single value of 0, we refer to the remaining multiset of values occurring in the extended spectrum matrix of a Frobenius, type-A seaweed \(\mathfrak{g}\) as the _extended spectrum_ of \(\mathfrak{g}\). See Example 10.
**Example 10**.: _In Figure 3, we illustrate the extended spectrum matrix corresponding to the Frobenius, type-A seaweed \(\mathfrak{g}=\mathfrak{p}^{A}\frac{2|4}{1|2|3}\). Using \(\widehat{\Sigma}(\mathfrak{g})\), we find that the extended spectrum of \(\mathfrak{g}\) is equal to \(\{-3^{2},-2^{4},-1^{7},0^{9},1^{7},2^{4},3^{2}\}\)._
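A sketch for the extended spectrum (all off-diagonal weights, together with \(n-1\) diagonal zeros, in keeping with the convention of excluding a single 0), reusing `weight_table` and `Counter` from above:

```python
def extended_spectrum(top, bottom):
    """Extended spectrum: every w(P_{i,j}) with i != j, plus n - 1 zeros."""
    n = sum(top)
    w = weight_table(top, bottom)
    vals = [0] * (n - 1)
    vals += [w[(i, j)] for i in range(1, n + 1)
                       for j in range(1, n + 1) if i != j]
    return Counter(vals)

print(sorted(extended_spectrum([2, 4], [1, 2, 3]).items()))
# [(-3, 2), (-2, 4), (-1, 7), (0, 9), (1, 7), (2, 4), (3, 2)] -- Example 10
```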
**Remark 11**.: _Let \(\mathfrak{g}\) be a Frobenius, type-A seaweed. Since \(w(P_{i,j}(\mathfrak{g}))=-w(P_{j,i}(\mathfrak{g}))\), it follows that \(\widehat{\Sigma}(\mathfrak{g})\) is skew-symmetric._
## 3 Maximal parabolic
Figure 3: The extended spectrum matrix \(\widehat{\Sigma}(\mathfrak{g})\) of Example 10

In this section, we consider the spectra of Frobenius, maximal parabolic, type-A seaweeds, i.e., Frobenius, type-A seaweeds of the form \(\mathfrak{p}^{A}\frac{a|b}{a+b}\) or \(\mathfrak{p}^{A}\frac{a+b}{a|b}\). It is well known that such algebras are Frobenius if and only if \(\gcd(a,b)=1\).
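This criterion is easy to confirm numerically on small cases with `index_type_A` from the earlier sketch:

```python
from math import gcd

for a in range(1, 9):
    for b in range(1, 9):
        frobenius = index_type_A([a, b], [a + b]) == 0
        assert frobenius == (gcd(a, b) == 1)
print("ind of p^A (a|b)/(a+b) vanishes exactly when gcd(a, b) = 1")
```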
Before calculating any spectra, we prove some structural lemmas. The first group, beginning with Lemma 12 and ending with Corollary 18, concerns general Frobenius, type-A seaweeds. In particular, given a Frobenius, type-A seaweed, we show that switching and reversing the defining compositions does not affect the algebra's (extended) spectrum. The final group of structural lemmas, consisting of Lemmas 21 through 32, concerns Frobenius, maximal parabolic, type-A seaweeds.
**Lemma 12**.: _Let \(\mathfrak{g}_{1}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}}{b_{1}|\ldots|b_{t}}\) and \(\mathfrak{g}_{2}=\mathfrak{p}^{A}\frac{b_{1}|\ldots|b_{t}}{a_{1}|\ldots|a_{m}}\) be Frobenius, type-A seaweeds, where \(n=\sum_{i=1}^{m}a_{i}=\sum_{j=1}^{t}b_{j}\). Then \(w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{j,i}(\mathfrak{g}_{2}))\), for all \(1\leq i,j\leq n.\)_
Proof.: Take an arbitrary path \(P_{i,j}(\mathfrak{g}_{1})\) in \(M(\mathfrak{g}_{1})\). Recall that \(P_{i,j}(\mathfrak{g}_{1})\) is the unique path from \(v_{i}\) to \(v_{j}\) in \(M(\mathfrak{g}_{1})\). Let \(l\) denote a line parallel to the line through the vertices of \(M(\mathfrak{g}_{1}).\) Note that reflecting \(M(\mathfrak{g}_{1})\) about \(l\) results in \(M(\mathfrak{g}_{2})\) with each vertex remaining fixed; that is, vertex \(v_{i}\) of \(M(\mathfrak{g}_{1})\) becomes vertex \(v_{i}\) of \(M(\mathfrak{g}_{2})\), for \(1\leq i\leq n\). Moreover, reflecting \(\overrightarrow{M}(\mathfrak{g}_{1})\) about \(l\) results in the meander \(M(\mathfrak{g}_{2})\) with each edge oriented in the opposite direction to that in \(\overrightarrow{M}(\mathfrak{g}_{2})\). Thus, \(w(P_{i,j}(\mathfrak{g}_{1}))=-w(P_{i,j}(\mathfrak{g}_{2}))=w(P_{j,i}( \mathfrak{g}_{2}))\), for all \(1\leq i,j\leq n\).
**Corollary 13**.: _Let \(\mathfrak{g}_{1}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}}{b_{1}|\ldots|b_{t}}\) and \(\mathfrak{g}_{2}=\mathfrak{p}^{A}\frac{b_{1}|\ldots|b_{t}}{a_{1}|\ldots|a_{m}}\) be Frobenius, type-A seaweeds. Then \(\Sigma(\mathfrak{g}_{1})=\Sigma(\mathfrak{g}_{2})^{t}\) and \(\widehat{\Sigma}(\mathfrak{g}_{1})=\widehat{\Sigma}(\mathfrak{g}_{2})^{t}\)._
Proof.: Applying Lemma 12, it follows that the \(i,j\)-entry of \(\widehat{\Sigma}(\mathfrak{g}_{1})\) is equal to the \(j,i\)-entry of \(\widehat{\Sigma}(\mathfrak{g}_{2})\), i.e., \(\widehat{\Sigma}(\mathfrak{g}_{1})=\widehat{\Sigma}(\mathfrak{g}_{2})^{t}\). Consequently, since \(e_{i,j}\in\mathfrak{g}_{1}\) if and only if \(e_{j,i}\in\mathfrak{g}_{2}\), \(\Sigma(\mathfrak{g}_{1})=\Sigma(\mathfrak{g}_{2})^{t}\).
**Corollary 14**.: _Let \(\mathfrak{g}_{1}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}}{b_{1}|\ldots|b_{t}}\) and \(\mathfrak{g}_{2}=\mathfrak{p}^{A}\frac{b_{1}|\ldots|b_{t}}{a_{1}|\ldots|a_{m}}\) be Frobenius, type-A seaweeds. Then the \((\)extended\()\) spectra of \(\mathfrak{g}_{1}\) and \(\mathfrak{g}_{2}\) are equal._
**Lemma 15**.: _Let \(\mathfrak{g}_{1}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}}{b_{1}|\ldots|b_{t}}\) and \(\mathfrak{g}_{2}=\mathfrak{p}^{A}\frac{a_{m}|\ldots|a_{1}}{b_{t}|\ldots|b_{1}}\) be Frobenius, type-A seaweeds, where \(n=\sum_{i=1}^{m}a_{i}=\sum_{j=1}^{t}b_{j}\). Then \(w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{n-j+1,n-i+1}(\mathfrak{g}_{2}))\), for all \(1\leq i,j\leq n.\)_
Proof.: Take an arbitrary path \(P_{i,j}(\mathfrak{g}_{1})\) in \(M(\mathfrak{g}_{1})\). Let \(l\) denote a line perpendicular to the line through the vertices of \(M(\mathfrak{g}_{1})\). Note that reflecting \(M(\mathfrak{g}_{1})\) about \(l\) results in \(M(\mathfrak{g}_{2})\) with vertex \(v_{i}\) becoming vertex \(v_{n-i+1}\), for \(1\leq i\leq n\). Moreover, reflecting \(\overrightarrow{M}(\mathfrak{g}_{1})\) about \(l\) results in the meander \(M(\mathfrak{g}_{2})\) with each edge oriented in the opposite direction to that in \(\overrightarrow{M}(\mathfrak{g}_{2})\). Thus,
\[w(P_{i,j}(\mathfrak{g}_{1}))=-w(P_{n-i+1,n-j+1}(\mathfrak{g}_{2}))=w(P_{n-j+1, n-i+1}(\mathfrak{g}_{2})).\]
**Corollary 16**.: _Let \(\mathfrak{g}_{1}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}}{b_{1}|\ldots|b_{t}}\) and \(\mathfrak{g}_{2}=\mathfrak{p}^{A}\frac{a_{m}|\ldots|a_{1}}{b_{t}|\ldots|b_{1}}\) be Frobenius, type-A seaweeds. Then \(\Sigma(\mathfrak{g}_{1})=\Sigma(\mathfrak{g}_{2})^{\tau}\) and \(\widehat{\Sigma}(\mathfrak{g}_{1})=\widehat{\Sigma}(\mathfrak{g}_{2})^{\tau},\) where \(\tau\) denotes transpose with respect to the antidiagonal._
Proof.: Applying Lemma 15, it follows that the \(i,j\)-entry of \(\widehat{\Sigma}(\mathfrak{g}_{1})\) is equal to the \(n-j+1,n-i+1\)-entry of \(\widehat{\Sigma}(\mathfrak{g}_{2})\), i.e., \(\widehat{\Sigma}(\mathfrak{g}_{1})=\widehat{\Sigma}(\mathfrak{g}_{2})^{\tau}.\) Consequently, since \(e_{i,j}\in\mathfrak{g}_{1}\) if and only if \(e_{n-j+1,n-i+1}\in\mathfrak{g}_{2}\), \(\Sigma(\mathfrak{g}_{1})=\Sigma(\mathfrak{g}_{2})^{\tau}\).
**Corollary 17**.: _Let \(\mathfrak{g}_{1}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}}{b_{1}|\ldots|b_{t}}\) and \(\mathfrak{g}_{2}=\mathfrak{p}^{A}\frac{a_{m}|\ldots|a_{1}}{b_{t}|\ldots|b_{1}}\) be Frobenius, type-A seaweeds. Then the \((\)extended\()\) spectra of \(\mathfrak{g}_{1}\) and \(\mathfrak{g}_{2}\) are equal._
**Corollary 18**.: _Let \(\mathfrak{g}_{1}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}}{b_{1}|\ldots|b_{t}}\) and \(\mathfrak{g}_{2}=\mathfrak{p}^{A}\frac{b_{t}|\ldots|b_{1}}{a_{m}|\ldots|a_{1}}\) be Frobenius, type-A seaweeds. Then the \((\)extended\()\) spectra of \(\mathfrak{g}_{1}\) and \(\mathfrak{g}_{2}\) are equal._
Proof.: The result follows upon combining Corollaries 14 and 17.
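These invariance statements can be sanity-checked numerically with the `spectrum` sketch above; for instance:

```python
top, bottom = [2, 4], [1, 2, 3]
s = spectrum(top, bottom)
assert s == spectrum(bottom, top)               # Corollary 14: swap compositions
assert s == spectrum(top[::-1], bottom[::-1])   # Corollary 17: reverse both
assert s == spectrum(bottom[::-1], top[::-1])   # Corollary 18: swap and reverse
print("Corollaries 14, 17, and 18 confirmed on this example")
```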
**Remark 19**.: _One can show that, for fixed compositions \((a_{1},\ldots,a_{m})\) and \((b_{1},\ldots,b_{t})\) of \(n\), the type-A seaweeds \(\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}}{b_{1}|\ldots|b_{t}}\), \(\mathfrak{p}^{A}\frac{b_{1}|\ldots|b_{t}}{a_{1}|\ldots|a_{m}}\), \(\mathfrak{p}^{A}\frac{a_{m}|\ldots|a_{1}}{b_{t}|\ldots|b_{1}}\), and \(\mathfrak{p}^{A}\frac{b_{t}|\ldots|b_{1}}{a_{m}|\ldots|a_{1}}\) are isomorphic, regardless of their \((\)shared\()\) respective index values._
Now, turning to Frobenius, maximal parabolic, type-A seaweeds, the following lemmas relate the spectra of "bigger" algebras to naturally associated "smaller" algebras. First, given such an algebra \(\mathfrak{g}=\mathfrak{p}^{A}\frac{a|b}{n}\) with \(a>b\), Lemma 21 below relates the values in the top left \(a\times a\) block of \(\Sigma(\mathfrak{g})\) to the values contained in \(\widehat{\Sigma}(\mathfrak{g}^{\prime})\), where \(\mathfrak{g}^{\prime}\) is a seaweed subalgebra of \(\mathfrak{sl}(a)\). To aid the proof of Lemma 21, we first illustrate the result in Example 20.
**Example 20**.: _Let \(\mathfrak{g}=\mathfrak{p}^{A}\frac{5|2}{7}\) and \(\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{3|2}{5}\). See Figure 4 for (a)\(\Sigma(\mathfrak{g})\) and (b)\(M(\mathfrak{g})\), and see Figure 5 for (a)\(\widehat{\Sigma}(\mathfrak{g}^{\prime})\) and (b)\(M(\mathfrak{g}^{\prime})\). Note that the top left \(5\times 5\) block of \(\Sigma(\mathfrak{g})\)\((\)to the left of the vertical dashed line\()\) consists of the same multiset of values as \(\widehat{\Sigma}(\mathfrak{g}^{\prime})\). Moreover, note that there is a copy of \(M(\mathfrak{g}^{\prime})\) inside of \(M(\mathfrak{g})\). In particular, identifying the vertices connected by dashed edges in Figure 4 (b) and then rotating the resulting graph by 180 degrees yields \(M(\mathfrak{g}^{\prime})\)._
**Lemma 21**.: _Let \(k_{1},k_{2}\in\mathbb{Z}_{>0}\) satisfy \(\gcd(k_{1},k_{2})=1\). If \(\mathfrak{g}_{1}=\mathfrak{p}^{A}\frac{mk_{1}+k_{2}|k_{1}}{(m+1)k_{1}+k_{2}}\) and \(\mathfrak{g}_{2}=\mathfrak{p}^{A}\frac{(m-1)k_{1}+k_{2}|k_{1}}{mk_{1}+k_{2}}\), then the block corresponding to rows \(\{1,\ldots,mk_{1}+k_{2}\}\) and columns \(\{1,\ldots,mk_{1}+k_{2}\}\) of \(\Sigma(\mathfrak{g}_{1})\) consists of the same multiset of values as \(\widehat{\Sigma}(\mathfrak{g}_{2})\)._
Proof.: Define the sets
\[H=\{1,\ldots,mk_{1}+k_{2}\}\quad\text{and}\quad T=\{mk_{1}+k_{2}+1,\ldots,(m+1 )k_{1}+k_{2}\}.\]
Note that \(\{v_{i}\mid i\in H\}\) and \(\{v_{i}\mid i\in T\}\) form a partition of the vertices of \(M(\mathfrak{g}_{1})\). The block \(\Delta\) of \(\Sigma(\mathfrak{g}_{1})\) corresponding to rows \(r\in H\) and columns \(c\in H\) consists of the weights of paths between the vertices of \(\{v_{i}\mid i\in H\}\) in \(M(\mathfrak{g}_{1})\). Clearly, both \(\Delta\) and \(\widehat{\Sigma}(\mathfrak{g}_{2})\) have the same number, i.e., \(mk_{1}+k_{2}\), of \(0\)'s on their respective diagonals. Thus, we need to show that \(\Delta\) and \(\widehat{\Sigma}(\mathfrak{g}_{2})\) have the same multisets of values corresponding to off-diagonal entries. To do so, we define a weight-preserving bijection
\[\phi:\ \{P_{i,j}(\mathfrak{g}_{1})\mid i,j\in H,\ i\neq j\}\to\{P_{i,j}( \mathfrak{g}_{2})\mid i,j\in H,\ i\neq j\},\]
i.e., a weight-preserving bijection from paths in \(\overrightarrow{M}(\mathfrak{g}_{1})\) between distinct vertices \(v_{i}\) and \(v_{j}\) with \(i,j\in H\) to all paths between distinct vertices in \(\overrightarrow{M}(\mathfrak{g}_{2})\). To aid in defining \(\phi\), we make use of a transformation \(\mathcal{S}\) of \(\overrightarrow{M}(\mathfrak{g}_{1})\) which is defined as follows. If \(v_{h}\) is adjacent to \(v_{t}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\), for \(h\in H\) and \(t\in T\), then identify \(v_{h}\) and \(v_{t}\) and remove any edges between them. Let \(\mathfrak{g}_{3}=\mathfrak{p}^{A}\frac{mk_{1}+k_{2}}{k_{1}|(m-1)k_{1}+k_{2}}\). Note that \(\mathcal{S}(\overrightarrow{M}(\mathfrak{g}_{1}))=\overrightarrow{M}( \mathfrak{g}_{3})\) with \(\mathcal{S}\) mapping vertices \(v_{i}\) and \(v_{(m+1)k_{1}+k_{2}-i+1}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\) to \(v_{i}\) in \(\overrightarrow{M}(\mathfrak{g}_{3})\), for \(1\leq i\leq k_{1}\), and vertices \(v_{j}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\) to \(v_{j}\) in \(\overrightarrow{M}(\mathfrak{g}_{3})\), for \(k_{1}+1\leq j\leq mk_{1}+k_{2}\). We claim that \(w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{i,j}(\mathfrak{g}_{3}))\), for all distinct \(i,j\in H\).
To establish the claim, take distinct \(i,j\in H\). If \(P_{i,j}(\mathfrak{g}_{1})\) contains no \(v_{t}\), for \(t\in T\), then \(P_{i,j}(\mathfrak{g}_{1})\) along with its orientation in \(\overrightarrow{M}(\mathfrak{g}_{1})\) remain invariant under \(\mathcal{S}\); that is, \(w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{i,j}(\mathfrak{g}_{3}))\). Otherwise, \(P_{i,j}(\mathfrak{g}_{1})\) contains a vertex \(v_{t}\), for \(t\in T\). In this case, \(P_{i,j}(\mathfrak{g}_{1})\) contains subpaths \(P_{h_{1},h_{2}}(\mathfrak{g}_{1})\) defined by sequences of vertices \(v_{h_{1}},v_{t_{1}},v_{t_{2}},v_{h_{2}}\), where
* \(h_{1},h_{2}\in H\) and \(t_{1},t_{2}\in T\) are distinct,
* \(v_{h_{1}}\) is adjacent to \(v_{t_{1}}\) via a bottom edge directed from \(v_{h_{1}}\) to \(v_{t_{1}}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\),
* \(v_{t_{1}}\) is adjacent to \(v_{t_{2}}\) via a top edge in \(\overrightarrow{M}(\mathfrak{g}_{1})\), and
* \(v_{t_{2}}\) is adjacent to \(v_{h_{2}}\) via a bottom edge directed from \(v_{h_{2}}\) to \(v_{t_{2}}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\).
To establish the claim in this case, it suffices to show that the weights of such subpaths are invariant under \(\mathcal{S}\). Note that
\[w(P_{h_{1},h_{2}}(\mathfrak{g}_{1})) =w(P_{h_{1},t_{1}}(\mathfrak{g}_{1}))+w(P_{t_{1},t_{2}}(\mathfrak{ g}_{1}))+w(P_{t_{2},h_{2}}(\mathfrak{g}_{1}))\] \[=1+w(P_{t_{1},t_{2}}(\mathfrak{g}_{1}))-1\] \[=w(P_{t_{1},t_{2}}(\mathfrak{g}_{1}))\] \[=w(P_{h_{1},h_{2}}(\mathfrak{g}_{3}))\] \[=w(\mathcal{S}(P_{h_{1},h_{2}}(\mathfrak{g}_{1}))),\]
where for the penultimate equality we have used the fact that \(\mathcal{S}\) maps the vertices \(v_{t_{i}}\) to \(v_{h_{i}}\), for \(i=1,2\), and the edge connecting \(v_{t_{1}}\) and \(v_{t_{2}}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\) to the edge connecting \(v_{h_{1}}\) and \(v_{h_{2}}\) in \(\overrightarrow{M}(\mathfrak{g}_{3})\), preserving their orientations. Thus, \(w(P_{h_{1},h_{2}}(\mathfrak{g}_{1}))=w(\mathcal{S}(P_{h_{1},h_{2}}(\mathfrak{ g}_{1})))\), and we may conclude that \(w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{i,j}(\mathfrak{g}_{3}))\). Consequently, \(w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{i,j}(\mathfrak{g}_{3}))\), for all distinct \(i,j\in H\), as claimed.
Now, applying Lemmas 12 and 15, it follows that \(w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{mk_{1}+k_{2}-i+1,mk_{1}+k_{2}-j+1}(\mathfrak{ g}_{2}))\), for all distinct \(i,j\in H\). Considering our work above, it follows that we can form our desired weight-preserving bijection \(\phi\) by mapping the path \(P_{i,j}(\mathfrak{g}_{1})\) to \(P_{mk_{1}+k_{2}-i+1,mk_{1}+k_{2}-j+1}(\mathfrak{g}_{2})\), for all distinct \(i,j\in H\). The result follows.
**Remark 22**.: _In fact, for \(\mathfrak{g}_{1}\) and \(\mathfrak{g}_{2}\) as in Lemma 21, the block corresponding to rows \(\{1,\ldots,mk_{1}+k_{2}\}\) and columns \(\{1,\ldots,mk_{1}+k_{2}\}\) of \(\Sigma(\mathfrak{g}_{1})\) is equal to \(-\widehat{\Sigma}(\mathfrak{g}_{2})^{t}.\)_
Next, given a Frobenius, maximal parabolic, type-A seaweed \(\mathfrak{g}=\mathfrak{p}^{A}\frac{a|b}{n}\) with \(a>b,\) we show how the values in the bottom right \(b\times b\) block of \(\Sigma(\mathfrak{g})\) can be related to the values contained in \(\widehat{\Sigma}(\mathfrak{g}^{\prime}),\) where \(\mathfrak{g}^{\prime}\) is a seaweed subalgebra of \(\mathfrak{sl}(b)\). Unlike in Lemma 21, we have to consider two separate cases corresponding to \(\lfloor\frac{a}{b}\rfloor=1\) and \(\lfloor\frac{a}{b}\rfloor>1\). Lemma 24 addresses the case \(\lfloor\frac{a}{b}\rfloor=1\). An illustration of Lemma 24 is given in Example 23.
**Example 23**.: _Let \(\mathfrak{g}=\mathfrak{p}^{A}\frac{5|3}{8}\) and \(\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{1|2}{3}.\) See Figure 6 for (a)\(\Sigma(\mathfrak{g})\) and (b)\(M(\mathfrak{g}),\) and see Figure 7 for (a)\(\widehat{\Sigma}(\mathfrak{g}^{\prime})\) and (b)\(M(\mathfrak{g}^{\prime}).\) Note that the bottom right \(3\times 3\) block \((\)below the horizontal dashed line\()\) of \(\Sigma(\mathfrak{g})\) consists of the same multiset of values as \(\widehat{\Sigma}(\mathfrak{g}^{\prime}).\) Moreover, note that there is a copy of \(M(\mathfrak{g}^{\prime})\) inside of \(M(\mathfrak{g}).\) In particular, identifying the vertices connected by dashed edges in Figure 6 (b) and then rotating the resulting graph by 180 degrees yields \(M(\mathfrak{g}^{\prime}).\)_
**Lemma 24**.: _Let \(k_{1},k_{2}\in\mathbb{Z}_{>0}\) satisfy \(k_{1}>k_{2}\) and \(\gcd(k_{1},k_{2})=1\). If \(\mathfrak{g}_{1}=\mathfrak{p}^{A}\frac{k_{1}+k_{2}|k_{1}}{2k_{1}+k_{2}}\) and \(\mathfrak{g}_{2}=\mathfrak{p}^{A}\frac{k_{1}-k_{2}|k_{2}}{k_{1}}\), then the block corresponding to rows \(\{k_{1}+k_{2}+1,\ldots,2k_{1}+k_{2}\}\) and columns \(\{k_{1}+k_{2}+1,\ldots,2k_{1}+k_{2}\}\) in \(\Sigma(\mathfrak{g}_{1})\) consists of the same multiset of values as \(\widehat{\Sigma}(\mathfrak{g}_{2})\)._
Proof.: Define the sets
\[H_{1}=\{1,\ldots,k_{2}\},\quad H_{2}=\{k_{2}{+}1,\ldots,k_{1}\},\quad H_{3}=\{ k_{1}{+}1,\ldots,k_{1}{+}k_{2}\},\quad\text{and}\quad T=\{k_{1}{+}k_{2}{+}1, \ldots,2k_{1}{+}k_{2}\}.\]
Note that \(\{v_{i}\ |\ i\in H_{1}\},\{v_{i}\ |\ i\in H_{2}\},\{v_{i}\ |\ i\in H_{3}\}\), and \(\{v_{i}\ |\ i\in T\}\) form a partition of the vertices of \(M(\mathfrak{g}_{1})\) and are defined in such a way that top edges in \(M(\mathfrak{g}_{1})\) only connect pairs of vertices \(u\) and \(v\) with
* \(u\in\{v_{i}\ |\ i\in H_{1}\}\) and \(v\in\{v_{i}\ |\ i\in H_{3}\}\),
* \(u,v\in\{v_{i}\ |\ i\in H_{2}\}\), or
* \(u,v\in\{v_{i}\ |\ i\in T\}\)
and bottom edges in \(M(\mathfrak{g}_{1})\) only connect pairs of vertices \(u\) and \(v\) with
* \(u\in\{v_{i}\ |\ i\in H_{1}\cup H_{2}\}\) and \(v\in\{v_{i}\ |\ i\in T\}\) or
* \(u,v\in\{v_{i}\ |\ i\in H_{3}\}\).
Now, the block \(\Delta\) of \(\Sigma(\mathfrak{g}_{1})\) corresponding to rows \(r\in T\) and columns \(c\in T\) consists of the values \(w(P_{i,j}(\mathfrak{g}_{1}))\), for \(i,j\in T\). Clearly, both \(\Delta\) and \(\widehat{\Sigma}(\mathfrak{g}_{2})\) have the same number, i.e., \(k_{1}\), of \(0\)'s on their respective diagonals. Thus, we need to show that \(\Delta\) and \(\widehat{\Sigma}(\mathfrak{g}_{2})\) have the same multisets of values corresponding to off-diagonal entries. To do so, we define a weight-preserving bijection
\[\phi:\ \{P_{i,j}(\mathfrak{g}_{1})\ |\ i,j\in T,\ i\neq j\}\to\{P_{i,j}( \mathfrak{g}_{2})\ |\ i,j\in H_{1}\cup H_{2},\ i\neq j\},\]
i.e., a weight-preserving bijection from paths in \(\overrightarrow{M}(\mathfrak{g}_{1})\) between distinct vertices \(u,v\in\{v_{i}\ |\ i\in T\}\) to all paths between distinct vertices in \(\overrightarrow{M}(\mathfrak{g}_{2})\). To aid in defining our bijection, we make use of a transformation \(\mathcal{S}\) of \(\overrightarrow{M}(\mathfrak{g}_{1})\) which is defined as follows.
1. If \(u\in\{v_{i}\ |\ i\in T\}\) is adjacent to \(v\in\{v_{i}\ |\ i\in H_{1}\cup H_{2}\}\), then identify \(u\) and \(v\) while removing any edges between them, and then
2. if \(u\in\{v_{i}\ |\ i\in H_{3}\}\) is adjacent to \(v\in\{v_{i}\ |\ i\in H_{1}\}\), then identify \(u\) and \(v\) while removing any edges between them.
Let \(\mathfrak{g}_{3}=\mathfrak{p}^{A}\frac{k_{2}|k_{1}-k_{2}}{k_{1}}.\) Note that \(\mathcal{S}(\overrightarrow{M}(\mathfrak{g}_{1}))=\overrightarrow{M}( \mathfrak{g}_{3})\) with \(\mathcal{S}\) mapping vertices \(v_{i}\) and \(v_{2k_{1}+k_{2}-i+1}\) in \(M(\mathfrak{g}_{1})\) to \(v_{i}\) in \(M(\mathfrak{g}_{3})\), for \(1\leq i\leq k_{1}\), and vertices \(v_{k_{1}+k_{2}-j+1}\) in \(M(\mathfrak{g}_{1})\) to \(v_{j}\) in \(M(\mathfrak{g}_{3})\), for \(1\leq j\leq k_{2}\). We claim that \(w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{2k_{1}+k_{2}-i+1,2k_{1}+k_{2}-j+1}( \mathfrak{g}_{3}))\), for distinct \(i,j\in T\).
To establish the claim, take distinct \(i,j\in T\). If \(v_{i}\) and \(v_{j}\) are adjacent in \(M(\mathfrak{g}_{1})\), then \(\mathcal{S}\) maps \(P_{i,j}(\mathfrak{g}_{1})\) along with its orientation in \(\overrightarrow{M}(\mathfrak{g}_{1})\) to \(P_{2k_{1}+k_{2}-i+1,2k_{1}+k_{2}-j+1}(\mathfrak{g}_{3})\) along with its orientation in \(\overrightarrow{M}(\mathfrak{g}_{3})\); that is, \(w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{2k_{1}+k_{2}-i+1,2k_{1}+k_{2}-j+1}(\mathfrak{g}_{3})).\) On the other hand, if \(v_{i}\) and \(v_{j}\) are not adjacent in \(M(\mathfrak{g}_{1})\), then \(P_{i,j}(\mathfrak{g}_{1})\) contains subpaths \(P_{t_{1},t_{2}}(\mathfrak{g}_{1})\) defined by sequences of vertices \(v_{t_{1}},v_{h_{1}},\ldots,v_{h_{\ell}},v_{t_{2}}\) with \(t_{1},t_{2}\in T\) and \(h_{1},\ldots,h_{\ell}\in H_{1}\cup H_{2}\cup H_{3}\). To establish the claim, it suffices to show that the weights of such subpaths are invariant under \(\mathcal{S}\). There are two cases.
**Case 1:** Subpaths \(P_{t_{1},t_{2}}\) defined by a sequence of vertices \(v_{t_{1}},v_{h_{1}^{1}},v_{h_{1}^{3}},v_{h_{2}^{2}},v_{h_{2}^{1}},v_{t_{2}}\), where
* \(t_{1},t_{2}\in T\), \(h_{1}^{1},h_{2}^{1}\in H_{1}\), and \(h_{1}^{3},h_{2}^{3}\in H_{3}\),
* \(v_{t_{1}}\) is adjacent to \(v_{h_{1}^{1}}\) via a bottom edge directed from \(v_{h_{1}^{1}}\) to \(v_{t_{1}}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\),
* \(v_{h_{1}^{1}}\) is adjacent to \(v_{h_{1}^{3}}\) via a top edge directed from \(v_{h_{1}^{3}}\) to \(v_{h_{1}^{1}}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\),
* \(v_{h_{1}^{3}}\) is adjacent to \(v_{h_{2}^{3}}\) via a bottom edge in \(\overrightarrow{M}(\mathfrak{g}_{1})\),
* \(v_{h_{2}^{3}}\) is adjacent to \(v_{h_{2}^{1}}\) via a top edge directed from \(v_{h_{2}^{3}}\) to \(v_{h_{2}^{1}}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\), and
* \(v_{h_{2}^{1}}\) is adjacent to \(v_{t_{2}}\) via a bottom edge directed from \(v_{h_{2}^{1}}\) to \(v_{t_{2}}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\).
Note that
\[w(P_{t_{1},t_{2}}(\mathfrak{g}_{1})) =w(P_{t_{1},h_{1}^{1}}(\mathfrak{g}_{1}))+w(P_{h_{1}^{1},h_{1}^{ 3}}(\mathfrak{g}_{1}))+w(P_{h_{1}^{3},h_{2}^{3}}(\mathfrak{g}_{1}))+w(P_{h_{2 }^{3},h_{2}^{1}}(\mathfrak{g}_{1}))+w(P_{h_{2}^{1},t_{2}}(\mathfrak{g}_{1}))\] \[=-1-1+w(P_{h_{1}^{3},h_{2}^{3}}(\mathfrak{g}_{1}))+1+1\] \[=w(P_{h_{1}^{3},h_{2}^{3}}(\mathfrak{g}_{1}))\] \[=w(P_{h_{1}^{1},h_{2}^{1}}(\mathfrak{g}_{3}))\] \[=w(\mathcal{S}(P_{t_{1},t_{2}}(\mathfrak{g}_{1}))),\]
where for the penultimate equality we have used the fact that \(\mathcal{S}\) maps \(v_{h_{i}^{3}}\) to \(v_{h_{i}^{1}}\), for \(i=1,2\), as well as the edge connecting \(v_{h_{1}^{3}}\) and \(v_{h_{2}^{3}}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\) to the edge connecting \(v_{h_{1}^{1}}\) and \(v_{h_{2}^{1}}\) in \(\overrightarrow{M}(\mathfrak{g}_{3})\), preserving their orientations. Thus, it follows that \(w(P_{t_{1},t_{2}}(\mathfrak{g}_{1}))\) is invariant under \(\mathcal{S}\).
**Case 2:** Subpaths \(P_{t_{1},t_{2}}(\mathfrak{g}_{1})\) defined by a sequence of vertices \(v_{t_{1}},v_{h_{1}^{2}},v_{h_{2}^{2}},v_{t_{2}}\), where
* \(t_{1},t_{2}\in T\) and \(h_{1}^{2},h_{2}^{2}\in H_{2}\),
* \(v_{t_{1}}\) is adjacent to \(v_{h_{1}^{2}}\) via a bottom edge directed from \(v_{h_{1}^{2}}\) to \(v_{t_{1}}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\),
* \(v_{h_{1}^{2}}\) is adjacent to \(v_{h_{2}^{2}}\) via a top edge in \(\overrightarrow{M}(\mathfrak{g}_{1})\), and
* \(v_{h_{2}^{2}}\) is adjacent to \(v_{t_{2}}\) via a bottom edge directed from \(v_{h_{2}^{2}}\) to \(v_{t_{2}}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\).
Note that
\[w(P_{t_{1},t_{2}}(\mathfrak{g}_{1})) =w(P_{t_{1},h_{1}^{2}}(\mathfrak{g}_{1}))+w(P_{h_{1}^{2},h_{2}^{ 2}}(\mathfrak{g}_{1}))+w(P_{h_{2}^{2},t_{2}}(\mathfrak{g}_{1}))\] \[=-1+w(P_{h_{1}^{2},h_{2}^{2}}(\mathfrak{g}_{1}))+1\] \[=w(P_{h_{1}^{2},h_{2}^{2}}(\mathfrak{g}_{1}))\] \[=w(P_{h_{1}^{2},h_{2}^{2}}(\mathfrak{g}_{3}))\] \[=w(\mathcal{S}(P_{t_{1},t_{2}}(\mathfrak{g}_{1}))).\]
Thus, it follows that \(w(P_{t_{1},t_{2}}(\mathfrak{g}_{1}))\) is invariant under \(\mathcal{S}\).
Consequently, \(w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{2k_{1}+k_{2}-i+1,2k_{1}+k_{2}-j+1}( \mathfrak{g}_{3}))\), for all distinct \(i,j\in T\), as claimed.
Now, applying Lemma 15, it follows that
\[w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{j-k_{1}-k_{2},i-k_{1}-k_{2}}(\mathfrak{g}_{2 })),\]
for all distinct \(i,j\in T\). Therefore, the desired weight-preserving bijection \(\phi\) is given by mapping the path \(P_{i,j}(\mathfrak{g}_{1})\) to \(P_{j-k_{1}-k_{2},i-k_{1}-k_{2}}(\mathfrak{g}_{2})\), for all distinct \(i,j\in T.\) The result follows.
**Remark 25**.: _In fact, for \(\mathfrak{g}_{1}\) and \(\mathfrak{g}_{2}\) as in Lemma 24, the block corresponding to rows \(\{k_{1}+k_{2}+1,\ldots,2k_{1}+k_{2}\}\) and columns \(\{k_{1}+k_{2}+1,\ldots,2k_{1}+k_{2}\}\) of \(\Sigma(\mathfrak{g}_{1})\) is the same as \(\widehat{\Sigma}(\mathfrak{g}_{2})^{t}\)._
Now, in Lemma 27 below, we extend Lemma 24 so that it applies to the bottom right \(b\times b\) block of \(\Sigma(\mathfrak{g})\) when \(\lfloor\frac{a}{b}\rfloor>1\). The following example provides an illustration of Lemma 27.
**Example 26**.: _Let \(\mathfrak{g}=\mathfrak{p}^{A}\frac{5|2}{7}\) and \(\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{1|1}{2}\). See Figure 4 above for (a) \(\Sigma(\mathfrak{g})\) and (b) \(M(\mathfrak{g})\) and Figure 8 for (a) \(\widehat{\Sigma}(\mathfrak{g}^{\prime})\) and (b) \(M(\mathfrak{g}^{\prime})\). Note that the bottom right \(2\times 2\) block of \(\Sigma(\mathfrak{g})\) consists of the same multiset of values as \(\widehat{\Sigma}(\mathfrak{g}^{\prime})\). Moreover, note that there is a copy of \(M(\mathfrak{g}^{\prime})\) inside of \(M(\mathfrak{g})\). In particular, identifying vertices connected by all edges except \(\{v_{6},v_{7}\}\) and then rotating the resulting graph by 180 degrees yields \(M(\mathfrak{g}^{\prime})\)._
**Lemma 27**.: _Let \(k_{1},k_{2}\in\mathbb{Z}_{>0}\) satisfy \(k_{1}>k_{2}\) and \(\gcd(k_{1},k_{2})=1\). If \(\mathfrak{g}_{1}=\mathfrak{p}^{A}\frac{mk_{1}+k_{2}|k_{1}}{(m+1)k_{1}+k_{2}}\) and \(\mathfrak{g}_{2}=\mathfrak{p}^{A}\frac{k_{1}-k_{2}|k_{2}}{k_{1}}\), then the block corresponding to rows \(\{mk_{1}+k_{2}+1,\ldots,(m+1)k_{1}+k_{2}\}\) and columns \(\{mk_{1}+k_{2}+1,\ldots,(m+1)k_{1}+k_{2}\}\) of \(\Sigma(\mathfrak{g}_{1})\) consists of the same values as \(\widehat{\Sigma}(\mathfrak{g}_{2})\)._
Proof.: Note that if \(m=1\), then the result follows by Lemma 24. So, assume that \(m>1\). Define the sets of vertices
\[H=\{1,\ldots,mk_{1}+k_{2}\},\quad H_{1}=\{1,\ldots,k_{1}\},\quad T_{1}=\{mk_{ 1}+k_{2}+1,\ldots,(m+1)k_{1}+k_{2}\},\]
and
\[T_{2}=\{(m-1)k_{1}+k_{2}+1,\ldots,mk_{1}+k_{2}\}.\]
Let \(\mathfrak{g}_{3}=\mathfrak{p}^{A}\frac{(m-1)k_{1}+k_{2}|k_{1}}{mk_{1}+k_{2}}\). First, we show that the block \(\Delta_{1}\) of \(\Sigma(\mathfrak{g}_{1})\) corresponding to rows \(r\in T_{1}\) and columns \(c\in T_{1}\) contains the same values as the block \(\Delta_{2}\) of \(\Sigma(\mathfrak{g}_{3})\) corresponding to rows \(r\in T_{2}\) and columns \(c\in T_{2}\). The block \(\Delta_{1}\) of \(\Sigma(\mathfrak{g}_{1})\) consists of the values \(w(P_{i,j}(\mathfrak{g}_{1}))\), for \(i,j\in T_{1}\), and the block \(\Delta_{2}\) of \(\Sigma(\mathfrak{g}_{3})\) consists of the values \(w(P_{i,j}(\mathfrak{g}_{3}))\), for \(i,j\in T_{2}\). Clearly, \(\Delta_{1}\) and \(\Delta_{2}\) have the same number, i.e., \(k_{1}\), of \(0\)'s on their respective diagonals. Thus, we need to show that \(\Delta_{1}\) and \(\Delta_{2}\) have the same multisets of values corresponding to off-diagonal entries. To do so, we define a weight-preserving bijection
\[\phi:\ \{P_{i,j}(\mathfrak{g}_{1})\ |\ i,j\in T_{1},\ i\neq j\}\to\{P_{i,j}( \mathfrak{g}_{3})\ |\ i,j\in T_{2},\ i\neq j\}.\]
To aid in defining \(\phi\), we make use of a transformation \(\mathcal{S}\) of \(\overrightarrow{M}(\mathfrak{g}_{1})\) which is defined as follows. Note that in \(M(\mathfrak{g}_{1})\) (resp., \(M(\mathfrak{g}_{3})\)), each vertex \(u\in\{v_{i}\ |\ i\in H_{1}\}\) is adjacent to a unique vertex \(v\in\{v_{i}\ |\ i\in T_{1}\}\) (resp., \(v\in\{v_{i}\ |\ i\in T_{2}\}\)) via a bottom edge. If \(u\in\{v_{i}\ |\ i\in H_{1}\}\) is adjacent to \(v\in\{v_{i}\ |\ i\in T_{1}\}\), then \(\mathcal{S}\) identifies \(u\) and \(v\) while removing any edges between them. Let \(\mathfrak{g}_{4}=\mathfrak{p}^{A}\frac{mk_{1}+k_{2}}{k_{1}|(m-1)k_{1}+k_{2}}\). Note that \(\mathcal{S}(\overrightarrow{M}(\mathfrak{g}_{1}))=\overrightarrow{M}(\mathfrak{g}_{4})\) with \(\mathcal{S}\) mapping vertices \(v_{i}\) and \(v_{(m+1)k_{1}+k_{2}-i+1}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\) to \(v_{i}\) in \(\overrightarrow{M}(\mathfrak{g}_{4})\), for \(1\leq i\leq k_{1}\), and vertices \(v_{i}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\) to \(v_{i}\) in \(\overrightarrow{M}(\mathfrak{g}_{4})\), for \(k_{1}+1\leq i\leq mk_{1}+k_{2}\). We claim that \(w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{(m+1)k_{1}+k_{2}-i+1,(m+1)k_{1}+k_{2}-j+1}(\mathfrak{g}_{4}))\), for distinct \(i,j\in T_{1}\).
To establish the claim, take distinct \(i,j\in T_{1}\). If \(v_{i}\) and \(v_{j}\) are adjacent in \(M(\mathfrak{g}_{1})\), then \(\mathcal{S}\) maps \(P_{i,j}(\mathfrak{g}_{1})\) along with its orientation in \(\overrightarrow{M}(\mathfrak{g}_{1})\) to \(P_{(m+1)k_{1}+k_{2}-i+1,(m+1)k_{1}+k_{2}-j+1}(\mathfrak{g}_{4})\) along with its orientation in
\(\overrightarrow{M}(\mathfrak{g}_{4})\); that is, \(w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{(m+1)k_{1}+k_{2}-i+1,(m+1)k_{1}+k_{2}-j+1}(\mathfrak{g}_{4}))\). On the other hand, if \(v_{i}\) and \(v_{j}\) are not adjacent in \(M(\mathfrak{g}_{1})\), then \(P_{i,j}(\mathfrak{g}_{1})\) contains subpaths of the form \(v_{t_{1}},v_{h_{1}},\ldots,v_{h_{\ell}},v_{t_{2}}\) with \(t_{1},t_{2}\in T_{1}\) and \(h_{1},\ldots,h_{\ell}\in H\). To establish the claim, it suffices to show that the weights of such subpaths \(P_{t_{1},t_{2}}(\mathfrak{g}_{1})\) are invariant under \(\mathcal{S}.\) Since
\[w(P_{t_{1},t_{2}}(\mathfrak{g}_{1})) =w(P_{t_{1},h_{1}}(\mathfrak{g}_{1}))+w(P_{h_{1},h_{\ell}}(\mathfrak{g}_{1}))+w(P_{h_{\ell},t_{2}}(\mathfrak{g}_{1}))\] \[=-1+w(P_{h_{1},h_{\ell}}(\mathfrak{g}_{1}))+1\] \[=w(P_{h_{1},h_{\ell}}(\mathfrak{g}_{1}))\] \[=w(P_{h_{1},h_{\ell}}(\mathfrak{g}_{4}))\] \[=w(\mathcal{S}(P_{t_{1},t_{2}}(\mathfrak{g}_{1}))),\]
the claim follows.
Now, applying Lemmas 12 and 15, it follows that \(w(P_{i,j}(\mathfrak{g}_{1}))=w(P_{i-k_{1},j-k_{1}}(\mathfrak{g}_{3}))\), for all distinct \(i,j\in T_{1}\). Consequently, our desired weight-preserving bijection sends the path \(P_{i,j}(\mathfrak{g}_{1})\) to \(P_{i-k_{1},j-k_{1}}(\mathfrak{g}_{3})\), for all distinct \(i,j\in T_{1}\). As this argument can be repeated until \(\mathfrak{g}_{3}=\mathfrak{p}^{A}\frac{k_{1}+k_{2}|k_{1}}{2k_{1}+k_{2}}\), i.e., the case \(m=1\), the result follows from Lemma 24.
Having now addressed the top left \(a\times a\) block and bottom right \(b\times b\) block of \(\Sigma(\mathfrak{g})\), where \(a>b\) and \(\mathfrak{g}=\mathfrak{p}^{A}\frac{a|b}{n}\) is Frobenius, we proceed to the top right \(a\times b\) block of \(\Sigma(\mathfrak{g}).\) As was the case with the bottom right \(b\times b\) block, we have two cases: one corresponding to \(\lfloor\frac{a}{b}\rfloor=1\) and another corresponding to \(\lfloor\frac{a}{b}\rfloor>1\). The former of the two cases is covered by Lemma 29, and the latter by Lemma 32. Examples 28 and 31 illustrate each result respectively.
**Example 28**.: _Let \(\mathfrak{g}=\mathfrak{p}^{A}\frac{3|2}{5}\) and \(\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{2|1}{3}\). See Figure 9 for (a)\(\Sigma(\mathfrak{g})\) and (b)\(M(\mathfrak{g})\), and see Figure 10 for (a)\(\Sigma(\mathfrak{g}^{\prime})\) and (b)\(M(\mathfrak{g}^{\prime})\). Note that the top right \(3\times 2\) block (outlined by dashed lines) of \(\Sigma(\mathfrak{g})\) consists of the same multiset of values as the top \(2\times 3\) block of \(\Sigma(\mathfrak{g}^{\prime})\) with each value incremented by 1. Moreover, note that there is a copy of \(M(\mathfrak{g}^{\prime})\) inside of \(M(\mathfrak{g})\). In particular, identifying the vertices connected by dashed edges in Figure 9 (b) and then reflecting the resulting graph across the horizontal line through the vertices yields \(M(\mathfrak{g}^{\prime})\)._
**Lemma 29**.: _Let \(k_{1},k_{2}\in\mathbb{Z}_{>0}\) satisfy \(k_{1}>k_{2}\) and \(\gcd(k_{1},k_{2})=1\). If \(\mathfrak{g}_{1}=\mathfrak{p}^{A}\frac{k_{1}+k_{2}|k_{1}}{2k_{1}+k_{2}}\) and \(\mathfrak{g}_{2}=\mathfrak{p}^{A}\frac{k_{1}|k_{2}}{k_{1}+k_{2}}\), then the block corresponding to rows \(\{1,\ldots,k_{1}+k_{2}\}\) and columns \(\{k_{1}+k_{2}+1,\ldots,2k_{1}+k_{2}\}\) of \(\Sigma(\mathfrak{g}_{1})\) consists of the same multiset of values as the block corresponding to rows \(\{1,\ldots,k_{1}\}\) and columns \(\{1,\ldots,k_{1}+k_{2}\}\) of \(\Sigma(\mathfrak{g}_{2})\) with each value incremented by 1._
Proof.: Define the sets
\[H_{1}=\{1,\ldots,k_{1}+k_{2}\},\quad H_{2}=\{1,\ldots,k_{1}\},\quad\text{and} \quad T=\{k_{1}+k_{2}+1,\ldots,2k_{1}+k_{2}\}.\]
Note that \(\{v_{i}\ |\ i\in H_{1}\}\) and \(\{v_{i}\ |\ i\in T\}\) form a partition of the vertices of \(\overrightarrow{M}(\mathfrak{g}_{1})\). Also, note that \(H_{1}\), \(H_{2}\), and \(T\) are defined in such a way that top edges in \(M(\mathfrak{g}_{1})\) only connect pairs of vertices \(u\) and \(v\) with
* \(u,v\in\{v_{i}\ |\ i\in H_{1}\}\) or
* \(u,v\in\{v_{i}\ |\ i\in T\}\)
and bottom edges in \(M(\mathfrak{g}_{1})\) only connect pairs of vertices \(u\) and \(v\) with
* \(u\in\{v_{i}\ |\ i\in H_{2}\}\) and \(v\in\{v_{i}\ |\ i\in T\}\) or
* \(u,v\in\{v_{i}\ |\ i\in H_{1}\backslash H_{2}\}\).
Now, the block of \(\Sigma(\mathfrak{g}_{1})\) corresponding to rows \(r\in H_{1}\) and columns \(c\in T\) consists of the values \(w(P_{i,j}(\mathfrak{g}_{1}))\), for \(i\in H_{1}\) and \(j\in T\), and the block of \(\Sigma(\mathfrak{g}_{2})\) corresponding to rows \(r\in H_{2}\) and columns \(c\in H_{1}\) consists of the values \(w(P_{i,j}(\mathfrak{g}_{2}))\), for \(i\in H_{2}\) and \(j\in H_{1}\). To establish the result, we define a bijection
\[\phi:\{P_{i,j}(\mathfrak{g}_{1})\ |\ i\in H_{1},\ j\in T\}\to\{P_{i,j}( \mathfrak{g}_{2})\ |\ i\in H_{2},\ j\in H_{1}\}\]
such that \(w(P_{i,j}(\mathfrak{g}_{1}))=w(\phi(P_{i,j}(\mathfrak{g}_{1})))+1\). To aid in defining \(\phi\), we make use of a transformation \(\mathcal{S}\) of \(\overrightarrow{M}(\mathfrak{g}_{1})\) which is defined as follows. If \(u\in\{v_{i}\ |\ i\in H_{2}\}\) is adjacent to \(v\in\{v_{i}\ |\ i\in T\}\), then identify \(u\) and \(v\) while removing edges between them. Let \(\mathfrak{g}_{3}=\mathfrak{p}^{A}\frac{k_{1}+k_{2}}{k_{1}|k_{2}}\). Note that \(\mathcal{S}(\overrightarrow{M}(\mathfrak{g}_{1}))=\overrightarrow{M}( \mathfrak{g}_{3})\) with \(\mathcal{S}\) mapping vertices \(v_{i}\) and \(v_{2k_{1}+k_{2}-i+1}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\) to \(v_{i}\) in \(\overrightarrow{M}(\mathfrak{g}_{3})\), for \(i\in H_{2}\), and vertices \(v_{i}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\) to \(v_{i}\) in \(\overrightarrow{M}(\mathfrak{g}_{3})\), for \(i\in H_{1}\backslash H_{2}\). We claim that \(w(P_{h,t}(\mathfrak{g}_{1}))=w(\mathcal{S}(P_{h,t}(\mathfrak{g}_{1})))+1\), for all \(h\in H_{1}\) and \(t\in T\).
To establish the claim, take \(h\in H_{1}\) and \(t\in T\). There must exist a unique \(h^{\prime}\in H_{1}\) such that \((v_{h^{\prime}},v_{t})\) is an edge of \(\overrightarrow{M}(\mathfrak{g}_{1})\) and \(\mathcal{S}(P_{h,t}(\mathfrak{g}_{1}))=P_{h,h^{\prime}}(\mathfrak{g}_{3})\). Recall that, in the proof of Lemma 21, it is shown that \(w(P_{h,h^{\prime}}(\mathfrak{g}_{1}))=w(P_{h,h^{\prime}}(\mathfrak{g}_{3}))\). Now, if \((v_{h^{\prime}},v_{t})\) belongs to \(P_{h,t}(\mathfrak{g}_{1})\), then
\[w(P_{h,t}(\mathfrak{g}_{1})) =w(P_{h,h^{\prime}}(\mathfrak{g}_{1}))+w(P_{h^{\prime},t}( \mathfrak{g}_{1}))\] \[=w(P_{h,h^{\prime}}(\mathfrak{g}_{1}))+1\] \[=w(P_{h,h^{\prime}}(\mathfrak{g}_{3}))+1.\]
Otherwise, \(P_{h,h^{\prime}}(\mathfrak{g}_{1})\) consists of \(P_{h,t}(\mathfrak{g}_{1})\) along with the edge \((v_{h^{\prime}},v_{t})\). Consequently,
\[w(P_{h,t}(\mathfrak{g}_{1})) =w(P_{h,h^{\prime}}(\mathfrak{g}_{1}))-w(P_{t,h^{\prime}}(\mathfrak{g}_{1}))\] \[=w(P_{h,h^{\prime}}(\mathfrak{g}_{1}))+1\] \[=w(P_{h,h^{\prime}}(\mathfrak{g}_{3}))+1.\]
Thus, in either case,
\[w(P_{h,t}(\mathfrak{g}_{1}))=w(\mathcal{S}(P_{h,t}(\mathfrak{g}_{1})))+1=w(P_ {h,h^{\prime}}(\mathfrak{g}_{3}))+1,\]
establishing the claim.
Now, applying Lemma 12, we have \(w(P_{h,t}(\mathfrak{g}_{1}))=w(P_{h^{\prime},h}(\mathfrak{g}_{2}))+1\). Therefore, the desired bijection \(\phi\) is given by mapping the path \(P_{i,j}(\mathfrak{g}_{1})\) to \(P_{2k_{1}+k_{2}-j+1,i}(\mathfrak{g}_{2}),\) for all \(i\in H_{1}\) and \(j\in T.\) The result follows.
**Remark 30**.: _In fact, for \(\mathfrak{g}_{1}\) and \(\mathfrak{g}_{2}\) as in Lemma 29, the block corresponding to rows \(\{1,\ldots,k_{1}+k_{2}\}\) and columns \(\{k_{1}+k_{2}+1,\ldots,2k_{1}+k_{2}\}\) of \(\Sigma(\mathfrak{g}_{1})\) is the same as the block corresponding to rows \(\{1,\ldots,k_{1}\}\) and columns \(\{1,\ldots,k_{1}+k_{2}\}\) of \(\Sigma(\mathfrak{g}_{2})\) with every value incremented by 1 and rotated 90 degrees clockwise._
**Example 31**.: _Let \(\mathfrak{g}=\mathfrak{p}^{A}\frac{5|2}{7}\) and \(\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{3|2}{5}\). See Figure 11 for (a)\(\Sigma(\mathfrak{g})\) and (b)\(M(\mathfrak{g})\), and see Figure 12 for (a)\(\Sigma(\mathfrak{g}^{\prime})\) and (b)\(M(\mathfrak{g}^{\prime})\). Note that the top right \(5\times 2\) block (outlined by dashed lines) of \(\Sigma(\mathfrak{g})\) consists of the same multiset of values as the rightmost \(5\times 2\) block of \(\Sigma(\mathfrak{g}^{\prime})\) with each value incremented by 1. Moreover, note that there is a copy of \(M(\mathfrak{g}^{\prime})\) inside of \(M(\mathfrak{g})\). In particular, identifying the vertices connected by dashed edges in Figure 11 (b) and then rotating the resulting graph by 180 degrees yields \(M(\mathfrak{g}^{\prime})\)._
**Lemma 32**.: _Let \(k_{1},k_{2}\in\mathbb{Z}_{>0}\) satisfy \(k_{1}>k_{2}\) and \(\gcd(k_{1},k_{2})=1\). If \(\mathfrak{g}_{1}=\mathfrak{p}^{A}\frac{mk_{1}+k_{2}|k_{1}}{(m+1)k_{1}+k_{2}}\) and \(\mathfrak{g}_{2}=\mathfrak{p}^{A}\frac{(m-1)k_{1}+k_{2}|k_{1}}{mk_{1}+k_{2}}\), then the block corresponding to rows \(\{1,\ldots,mk_{1}+k_{2}\}\) and columns \(\{mk_{1}+k_{2}+1,\ldots,(m+1)k_{1}+k_{2}\}\) of \(\Sigma(\mathfrak{g}_{1})\) consists of the same values as the block corresponding to rows \(\{1,\ldots,mk_{1}+k_{2}\}\) and columns \(\{(m-1)k_{1}+k_{2}+1,\ldots,mk_{1}+k_{2}\}\) of \(\Sigma(\mathfrak{g}_{2})\) with each value incremented by 1._
Proof.: Define the sets
\[H_{1}=\{1,\ldots,k_{1}\},\quad H_{2}=\{k_{1}+1,\ldots,mk_{1}+k_{2}\},\quad T_{ 1}=\{mk_{1}+k_{2}+1,\ldots,(m+1)k_{1}+k_{2}\},\]
and
\[T_{2}=\{(m-1)k_{1}+k_{2}+1,\ldots,mk_{1}+k_{2}\}.\]
Note that \(\{v_{i}\ |\ i\in H_{1}\},\{v_{i}\ |\ i\in H_{2}\}\), and \(\{v_{i}\ |\ i\in T_{1}\}\) form a partition of the vertices of \(M(\mathfrak{g}_{1})\) and are defined in such a way that top edges in \(M(\mathfrak{g}_{1})\) only connect pairs of vertices \(u\) and \(v\) with
* \(u,v\in\{v_{i}\ |\ i\in H_{1}\cup H_{2}\}\) or
* \(u,v\in\{v_{i}\ |\ i\in T_{1}\}\)
and bottom edges in \(M(\mathfrak{g}_{1})\) only connect pairs of vertices \(u\) and \(v\) with
* \(u\in\{v_{i}\ |\ i\in H_{1}\}\) and \(v\in\{v_{i}\ |\ i\in T_{1}\}\) or
* \(u,v\in\{v_{i}\ |\ i\in H_{2}\}\).
Now, the block of \(\Sigma(\mathfrak{g}_{1})\) corresponding to rows \(r\in H_{1}\cup H_{2}\) and columns \(c\in T_{1}\) consists of the values \(w(P_{i,j}(\mathfrak{g}_{1}))\), for \(i\in H_{1}\cup H_{2}\) and \(j\in T_{1}\), and the block of \(\Sigma(\mathfrak{g}_{2})\) corresponding to rows \(r\in H_{1}\cup H_{2}\) and columns \(c\in T_{2}\) consists of the values \(w(P_{i,j}(\mathfrak{g}_{2}))\), for \(i\in H_{1}\cup H_{2}\) and \(j\in T_{2}\). To establish the result, we define a bijection
\[\phi:\{P_{i,j}(\mathfrak{g}_{1})\ |\ i\in H_{1}\cup H_{2},\ j\in T_{1}\} \rightarrow\{P_{i,j}(\mathfrak{g}_{2})\ |\ i\in H_{1}\cup H_{2},\ j\in T_{2}\}\]
such that \(w(P_{i,j}(\mathfrak{g}_{1}))=w(\phi(P_{i,j}(\mathfrak{g}_{1})))+1\). To aid in defining \(\phi\), we make use of a transformation \(\mathcal{S}\) of \(\overrightarrow{M}(\mathfrak{g}_{1})\) which is defined as follows. If \(u\in\{v_{i}\ |\ i\in H_{1}\}\) is adjacent to \(v\in\{v_{i}\ |\ i\in T_{1}\}\), then identify \(u\) and \(v\) while removing edges between them. Let \(\mathfrak{g}_{3}=\mathfrak{p}^{A}\frac{mk_{1}+k_{2}}{k_{1}|(m-1)k_{1}+k_{2}}\). Note that \(\mathcal{S}(\overrightarrow{M}(\mathfrak{g}_{1}))=\overrightarrow{M}(\mathfrak{g}_{3})\) with \(\mathcal{S}\) mapping vertices \(v_{i}\) and \(v_{(m+1)k_{1}+k_{2}-i+1}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\) to \(v_{i}\) in \(\overrightarrow{M}(\mathfrak{g}_{3})\), for \(i\in H_{1}\), and vertices \(v_{i}\) in \(\overrightarrow{M}(\mathfrak{g}_{1})\) to \(v_{i}\) in \(\overrightarrow{M}(\mathfrak{g}_{3})\), for \(i\in H_{2}\). We claim that \(w(P_{h,t}(\mathfrak{g}_{1}))=w(\mathcal{S}(P_{h,t}(\mathfrak{g}_{1})))+1\), for \(h\in H_{1}\cup H_{2}\) and \(t\in T_{1}\).
To establish the claim, take \(h\in H_{1}\cup H_{2}\) and \(t\in T_{1}\). There must exist a unique \(h^{\prime}\in H_{1}\) such that \((v_{h^{\prime}},v_{t})\) is an edge of \(\overrightarrow{M}(\mathfrak{g}_{1})\) and \(\mathcal{S}(P_{h,t}(\mathfrak{g}_{1}))=P_{h,h^{\prime}}(\mathfrak{g}_{3})\). Note that in the proof of Lemma 21 it is shown that \(w(P_{h,h^{\prime}}(\mathfrak{g}_{1}))=w(P_{h,h^{\prime}}(\mathfrak{g}_{3}))\). Now, if \((v_{h^{\prime}},v_{t})\) belongs to \(P_{h,t}(\mathfrak{g}_{1})\), then
\[w(P_{h,t}(\mathfrak{g}_{1})) =w(P_{h,h^{\prime}}(\mathfrak{g}_{1}))+w(P_{h^{\prime},t}( \mathfrak{g}_{1}))\] \[=w(P_{h,h^{\prime}}(\mathfrak{g}_{1}))+1\] \[=w(P_{h,h^{\prime}}(\mathfrak{g}_{3}))+1.\]
Otherwise, \(P_{h,h^{\prime}}(\mathfrak{g}_{1})\) consists of \(P_{h,t}(\mathfrak{g}_{1})\) along with the edge \((v_{h^{\prime}},v_{t})\). Consequently,
\[w(P_{h,t}(\mathfrak{g}_{1})) =w(P_{h,h^{\prime}}(\mathfrak{g}_{1}))-w(P_{t,h^{\prime}}( \mathfrak{g}_{1}))\] \[=w(P_{h,h^{\prime}}(\mathfrak{g}_{1}))+1\] \[=w(P_{h,h^{\prime}}(\mathfrak{g}_{3}))+1.\]
Thus, in either case,
\[w(P_{h,t}(\mathfrak{g}_{1}))=w(\mathcal{S}(P_{h,t}(\mathfrak{g}_{1})))+1=w(P_ {h,h^{\prime}}(\mathfrak{g}_{3}))+1,\]
establishing the claim.
Now, applying Lemmas 12 and 15, we have
\[w(P_{h,t}(\mathfrak{g}_{1}))=w(P_{mk_{1}+k_{2}-h+1,mk_{1}+k_{2}-h^{\prime}+1}( \mathfrak{g}_{2}))+1.\]
Therefore, the desired bijection \(\phi\) is given by mapping the path \(P_{i,j}(\mathfrak{g}_{1})\) to \(P_{mk_{1}+k_{2}-i+1,j-k_{1}}(\mathfrak{g}_{2})\), for all \(i\in H_{1}\cup H_{2}\) and \(j\in T_{1}.\) The result follows.
In the following subsections, we utilize the structural lemmas above to inductively determine closed formulas for the spectra of certain families of Frobenius, maximal parabolic, type-A seaweeds.
### \(\mathfrak{p}^{A}\frac{k|1}{k+1}\)
In this subsection, we compute the (extended) spectra of Frobenius, type-A seaweed algebras of the form \(\mathfrak{p}^{A}\frac{k|1}{k+1}\), for \(k\geq 1\). See Figure 13 for illustrations of the oriented meanders corresponding to \(\mathfrak{p}^{A}\frac{k|1}{k+1}\) for \(k=1,2,\) and \(3\).
To establish our result, we require the following lemma which determines the multiset of values contained in the rightmost \((k+1)\times 1\) block of \(\Sigma(\mathfrak{g})\), for \(\mathfrak{g}=\mathfrak{p}^{A}\frac{k|1}{k+1}\).
**Lemma 33**.: _Let \(\mathfrak{g}_{k}=\mathfrak{p}^{A}\frac{k|1}{k+1}\), for \(k\geq 1\). Then_
\[\{w(P_{i,k+1}(\mathfrak{g}_{k}))\ |\ 1\leq i\leq k+1\}=\{0,1,\ldots,k\}\]
_as multisets._
Proof.: By induction on \(k\). For \(k=1\), using Figure 13 (a) we compute that
\[\{w(P_{i,k+1}(\mathfrak{g}_{k}))\ |\ 1\leq i\leq k+1\}=\{w(P_{1,2}(\mathfrak{g}_{1} )),w(P_{2,2}(\mathfrak{g}_{1}))\}=\{0,1\}.\]
Assume the result holds for \(k-1\geq 1\). Evidently, \(w(P_{k+1,k+1}(\mathfrak{g}_{k}))=0\). Note that the path \(P_{i,k+1}(\mathfrak{g}_{k})\) in \(\overrightarrow{M}(\mathfrak{g}_{k})\), for \(1\leq i<k+1\), must contain the edge \((v_{1},v_{k+1})\). Thus,
\[w(P_{i,k+1}(\mathfrak{g}_{k}))=w(P_{i,1}(\mathfrak{g}_{k}))+w(P_{1,k+1}( \mathfrak{g}_{k}))=w(P_{i,1}(\mathfrak{g}_{k}))+1.\]
Consequently, it suffices to compute the multiset
\[\{w(P_{i,1}(\mathfrak{g}_{k}))\ |\ 1\leq i<k+1\}.\]
Let \(\mathfrak{g}=\mathfrak{p}^{A}\frac{k}{1|k-1}\). Note that removing the edge \((v_{1},v_{k+1})\) and vertex \(v_{k+1}\) from \(\overrightarrow{M}(\mathfrak{g}_{k})\) results in \(\overrightarrow{M}(\mathfrak{g})\). Therefore, since no path \(P_{i,1}(\mathfrak{g}_{k})\), for \(1\leq i<k+1\), contains the edge \((v_{1},v_{k+1})\), it follows that
\[\{w(P_{i,1}(\mathfrak{g}_{k}))\ |\ 1\leq i<k+1\}=\{w(P_{i,1}(\mathfrak{g}))\ |\ 1 \leq i<k+1\}.\]
Applying Lemmas 12 and 15 along with our induction hypothesis, we find that
\[\{w(P_{i,1}(\mathfrak{g}))\ |\ 1\leq i<k+1\}=\{w(P_{k-i+1,k}(\mathfrak{g}_{k-1} ))\ |\ 1\leq i<k+1\}=\{0,1,\ldots,k-1\};\]
that is,
\[\{w(P_{i,k+1}(\mathfrak{g}_{k}))\ |\ 1\leq i<k+1\}=\{1,2,\ldots,k\}.\]
The result follows.
**Theorem 34**.: _Let \(\mathfrak{g}_{k}=\mathfrak{p}^{A}\frac{k|1}{k+1}\), for \(k\geq 1\). The spectrum of \(\mathfrak{g}_{k}\) is equal to_
\[\bigcup_{i=1}^{k}\left\{(-k+i)^{i},(k-i+1)^{i}\right\}\]
_and the extended spectrum is equal to_
\[\{0^{k}\}\cup\bigcup_{i=0}^{k-1}\left\{(-k+i)^{i+1},(k-i)^{i+1}\right\}.\]
Proof.: By induction on \(k\). Let \(\mathfrak{g}_{k}=\mathfrak{p}^{A}\frac{k|1}{k+1}\). For the base case, constructing the (extended) spectrum matrix of the seaweed algebra \(\mathfrak{g}_{1}=\mathfrak{p}^{A}\frac{1|1}{2}\) directly from \(\overrightarrow{M}(\mathfrak{g}_{1})\) (see Figure 13 (a)), we find
\[\Sigma(\mathfrak{g}_{1})=\begin{bmatrix}0&1\\ &0\end{bmatrix}\qquad\text{and}\qquad\widehat{\Sigma}(\mathfrak{g}_{1})= \begin{bmatrix}0&1\\ -1&0\end{bmatrix}.\]
Thus, the spectrum of \(\mathfrak{g}_{1}\) is equal to \(\{0,1\}\) and the extended spectrum is equal to \(\{-1,0,1\}\).
Now, assume the result holds for \(k-1\geq 1\). Note that
\[\Sigma(\mathfrak{g}_{k})=\begin{bmatrix}B_{1}&B_{2}\\ &0\end{bmatrix},\]
where
* \(B_{1}\) is the \(k\times k\) block consisting of the values \(w(P_{i,j}(\mathfrak{g}_{k}))\), for \(1\leq i,j\leq k\), and
* \(B_{2}\) is the \(k\times 1\) block consisting of the values \(w(P_{i,k+1}(\mathfrak{g}_{k}))\), for \(1\leq i\leq k\).
Consequently, considering Remark 11, it follows that
\[\widehat{\Sigma}(\mathfrak{g}_{k})=\begin{bmatrix}B_{1}&B_{2}\\ -B_{2}^{t}&0\end{bmatrix}.\]
Now, by Lemma 21, \(B_{1}\) contains the same multiset of values as \(\widehat{\Sigma}(\mathfrak{g}_{k-1})\). Thus, applying our inductive hypothesis - and keeping Remark 7 in mind - it follows that \(B_{1}\) contributes
\[\{0^{k}\}\cup\bigcup_{i=1}^{k-1}\{(-k+i)^{i},(k-i)^{i}\}\]
to the multiset of values contained in the (extended) spectrum matrix of \(\mathfrak{g}_{k}\). It then follows from Lemma 33 that \(B_{2}\) contributes \(\{1,\ldots,k\}\) to the multiset of values contained in the (extended) spectrum of \(\mathfrak{g}_{k}\) and \(-B_{2}^{t}\) contributes \(\{-k,\ldots,-1\}\) to the multiset of values contained in the extended spectrum of \(\mathfrak{g}_{k}\). Putting these contributions together, the result follows.
Considering the spectrum formula of Theorem 34, we are immediately led to the following.
**Corollary 35**.: _For \(k\geq 1\), if \(\mathfrak{g}=\mathfrak{p}^{A}\frac{k|1}{k+1}\), then \(\mathfrak{g}\) has the log-concave spectrum property._
**Remark 36**.: _Combining Theorem 34 and Corollary 35 with Corollaries 14, 17, and 18, we obtain similar results to Theorem 34 and Corollary 35 for related families of Frobenius, type-A seaweeds. In particular, the (extended) spectrum of each of \(\mathfrak{p}^{A}\frac{k+1}{k|1}\), \(\mathfrak{p}^{A}\frac{1|k}{k+1}\), and \(\mathfrak{p}^{A}\frac{k+1}{1|k},\) for \(k\geq 1,\) is given in Theorem 34, and each algebra possesses the log-concave spectrum property._
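The closed formula of Theorem 34, together with Corollary 35, can be confirmed numerically for small \(k\) using the sketches above:

```python
def theorem_34_spectrum(k):
    """Closed-form spectrum of p^A (k|1)/(k+1), per Theorem 34."""
    c = Counter()
    for i in range(1, k + 1):
        c[-k + i] += i
        c[k - i + 1] += i
    return c

for k in range(1, 10):
    assert spectrum([k, 1], [k + 1]) == theorem_34_spectrum(k)
    mults = [mult for _, mult in sorted(theorem_34_spectrum(k).items())]
    assert is_log_concave(mults) and is_unimodal(mults)   # Corollary 35
print("Theorem 34 and Corollary 35 confirmed for k = 1, ..., 9")
```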
### \(\mathfrak{p}^{A}\frac{k|2}{k+2}\)
In this section, we consider the (extended) spectra of Frobenius, type-A seaweed algebras of the form \(\mathfrak{p}^{A}\frac{k|2}{k+2}\), for \(k>1\) odd. See Figure 14 for illustrations of the oriented meanders of \(\mathfrak{p}^{A}\frac{k|2}{k+2}\), for \(k=3\) and \(5\).
Analogous to the case of type-A seaweeds of the form \(\mathfrak{p}^{A}\frac{k|1}{k+1}\), we require the following lemma, which determines the multiset of values contained in the rightmost \((k+2)\times 2\) block of \(\Sigma(\mathfrak{g})\), for \(\mathfrak{g}=\mathfrak{p}^{A}\frac{k|2}{k+2}\).
**Lemma 37**.: _Let \(\mathfrak{g}_{k}=\mathfrak{p}^{A}\frac{k|2}{k+2}\) for \(k=2m-1>5\). Then the multiset of values contained in the block of \(\Sigma(\mathfrak{g}_{k})\) corresponding to rows \(\{1,\ldots,k\}\) and columns \(\{k+1,k+2\}\) is equal to_
\[\{0,1^{3},(m-1)^{3},m^{2},m+1\}\cup\bigcup_{i=2}^{m-2}\{i^{4}\}.\]
_Moreover, the multiset of values contained in the block of \(\Sigma(\mathfrak{g}_{k})\) corresponding to rows \(\{1,\ldots,k+2\}\) and columns \(\{k+1,k+2\}\) is equal to_
\[\{-1,0^{3},(m-1)^{3},m^{2},m+1\}\cup\bigcup_{i=1}^{m-2}\{i^{4}\}.\]
Proof.: By induction on \(m\). For \(m=4\) (or \(k=7\)) one can compute directly that the multiset of values of \(\Sigma(\mathfrak{g}_{7})\) contained in rows \(\{1,2,3,4,5,6,7\}\) and columns \(\{8,9\}\) is
\[\{0,1^{3},2^{4},3^{3},4^{2},5\}\]
and the multiset of values of \(\Sigma(\mathfrak{g}_{7})\) contained in rows \(\{1,2,3,4,5,6,7,8,9\}\) and columns \(\{8,9\}\) is
\[\{-1,0^{3},1^{4},2^{4},3^{3},4^{2},5\}.\]
Now, assume the result holds for \(m-1\geq 4\) (or \(k-2=2(m-1)-1=2m-3\geq 7\)). By Lemma 32, the multiset of values of \(\Sigma(\mathfrak{g}_{k})\) contained in rows \(\{1,\ldots,k\}\) and columns \(\{k+1,k+2\}\) is equal to the multiset of values of \(\Sigma(\mathfrak{g}_{k-2})\) contained in rows \(\{1,\ldots,k\}\) and columns \(\{k-1,k\}\) with all values incremented by \(1\); that is, by the inductive hypothesis, the multiset of values of \(\Sigma(\mathfrak{g}_{k})\) contained in rows \(\{1,\ldots,k\}\) and columns \(\{k+1,k+2\}\) is equal to
\[M =\{-1+1,(0+1)^{3},(m-2+1)^{3},(m-1+1)^{2},m+1\}\cup\bigcup_{i=1}^ {m-3}\{(i+1)^{4}\}\] \[=\{0,1^{3},(m-1)^{3},m^{2},m+1\}\cup\bigcup_{i=2}^{m-2}\{i^{4}\},\]
as desired. As for the multiset of values contained in rows \(\{1,\ldots,k+2\}\) and columns \(\{k+1,k+2\}\) of \(\Sigma(\mathfrak{g}_{k})\), we simply need to add the values contained in rows \(\{k+1,k+2\}\) and columns \(\{k+1,k+2\}\) to \(M\). By Lemma 27, the multiset of values contained in rows \(\{k+1,k+2\}\) and columns \(\{k+1,k+2\}\) of \(\Sigma(\mathfrak{g}_{k})\) is equal to the multiset of values contained in \(\widehat{\Sigma}(\mathfrak{p}^{A}\frac{1|1}{2})\), i.e., \(\{-1,0^{2},1\}\). The result follows by induction.
**Theorem 38**.: _Let \(\mathfrak{g}_{k}=\mathfrak{p}^{A}\frac{k|2}{k+2}\), for \(k=2m-1>1\). Then the spectrum of \(\mathfrak{g}_{k}\) is equal to \(\{-2,-1^{3},0^{5},1^{5},2^{3},3\}\) if \(m=2\) (or \(k=3\)) and is equal to_
\[\{-m,(-m+1)^{3},0^{2k-1},1^{2k-1},m^{3},m+1\}\cup\bigcup_{i=2}^{m-1}\{(-m+i) ^{4i-2},(m-i+1)^{4i-2}\},\]
_for \(m>2\) (or \(k>3\)). Furthermore, the extended spectrum of \(\mathfrak{g}_{k}\) is equal to \(\{-3,-2^{3},-1^{5},0^{6},1^{5},2^{3},3\}\) if \(m=2\) (or \(k=3\)) and is equal to_
\[\{-m-1,-m^{3},-1^{2k-1},0^{2k},1^{2k-1},m^{3},m+1\}\cup\bigcup_{i=1}^{m-2}\{( -m+i)^{4i+2},(m-i)^{4i+2}\},\]
_for \(m>2\) (or \(k>3\))._
Proof.: By induction on \(m\). For \(m=2\) (or \(k=3\)), computing directly we find that the spectrum of \(\mathfrak{g}_{3}\) is equal to
\[\{-2,-1^{3},0^{5},1^{5},2^{3},3\},\]
while the extended spectrum is equal to
\[\{-3,-2^{3},-1^{5},0^{6},1^{5},2^{3},3\}.\]
Similarly, for \(m=3\) (or \(k=5\)), we find that the spectrum of \(\mathfrak{g}_{5}\) is equal to
\[\{-3,-2^{3},-1^{6},0^{9},1^{9},2^{6},3^{3},4\},\]
while the extended spectrum is equal to
\[\{-4,-3^{3},-2^{6},-1^{9},0^{10},1^{9},2^{6},3^{3},4\}.\]
Now, assume the result holds for \(m-1\geq 3\) (or \(k-2=2(m-1)-1=2m-3\geq 5\)). Note that
\[\Sigma(\mathfrak{g}_{k})=\begin{bmatrix}B_{1}&B_{2}\\ &B_{3}\end{bmatrix},\]
where
* \(B_{1}\) is the \(k\times k\) block consisting of the values \(w(P_{i,j}(\mathfrak{g}_{k}))\), for \(1\leq i,j\leq k\),
* \(B_{2}\) is the \(k\times 2\) block consisting of the values \(w(P_{i,k+1}(\mathfrak{g}_{k}))\) and \(w(P_{i,k+2}(\mathfrak{g}_{k}))\), for \(1\leq i\leq k\),
* and \(B_{3}\) is the \(2\times 2\) block consisting of the values \(w(P_{k+1,k+1}(\mathfrak{g}_{k}))\), \(w(P_{k+1,k+2}(\mathfrak{g}_{k}))\), \(w(P_{k+2,k+1}(\mathfrak{g}_{k}))\), and \(w(P_{k+2,k+2}(\mathfrak{g}_{k}))\).
Consequently, considering Remark 11, it follows that
\[\widehat{\Sigma}(\mathfrak{g}_{k})=\begin{bmatrix}B_{1}&B_{2}\\ -B_{2}^{t}&B_{3}\end{bmatrix}.\]
Now, by Lemma 21, \(B_{1}\) contains the same multiset of values as \(\widehat{\Sigma}(\mathfrak{g}_{k-2})\). Thus, applying our induction hypothesis - and keeping Remark 7 in mind - it follows that \(B_{1}\) contributes
\[\{-m,(-m+1)^{3},-1^{2k-5},0^{2k-3},1^{2k-5},(m-1)^{3},m\}\cup\bigcup_{i=1}^{m -3}\{(-m+i+1)^{4i+2},(m-i-1)^{4i+2}\} \tag{1}\]
to the multiset of values contained in the (extended) spectrum matrix of \(\mathfrak{g}_{k}\). It then follows from Lemma 37 that \(B_{2}\) and \(B_{3}\) together contribute
\[\{-1,0^{3},(m-1)^{3},m^{2},m+1\}\cup\bigcup_{i=1}^{m-2}\{i^{4}\} \tag{2}\]
to the multiset of values contained in the (extended) spectrum matrix of \(\mathfrak{g}_{k}\). Thus, combining the contributions (1) and (2), the multiset of values contained in \(\Sigma(\mathfrak{g}_{k})\) is equal to
\[\{-m,(-m+1)^{3},0^{2k},1^{2k-1},m^{3},m+1\}\cup\bigcup_{i=2}^{m-1}\{(-m+i)^{ 4i-2},(m-i+1)^{4i-2}\}.\]
Considering Remark 7, we have established the claimed form of the spectrum of \(\mathfrak{g}_{k}\).
As for the extended spectrum, it remains to add the multiset of values contained in \(-B_{2}^{t}\) to the spectrum of \(\mathfrak{g}_{k}\). Applying Lemma 37, it follows that \(-B_{2}^{t}\) contains the multiset of values
\[\{-m-1,-m^{2},(-m+1)^{3},-1^{3},0\}\cup\bigcup_{i=2}^{m-2}\{-i^{4}\};\]
that is, the extended spectrum of \(\mathfrak{g}_{k}\) is equal to
\[\{-m-1,-m^{3},-1^{2k-1},0^{2k},1^{2k-1},m^{3},m+1\}\cup\bigcup_{i=1}^{m-2}\{ (-m+i)^{4i+2},(m-i)^{4i+2}\},\]
as desired. The result follows by induction.
Considering the spectrum formula of Theorem 38, we are immediately led to the following.
**Corollary 39**.: _For \(k\geq 1\) odd, if \(\mathfrak{g}=\mathfrak{p}^{A}\frac{k|2}{k+2}\), then \(\mathfrak{g}\) has the log-concave spectrum property._
**Remark 40**.: _Combining Theorem 38 and Corollary 39 with Corollaries 14, 17, and 18, we obtain similar results to Theorem 38 and Corollary 39 for related families of Frobenius, type-A seaweeds. In particular, the (extended) spectrum of each of \(\mathfrak{p}^{A}\frac{k+2}{k|2}\), \(\mathfrak{p}^{A}\frac{2|k}{k+2}\), and \(\mathfrak{p}^{A}\frac{k+2}{2|k}\), for \(k\geq 1\) odd, is given in Theorem 38, and each algebra possesses the log-concave spectrum property._
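For concreteness, the closed form of Theorem 38 can be checked mechanically. The following Python sketch (illustrative only; the helper names are ad hoc) expands the formula into an explicit multiset of multiplicities and verifies the log-concavity asserted in Corollary 39 for small \(m>2\).

```python
from collections import Counter

def spectrum_38(m):
    """Spectrum of p^A(k|2)/(k+2) from Theorem 38, for k = 2m-1 with m > 2."""
    k = 2 * m - 1
    s = Counter({-m: 1, -m + 1: 3, 0: 2 * k - 1, 1: 2 * k - 1, m: 3, m + 1: 1})
    for i in range(2, m):  # i = 2, ..., m-1
        s[-m + i] += 4 * i - 2
        s[m - i + 1] += 4 * i - 2
    return s

def is_log_concave(spec):
    """Check m_j^2 >= m_{j-1} m_{j+1} for multiplicities in increasing eigenvalue order."""
    mult = [spec[e] for e in sorted(spec)]
    return all(mult[j] ** 2 >= mult[j - 1] * mult[j + 1] for j in range(1, len(mult) - 1))

# m = 3 reproduces the directly computed spectrum of g_5 from the proof.
assert spectrum_38(3) == Counter({-3: 1, -2: 3, -1: 6, 0: 9, 1: 9, 2: 6, 3: 3, 4: 1})
assert all(is_log_concave(spectrum_38(m)) for m in range(3, 12))
```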
As displayed by Theorems 34 and 38, the structural lemmas established at the beginning of this section allow for the inductive determination of formulas for the spectra of Frobenius, maximal parabolic, type-A seaweeds \(\mathfrak{p}^{A}\frac{a|b}{n}\) with \(a>b\) for fixed \(b\). Unfortunately, each value of \(b\) seems to require its own supporting lemma analogous to Lemma 37; that is, it does not seem possible to extend the inductive procedure above to obtain explicit formulae - or establish log-concavity or unimodality - for the spectra of all Frobenius, maximal parabolic, type-A seaweeds. Consequently, we do not proceed to find the spectra of Frobenius, type-A seaweeds of the form \(\mathfrak{p}^{A}\frac{k|3}{k+3}\).
Instead, to finish this section, we use Theorems 34 and 38 to determine the spectra of Frobenius, type-A seaweeds of the form \(\mathfrak{p}^{A}\frac{a|b}{n}=\mathfrak{p}^{A}\frac{k+1|k}{2k+1}\) and \(\mathfrak{p}^{A}\frac{k+2|k}{2k+2}\), where, unlike in our previous examples, neither \(a\) nor \(b\) is fixed.
### \(\mathfrak{p}^{A}\frac{k+1|k}{2k+1}\)
In this subsection, we compute the spectra of Frobenius, type-A seaweed algebras of the form \(\mathfrak{p}^{A}\frac{k+1|k}{2k+1}\), for \(k\geq 1\). See Figure 15 for illustrations of the oriented meanders corresponding to \(\mathfrak{p}^{A}\frac{k+1|k}{2k+1}\) for \(k=1,2\), and \(3\).
**Remark 41**.: _In the preceding two sections, to compute the spectra of the algebras of interest it was necessary to also compute the extended spectra. This is not true of the seaweeds considered in the remainder of this paper, so we omit discussion of extended spectra going forward._
**Theorem 42**.: _Let \(\mathfrak{g}_{k}=\mathfrak{p}^{A}\frac{k+1|k}{2k+1}\), for \(k\geq 1\). The spectrum of \(\mathfrak{g}_{k}\) is equal to \(\{-1,0^{2},1^{2},2\}\) if \(k=1\) and is equal to_
\[\{-k,0^{3k-1},1^{3k-1},k+1\}\cup\bigcup_{i=1}^{k-1}\{(-k+i)^{3i},(k-i+1)^{3i}\},\]
_for \(k>1\)._
Proof.: By induction on \(k\). The cases \(k=1\) and \(2\) were considered in Theorems 34 and 38, respectively. So, assume the result holds for \(k-1\geq 2\). Note that
\[\Sigma(\mathfrak{g}_{k})=\begin{bmatrix}B_{1}&B_{2}\\ &B_{3}\end{bmatrix},\]
where
* \(B_{1}\) is the \((k+1)\times(k+1)\) block consisting of the values \(w(P_{i,j}(\mathfrak{g}_{k}))\), for \(1\leq i,j\leq k+1\),
* \(B_{2}\) is the \((k+1)\times k\) block consisting of the values \(w(P_{i,j}(\mathfrak{g}_{k}))\), for \(1\leq i\leq k+1\) and \(k+2\leq j\leq 2k+1\), and
* \(B_{3}\) is the \(k\times k\) block consisting of the values \(w(P_{i,j}(\mathfrak{g}_{k}))\), for \(k+2\leq i,j\leq 2k+1\).
Now, if \(\mathfrak{p}_{k}=\mathfrak{p}^{A}\frac{k|1}{k+1}\), then by Lemma 21 combined with Corollary 17, \(B_{1}\) contains the same multiset of values as \(\widehat{\Sigma}(\mathfrak{p}_{k})\). Thus, applying Theorem 34 - and keeping Remark 7 in mind - it follows that \(B_{1}\) contributes
\[\{0^{k+1}\}\cup\bigcup_{i=0}^{k-1}\left\{(-k+i)^{i+1},(k-i)^{i+1}\right\} \tag{3}\]
to the multiset of values contained in \(\Sigma(\mathfrak{g}_{k})\). It then follows from Lemma 29 that the multiset of values contained in \(B_{2}\) is equal to that of \(\Sigma(\mathfrak{p}_{k})\) with each value increased by \(1\); that is, applying Theorem 34, \(B_{2}\) contributes
\[\bigcup_{i=2}^{k+1}\left\{(-k+i)^{i-1},(k-i+3)^{i-1}\right\} \tag{4}\]
to the multiset of values contained in \(\Sigma(\mathfrak{g}_{k})\). Finally, by Lemma 24, it follows that the multiset of values contained in \(B_{3}\) is equal to that of \(\widehat{\Sigma}(\mathfrak{p}_{k-1})\); that is, applying Theorem 34 - and keeping Remark 7 in mind - \(B_{3}\) contributes
\[\{0^{k}\}\cup\bigcup_{i=0}^{k-2}\left\{(-k+i+1)^{i+1},(k-i-1)^{i+1}\right\} \tag{5}\]
to the multiset of values contained in \(\Sigma(\mathfrak{g}_{k})\). Therefore, putting contributions (3), (4), and (5) together - keeping Remark 7 in mind - we find that the spectrum of \(\mathfrak{g}_{k}\) is equal to
\[\{-k,0^{3k-1},1^{3k-1},k+1\}\cup\bigcup_{i=1}^{k-1}\{(-k+i)^{3i},(k-i+1)^{3i}\},\]
as desired.
Considering the spectrum formula of Theorem 42, we are immediately led to the following.
**Corollary 43**.: _For \(k\geq 1\), if \(\mathfrak{g}=\mathfrak{p}^{A}\frac{k+1|k}{2k+1}\), then \(\mathfrak{g}\) has the log-concave spectrum property._
**Remark 44**.: _Combining Theorem 42 and Corollary 43 with Corollaries 14, 17, and 18, we obtain similar results to Theorem 42 and Corollary 43 for related families of Frobenius, type-A seaweeds. In particular, the spectrum of each of \(\mathfrak{p}^{A}\frac{2k+1}{k+1|k}\), \(\mathfrak{p}^{A}\frac{k|k+1}{2k+1}\), and \(\mathfrak{p}^{A}\frac{2k+1}{k|k+1}\), for \(k\geq 1\), is given in Theorem 42, and each algebra possesses the log-concave spectrum property._
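The formula of Theorem 42 can also be cross-checked against the overlapping case of Theorem 38: for \(k=2\), the algebra is \(\mathfrak{p}^{A}\frac{3|2}{5}\). A short Python sketch (illustrative only) performs this check.

```python
from collections import Counter

def spectrum_42(k):
    """Spectrum of p^A(k+1|k)/(2k+1) from Theorem 42, for k > 1."""
    s = Counter({-k: 1, 0: 3 * k - 1, 1: 3 * k - 1, k + 1: 1})
    for i in range(1, k):  # i = 1, ..., k-1
        s[-k + i] += 3 * i
        s[k - i + 1] += 3 * i
    return s

# k = 2 gives p^A(3|2)/5, whose spectrum is the m = 2 case of Theorem 38.
assert spectrum_42(2) == Counter({-2: 1, -1: 3, 0: 5, 1: 5, 2: 3, 3: 1})
```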
### \(\mathfrak{p}^{A}\frac{k+2|k}{2k+2}\)
In this subsection, we compute the spectra of Frobenius, type-A seaweed algebras of the form \(\mathfrak{p}^{A}\frac{k+2|k}{2k+2}\), for \(k\geq 1\) odd. See Figure 16 for illustrations of the oriented meanders corresponding to \(\mathfrak{p}^{A}\frac{k+2|k}{2k+2}\) for \(k=1\) and \(3\).
**Theorem 45**.: _Let \(\mathfrak{g}_{k}=\mathfrak{p}^{A}\frac{k+2|k}{2k+2},\) for \(k=2m-1\geq 1.\) The spectrum of \(\mathfrak{g}_{k}\) is equal to_
\[\{-2,-1^{2},0^{3},1^{3},2^{2},3\}\]
_if \(k=1\),_
\[\{-3,-2^{4},-1^{8},0^{11},1^{11},2^{8},3^{4},4\}\]
_if \(k=3\),_
\[\{-4,-3^{4},-2^{10},-1^{17},0^{22},1^{22},2^{17},3^{10},4^{4},5\}\]
_if \(k=5\),_
\[\{-5,-4^{4},-3^{10},-2^{19},-1^{28},0^{34},1^{34},2^{28},3^{19},4^{10},5^{4},6\}\]
_if \(k=7\), and_
\[\begin{split}\{-m-1,-m^{4},(-m+1)^{10},(-m+2)^{19},-1^{6k-14},0^{6k-8},1^{6k-8},2^{6k-14},(m-1)^{19},m^{10},(m+1)^{4},m+2\}\\ \cup\bigcup_{i=1}^{m-4}\{(-m+i+2)^{12i+18},(m-i-1)^{12i+18}\}\end{split}\]
_if \(k>7\)._
Proof.: By induction on \(m.\) The spectra of \(\mathfrak{g}_{1}\), \(\mathfrak{g}_{3}\), \(\mathfrak{g}_{5}\), and \(\mathfrak{g}_{7}\) can be easily verified by direct calculation. For \(m=5\) (or \(k=9\)), computing directly we find that the spectrum of \(\mathfrak{g}_{9}\) is equal to
\[\{-6,-5^{4},-4^{10},-3^{19},-2^{30},-1^{40},0^{46},1^{46},2^{40},3^{30},4^{19},5^{10},6^{4},7\}.\]
So, assume the result holds for \(m-1\geq 5.\) Note that
\[\Sigma(\mathfrak{g}_{k})=\begin{bmatrix}B_{1}&B_{2}\\ &B_{3}\end{bmatrix},\]
where
* \(B_{1}\) is the \((k+2)\times(k+2)\) block consisting of the values \(w(P_{i,j}(\mathfrak{g}_{k})),\) for \(1\leq i,j\leq k+2,\)
* \(B_{2}\) is the \((k+2)\times k\) block consisting of the values \(w(P_{i,j}(\mathfrak{g}_{k})),\) for \(1\leq i\leq k+2\) and \(k+3\leq j\leq 2k+2,\) and
* \(B_{3}\) is the \(k\times k\) block consisting of the values \(w(P_{i,j}(\mathfrak{g}_{k})),\) for \(k+3\leq i,j\leq 2k+2.\)
Now, if \(\mathfrak{p}_{k}=\mathfrak{p}^{A}\frac{k|2}{k+2},\) then by Lemma 21 combined with Corollary 17, \(B_{1}\) contains the same multiset of values as \(\widehat{\Sigma}(\mathfrak{p}_{k}).\) Thus, applying Theorem 38 - and keeping Remark 7 in mind - it follows that \(B_{1}\) contributes
\[\{-m-1,-m^{3},-1^{2k-1},0^{2k+1},1^{2k-1},m^{3},m+1\}\cup\bigcup_{i=1}^{m-2} \{(-m+i)^{4i+2},(m-i)^{4i+2}\} \tag{6}\]
to the multiset of values contained in \(\Sigma(\mathfrak{g}_{k}).\) It then follows from Lemma 29 that the multiset of values contained in \(B_{2}\) is equal to that of rows \(\{1,\ldots,k\}\) and columns \(\{1,\ldots,k+2\}\) in \(\Sigma(\mathfrak{p}_{k})\) with each value increased by \(1\). By the proof of Theorem 38, the multiset of values contained in rows \(\{1,\ldots,k\}\) and columns \(\{1,\ldots,k\}\) of \(\mathfrak{p}_{k}\) is
\[\{-m,(-m+1)^{3},-1^{2k-5},0^{2k-3},1^{2k-5},(m-1)^{3},m\}\cup\bigcup_{i=1}^{m- 3}\{(-m+i+1)^{4i+2},(m-i-1)^{4i+2}\},\]
and by Lemma 37, the multiset of values contained in rows \(\{1,\ldots,k\}\) and columns \(\{k+1,k+2\}\) of \(\mathfrak{p}_{k}\) is
\[\{0,1^{3},(m-1)^{3},m^{2},m+1\}\cup\bigcup_{i=2}^{m-2}\{i^{4}\}.\]
Thus, \(B_{2}\) contributes
\[\begin{split}\{-m+1,(-m+2)^{3},0^{2k-5},1^{2k-2},2^{2k-2},m^{6}, (m+1)^{3},m+2\}\\ \cup\bigcup_{i=1}^{m-3}\{(-m+i+2)^{4i+2},(m-i)^{4i+6}\}\end{split} \tag{7}\]
to the multiset of values contained in \(\Sigma(\mathfrak{g}_{k}).\) Finally, by Lemma 24, it follows that the multiset of values contained in \(B_{3}\) is equal to that of \(\widehat{\Sigma}(\mathfrak{p}_{k-2});\) that is, applying Theorem 38, \(B_{3}\) contributes
\[\{-m,(-m+1)^{3},-1^{2k-5},0^{2k-3},1^{2k-5},(m-1)^{3},m\}\cup\bigcup_{i=1}^{ m-3}\{(-m+i+1)^{4i+2},(m-i-1)^{4i+2}\} \tag{8}\]
to the multiset of values contained in \(\Sigma(\mathfrak{g}_{k}).\) Therefore, putting contributions (6), (7), and (8) together - keeping Remark 7 in mind - we find that the spectrum of \(\mathfrak{g}_{k}\) is equal to
\[\begin{split}\{-m-1,-m^{4},(-m+1)^{10},(-m+2)^{19},-1^{6k-14},0^ {6k-8},1^{6k-8},2^{6k-14},(m-1)^{19},m^{10},(m+1)^{4},m+2\}\\ \cup\bigcup_{i=1}^{m-4}\{(-m+i+2)^{12i+18},(m-i-1)^{12i+18}\} \end{split}\]
as desired.
Considering the spectrum formula of Theorem 45, we are immediately led to the following.
**Corollary 46**.: _For \(k\geq 1\), if \(\mathfrak{g}=\mathfrak{p}^{A}\frac{k+2|k}{2k+2}\), then \(\mathfrak{g}\) has the log-concave spectrum property._
**Remark 47**.: _Combining Theorem 45 and Corollary 46 with Corollaries 14, 17, and 18, we obtain similar results to Theorem 45 and Corollary 46 for related families of Frobenius, type-A seaweeds. In particular, the spectrum of each of \(\mathfrak{p}^{A}\frac{2k+2}{k+2|k}\), \(\mathfrak{p}^{A}\frac{k|k+2}{2k+2}\), and \(\mathfrak{p}^{A}\frac{2k+2}{k|k+2}\), for \(k\geq 1\) odd, is given in Theorem 45, and each algebra possesses the log-concave spectrum property._
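As a consistency check on the heavier formula of Theorem 45, the following Python sketch (illustrative only) instantiates the \(k>7\) expression at \(m=5\) and compares it with the directly computed spectrum of \(\mathfrak{g}_{9}\) listed in the proof.

```python
from collections import Counter

def spectrum_45(m):
    """Spectrum of p^A(k+2|k)/(2k+2) from Theorem 45, for k = 2m-1 > 7 (m > 4)."""
    k = 2 * m - 1
    s = Counter({-m - 1: 1, -m: 4, -m + 1: 10, -m + 2: 19,
                 -1: 6 * k - 14, 0: 6 * k - 8, 1: 6 * k - 8, 2: 6 * k - 14,
                 m - 1: 19, m: 10, m + 1: 4, m + 2: 1})
    for i in range(1, m - 3):  # i = 1, ..., m-4
        s[-m + i + 2] += 12 * i + 18
        s[m - i - 1] += 12 * i + 18
    return s

expected_g9 = Counter({-6: 1, -5: 4, -4: 10, -3: 19, -2: 30, -1: 40, 0: 46,
                       1: 46, 2: 40, 3: 30, 4: 19, 5: 10, 6: 4, 7: 1})
assert spectrum_45(5) == expected_g9
```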
One quality that makes the family of maximal parabolic, type-A seaweeds a desirable candidate family in our investigations is Elashvili's ([9], 1990) simple closed-form index formula
\[\operatorname{ind}\,\mathfrak{p}^{A}\frac{a|b}{n}=\gcd(a,b)-1,\]
allowing for quick identification of such Frobenius algebras. Outside of this family, there are two others known with similar index formulas: the families of type-A seaweeds of the form \(\mathfrak{p}^{A}\frac{a|b|c}{a+b+c}\) and of the form \(\mathfrak{p}^{A}\frac{a|b}{c|d}\), which both have index given by \(\gcd(a+b,b+c)-1\) (see [2]). For the sake of comparison, we provide the formulas for the spectra of two such families of seaweeds below. The proofs are left to the interested reader.
**Theorem 48**.: _Let \(\mathfrak{g}_{k}=\mathfrak{p}^{A}\frac{2k|1}{1|2k}\), for \(k\geq 1\). Then the spectrum of \(\mathfrak{g}_{k}\) is equal to_
\[\bigcup_{i=1}^{k}\{(-k+i)^{4i-2},(k-i+1)^{4i-2}\}.\]
_Moreover, \(\mathfrak{g}_{k}\) has the log-concave spectrum property._
**Theorem 49**.: _Let \(\mathfrak{g}_{k}=\mathfrak{p}^{A}\frac{2k|1|1}{2k+2}\), for \(k\geq 1\). Then the spectrum of \(\mathfrak{g}_{k}\) is equal to_
\[\{-k,k+1\}\cup\bigcup_{i=1}^{k}\{(-k+i)^{4i},(k-i+1)^{4i}\}.\]
_Moreover, \(\mathfrak{g}_{k}\) has the log-concave spectrum property._
Note that all families of Frobenius, type-A seaweeds considered above have been defined by compositions with a fixed number of parts, while the sizes of the parts vary. In the next section, we consider families of Frobenius, type-A seaweeds parametrized by the number of parts of a fixed size in their defining compositions. In contrast to the spectra of this section - where both the set of distinct eigenvalues and the sequence of multiplicities varied with the parameter - the seaweeds discussed in Section 4 have spectra whose sets of distinct eigenvalues exhibit a surprising stability property.
## 4 Stability
In this section, we consider families of Frobenius, type-A seaweeds that are parametrized by the number of 2's (Section 4.1) and by the number of 4's (Section 4.2) in the defining compositions. For such families, we find that the sets of distinct eigenvalues in the spectra stabilize. At the end of Section 4.2, we conjecture that such stabilization occurs among more general families as well. Note that this behavior stands in sharp contrast to that of the Frobenius seaweeds \(\mathfrak{p}^{A}\frac{k|1}{k+1}\), \(\mathfrak{p}^{A}\frac{k|2}{k+2}\), \(\mathfrak{p}^{A}\frac{k+1|k}{2k+1}\), \(\mathfrak{p}^{A}\frac{k+2|k}{2k+2}\), \(\mathfrak{p}^{A}\frac{2k|1}{1|2k}\), and \(\mathfrak{p}^{A}\frac{2k|1|1}{2k+2}\), where the number of eigenvalues contained in the spectrum strictly increased with \(k\).
**Remark 50**.: _We do not consider families of Frobenius, type-A seaweeds parametrized by the number of occurrences of an odd integer in the defining compositions because such families do not exist. For a given type-A seaweed \(\mathfrak{g}\), each odd part in the defining composition of \(\mathfrak{g}\) contributes a vertex of degree 1 to \(M(\mathfrak{g})\). Consequently, if \(\mathfrak{g}\) is to be Frobenius, i.e., if \(M(\mathfrak{g})\) consists of a single path, then \(\mathfrak{g}\) can have at most two odd parts in its defining composition._
### Parameterized by number of 2's
In this subsection, we consider families of Frobenius, type-A seaweeds which are parameterized by the number of 2's in the defining compositions. More specifically, we determine the spectra of type-A seaweeds of the form
\[\mathfrak{p}^{A}\frac{k|2|\cdots|2}{k+1|2|\cdots|2|1},\qquad\mathfrak{p}^{A} \frac{k|2|\cdots|2|1}{k+1|2|\cdots|2},\qquad\text{and}\qquad\mathfrak{p}^{A} \frac{2|\cdots|2|1}{2r+1}.\]
To start, we establish a general result concerning the relationship between the spectra of the Frobenius, type-A seaweeds
\[\mathfrak{g}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|1}{b_{1}|\ldots|b_{t}},\quad\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{2|\cdots|2}^{r}}{b_{1}|\ldots|b_{t}|\underbrace{2|\cdots|2}_{r-1}|1},\quad\text{and}\quad\mathfrak{g}^{\prime\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{2|\cdots|2}^{r}|1}{b_{1}|\ldots|b_{t}|\underbrace{2|\cdots|2}_{r}},\]
for \(r\geq 1\). For example, taking \(\mathfrak{g}=\frac{3|1}{4}\) and \(r=1\), we have \(\mathfrak{g}^{\prime}=\frac{3|2}{4|1}\) and \(\mathfrak{g}^{\prime\prime}=\frac{3|2|1}{4|2}\). The oriented meanders of these type-A seaweeds are illustrated in Figure 17.
**Lemma 51**.: _If \(\mathfrak{g}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|1}{b_{1}|\ldots|b_{t}}\) is a Frobenius, type-A seaweed with spectrum \(S\), then \(\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|2}{b_{1}| \ldots|b_{t}|1}\) is a Frobenius, type-A seaweed with spectrum \(S\cup\{0,1\}\)._
Proof.: Let \(n=1+\sum_{i=1}^{m}a_{i}=\sum_{j=1}^{t}b_{j}\). Evidently, \(\overrightarrow{M}(\mathfrak{g}^{\prime})\) is equal to \(\overrightarrow{M}(\mathfrak{g})\) with the addition of a vertex \(v_{n+1}\) and a directed edge \((v_{n+1},v_{n})\). Considering Corollary 3, it follows that \(\mathfrak{g}^{\prime}\) is Frobenius. Now, setting \(a_{0}=b_{0}=0\), note that the spectrum of \(\mathfrak{g}\) is the multiset
\[S=\bigcup_{k=1}^{m}\left\{w(P_{i,j}(\mathfrak{g}))\ |\ 1+\sum_{l=0}^{k-1}a_{l}\leq j<i\leq \sum_{l=1}^{k}a_{l}\right\}\] \[\qquad\qquad\cup\bigcup_{k=1}^{t}\left\{w(P_{i,j}(\mathfrak{g})) \ |\ 1+\sum_{l=0}^{k-1}b_{l}\leq i<j\leq\sum_{l=1}^{k}b_{l}\right\}\cup\{0^{n-1}\},\]
and the spectrum of \(\mathfrak{g}^{\prime}\) is the multiset
\[S^{\prime}=\bigcup_{k=1}^{m}\left\{w(P_{i,j}(\mathfrak{g}^{\prime }))\ |\ 1+\sum_{l=0}^{k-1}a_{l}\leq j<i\leq\sum_{l=1}^{k}a_{l}\right\}\] \[\qquad\qquad\cup\bigcup_{k=1}^{t}\left\{w(P_{i,j}(\mathfrak{g}^{ \prime}))\ |\ 1+\sum_{l=0}^{k-1}b_{l}\leq i<j\leq\sum_{l=1}^{k}b_{l}\right\}\cup\{0^{n},w(P_{ n+1,n}(\mathfrak{g}^{\prime}))\}.\]
Given the relationship between \(\overrightarrow{M}(\mathfrak{g}^{\prime})\) and \(\overrightarrow{M}(\mathfrak{g})\) outlined above, it follows that
\[S^{\prime}=S\cup\{0,w(P_{n+1,n}(\mathfrak{g}^{\prime}))\}=S\cup\{0,1\}.\]
**Theorem 52**.: _If \(\mathfrak{g}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|1}{b_{1}|\ldots|b_{t}}\) is a Frobenius, type-A seaweed with spectrum \(S\), then_
1. \[\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{2| \cdots|2}^{r}}{b_{1}|\ldots|b_{t}|\underbrace{2|\cdots|2}_{r-1}|1},\] _for_ \(r\geq 1\)_, is a Frobenius, type-A seaweed with spectrum_ \(S\cup\{0^{2r-1},1^{2r-1}\}\)_; and_
2. \[\mathfrak{g}^{\prime\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{2|\cdots|2}^{r}|1}{b_{1}|\ldots|b_{t}|\underbrace{2|\cdots|2}_{r}},\] _for_ \(r\geq 1\)_, is a Frobenius, type-A seaweed with spectrum_ \(S\cup\{0^{2r},1^{2r}\}\)_._
Proof.: By induction on \(r\). An application of Lemma 51 establishes the result for \(r=1\) in (1). Then applying Corollary 14, followed by Lemma 51, and then Corollary 14 again, to
\[\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|2}{b_{1}|\ldots|b_{t}|1}\]
establishes the case \(r=1\) for (2). Assume the result holds for \(r-1\geq 1\). In particular, assume that the algebra
\[\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{2|\cdots|2}^{r-1}|1}{b_{1}|\ldots|b_{t}|\underbrace{2|\cdots|2}_{r-1}}\]
is Frobenius with spectrum equal to \(S\cup\{0^{2r-2},1^{2r-2}\}\). Applying Lemma 51, we find that the algebra
\[\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{2| \cdots|2}^{r}}{b_{1}|\ldots|b_{t}|\underbrace{2|\cdots|2}_{r-1}|1}\]
is Frobenius with spectrum equal to \(S\cup\{0^{2r-1},1^{2r-1}\}\), as desired. Now, applying Corollary 14 followed by Lemma 51 and then a second application of Corollary 14 to \(\mathfrak{g}^{\prime}\), we find that the algebra
\[\mathfrak{g}^{\prime\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{2|\cdots|2}^{r}|1}{b_{1}|\ldots|b_{t}|\underbrace{2|\cdots|2}_{r}}\]
is Frobenius with spectrum equal to \(S\cup\{0^{2r},1^{2r}\}\), as desired. The result follows by induction.
Considering the spectrum formulas of Theorem 52, we are immediately led to the following.
**Corollary 53**.: _Let \(\mathfrak{g}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|1}{b_{1}|\ldots|b_{t}}\) be a Frobenius, type-A seaweed with the unimodal spectrum property. If_
\[\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{2|\cdots|2}^{r}}{b_{1}|\ldots|b_{t}|\underbrace{2|\cdots|2}_{r-1}|1}\quad\text{and}\quad\mathfrak{g}^{\prime\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{2|\cdots|2}^{r}|1}{b_{1}|\ldots|b_{t}|\underbrace{2|\cdots|2}_{r}},\]
_then \(\mathfrak{g}^{\prime}\) and \(\mathfrak{g}^{\prime\prime}\) have the unimodal spectrum property._
It is important to note that "unimodal" cannot be strengthened to "log-concave" in the conclusion of Corollary 53. A counterexample is provided by the following theorem (see Remark 55), which is a corollary of Theorems 34 and 52.
**Theorem 54**.:
1. _For_ \(k\geq 1\)_, if_ \[\mathfrak{g}_{k}=\mathfrak{p}^{A}\frac{k|\overbrace{2|\cdots|2}^{r}}{k+1|\underbrace{2|\cdots|2}_{r-1}|1},\] _then_ \(\mathfrak{g}_{k}\) _has the unimodal spectrum property with spectrum equal to_ \[\{0^{k+2r-1},1^{k+2r-1}\}\cup\bigcup_{i=1}^{k-1}\{(-k+i)^{i},(k-i+1)^{i}\}.\]
2. _For_ \(k\geq 1\)_, if_ \[\mathfrak{g}_{k}=\mathfrak{p}^{A}\frac{k|\overbrace{2|\cdots|2}^{r}|1}{k+1|\underbrace{2|\cdots|2}_{r}},\] _then_ \(\mathfrak{g}_{k}\) _has the unimodal spectrum property with spectrum equal to_ \[\{0^{k+2r},1^{k+2r}\}\cup\bigcup_{i=1}^{k-1}\{(-k+i)^{i},(k-i+1)^{i}\}.\]
**Remark 55**.: _Utilizing Theorem 54, we can construct examples of Frobenius, type-A seaweeds which do not have the log-concave spectrum property. For example, let \(\mathfrak{g}=\mathfrak{p}^{A}\frac{3|1}{4}\) and \(\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{3|2|2}{4|2|1}.\) Recall from Theorem 34 that the spectrum of \(\mathfrak{g}\) is \(\{-2,-1^{2},0^{3},1^{3},2^{2},3\}\). On the other hand, by Theorem 52, we find that the spectrum of \(\mathfrak{g}^{\prime}\) is \(\{-2,-1^{2},0^{6},1^{6},2^{2},3\}.\) Clearly, \(\mathfrak{g}^{\prime}\) does not have the log-concave spectrum property._
**Remark 56**.: _Note that for fixed \(k\) and varying values of \(r\), the collection of distinct eigenvalues in the spectra of the Frobenius, type-A seaweeds considered in Theorem 54 is fixed._
**Remark 57**.: _Note that one can also apply Corollary 53 to the Frobenius, type-A seaweed \(\mathfrak{p}^{A}\frac{1|1}{2}\) to determine the spectrum of type-A seaweeds of the form \(\mathfrak{g}_{r}=\mathfrak{p}^{A}\frac{1|2|\cdots|2}{2|\cdots|2|1}\), where \(r\geq 1\) is the number of \(2\)'s in the numerator and denominator. See Figure 18 for some example meanders of such type-A seaweeds. In particular, it is straightforward to prove that the spectrum of \(\mathfrak{g}_{r}\) is \(\{0^{2r},1^{2r}\}\). Now, one can show that type-A seaweeds of the form \(\mathfrak{p}^{A}\frac{1|2|\cdots|2}{2|\cdots|2|1}\) are related to algebras of another combinatorially defined family of Lie subalgebras of \(\mathfrak{sl}(n)\). In particular, \(\mathfrak{p}^{A}\frac{1|2|\cdots|2}{2|\cdots|2|1}\) is isomorphic to a type-A Lie poset algebra (see [3]). Type-A Lie poset algebras are subalgebras of \(\mathfrak{sl}(n)\) parametrized by posets on \(\{1,\ldots,n\}\). It is known that the spectra of Frobenius, type-A Lie poset algebras corresponding to posets with chains of cardinality at most two must consist of an equal number of 0's and 1's (see [4]). It is conjectured that this is true in general for Frobenius Lie poset algebras (see [4, 5])._
**Remark 58**.: _Additionally, one can apply Theorem 52 to the Frobenius, type-A seaweeds of Theorems 48 and 49. We leave such applications as exercises for interested readers._
We now consider Frobenius, type-A seaweeds of the form \(\mathfrak{p}^{A}\frac{2|\cdots|2|1}{2r+1}.\) Although such algebras are parametrized by the number of 2's in their defining compositions, they are parabolic and so descriptions of their spectra do not follow from Corollary 53. See Figure 19 for some example meanders of such type-A seaweeds.
To determine the form of the spectrum of the Frobenius, type-A seaweed \(\mathfrak{p}^{A}\frac{2|\cdots|2|1}{2r+1}\), we require the following lemma.
**Lemma 59**.: _Let \(\mathfrak{g}_{r}=\mathfrak{p}^{A}\frac{2|\cdots|2|1}{2r+1}\), for \(r\geq 1\). Then_
\[\{w(P_{i,2r+1}(\mathfrak{g}_{r}))\ |\ 1\leq i<2r+1\}=\{2^{\lceil\frac{r}{2} \rceil},1^{r},0^{\lceil\frac{r-1}{2}\rceil}\}.\]
Proof.: By induction on \(r\). For \(r=1\), we can compute directly from \(\overrightarrow{M}(\mathfrak{g}_{1})\) (see Figure 19(a)) that
\[w(P_{1,3}(\mathfrak{g}_{1}))=1\quad\text{and}\quad w(P_{2,3}(\mathfrak{g}_{1}) )=2.\]
Thus,
\[\{w(P_{i,3}(\mathfrak{g}_{1}))\ |\ 1\leq i<3\}=\{2,1\}=\{2^{\lceil\frac{1}{2}\rceil},1^{1},0^{\lceil\frac{1-1}{2}\rceil}\}.\]
Now, assume that the result holds for \(r-1\geq 1\). Let \(\mathfrak{p}_{r-1}=\mathfrak{p}^{A}\frac{1|2|\cdots|2}{2r-1}\). Note that removing the edges \((v_{1},v_{2r+1})\) and \((v_{2},v_{1})\) and the vertices \(v_{1}\) and \(v_{2r+1}\) from \(\overrightarrow{M}(\mathfrak{g}_{r})\) yields \(\overrightarrow{M}(\mathfrak{p}_{r-1})\), with \(v_{i+1}\) in place of \(v_{i}\), for \(1\leq i\leq 2r-1\). Thus,
\[w(P_{i,2r+1}(\mathfrak{g}_{r})) =w(P_{i,2}(\mathfrak{g}_{r}))+w(P_{2,1}(\mathfrak{g}_{r}))+w(P_{ 1,2r+1}(\mathfrak{g}_{r}))\] \[=w(P_{i-1,1}(\mathfrak{p}_{r-1}))+2,\]
for \(2\leq i\leq 2r\). Applying Lemma 15, it follows that
\[w(P_{i,2r+1}(\mathfrak{g}_{r}))=w(P_{i-1,1}(\mathfrak{p}_{r-1}))+2=-w(P_{2r-i+1,2r-1}(\mathfrak{g}_{r-1}))+2,\]
for \(2\leq i\leq 2r\). Therefore, applying the induction hypothesis,
\[\{w(P_{i,2r+1}(\mathfrak{g}_{r}))\ |\ 3\leq i\leq 2r\}=\{-w(P_{i,2r-1}( \mathfrak{g}_{r-1}))+2\ |\ 1\leq i<2r-1\}=\{0^{\lceil\frac{r-1}{2}\rceil},1^{r-1},2^{\lceil\frac{r-2}{2} \rceil}\}.\]
As \(w(P_{1,2r+1}(\mathfrak{g}_{r}))=1\) and \(w(P_{2,2r+1}(\mathfrak{g}_{r}))=2\), it follows that
\[\{w(P_{i,2r+1}(\mathfrak{g}_{r}))\ |\ 1\leq i<2r+1\}=\{0^{\lceil\frac{r-1}{2} \rceil},1^{r},2^{\lceil\frac{r}{2}\rceil}\},\]
as desired.
**Theorem 60**.: _Let \(\mathfrak{g}_{r}=\mathfrak{p}^{A}\frac{2|\cdots|2|1}{2r+1}\), for \(r\geq 1\). There exist positive integers \(a_{r}>b_{r}\) such that the spectrum of \(\mathfrak{g}_{r}\) is given by_
\[\{-1^{b_{r}},0^{a_{r}},1^{a_{r}},2^{b_{r}}\}.\]
Proof.: By induction on \(r\). For \(r=1\), calculating directly using \(\overrightarrow{M}(\mathfrak{g}_{1})\) (see Figure 19(a)) we find that the spectrum of \(\mathfrak{g}_{1}\) is
\[\{-1^{1},0^{2},1^{2},2^{1}\}.\]
Assume the result holds for \(r-1\geq 1\). We break the spectrum of \(\mathfrak{g}_{r}\) into three groups:
\[G_{1}=\{w(P_{i,j}(\mathfrak{g}_{r}))\ |\ 2\leq i\leq j\leq 2r\}\cup\{w(P_{2j,2j- 1}(\mathfrak{g}_{r}))\ |\ 1\leq j\leq r\},\]
\[G_{2}=\{w(P_{i,2r+1}(\mathfrak{g}_{r}))\ |\ 1\leq i<2r+1\},\]
and
\[G_{3}=\{w(P_{1,i}(\mathfrak{g}_{r}))\ |\ 1\leq i\leq 2r\}.\]
Let \(\mathfrak{p}_{r-1}=\mathfrak{p}^{A}\frac{1|2|\cdots|2}{2r-1}\). Note that removing the edges \((v_{1},v_{2r+1})\) and \((v_{2},v_{1})\) and the vertices \(v_{1}\) and \(v_{2r+1}\) from \(\overrightarrow{M}(\mathfrak{g}_{r})\) yields \(\overrightarrow{M}(\mathfrak{p}_{r-1})\) with \(v_{i+1}\) in place of \(v_{i}\), for \(1\leq i\leq 2r-1\). Thus,
\[w(P_{i,j}(\mathfrak{g}_{r}))=w(P_{i-1,j-1}(\mathfrak{p}_{r-1})),\]
for \(2\leq i\leq j\leq 2r\), and
\[w(P_{2j,2j-1}(\mathfrak{g}_{r}))=w(P_{2j-1,2j-2}(\mathfrak{p}_{r-1})),\]
for \(2\leq j\leq r\). Consequently, letting \(S\) denote the spectrum of \(\mathfrak{p}_{r-1}\), we have that
\[G_{1}=S\cup\{w(P_{2r,2r}(\mathfrak{g}_{r})),w(P_{2,1}(\mathfrak{g}_{r}))\}=S \cup\{0,1\}.\]
Applying Corollary 17, the spectrum of \(\mathfrak{g}_{r-1}\) is also \(S\); that is, applying our induction hypothesis,
\[G_{1}=\{-1^{b_{r-1}},0^{a_{r-1}+1},1^{a_{r-1}+1},2^{b_{r-1}}\}.\]
Now, considering Lemma 59, we have that
\[G_{2}=\{2^{\lceil\frac{r}{2}\rceil},1^{r},0^{\lceil\frac{r-1}{2}\rceil}\}.\]
As for \(G_{3}\), note that
\[w(P_{1,i}(\mathfrak{g}_{r})) =-w(P_{i,1}(\mathfrak{g}_{r}))\] \[=-(w(P_{i,2r+1}(\mathfrak{g}_{r}))-w(P_{1,2r+1}(\mathfrak{g}_{r} )))\] \[=-w(P_{i,2r+1}(\mathfrak{g}_{r}))+1,\]
for \(1\leq i\leq 2r\). Thus, applying Lemma 59, it follows that
\[G_{3}=\{-1^{\lceil\frac{r}{2}\rceil},0^{r},1^{\lceil\frac{r-1}{2}\rceil}\}.\]
Putting everything together, we find that the spectrum of \(\mathfrak{g}_{r}\) is given by
\[\{-1^{b_{r-1}+\lceil\frac{r}{2}\rceil},0^{a_{r-1}+r+\lceil\frac{r-1}{2} \rceil+1},1^{a_{r-1}+r+\lceil\frac{r-1}{2}\rceil+1},2^{b_{r-1}+\lceil\frac{r }{2}\rceil}\}.\]
The result follows.
Considering the spectrum formula of Theorem 60, we are immediately led to the following.
**Corollary 61**.: _For \(r\geq 1\), if \(\mathfrak{g}_{r}=\mathfrak{p}^{A}\frac{2|\cdots|2|1}{2r+1}\), then \(\mathfrak{g}_{r}\) has the log-concave spectrum property._
**Remark 62**.: _Combining Theorems 52, 54, and 60 and Corollaries 61 and 53 with Corollaries 14, 17, and 18, we obtain similar results for related families of Frobenius, type-A seaweeds._
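The proof of Theorem 60 gives the explicit recurrence \(a_{r}=a_{r-1}+r+\lceil\frac{r-1}{2}\rceil+1\) and \(b_{r}=b_{r-1}+\lceil\frac{r}{2}\rceil\), with \(a_{1}=2\) and \(b_{1}=1\). A minimal Python sketch (illustrative only) iterates the recurrence and confirms \(a_{r}>b_{r}\), which is all that the log-concavity of the multiplicity sequence \((b_{r},a_{r},a_{r},b_{r})\) requires.

```python
def multiplicities_60(r):
    """(a_r, b_r) in Theorem 60's spectrum {-1^b, 0^a, 1^a, 2^b} of p^A(2|...|2|1)/(2r+1)."""
    a, b = 2, 1  # base case r = 1
    for s in range(2, r + 1):
        a += s + s // 2 + 1   # ceil((s-1)/2) = s // 2 for an integer s
        b += (s + 1) // 2     # ceil(s/2)
    return a, b

for r in range(1, 30):
    a, b = multiplicities_60(r)
    assert a > b  # log-concavity of (b, a, a, b) reduces to a^2 >= a * b
```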
### Parameterized by number of 4's
In this subsection, we consider families of Frobenius, type-A seaweeds which are parameterized by the number of 4's in the defining compositions. More specifically, we determine the spectra of type-A seaweeds of the form
\[\mathfrak{p}^{A}\frac{k|4|\cdots|4}{k+2|4|\cdots|4|2}\qquad\text{and}\qquad \mathfrak{p}^{A}\frac{k|4|\cdots|4|2}{k+2|4|\cdots|4}.\]
Similar to Section 4.1, we start by establishing a general result concerning the relationship between the spectra of the Frobenius, type-A seaweeds
\[\mathfrak{g}=\mathfrak{p}^{A}\frac{a_{1}|\cdots|a_{m}|2}{b_{1}|\ldots|b_{t}}, \quad\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{ 4|\cdots|4}^{r}}{b_{1}|\ldots|b_{t}|\underbrace{4|\cdots|4}_{r-1}|2},\quad \text{and}\quad\mathfrak{g}^{\prime\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots| a_{m}|\overbrace{4|\cdots|4}^{r}|2}{b_{1}|\ldots|b_{t}|\underbrace{4|\cdots|4}_{r}},\]
for \(r\geq 1\). For example, taking \(\mathfrak{g}=\frac{1|2}{3}\) and \(r=1\), we have \(\mathfrak{g}^{\prime}=\frac{1|4}{3|2}\) and \(\mathfrak{g}^{\prime\prime}=\frac{1|4|2}{3|4}\). The oriented meanders of these type-A seaweeds are illustrated in Figure 20.
**Lemma 63**.: _If \(\mathfrak{g}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|2}{b_{1}|\ldots|b_{t}}\) is a Frobenius, type-A seaweed with spectrum \(S\), then \(\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|4}{b_{1}|\ldots| b_{t}|2}\) is a Frobenius, type-A seaweed with spectrum \(S\cup\{-1,0^{3},1^{3},2\}\)._
Proof.: Let \(n=2+\sum_{i=1}^{m}a_{i}=\sum_{j=1}^{t}b_{j}\). Evidently, \(\overrightarrow{M}(\mathfrak{g}^{\prime})\) is equal to \(\overrightarrow{M}(\mathfrak{g})\) with the removal of the directed edge \((v_{n},v_{n-1})\), the addition of vertices \(v_{n+1}\) and \(v_{n+2}\), and the addition of directed edges \((v_{n+1},v_{n})\), \((v_{n+2},v_{n-1})\), and \((v_{n+1},v_{n+2})\). Considering Corollary 3, it follows that \(\mathfrak{g}^{\prime}\) is Frobenius. Now, setting \(a_{0}=b_{0}=0\) and \(a_{m+1}=2\), note that the spectrum of \(\mathfrak{g}^{\prime}\) is the multiset
\[S^{\prime}=\bigcup_{k=1}^{m+1} \left\{w(P_{i,j}(\mathfrak{g}^{\prime}))\ |\ 1+\sum_{l=0}^{k-1}a_{l}\leq j<i\leq\sum_{l=1}^{k}a_{l}\right\}\] \[\cup\bigcup_{k=1}^{t}\left\{w(P_{i,j}(\mathfrak{g}^{\prime}))\ |\ 1+ \sum_{l=0}^{k-1}b_{l}\leq i<j\leq\sum_{l=1}^{k}b_{l}\right\}\] \[\cup\{0^{n+1},w(P_{n+1,n-1}(\mathfrak{g}^{\prime})),w(P_{n+1,n}( \mathfrak{g}^{\prime})),w(P_{n+1,n+2}(\mathfrak{g}^{\prime}))\}\] \[\cup\{w(P_{n+2,n-1}(\mathfrak{g}^{\prime})),w(P_{n+2,n}( \mathfrak{g}^{\prime})),w(P_{n+2,n+1}(\mathfrak{g}^{\prime}))\};\]
that is,
\[S^{\prime}=\bigcup_{k=1}^{m+1} \left\{w(P_{i,j}(\mathfrak{g}^{\prime}))\ |\ 1+\sum_{l=0}^{k-1}a_{l}\leq j<i\leq\sum_{l=1}^{k}a_{l}\right\}\] \[\cup\bigcup_{k=1}^{t}\left\{w(P_{i,j}(\mathfrak{g}^{\prime}))\ |\ 1+ \sum_{l=0}^{k-1}b_{l}\leq i<j\leq\sum_{l=1}^{k}b_{l}\right\}\] \[\cup\{-1,0^{n+2},1^{3},2\}.\]
We claim that \(w(P_{i,j}(\mathfrak{g}^{\prime}))=w(P_{i,j}(\mathfrak{g}))\), for \(1\leq i\neq j\leq n\). To establish the claim, we break it into two cases.
**Case 1:**\(\{v_{n-1},v_{n}\}\) is not an edge in \(P_{i,j}(\mathfrak{g})\). Considering the relationship between \(\overrightarrow{M}(\mathfrak{g})\) and \(\overrightarrow{M}(\mathfrak{g}^{\prime})\) outlined above, we have that \(P_{i,j}(\mathfrak{g})=P_{i,j}(\mathfrak{g}^{\prime})\), and the claim follows.
**Case 2:**\(\{v_{n-1},v_{n}\}\) is an edge in \(P_{i,j}(\mathfrak{g})\). In this case \(P_{i,j}(\mathfrak{g})\) can be decomposed into three (possibly trivial) subpaths as follows: either
* \(P_{i,j}(\mathfrak{g})\) is equal to the concatenation of the paths \(P_{i,n-1}(\mathfrak{g})\), \(P_{n-1,n}(\mathfrak{g})\), and \(P_{n,j}(\mathfrak{g})\), or
* \(P_{i,j}(\mathfrak{g})\) is equal to the concatenation of the paths \(P_{i,n}(\mathfrak{g})\), \(P_{n,n-1}(\mathfrak{g})\), and \(P_{n-1,j}(\mathfrak{g})\).
Assume that \(P_{i,j}(\mathfrak{g})\) is equal to the concatenation of the paths \(P_{i,n-1}(\mathfrak{g})\), \(P_{n-1,n}(\mathfrak{g})\), and \(P_{n,j}(\mathfrak{g})\); the other case follows via a similar argument. Considering the relationship between \(\overrightarrow{M}(\mathfrak{g})\) and \(\overrightarrow{M}(\mathfrak{g}^{\prime})\) outlined above,
it follows that \(P_{i,j}(\mathfrak{g}^{\prime})\) is the concatenation of the paths \(P_{i,n-1}(\mathfrak{g}^{\prime})\), \(P_{n-1,n+2}(\mathfrak{g}^{\prime})\), \(P_{n+2,n+1}(\mathfrak{g}^{\prime})\), \(P_{n+1,n}(\mathfrak{g}^{\prime})\), and \(P_{n,j}(\mathfrak{g}^{\prime})\), where \(P_{i,n-1}(\mathfrak{g}^{\prime})=P_{i,n-1}(\mathfrak{g})\) and \(P_{n,j}(\mathfrak{g}^{\prime})=P_{n,j}(\mathfrak{g})\). Thus,
\[w(P_{i,j}(\mathfrak{g}^{\prime})) =w(P_{i,n-1}(\mathfrak{g}^{\prime}))+w(P_{n-1,n+2}(\mathfrak{g}^ {\prime}))+w(P_{n+2,n+1}(\mathfrak{g}^{\prime}))+w(P_{n+1,n}(\mathfrak{g}^{ \prime}))+w(P_{n,j}(\mathfrak{g}^{\prime}))\] \[=w(P_{i,n-1}(\mathfrak{g}^{\prime}))-1-1+1+w(P_{n,j}(\mathfrak{g }^{\prime}))\] \[=w(P_{i,n-1}(\mathfrak{g}))-1+w(P_{n,j}(\mathfrak{g}))\] \[=w(P_{i,n-1}(\mathfrak{g}))+w(P_{n-1,n}(\mathfrak{g}))+w(P_{n,j}( \mathfrak{g}))\] \[=w(P_{i,j}(\mathfrak{g})),\]
establishing the claim.
Therefore,
\[S^{\prime}=\bigcup_{k=1}^{m+1} \left\{w(P_{i,j}(\mathfrak{g}))\ |\ 1+\sum_{l=0}^{k-1}a_{l}\leq j<i\leq\sum_{l=1}^{k}a_{l}\right\}\] \[\cup\bigcup_{k=1}^{t}\left\{w(P_{i,j}(\mathfrak{g}))\ |\ 1+\sum_{l=0}^{k-1}b_{l}\leq i<j\leq\sum_{l=1}^{k}b_{l}\right\}\] \[\cup\{-1,0^{n+2},1^{3},2\};\]
that is,
\[S^{\prime}=S\cup\{-1,0^{3},1^{3},2\}.\]
**Theorem 64**.: _If \(\mathfrak{g}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|2}{b_{1}|\ldots|b_{t}}\) is a Frobenius, type-A seaweed with spectrum \(S\), then_
1. \[\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{ 4|\cdots|4}^{r}}{b_{1}|\ldots|b_{t}|\underbrace{4|\cdots|4}_{r-1}|2},\]
_for_ \(r\geq 1\)_, is a Frobenius, type-A seaweed with spectrum_ \(S\cup\{-1^{2r-1},0^{6r-3},1^{6r-3},2^{2r-1}\}\)_; and_
2. \[\mathfrak{g}^{\prime\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{4|\cdots|4}^{r}|2}{b_{1}|\ldots|b_{t}|\underbrace{4|\cdots|4}_{r}},\] _for_ \(r\geq 1\)_, is a Frobenius, type-A seaweed with spectrum_ \(S\cup\{-1^{2r},0^{6r},1^{6r},2^{2r}\}\)_._
Proof.: By induction on \(r\). An application of Lemma 63 establishes the result for \(r=1\) in (1). Then applying Corollary 14, followed by Lemma 63, and then Corollary 14 again, to
\[\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|4}{b_{1}|\ldots|b_{t}|2}\]
establishes the case \(r=1\) for (2). Assume the result holds for \(r-1\geq 1\). In particular, assume that the algebra
\[\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{4|\cdots|4}^{r-1}|2}{b_{1 }|\ldots|b_{t}|\underbrace{4|\cdots|4}_{r-1}}\]
is Frobenius with spectrum equal to \(S\cup\{-1^{2r-2},0^{6r-6},1^{6r-6},2^{2r-2}\}\). Applying Lemma 63, we find that the algebra
\[\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}| \overbrace{4|\cdots|4}^{r}}{b_{1}|\ldots|b_{t}|\underbrace{4|\cdots|4}_{r-1}|2}\]
is Frobenius with spectrum equal to \(S\cup\{-1^{2r-1},0^{6r-3},1^{6r-3},2^{2r-1}\}\), as desired. Now, applying Corollary 14, followed by Lemma 63, and then Corollary 14 again, to \(\mathfrak{g}^{\prime}\), we find that the algebra
\[\mathfrak{g}^{\prime\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}| \overbrace{4|\cdots|4}^{r}|2}{b_{1}|\ldots|b_{t}|\underbrace{4|\cdots|4}_{r}}\]
is Frobenius with spectrum equal to \(S\cup\{-1^{2r},0^{6r},1^{6r},2^{2r}\}\), as desired. The result follows by induction.
Considering the spectrum formula of Theorem 64, we are immediately led to the following.
**Corollary 65**.: _Let \(\mathfrak{g}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|2}{b_{1}|\ldots|b_{t}}\) be a Frobenius, type-A seaweed with the unimodal spectrum property. If_
\[\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}| \overbrace{4|\cdots|4}^{r}}{b_{1}|\ldots|b_{t}|\underbrace{4|\cdots|4}_{r-1}|2} \quad\text{and}\quad\mathfrak{g}^{\prime\prime}=\mathfrak{p}^{A}\frac{a_{1}| \ldots|a_{m}|\overbrace{4|\cdots|4}^{r}|2}{b_{1}|\ldots|b_{t}|\underbrace{4| \cdots|4}_{r}},\]
_then \(\mathfrak{g}^{\prime}\) and \(\mathfrak{g}^{\prime\prime}\) have the unimodal spectrum property._
As in Corollary 53, "unimodal" cannot be strengthened to "log-concave" in the conclusion of Corollary 65. A counterexample is provided by the following theorem (see Remark 67), which is a corollary of Theorems 38 and 64.
**Theorem 66**.: _Let \(k=2m-1\geq 1\)._
1. _For_ \(k=2m-1\geq 1\)_, if_ \[\mathfrak{g}_{k,r}=\mathfrak{p}^{A}\frac{k|\overbrace{4|\cdots|4}^{r}}{k+2| \underbrace{4|\cdots|4}_{r-1}|2},\] _then_ \(\mathfrak{g}_{k,r}\) _has the unimodal spectrum property with spectrum equal to_ \(\{-1^{2r},0^{6r-1},1^{6r-1},2^{2r}\}\) _if_ \(k=1\)_,_ \(\{-2,-1^{2r+2},0^{6r+2},1^{6r+2},2^{2r+2},3\}\) _if_ \(k=3\)_, and_ \[\{-m,(-m+1)^{3},-1^{2(k+r)-5},0^{2(k+3r)-4},1^{2(k+3r)-4},2^{2(k+r)- 5},m^{3},m+1\}\] \[\cup\bigcup_{i=2}^{m-2}\{(-m+i)^{4i-2},(m-i+1)^{4i-2}\},\] _if_ \(k>3\)_._
2. _For_ \(k=2m-1\geq 1\)_, if_ \[\mathfrak{g}_{k,r}=\mathfrak{p}^{A}\frac{k|\overbrace{4|\cdots|4}^{r}|2}{k+2| \underbrace{4|\cdots|4}_{r}},\]
_then_ \(\mathfrak{g}_{k,r}\) _has the unimodal spectrum property with spectrum equal to_ \(\{-1^{2r+1},0^{6r+2},1^{6r+2},2^{2r+1}\}\) _if_ \(k=1\)_,_ \(\{-2,-1^{2r+3},0^{6r+5},1^{6r+5},2^{2r+3},3\}\) _if_ \(k=3\)_, and_ \[\{-m,(-m+1)^{3},-1^{2(k+r)-4},0^{2(k+3r)-1},1^{2(k+3r)-1},2^{2(k+r)- 4},m^{3},m+1\}\] \[\cup\bigcup_{i=2}^{m-2}\{(-m+i)^{4i-2},(m-i+1)^{4i-2}\},\] _if_ \(k>3\)_._
**Remark 67**.: _Utilizing Theorem 66, we can construct examples of Frobenius, type-A seaweeds which do not have the log-concave spectrum property. For example, let \(\mathfrak{g}=\mathfrak{p}^{A}\frac{5|2}{7}\) and \(\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{5|4|4|2}{7|4|4|}.\) Recall from Theorem 38 that the spectrum of \(\mathfrak{g}\) is \(\{-3,-2^{3},-1^{6},0^{9},1^{9},2^{6},3^{3},4\}\). On the other hand, by Theorem 64, we find that the spectrum of \(\mathfrak{g}^{\prime}\) is \(\{-3,-2^{3},-1^{10},0^{21},1^{21},2^{10},3^{3},4\}.\) Clearly, \(\mathfrak{g}^{\prime}\) does not have the log-concave spectrum property._
**Remark 68**.: _Note that for fixed \(k\) and varying values of \(r\), the collection of distinct eigenvalues in the spectra of the Frobenius, type-A seaweeds considered in Theorem 66 is fixed._
**Remark 69**.: _Combining Theorems 64 and 66 and Corollary 65 with Corollaries 14, 17, and 18, we obtain similar results for related families of Frobenius, type-A seaweeds._
Considering the results found above, as well as some experimental evidence, we are naturally led to the following conjectures.
**Conjecture 70**.: _Let \(\mathfrak{g}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|k}{b_{1}|\ldots|b_{t}}\), for \(k\geq 1\), be a Frobenius, type-A seaweed with spectrum \(S\). If_
\[\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{2k|\cdots|2k}^{r}}{b_{1}|\ldots|b_{t}|\underbrace{2k|\cdots|2k}_{r-1}|k}\quad\text{or}\quad\mathfrak{g}^{\prime}=\mathfrak{p}^{A}\frac{a_{1}|\ldots|a_{m}|\overbrace{2k|\cdots|2k}^{r}|k}{b_{1}|\ldots|b_{t}|\underbrace{2k|\cdots|2k}_{r}},\]
_for \(r\geq 1\), then \(\mathfrak{g}^{\prime}\) is a Frobenius, type-A seaweed with spectrum \(S\cup S^{\prime}\), where \(S^{\prime}\) is a multiset consisting of values contained in \(S\). Moreover, if \(\mathfrak{g}\) has the unimodal spectrum property, then so does \(\mathfrak{g}^{\prime}\)._
**Conjecture 71**.: _If_
\[\mathfrak{g}=\mathfrak{p}^{A}\frac{\overbrace{2k|\cdots|2k}^{r}|1}{2kr+1},\]
_for \(k,r\geq 1\), then \(\mathfrak{g}\) is Frobenius, and the set of distinct eigenvalues contained in the spectrum of \(\mathfrak{g}\) is equal to the collection of integers contained in the interval_
\[\begin{cases}[-2k+1,2k],&\text{if $r$ is odd};\\,&\text{if $r$ is even}.\end{cases}\]
_Moreover, \(\mathfrak{g}\) has the unimodal spectrum property._
**Remark 72**.: _The family of type-A seaweeds considered in Conjecture 71 provide us with yet another example of a Frobenius, type-A seaweed which does not have the log-concave spectrum property. One can compute that the spectrum of \(\mathfrak{g}=\mathfrak{p}^{A}\frac{8|8|8|1}{25}\) is equal to_
\[\{-7^{2},-6^{5},-5^{8},-4^{13},-3^{23},-2^{37},-1^{52},0^{64},1^{64},2^{52},3 ^{37},4^{23},5^{13},6^{8},7^{5},8^{2}\}.\]
_Notice that \(8^{2}=64<65=5\cdot 13,\) so \(\mathfrak{g}\) does not have the log-concave spectrum property._
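The failure recorded in Remark 72 is quick to confirm numerically; in the sketch below (illustrative only), the multiplicity list is copied from the displayed spectrum, with index \(0\) corresponding to the eigenvalue \(-7\).

```python
mult = [2, 5, 8, 13, 23, 37, 52, 64, 64, 52, 37, 23, 13, 8, 5, 2]  # eigenvalues -7, ..., 8
fails = [j for j in range(1, len(mult) - 1) if mult[j] ** 2 < mult[j - 1] * mult[j + 1]]
print(fails)  # [2, 13]: log-concavity fails at eigenvalues -5 and 6, since 8^2 = 64 < 5*13 = 65
```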
**Conjecture 73**.: _If_
\[\mathfrak{g}_{k,r}=\mathfrak{p}^{A}\frac{\overbrace{2k|\cdots|2k}^{r}|1}{1|\underbrace{2k|\cdots|2k}_{r}},\]
_for \(k,r\geq 1\), then the set of distinct eigenvalues contained in the spectrum of \(\mathfrak{g}_{k,r}\) is equal to the collection of integers contained in the interval \([-k+1,k]\), and \(\mathfrak{g}_{k,r}\) has the log-concave spectrum property. Moreover, if \(i\in[-k+1,0]\), then the multiplicity of \(i\) in the spectrum of \(\mathfrak{g}_{k,r}\) is equal to the multiplicity of \(i-1\) in the spectrum of \(\mathfrak{g}_{k+1,r}\); similarly, if \(i\in(0,k]\), then the multiplicity of \(i\) in the spectrum of \(\mathfrak{g}_{k,r}\) is equal to the multiplicity of \(i+1\) in the spectrum of \(\mathfrak{g}_{k+1,r}\)._
## 5 Epilogue
The inductive proof methods outlined in this article correspond naturally to a set of "winding moves" on the meander of a Frobenius, type-A seaweed. Such winding moves were first introduced by Panyushev ([16], 2001) - and later recast graph-theoretically by Coll et al. ([2], 2012) - for the purpose of simplifying computations involving seaweeds. The initial, overall goal of our study was to prove Conjecture 8 by tracking the effects of the winding moves on the spectra of Frobenius, type-A seaweeds via (extended) spectrum matrices. It quickly became clear, however, that the spectrum of a generic Frobenius, type-A seaweed grew increasingly wild upon iterative applications of winding moves; e.g., compare Theorems 38 and 45. Thus, we still have the following general question.
**Question 74**.: _How do the winding moves on the meander of a Frobenius, type-A seaweed \(\mathfrak{g}\) affect the spectrum of \(\mathfrak{g}\)?_
Note that upon successfully answering this question, one should obtain a combinatorial proof of Conjecture 8.
While the results of Section 3 provide significant insight into answering Question 74, such results have led to further questions. In particular, to the authors' knowledge, this article marks the first acknowledgement of the log-concave spectrum property for seaweeds. While not a property of Frobenius, type-A seaweeds in general, per Section 4, the results of Section 3 suggest that all Frobenius, maximal parabolic, type-A seaweeds do possess the log-concave spectrum property. Thus, we have the following conjecture.
**Conjecture 75**.: _If \(\mathfrak{g}\) is a Frobenius, maximal parabolic, type-A seaweed, then \(\mathfrak{g}\) has the log-concave spectrum property._
The conjecture above is certainly not comprehensive. For example, the seaweed \(\mathfrak{p}^{A}\,\frac{1|2|2|4|4|2}{2|2|2|1|4|4}\) is not parabolic, yet it possesses the log-concave spectrum property. We pose the following question.
**Question 76**.: _Which Frobenius, type-A seaweeds admit the log-concave spectrum property?_
In contrast to Section 3, in which the focus was primarily on applications of winding moves that increase the values of the parts in the defining compositions of the seaweeds while leaving the number of parts fixed, the motivation for Section 4 was to increase the number of parts without altering the size of each. In addition to providing a construction procedure for Frobenius, type-A seaweeds without the log-concave spectrum property, applications of such winding moves preserve the set of distinct eigenvalues in the spectra of the associated algebras. To the authors' knowledge, this is the first instance of such behavior being noted, and it leads to the following question.
**Question 77**.: _Are the methods discussed in Section 4 the only means by which to construct a family of Frobenius, type-A seaweeds with a fixed set of distinct eigenvalues?_
Recall that, in Section 3, our proof methods required the determination of formulas for the extended spectrum of each Frobenius, type-A seaweed. While it follows from its definition that the extended spectrum
of a Frobenius, type-A seaweed consists entirely of integers centered at \(0\), it is unclear whether these integers are generally unbroken and whether their multiplicities generally form a log-concave (resp., unimodal) sequence. Thus, we ask
**Question 78**.: _Is the set of values in the extended spectrum of a Frobenius type-A seaweed unbroken?_
and
**Question 79**.: _If the values in the extended spectrum of a Frobenius, type-A seaweed \(\mathfrak{g}\) are written in increasing order, is the corresponding sequence of multiplicities log-concave (resp., unimodal)?_
While an answer to Question 78 would be an interesting result in its own right, consideration of Question 79 may also provide insight into an answer to Question 74. Part of the challenge with using meanders when attempting to answer Question 74 is that it can be difficult to discern which paths in a given meander contribute their weights to the spectrum of the associated algebra. However, this particular difficulty is resolved when computing the extended spectrum of a Frobenius, type-A seaweed \(\mathfrak{g}\), since the weights of all paths in \(\overrightarrow{M}(\mathfrak{g})\) are counted. Thus, it is plausible that a purely graph-theoretic approach may yield answers to Question 79, which, in turn, may provide an avenue toward a proof (or counterexample) of Conjecture 8.
|
2307.07701 | Quantum metrology in the noisy intermediate-scale quantum era | Quantum metrology pursues the physical realization of higher-precision
measurements to physical quantities than the classically achievable limit by
exploiting quantum features, such as entanglement and squeezing, as resources.
It has potential applications in developing next-generation frequency
standards, magnetometers, radar, and navigation. However, the ubiquitous
decoherence in the quantum world degrades the quantum resources and forces the
precision back to or even worse than the classical limit, which is called the
no-go theorem of noisy quantum metrology and greatly hinders its applications.
Therefore, how to realize the promised performance of quantum metrology in
realistic noisy situations attracts much attention in recent years. We will
review the principle, categories, and applications of quantum metrology.
Special attention will be paid to different quantum resources that can bring
quantum superiority in enhancing sensitivity. Then, we will introduce the no-go
theorem of noisy quantum metrology and its active control under different kinds
of noise-induced decoherence situations. | Lin Jiao, Wei Wu, Si-Yuan Bai, Jun-Hong An | 2023-07-15T04:05:47Z | http://arxiv.org/abs/2307.07701v2 | # Quantum metrology and its noisy effects
###### Abstract
Quantum metrology pursues the physical realization of higher-precision measurements of physical quantities than the classically achievable limit by exploiting quantum features, such as quantum entanglement and squeezing, as resources. It has potential applications in developing next-generation frequency standards, magnetometers, radar, and navigation. However, the ubiquitous decoherence in the quantum world degrades the quantum resources and forces the precision back to or even worse than the classical limit, which is called the no-go theorem of noisy quantum metrology and greatly hinders its applications. Therefore, how to realize the promised performance of quantum metrology in realistic noisy situations has attracted much attention in recent years. We will review the principle, categories, and applications of quantum metrology. Special attention will be paid to different quantum resources that can bring quantum superiority in enhancing the sensitivity. Then, we will introduce the no-go theorem of noisy quantum metrology and its active control under different kinds of noise-induced decoherence situations.
## I Introduction
Quantum metrology aims at achieving a highly precise measurement of a physical quantity of interest with the help of certain quantum resources as well as the principles of quantum mechanics [1; 2; 3]. These resources have no classical counterparts and result in quantum superiority over traditional metrology schemes. The sensitivity of classical metrology schemes is commonly constrained by the so-called shot-noise limit (SNL) \(\delta\theta\propto N^{-1/2}\), where \(\delta\theta\) is the root-mean-square error of the quantity \(\theta\) and \(N\) is the number of repeated measurements. The SNL is guaranteed by the central limit theorem of statistics. By increasing the number of measurements, the metrological error can be reduced. In this sense, \(N\) is viewed as the number of resources in classical schemes. Generalizing to the quantum case, the concept of the number of resources is greatly extended. It has been demonstrated that if quantum resources, such as entanglement [4; 5; 6; 7] or squeezing [8; 9; 10], are used, the metrological sensitivity can surpass the classical SNL [2; 3; 11; 12; 13; 14]. Thus, the number of entangled particles, the squeezing parameter, and the size of a many-body system experiencing a quantum phase transition are generally regarded as the number of quantum resources. It has been found that the metrology precision with an \(N\)-body entangled state reaches the Heisenberg limit (HL) scaling \(N^{-1}\), which beats the SNL by a factor of \(N^{1/2}\). Such an enhancement inspires many fascinating applications in gravitational-wave detection [15; 16; 17], quantum radar [18; 19; 20], atomic clocks [21; 22; 23; 24], magnetometers [25; 26; 27; 14], gravimeters [28; 29], navigation [4; 30], and biological monitoring [31; 32; 33].
Despite this attractive progress, quantum metrology has not yet provided a powerful enough superiority to outperform its state-of-the-art commercial counterparts. It is still in a laboratory-based proof-of-principle status. One of the challenges is decoherence caused by different kinds of noises. In any practical quantum metrology scheme, the quantum probe unavoidably interacts with its surrounding noise and becomes an open system [34; 35; 36; 37; 38; 39; 40; 41]. The resulting evolution impairs the quantum coherence of the probe, a process called decoherence. Quantum resources are very fragile and can be easily destroyed by decoherence. Decoherence results in the deterioration of the performance of quantum metrology. It was found that the metrology error generally returns to the SNL at an optimal transient encoding time and becomes divergent in the long-encoding-time regime under the influence of decoherence [42; 43; 44; 45; 46; 47; 48; 49]. Such a phenomenon is called the no-go theorem of noisy quantum metrology [50] and is the main obstacle to achieving high-precision quantum metrology in practice. However, a clear imperfection in this no-go theorem is that it is based on the Born-Markovian approximation to describe the decoherence. Thus, determining whether this no-go theorem is ostensible or fundamental, and whether it can be overcome, is highly desirable from both theoretical and experimental perspectives.
The potential benefits of the advanced-technology innovations offered by quantum metrology make overcoming the challenge set by decoherence an extremely worthwhile goal. To minimize the unwanted effects of decoherence on quantum metrology, various strategies have been proposed. Many efforts, e.g., adaptive [51; 52] and nondemolition [53] measurements, correlated decoherence [54], purification [55], error correction [56; 57; 58; 59; 60; 61; 62], and dynamical control [63; 64; 65], have been made to restore the HL. A mechanism of quantum reservoir engineering to solve the divergence problem was proposed in [66; 67; 68; 69]. However, a decoherence-control scheme that simultaneously recovers the HL for arbitrary \(N\) and overcomes the error-divergence problem at long encoding times is still unavailable. In this paper, we give a brief review of the decoherence-control schemes used to realize a high-precision metrological performance.
This paper is organized as follows. In Sec. II, we review the basic concepts as well as the general formalism of quantum parameter estimation. The Fisher information as well as two kinds of Cramer-Rao bounds are introduced. In Sec. III, we review the categories and the respective physical principles of ideal quantum-metrology schemes. The widely used quantum-metrology schemes based on Ramsey spectroscopy, the Mach-Zehnder interferometer, and the Sagnac interferometer are introduced. In Sec. IV, we review the quantum resources other than entanglement and squeezing, including spin squeezing, quantum criticality, and quantum chaos, that are useful in enhancing the sensitivity of quantum metrology. The decoherence effects on quantum metrology are reviewed in Sec. V. In Sec. VI, the widely used control schemes to suppress decoherence in noisy quantum metrology are discussed. The conclusions and outlook of this paper are drawn in Sec. VII.
## II Quantum parameter estimation
Any quantum metrology scheme generally includes three steps. To measure an unknown quantity \(\theta\) of a physical system, we first prepare a quantum probe in a specific state \(\varrho_{\text{in}}\) containing a certain quantum resource. Then, we couple the probe to the system to encode \(\theta\) into the probe state via a dynamical mapping as \(\varrho_{\theta}=\hat{\Lambda}_{\theta}(\varrho_{\text{in}})\), where the mapping operator \(\hat{\Lambda}_{\theta}\) may be unitary or nonunitary. Finally, we measure a certain observable \(\hat{O}\equiv\sum_{j}o_{j}|o_{j}\rangle\langle o_{j}|\) of the probe in \(\varrho_{\theta}\) and infer the value of \(\theta\) from the result. The measurement yields an outcome \(o_{j}\) with a probability \(p(j|\theta)=\langle o_{j}|\varrho_{\theta}|o_{j}\rangle\). The estimation sensitivity of \(\theta\) is constrained by the famous Cramer-Rao bound [70]
\[\delta^{2}\theta\geq\frac{1}{vF_{\theta}}, \tag{1}\]
where \(\delta\theta\) is the root-mean-square error of \(\theta\), \(v\) is the number of repeated measurements, and \(F_{\theta}\) is the classical Fisher information (CFI) corresponding to the selected measurement scheme [71; 72; 73]. The CFI can be evaluated from the probability distribution \(p(j|\theta)\) as
\[F_{\theta}=\sum_{j}p(j|\theta)\bigg{[}\frac{\partial}{\partial\theta}\ln p(j| \theta)\bigg{]}^{2}. \tag{2}\]
The above projective measurement can be generalized to any positive-operator valued measure \(\{\hat{\Pi}_{j}\}\), which satisfies \(\sum_{j}\hat{\Pi}_{j}^{\dagger}\hat{\Pi}_{j}=\mathbf{I}\). The corresponding probability distribution reads \(p(j|\theta)=\text{Tr}(\hat{\Pi}_{j}\varrho_{\theta}\hat{\Pi}_{j}^{\dagger})\).
Equation (2) reveals that the CFI strongly relies on the choice of measured observables. Optimizing all the possible measurement observables, the ultimate estimation precision of \(\theta\) is constrained by the quantum Cramer-Rao bound
\[\delta^{2}\theta\geq\frac{1}{v\mathcal{F}_{\theta}}, \tag{3}\]
where \(\mathcal{F}_{\theta}\equiv\text{Tr}(\hat{\varsigma}^{2}\varrho_{\theta})\) is the so-called quantum Fisher information (QFI) [72; 73]. Here \(\hat{\varsigma}\) is the symmetric logarithmic derivative, determined by \(\partial_{\theta}\varrho_{\theta}=\frac{1}{2}(\hat{\varsigma}\varrho_{\theta}+\varrho_{\theta}\hat{\varsigma})\). The QFI describes the maximal information on \(\theta\) extractable from \(\varrho_{\theta}\), while the CFI denotes the maximal information extractable from the selected measurement scheme. Thus, one can immediately conclude that \(\mathcal{F}_{\theta}\geq F_{\theta}\). If the selected measurement scheme is the optimal one, then \(F_{\theta}\) saturates \(\mathcal{F}_{\theta}\). Unfortunately, there is no general way to find the optimal measurement observable. In this sense, designing the physically optimal measurement scheme that can saturate the best attainable precision bounded by the QFI is of importance in the study of quantum metrology.
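To make Eqs. (1)-(3) concrete, the following minimal Python sketch (our own toy example; the probe state and measurement are not taken from any reference) evaluates the CFI of Eq. (2) by central finite differences for a qubit probe \(|\psi_{\theta}\rangle=\cos(\theta/2)|0\rangle+\sin(\theta/2)|1\rangle\) measured in the computational basis; for this pure state that measurement is optimal, so the CFI saturates the QFI, which equals 1.

```python
import numpy as np

def cfi(p_of_theta, theta, eps=1e-6):
    """Classical Fisher information, Eq. (2), via central finite differences."""
    p = p_of_theta(theta)
    dp = (p_of_theta(theta + eps) - p_of_theta(theta - eps)) / (2 * eps)
    return np.sum(dp**2 / p)

# Probe |psi_theta> = cos(theta/2)|0> + sin(theta/2)|1>, measured in the z basis.
p_z = lambda th: np.array([np.cos(th / 2)**2, np.sin(th / 2)**2])

print(cfi(p_z, theta=0.7))   # ~1.0, saturating the QFI of this pure state
```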
In some studies, the metrological sensitivity is alternatively quantified by using the so-called error propagation formula
\[\delta^{2}\theta=\frac{\Delta^{2}O}{|\partial\langle\hat{O}\rangle/\partial \theta|^{2}},\]
where \(\langle\hat{O}\rangle=\text{Tr}(\hat{O}\varrho_{\theta})\) and \(\Delta^{2}O=\langle\hat{O}^{2}\rangle-\langle\hat{O}\rangle^{2}\) are the expectation value and the fluctuation of the chosen operator \(\hat{O}\) in the state \(\varrho_{\theta}\), respectively. Compared with the CFI, the error propagation formula provides an easier way to characterize the performance of a given quantum metrology scheme.
When the probe is a discrete-variable system, its density matrix \(\varrho_{\theta}\) is conveniently described in the Bloch representation as
\[\varrho_{\theta}=\frac{1}{d}\bigg{[}\mathbf{1}_{d}+\sqrt{\frac{d(d-1)}{2}} \boldsymbol{r}\cdot\boldsymbol{\zeta}\bigg{]}, \tag{4}\]
with \(d\) being the dimension of the probe, \(\boldsymbol{r}=\text{Tr}(\varrho_{\theta}\boldsymbol{\zeta})\) being the Bloch vector, and \(\boldsymbol{\zeta}\) being the \((d^{2}-1)\)-dimensional vector of the generators of the group \(\text{SU}(d)\). The QFI is calculated via [73]
\[\mathcal{F}_{\theta}=(\partial_{\theta}\boldsymbol{r})^{\text{T}}\cdot\bigg{[} \frac{d}{2(d-1)}\boldsymbol{G}-\boldsymbol{r}\cdot\boldsymbol{r}^{\text{T}} \bigg{]}^{-1}\cdot\partial_{\theta}\boldsymbol{r}, \tag{5}\]
where \(\boldsymbol{G}\) is a real symmetric matrix with
\[\boldsymbol{G}_{ij}=\frac{1}{2}\text{Tr}\Big{(}\varrho_{\theta}\{\zeta_{i}, \zeta_{j}\}\Big{)}. \tag{6}\]
The most widely used scenario is the single-qubit case with \(d=2\). Under this circumstance, the QFI reduces to
\[\mathcal{F}_{\theta}=|\partial_{\theta}\boldsymbol{r}|^{2}+\frac{(\boldsymbol{r}\cdot\partial_{\theta}\boldsymbol{r})^{2}}{1-|\boldsymbol{r}|^{2}} \tag{7}\]
for a mixed state, and \(\mathcal{F}_{\theta}=|\partial_{\theta}\boldsymbol{r}|^{2}\) for a pure state. Here, \(\boldsymbol{\zeta}=(\hat{\sigma}_{x},\hat{\sigma}_{y},\hat{\sigma}_{z})^{\text{T}}\) reduces to the vector of Pauli matrices.
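As a quick numerical check of Eq. (7), the sketch below (a hypothetical single-qubit model of our own choosing) evaluates the Bloch-vector formula by finite differences for a mixed state whose Bloch vector rotates in the equatorial plane, \(\boldsymbol{r}(\theta)=r_{0}(\cos\theta,\sin\theta,0)\); Eq. (7) then predicts \(\mathcal{F}_{\theta}=r_{0}^{2}\).

```python
import numpy as np

def qfi_qubit(r_of_theta, theta, eps=1e-6):
    """Single-qubit QFI from the Bloch vector, Eq. (7), via finite differences."""
    r = r_of_theta(theta)
    dr = (r_of_theta(theta + eps) - r_of_theta(theta - eps)) / (2 * eps)
    qfi = dr @ dr
    if 1 - r @ r > 1e-12:            # mixed-state correction term of Eq. (7)
        qfi += (r @ dr)**2 / (1 - r @ r)
    return qfi

r0 = 0.8                              # Bloch-vector length (degree of mixedness)
r_vec = lambda th: r0 * np.array([np.cos(th), np.sin(th), 0.0])
print(qfi_qubit(r_vec, 0.3))          # ~r0**2 = 0.64
```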
If the probe is a Gaussian Bosonic system with a set of annihilation and creation operators \(\boldsymbol{A}=\{\hat{a}_{1},\ldots,\hat{a}_{N},\hat{a}_{1}^{\dagger},\ldots,\hat{a}_{N}^{\dagger}\}\), its quantum state \(\varrho_{\theta}\) can be fully characterized by the first-order moments of the displacement vector \(\boldsymbol{d}\) and the second-order moments of the
covariant matrix \(\mathbf{\sigma}\)[74]. The elements of \(\mathbf{d}\) and \(\mathbf{\sigma}\) are, respectively, defined by \(\mathbf{d}_{i}=\text{Tr}(\rho_{\theta}\mathbf{A}_{i})\) and \(\mathbf{\sigma}_{ij}=\text{Tr}[\rho_{\theta}\{\Delta\mathbf{A}_{i},\Delta\mathbf{A}_{j}\}]\) with \(\Delta\mathbf{A}_{i}=\mathbf{A}_{i}-\mathbf{d}_{i}\). With expressions of \(\mathbf{d}\) and \(\mathbf{\sigma}\) at hand, the QFI with respect to the mixed Gaussian state \(\varrho_{\theta}\) is calculated as [75, 76, 77]
\[\mathcal{F}_{\theta}=\frac{1}{2}[\text{vec}(\partial_{\theta}\mathbf{\sigma})]^{ \dagger}\mathbf{M}^{-1}\text{vec}(\partial_{\theta}\mathbf{\sigma})+2(\partial_{ \theta}\mathbf{d})^{\dagger}\mathbf{\sigma}^{-1}\partial_{\theta}\mathbf{d}, \tag{8}\]
where \(\text{vec}(\cdot)\) denotes the vectorization of a given matrix, and \(\mathbf{M}=\mathbf{\sigma}\otimes\mathbf{\sigma}-\mathbf{\varpi}\otimes\mathbf{\varpi}\) with \([\mathbf{A}_{i},\mathbf{A}_{j}]=i\mathbf{\varpi}_{ij}\). In the pure state case, the QFI reduces to
\[\mathcal{F}_{\theta}=\frac{1}{4}\text{Tr}[(\mathbf{\sigma}^{-1}\partial_{\theta} \mathbf{\sigma})^{2}]+2(\partial_{\theta}\mathbf{d})^{\dagger}\mathbf{\sigma}^{-1} \partial_{\theta}\mathbf{d}. \tag{9}\]
## III Ideal quantum metrology
Depending on the nature of the Hilbert space of the probe, quantum metrology schemes can be classified into two categories: discrete- and continuous-variable schemes. Discrete-variable schemes are generally based on Ramsey spectroscopy, while continuous-variable ones are based on the Mach-Zehnder interferometer.
### Ramsey spectroscopy
In conventional Ramsey spectroscopy to measure the atomic frequency \(\omega_{0}\), see Fig. 1(a), one chooses the \(N\) atoms themselves as the probe and prepares their state in \(|\psi_{\text{in}}\rangle=|g\rangle^{\otimes N}\), where \(|g\rangle\) is the atomic ground state. First, a \(\pi/2\) microwave pulse with frequency \(\omega_{L}\) and time duration \(t=\pi/(2|\Delta|)\), with \(\Delta=\omega_{0}-\omega_{L}\), is applied on each atom. It converts \(|\psi_{\text{in}}\rangle\) into \(|\psi_{1}\rangle=\left(\frac{|g\rangle+|e\rangle}{\sqrt{2}}\right)^{\otimes N}\), where \(|e\rangle\) is the excited state. Second, the microwave is switched off and the atoms experience a free evolution governed by the Hamiltonian \(\hat{H}_{0}=\Delta\sum_{j}\hat{\sigma}_{j}^{\dagger}\hat{\sigma}_{j}\), with \(\hat{\sigma}_{j}=|g_{j}\rangle\langle e_{j}|\). It encodes \(\omega_{0}\) into the state
\[|\psi(t)\rangle=e^{-i\hat{H}_{0}t}|\psi_{1}\rangle=[(|g\rangle+e^{-i\Delta t} |e\rangle)/\sqrt{2}]^{\otimes N}. \tag{10}\]
Here, \(\omega_{L}\) enters \(\Delta\) in \(\hat{H}_{0}\) only to keep the rotating frame consistent with the one used in the first step. Third, a \(\pi/2\) microwave pulse is switched on again, which converts \(|\psi(t)\rangle\) into
\[|\psi_{\text{out}}\rangle=\big{[}\cos(\Delta t/2)|g\rangle+i\sin(\Delta t/2)| e\rangle\big{]}^{\otimes N}. \tag{11}\]
Finally, the excited-state population operator \(\hat{O}=\hat{\sigma}_{j}^{\dagger}\hat{\sigma}_{j}\) of each atom is measured. The outcome 1 is obtained with probability \(P_{1}=\sin^{2}\frac{\Delta t}{2}\) and 0 with probability \(P_{0}=\cos^{2}\frac{\Delta t}{2}\). The expectation value \(\langle\hat{O}\rangle=\sin^{2}\frac{\Delta t}{2}\), the variance \(\Delta O=|\sin\Delta t|/2\), and the CFI \(F_{\omega_{0}}=t^{2}\) are readily calculated. Repeating the measurements within a time duration \(T\), we acquire \(\upsilon=TN/t\) measurement results. Then, according to the central limit theorem, we have the uncertainty of \(\hat{O}\) as
\[\delta O=\frac{\Delta O}{\sqrt{TN/t}}=\frac{|\sin\Delta t|}{2\sqrt{TN/t}}. \tag{12}\]
The sensitivity of \(\omega_{0}\) is evaluated via the error propagation formula as
\[\delta\omega_{0}=\frac{\delta O}{|\partial_{\omega_{0}}\langle\hat{O}\rangle| }=(NTt)^{-1/2}, \tag{13}\]
which equals the sensitivity evaluated from the Cramer-Rao bound (1). Since no quantum resource is used in the input state, the scaling relation of \(\delta\omega_{0}\) with the atom number \(N\) is just the SNL. It can be evaluated that the QFI of Eq. (11) is
\[\mathcal{F}_{\omega_{0}}=t^{2}. \tag{14}\]
Thus, we obtain
\[\delta\omega_{0}=(\upsilon\mathcal{F}_{\omega_{0}})^{-1/2}=(NTt)^{-1/2}. \tag{15}\]
Therefore, the measurement scheme used in Eq. (13) saturates the quantum Cramer-Rao bound and is optimal.
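The chain of results in Eqs. (12)-(15) is easy to reproduce numerically. The sketch below (our own illustration; parameter values are arbitrary) computes the per-shot CFI of the Ramsey probabilities by finite differences, confirming \(F_{\omega_{0}}=t^{2}\) independently of the detuning, and checks that \((\upsilon F_{\omega_{0}})^{-1/2}\) coincides with the SNL expression \((NTt)^{-1/2}\) of Eq. (13).

```python
import numpy as np

# Per-atom outcome probabilities of conventional Ramsey spectroscopy, Eq. (11):
# P1 = sin^2(Delta t/2), P0 = cos^2(Delta t/2), with Delta = omega0 - omegaL.
def ramsey_cfi(omega0, omegaL, t, eps=1e-7):
    p = lambda w: np.array([np.cos((w - omegaL) * t / 2)**2,
                            np.sin((w - omegaL) * t / 2)**2])
    dp = (p(omega0 + eps) - p(omega0 - eps)) / (2 * eps)
    return np.sum(dp**2 / p(omega0))

omega0, omegaL, t, T, N = 1.0, 0.7, 2.0, 1e4, 100
F = ramsey_cfi(omega0, omegaL, t)      # ~t**2 = 4, independent of the detuning
nu = T * N / t                         # number of repetitions within duration T
print(F, (nu * F)**-0.5, (N * T * t)**-0.5)   # the last two agree: Eq. (13)
```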
Quantum entanglement is one of the most subtle and intriguing phenomena in nature. Its usefulness has been demonstrated in various applications such as quantum teleportation [78; 79], quantum cryptography [80; 81; 82], and quantum dense coding [83; 84; 80]. It occurs when a group of particles is generated or shares spatial proximity in such a way that the state of each particle cannot be described independently of the states of the others. Measurements of physical properties such as position, momentum, spin, and polarization performed on entangled particles can be found to be perfectly correlated. The exploitation of quantum entanglement has great potential to outperform schemes based on classical physics. This is also true for quantum metrology based on Ramsey spectroscopy.
First, the \(N\) atoms are prepared in the ground state \(|\psi_{\text{in}}\rangle=|g\rangle^{\otimes N}\). A \(\pi/2\) microwave pulse is applied to the first atom, see Fig. 1(b), converting the state into \(|\psi_{1}\rangle=\frac{|g\rangle+|e\rangle}{\sqrt{2}}\otimes|g\rangle^{\otimes(N-1)}\). A CNOT gate, with the first atom acting as the control bit and the other \(N-1\) atoms acting as the target bits, is applied to the atoms. \(|\psi_{1}\rangle\)
Figure 1: Scheme of Ramsey-spectroscopy-based quantum metrology. (a) Conventional Ramsey spectroscopy uses a product state as the input state. (b) Ramsey spectroscopy uses GHZ-type entangled state as the input state.
becomes a Greenberger-Horne-Zeilinger (GHZ)-type entangled state
\[|\psi_{2}\rangle=(|g\rangle^{\otimes N}+|e\rangle^{\otimes N})/\sqrt{2}. \tag{16}\]
Second, the atoms experience a free evolution governed by the atomic Hamiltonian \(\hat{H}_{0}=\Delta\sum_{j}\hat{\sigma}_{j}^{\dagger}\hat{\sigma}_{j}\) to encode \(\omega_{0}\) into the probe state, which converts \(|\psi_{2}\rangle\) into
\[|\psi(t)\rangle=(|g\rangle^{\otimes N}+e^{-iN\Delta t}|e\rangle^{\otimes N})/ \sqrt{2}. \tag{17}\]
Third, another CNOT gate is applied again to convert \(|\psi(t)\rangle\) into \(|\psi^{\prime}(t)\rangle=\frac{|g\rangle+e^{-iN\Delta t}|e\rangle}{\sqrt{2}} \otimes|g\rangle^{\otimes(N-1)}\). Switching on the \(\pi/2\) pulse again, \(|\psi^{\prime}(t)\rangle\) becomes
\[|\psi_{\rm out}\rangle=[\cos(N\Delta t/2)|g\rangle+i\sin(N\Delta t/2)|e\rangle ]\otimes|g\rangle^{\otimes N-1}. \tag{18}\]
Measuring the excited-state population operator \(\hat{O}=\hat{\sigma}_{1}^{\dagger}\hat{\sigma}_{1}\) of the first atom, we obtain the outcome 1 with probability \(P_{1}=\sin^{2}\frac{N\Delta t}{2}\) and 0 with probability \(P_{0}=\cos^{2}\frac{N\Delta t}{2}\). They lead to \(\langle\hat{O}\rangle=\sin^{2}\frac{N\Delta t}{2}\), \(\Delta O=|\sin(N\Delta t)|/2\), and \(F_{\omega_{0}}=N^{2}t^{2}\). Repeating this process within a time duration \(T\) yields \(\upsilon=T/t\) sets of experimental results. Thus the uncertainty of \(\hat{O}\) is
\[\delta O=\frac{\Delta O}{\sqrt{T/t}}=\frac{|\sin(N\Delta t)|}{2}\sqrt{\frac{t }{T}}. \tag{19}\]
The error propagation formula results in
\[\delta\omega_{0}=(N^{2}Tt)^{-1/2}, \tag{20}\]
which is called the HL [85; 86]. Equation (20) also equals the sensitivity evaluated from the Cramer-Rao bound (1). Obviously, by using entanglement, the HL has an \(N^{1/2}\)-fold enhancement over the SNL in Eq. (13). The QFI of the state (17) is \(\mathcal{F}_{\omega_{0}}(t)=N^{2}t^{2}\), from which the sensitivity of \(\omega_{0}\) reads
\[\delta\omega_{0}=(\upsilon\mathcal{F}_{\omega_{0}})^{-1/2}=(N^{2}Tt)^{-1/2}. \tag{21}\]
It matches Eq. (20). Therefore, the measurement scheme used above saturates the quantum Cramer-Rao bound and is optimal.
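The SNL and HL scalings of Eqs. (13) and (20) can also be checked by direct sampling. The sketch below (our own illustration; the simple arcsine inversion assumes the accumulated phase lies in \((0,\pi)\)) simulates binomial measurement records for the product-state and GHZ protocols and compares the root-mean-square estimation errors with the two analytical limits.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_sensitivity(N, T, t, Delta, ghz, trials=4000):
    """Monte Carlo check of Eqs. (13) and (20): estimate Delta from binomial
    sampling of the Ramsey outcome probabilities and return the RMS error."""
    nu = int(T / t) if ghz else int(N * T / t)        # number of repetitions
    phase = (N if ghz else 1) * Delta * t             # accumulated phase
    k = rng.binomial(nu, np.sin(phase / 2)**2, size=trials)
    est = 2 * np.arcsin(np.sqrt(k / nu)) / ((N if ghz else 1) * t)
    return np.sqrt(np.mean((est - Delta)**2))

N, T, t, Delta = 20, 400.0, 1.0, 0.12
print(mc_sensitivity(N, T, t, Delta, ghz=False), (N * T * t)**-0.5)    # SNL
print(mc_sensitivity(N, T, t, Delta, ghz=True), (N**2 * T * t)**-0.5)  # HL
```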
### Mach-Zehnder interferometer
A wide class of quantum metrology schemes using quantized light as the probe is based on the Mach-Zehnder interferometer [87; 88; 89; 67]. In order to measure a frequency parameter \(\gamma\) of a system, we choose two modes of optical fields with frequency \(\omega_{0}\) as the probe. The encoding of \(\gamma\) is realized by the time evolution \(\hat{U}_{0}(\gamma,t)=\exp(-i\hat{H}_{0}t)\) with
\[\hat{H}_{0}=\omega_{0}\sum_{m=1,2}\hat{a}_{m}^{\dagger}\hat{a}_{m}+\gamma\hat {a}_{2}^{\dagger}\hat{a}_{2}, \tag{22}\]
where the first term is the Hamiltonian of the fields and the second one is the linear interaction of the second field with the system [90; 91; 92]. The evolution \(\hat{U}_{0}(\gamma,t)\) accumulates a phase difference \(\gamma t\) to the two fields, which is measured by the Mach-Zehnder interferometer. It has two beam splitters \(\text{BS}_{i}\) (\(i=1,2\)) separated by the phase shifter \(\hat{U}_{0}(\gamma,t)\) and two detectors \(\text{D}_{i}\) (see Fig. 2). Its input-output relation reads
\[|\Psi_{\rm out}\rangle=\hat{V}\hat{U}_{0}(\gamma,t)\hat{V}|\Psi_{\rm in}\rangle, \tag{23}\]
where \(\hat{V}=\exp[i\frac{\pi}{4}(\hat{a}_{1}^{\dagger}\hat{a}_{2}+\hat{a}_{2}^{\dagger}\hat{a}_{1})]\) is the action of \(\text{BS}_{i}\) [89]. We are interested in the quantum superiority of \(|\Psi_{\rm in}\rangle\) in metrology subject to an explicit measurement scheme. Thus we consider Caves's original scheme [88]; i.e., the photon-number difference \(\hat{M}=\hat{a}_{1}^{\dagger}\hat{a}_{1}-\hat{a}_{2}^{\dagger}\hat{a}_{2}\) is measured by \(\text{D}_{i}\), which is also the most commonly used measurement in the Mach-Zehnder interferometer. We can evaluate \(\langle\hat{M}\rangle=\langle\Psi_{\rm out}|\hat{M}|\Psi_{\rm out}\rangle\) and \(\delta^{2}M=\langle\hat{M}^{2}\rangle-\langle\hat{M}\rangle^{2}\). The metrology sensitivity is obtained via
\[\delta\gamma=\frac{\delta M}{|\partial_{\gamma}\langle\hat{M}\rangle|}. \tag{24}\]
If the input state is a product state of a coherent state and a vacuum state, i.e., \(|\Psi_{\rm in}\rangle=\hat{D}_{\hat{a}_{1}}|0,0\rangle\), with \(\hat{D}_{\hat{a}}=\exp\left(\alpha\hat{a}^{\dagger}-\alpha^{\star}\hat{a}\right)\), which contains a mean photon number \(N=|\alpha|^{2}\), then we have \(\langle\hat{M}\rangle=N\cos(\gamma t)\) and \(\langle\hat{M}^{2}\rangle=N^{2}\cos^{2}(\gamma t)+N\). It can be readily evaluated that
\[{\rm min}\delta\gamma=1/(t\sqrt{N}) \tag{25}\]
when \(\gamma t=m\pi/2\), with \(m\) being an odd integer. In this case, the QFI reads \(\mathcal{F}_{\gamma}=Nt^{2}\). The coincidence of the metrology sensitivity from \(\mathcal{F}_{\gamma}\) with Eq. (25) indicates that the \(\hat{M}\)-measurement is optimal.
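As a sanity check of Eq. (25), the short sketch below (our own illustration) scans the error-propagation sensitivity (24) for the coherent-state input, using \(\langle\hat{M}\rangle=N\cos(\gamma t)\) and \(\delta^{2}M=N\), and confirms that the minimum reproduces the SNL value \(1/(t\sqrt{N})\).

```python
import numpy as np

# Error propagation, Eq. (24), for the coherent-state input:
# <M> = N cos(gamma t)  =>  |d<M>/d gamma| = N t |sin(gamma t)|;  delta M = sqrt(N).
N, t = 100.0, 1.0
gt = np.linspace(0.01, np.pi - 0.01, 2001)            # scanned values of gamma*t
delta_gamma = np.sqrt(N) / (N * t * np.abs(np.sin(gt)))
print(delta_gamma.min(), 1 / (t * np.sqrt(N)))        # both ~0.1: Eq. (25), the SNL
```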
If the input state is a product state of a squeezed vacuum state and a coherent state, i.e., \(|\psi\rangle_{\rm in}=\hat{D}_{\hat{a}_{1}}\hat{S}_{\hat{a}_{2}}|0,0\rangle\)
Figure 2: Scheme of a Mach-Zehnder-interferometer based quantum metrology. Two fields interact at beam splitter \(\text{BS}_{1}\) and propagate along two arms. One of the fields couples to a system with the potential influence of quantum noise, by which the estimated parameter \(\gamma\) is encoded. After interfering at \(\text{BS}_{2}\), the fields are detected by the detectors \(\text{D}_{1}\) and \(\text{D}_{2}\).
with \(\hat{S}_{\hat{a}}=\exp[(\xi^{*}\hat{a}^{2}-\xi\hat{a}^{\dagger 2})/2]\) and \(\xi=re^{i\phi}\), its total photon number is \(N=|\alpha|^{2}+\sinh^{2}r\), which contains the ratio \(\iota\equiv\sinh^{2}r/N\) from the squeezed mode; this ratio can be regarded as the quantum resource of the scheme. For \(|\Psi_{\rm out}\rangle\), we have \(\langle\hat{M}\rangle=[\sinh^{2}r-|\alpha|^{2}]\cos(\gamma t)\) and
\[\delta M = \{\cos^{2}(\gamma t)[|\alpha|^{2}+2\sinh^{2}r\cosh^{2}r]+\sin^{2 }(\gamma t) \tag{26}\] \[\times[\left|\alpha\cosh r-\alpha^{*}\sinh re^{i\phi}\right|^{2}+ \sinh^{2}r]\}^{\frac{1}{2}}.\]
Then the best precision of estimating \(\gamma\) is obtained as
\[\min\delta\gamma=\frac{[(1-\iota)e^{-2r}+\iota]^{\frac{1}{2}}}{t\sqrt{N}|1-2 \iota|} \tag{27}\]
when \(\phi=2\varphi\), with \(\varphi\) being the phase of \(\alpha\), and \(\gamma t=(2m+1)\pi/2\) for \(m\in\mathbb{Z}\). If the squeezing is absent, then \(\min\delta\gamma|_{\iota=0}=(tN^{1/2})^{-1}\) is just the SNL. For \(\iota\neq 0\), using \(e^{-2r}\simeq 1/(4\sinh^{2}r)\) for \(N\gg 1\) and optimizing \(\iota\), we have
\[\left.\min\delta\gamma\right|_{\iota=(2\sqrt{N})^{-1}}=(tN^{3/4})^{-1}, \tag{28}\]
which is the Zeno limit [93; 94; 95]. It beats the SNL and manifests the superiority of squeezing in metrology. The QFI of \(|\Psi_{\rm out}\rangle\) in this case is derived to be
\[\mathcal{F}_{\gamma}=t^{2}N\iota[4N(1-\iota)+1]. \tag{29}\]
Optimizing \(\iota\), we obtain the best sensitivity \(\min\delta\gamma|_{\iota=1/2}=(tN)^{-1}\), which is smaller than the Zeno limit in Eq. (28). It means that the above measurement scheme is not optimal. Ref. [96] revealed that measuring the photon numbers of both output ports can saturate the quantum Cramer-Rao bound. Benefiting from the quantum-enhanced sensitivity, squeezing has been used in gravitational-wave observatories [16].
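The optimization leading from Eq. (27) to the Zeno limit (28) can be reproduced numerically. The sketch below (our own illustration) scans Eq. (27) over the squeezed fraction \(\iota\) at fixed total photon number \(N\) and confirms both the optimal fraction \(\iota\simeq(2\sqrt{N})^{-1}\) and the \(N^{-3/4}\) scaling of the minimal error.

```python
import numpy as np

# Eq. (27) as a function of the squeezed fraction iota = sinh^2(r)/N.
def min_delta_gamma(iota, N, t=1.0):
    r = np.arcsinh(np.sqrt(iota * N))
    return np.sqrt((1 - iota) * np.exp(-2 * r) + iota) \
           / (t * np.sqrt(N) * np.abs(1 - 2 * iota))

N = 10_000
iota = np.linspace(1e-4, 0.2, 20001)
dg = min_delta_gamma(iota, N)
print(iota[np.argmin(dg)], 1 / (2 * np.sqrt(N)))   # optimal iota ~ 0.005
print(dg.min(), N**-0.75)                          # Zeno-limit scaling, Eq. (28)
```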
Finally, consider a two-mode squeezed vacuum input state \(|\Psi_{\rm in}\rangle=\hat{S}|0,0\rangle\), with \(\hat{S}=e^{r(\hat{a}_{1}\hat{a}_{2}-\hat{a}_{1}^{\dagger}\hat{a}_{2}^{\dagger})}\), whose total photon number is \(N=2\sinh^{2}r\). In this case, we choose to measure the parity operator \(\hat{\Pi}=e^{i\pi\hat{a}_{1}^{\dagger}\hat{a}_{1}}\). Substituting \(|\Psi_{\rm in}\rangle\) into Eq. (23), we calculate \(\bar{\Pi}=\langle\Psi_{\rm out}|\hat{\Pi}|\Psi_{\rm out}\rangle=[1+N(2+N)\cos^{2}(\gamma t)]^{-1/2}\) and \(\delta\Pi=(1-\bar{\Pi}^{2})^{1/2}\), where \(\hat{\Pi}^{2}=1\) has been used. Then the metrology sensitivity is evaluated via the error propagation formula \(\delta\gamma=\frac{\delta\Pi}{|\partial_{\gamma}\bar{\Pi}|}\) as
\[\min\delta\gamma=\big{[}2t\sqrt{N(2+N)}\big{]}^{-1}, \tag{30}\]
when \(\gamma t=(2n+1)\pi/2\) with \(n\in\mathbb{Z}\). It is remarkable that the best sensitivity is even smaller than the HL \(\delta\gamma\propto(tN)^{-1}\), which reflects the quantum superiority of the used squeezing and measured observable. The QFI of \(|\Psi_{\rm out}\rangle\) is
\[\mathcal{F}_{\gamma}=t^{2}N(N+2)\sin^{2}(\gamma t). \tag{31}\]
The sensitivity obtained from \(\mathcal{F}_{\gamma}\) matches well with Eq. (30). It verifies that the parity-measurement scheme saturates the quantum Cramer-Rao bound and is optimal. We call such a sensitivity surpassing the HL the super-HL [97; 98; 99; 91]. It is noted that a phase estimation error smaller than the inverse of the mean photon number was called the sub-HL in Refs. [89; 100; 101].
### Sagnac interferometer
High-performance gyroscopes for rotation sensing are of pivotal significance for navigation in many types of air, ground, marine, and space applications. Based on the Sagnac effect, i.e., that two counter-propagating waves in a rotating loop accumulate a rotation-dependent phase difference, gyroscopes have been realized in optical systems [102; 103; 104; 105; 106; 107]. The records for precision and stability of commercial gyroscopes are held by optical gyroscopes [108; 109]. However, their precision, which is proportional to the surface area enclosed by the optical path [110], is still limited by the classical SNL. This dramatically constrains their practical application and further performance improvement. A quantum gyroscope based on the Sagnac interferometer can be established as follows.
We choose two beams of quantized optical fields as the quantum probe. Propagating in opposite directions, they are input into a 50:50 beam splitter and split into clockwise and counterclockwise propagating beams (see Fig. 3). The setup rotates with an angular velocity \(\Omega\) about the axis perpendicular to its plane. Thus the two beams accumulate a phase difference \(\Delta\theta=\mathcal{N}4\pi kR^{2}\Omega/c\) when they re-encounter the beam splitter after \(\mathcal{N}\) rounds of propagation in the circular path [111]. Here \(k\) is the wave vector, \(c\) is the speed of light, and \(R\) is the radius of the quantum gyroscope. Recalling the standing-wave condition \(kR=n\) (\(n\in\mathbb{Z}\)) of the optical fields propagating along the circular path and defining \(\Delta t\equiv\mathcal{N}2\pi R/c\), we have \(\Delta\omega\equiv\Delta\theta/\Delta t=2n\Omega\). Therefore, the quantum gyroscope can be equivalently treated as two counter-propagating optical fields with a frequency difference \(\Delta\omega\) along the circular path. For concreteness, we choose the fundamental mode \(n=1\). Then the optical fields in the quantum gyroscope can be quantum mechanically described by [112]
\[\hat{H}_{S}=\omega_{0}\sum_{l=1,2}\hat{a}_{l}^{\dagger}\hat{a}_{l}+\Omega(\hat{a}_{1}^{\dagger}\hat{a}_{1}-\hat{a}_{2}^{\dagger}\hat{a}_{2}), \tag{32}\]
where \(\hat{a}_{l}\) is the annihilation operator of the \(l\)th field with frequency \(\omega_{0}\). The optical fields couple to the beam splitter twice and output in the state \(|\Psi_{\rm out}\rangle=\hat{V}\hat{U}_{0}(\Omega,t)\hat{V}|\Psi_{\rm in}\rangle\), where \(\hat{U}_{0}(\Omega,t)=\exp(-i\hat{H}_{S}t)\) is the
Figure 3: Schematic diagram of a quantum gyroscope based on the Sagnac interferometer. Figure cited from [69].
evolution operator of the fields and \(\hat{V}=\exp[i\frac{\pi}{4}(\hat{a}_{1}^{\dagger}\hat{a}_{2}+\hat{a}_{2}^{\dagger }\hat{a}_{1})]\) describes the action of the beam splitter. Thus the angular velocity \(\Omega\) is encoded into the state \(|\Psi_{\rm out}\rangle\) of the optical probe via the unitary evolution.
To exhibit the quantum superiority, we employ two-mode squeezed vacuum state as the input state \(|\Psi_{\rm in}\rangle=\hat{S}|0,0\rangle\), where \(\hat{S}=\exp[r(\hat{a}_{1}\hat{a}_{2}-\hat{a}_{1}^{\dagger}\hat{a}_{2}^{\dagger })]\) is the squeeze operator with \(r\) being the squeeze parameter. The total photon number of this input state is \(N=2\sinh^{2}r\), which is the quantum resource of our scheme. The parity operator \(\hat{\Pi}=\exp(i\pi\hat{a}_{1}^{\dagger}\hat{a}_{1})\) is measured at the output port [89]. Similar to the last example in Sec. III.2, we have
\[\min\delta\Omega=\big{[}2t\sqrt{N(2+N)}\big{]}^{-1}, \tag{33}\]
when \(\Omega t=(2n+1)\pi/4\) with \(n\in\mathbb{Z}\). Super-HL sensing to the angular velocity is achieved. It can be verified that this measurement scheme saturates the quantum Cramer-Rao bound governed by the QFI.
## IV Other quantum resources
### Spin Squeezing
Spin-squeezed states are a class of collective-spin states having squeezed spin variance along a certain direction, at the cost of anti-squeezed variance along an orthogonal direction [13; 3; 9; 8]. Spin squeezing is one of the most successful quantum resources that can witness large-scale quantum entanglement beating the SNL.
Consider an ensemble of \(N\) two-level atoms or spin-1/2 particles in a quantum state \(|\Psi\rangle\). Its collective spin operator is defined as \(\hat{\mathbf{J}}=\sum_{j=1}^{N}\hat{\boldsymbol{\sigma}}_{j}/2\). The mean-spin direction is \(\mathbf{n}_{0}=\langle\hat{\mathbf{J}}\rangle/|\langle\hat{\mathbf{J}}\rangle|\), with \(\langle\hat{\mathbf{J}}\rangle=\langle\Psi|\hat{\mathbf{J}}|\Psi\rangle\). To properly use this kind of collective spin state, we design a generalized Ramsey spectroscopy as follows [113]. First, we perform a rotation with angle \(\vartheta\) along the axis \(\boldsymbol{\eta}\) such that \(\boldsymbol{\eta}\cdot\mathbf{n}_{0}=0\). Then, we estimate \(\vartheta\) by measuring the operator \(\hat{J}_{\perp}=\boldsymbol{\alpha}\cdot\hat{\mathbf{J}}\), where \(\boldsymbol{\alpha}\cdot\mathbf{n}_{0}=\boldsymbol{\alpha}\cdot\boldsymbol{ \eta}=0\). In the Heisenberg picture, the rotation converts the measured operator \(\hat{J}_{\perp}\) into
\[\hat{J}_{\perp}^{\rm out}=e^{i\vartheta\hat{J}_{\eta}}\hat{J}_{\perp}e^{-i \vartheta\hat{J}_{\eta}}=\cos\vartheta\hat{J}_{\perp}-\sin\vartheta\hat{J}_{n _{0}}. \tag{34}\]
It follows that \(\langle\hat{J}_{\perp}^{\rm out}\rangle=-\sin\vartheta\langle\hat{J}_{n_{0}}\rangle\) and
\[\delta^{2}J_{\perp}^{\rm out} = \cos^{2}\vartheta\delta^{2}J_{\perp}+\sin^{2}\vartheta\delta^{2} \hat{J}_{n_{0}} \tag{35}\] \[-\frac{\sin(2\vartheta)}{2}\langle[\hat{J}_{\perp},\hat{J}_{n_{0} }]_{+}\rangle.\]
The best phase sensitivity is calculated as
\[\delta\vartheta=\min_{\vartheta}\frac{\delta J_{\perp}^{\rm out}}{|\cos\vartheta \langle\hat{J}_{n_{0}}\rangle|}=\frac{\delta J_{\perp}}{|\langle\hat{\mathbf{ J}}\rangle|}, \tag{36}\]
where \(|\langle\hat{\mathbf{J}}\rangle|=|\langle\hat{J}_{n_{0}}\rangle\mathbf{n}_{0}|=|\langle\hat{J}_{n_{0}}\rangle|\) has been used. Consider the phase sensitivity obtained in the generalized Ramsey spectroscopy by using the so-called spin-coherent state defined as \(|\theta,\phi\rangle=\frac{e^{\zeta\hat{J}_{-}}}{(1+|\zeta|^{2})^{j}}|j,j\rangle\), where \(\zeta=e^{i\phi}\tan\frac{\theta}{2}\) and \(|j,j\rangle\) is the common eigenstate of \(\hat{J}^{2}\) and \(\hat{J}_{z}\) with \(j=N/2\). For this state, the mean-spin direction is \(\mathbf{n}_{0}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\) and the expectation value is \(|\langle\hat{\mathbf{J}}\rangle|=N/2\). After some algebra, the variance \(\delta J_{\perp}=\sqrt{N}/2\) is evaluated. Therefore, we have
\[\delta\vartheta_{\rm scs}=N^{-1/2}, \tag{37}\]
which is just the SNL. The squeezing parameter of a spin-squeezed state is defined as the ratio of the phase sensitivity obtained via the spin-squeezed state and the spin-coherent state [8], i.e.,
\[\xi_{R}\equiv\frac{\delta\vartheta}{\delta\vartheta_{\rm scs}}=\sqrt{N}\frac{( \delta J_{\perp})_{\rm min}}{|\langle\hat{\mathbf{J}}\rangle|}. \tag{38}\]
When \(\xi_{R}<1\), the state is said to be spin squeezed, which can be used to beat the SNL. This requires that the state has a minimal spin fluctuation in the plane orthogonal to the mean-spin direction smaller than the one of the spin coherent state, i.e., \(\min\delta\hat{J}_{\perp}<\sqrt{N}/2\). \(\xi_{R}\) gives the quantum gain of the frequency sensitivity in the generalized Ramsey spectroscopy obtained by the spin-squeezed state over the SNL obtained by the spin coherent state.
First, we consider the superposition of two Dicke states [113]
\[|\Psi_{\rm SDS}\rangle=(|j,1/2\rangle+|j,-1/2\rangle)/\sqrt{2}, \tag{39}\]
where \(j=N/2\) with \(N\) being odd. One can evaluate \(\langle\hat{\mathbf{J}}\rangle=(\frac{j+1/2}{2},0,0)\). Thus \(\hat{J}_{\perp}=\cos x\hat{J}_{y}+\sin x\hat{J}_{z}\), with \(x\in[0,2\pi]\). It is straightforward to derive that
\[\min_{x}\delta^{2}J_{\perp}=\frac{1}{2}\Big{[}\langle\hat{J}_{y}^{2}+\hat{J}_{z}^{2}\rangle-\sqrt{\langle\hat{J}_{y}^{2}-\hat{J}_{z}^{2}\rangle^{2}+\langle[\hat{J}_{y},\hat{J}_{z}]_{+}\rangle^{2}}\Big{]}. \tag{40}\]
According to \(\langle[\hat{J}_{y},\hat{J}_{z}]_{+}\rangle=0\) for \(|\Psi_{\rm SDS}\rangle\), we have \((\delta^{2}J_{\perp})_{\rm min}=\langle\hat{J}_{z}^{2}\rangle=1/4\). Substituting it into Eq. (38), we obtain
\[(\xi_{R})_{\rm SDS}=2\sqrt{N}/(N+1)\simeq 2N^{-1/2}. \tag{41}\]
It indicates that the HL of the frequency sensitivity could be obtained if \(|\Psi_{\rm SDS}\rangle\) is used in the generalized Ramsey spectroscopy.
Second, we consider the one-axis twisted state defined as [9]
\[|\Psi_{\rm OAT}\rangle=e^{-i\mu J_{z}^{2}/2}|\pi/2,0\rangle. \tag{42}\]
The expectation value of \(\hat{\mathbf{J}}\) is \(\langle\hat{\mathbf{J}}\rangle=(j\cos^{2j-1}(\mu/2),0,0)\). Thus \((\delta J_{\perp})_{\rm min}\) has the same form as Eq. (40). Using \(\langle\hat{J}_{z}^{2}\rangle=j/2\) and
\[\langle\hat{J}_{y}^{2}\rangle=\frac{j}{2}[j+\frac{1}{2}-(j-\frac{1} {2})\cos^{2j-2}\mu], \tag{43}\] \[\langle[\hat{J}_{y},\hat{J}_{z}]_{+}\rangle=j(2j-1)\cos^{2j-2}( \mu/2)\sin(\mu/2), \tag{44}\]
we obtain
\[\min_{x}\delta^{2}J_{\perp}=\frac{j}{2}[1+\frac{j-1/2}{2}(A-\sqrt{A^{2}+B^{2}})], \tag{45}\]
where \(A=1-\cos^{2j-2}\mu\) and \(B=4\cos^{2j-2}(\mu/2)\sin(\mu/2)\). Under the condition \(j\gg 1\) and \(\mu\ll 1\), Eq. (45) is approximated as \(\min_{x}\delta^{2}J_{\perp}=\frac{j}{2}(\frac{1}{4\alpha^{2}}+\frac{2\beta^{2} }{3})\), where \(\alpha=j\mu/2\) and \(\beta=j\mu^{2}/4\). It reaches its minimum \((\delta J_{\perp})_{\text{min}}=\frac{1}{\sqrt{2}}(\frac{j}{3})^{1/6}\) when \(\mu=24^{1/6}j^{-2/3}\). Substituting it into Eq. (38), we obtain
\[(\xi_{R})_{\text{OAT}}\propto j^{-1/3}\propto N^{-1/3}. \tag{46}\]
Thus a metrology sensitivity \(N^{-5/6}\) would be achieved if \(|\Psi_{\text{OAT}}\rangle\) is used in the Ramsey spectroscopy.
Third, we consider the two-axis twisting state defined as
\[|\Psi_{\text{TAT}}\rangle=e^{-\theta(\hat{J}_{+}^{2}-\hat{J}_{-}^{2})/2}|j,-j \rangle^{\otimes N}, \tag{47}\]
where the mean-spin direction is along the \(z\)-axis. Unfortunately, the two-axis twisting model cannot be solved analytically for arbitrary \(N\). Numerical calculations reveal that the spin-squeezing parameter \(\xi_{R}\) scales with the atom number \(N\) as \((\xi_{R})_{\text{TAT}}\sim N^{-1/2}\), which leads to a sensitivity scaling as the HL, \(\delta\vartheta\sim N^{-1}\).
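A self-contained numerical check of the one-axis-twisting results, Eqs. (42)-(46), is sketched below (our own construction using standard collective-spin matrices; function names are arbitrary). It builds \(|\Psi_{\rm OAT}\rangle\) in the Dicke basis, evaluates the minimal transverse variance in the plane orthogonal to the mean-spin direction, and returns the squeezing parameter of Eq. (38); the values approach the asymptotic estimate \(3^{-1/6}j^{-1/3}\) from above as \(N\) grows.

```python
import numpy as np
from scipy.linalg import expm

def spin_ops(N):
    """Collective spin matrices in the symmetric (Dicke) sector, j = N/2."""
    j = N / 2
    m = np.arange(j, -j - 1, -1)                          # basis |j,j>,...,|j,-j>
    cp = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))       # J+ matrix elements
    Jp = np.diag(cp, 1)
    return (Jp + Jp.T) / 2, (Jp - Jp.T) / (2 * 1j), np.diag(m), m

def xi_R_oat(N, mu):
    """Squeezing parameter, Eq. (38), of the one-axis twisted state, Eq. (42)."""
    Jx, Jy, Jz, m = spin_ops(N)
    psi = np.zeros(N + 1, complex); psi[0] = 1            # |j, j>
    psi = expm(-1j * (np.pi / 2) * Jy) @ psi              # spin-coherent |pi/2, 0>
    psi = np.exp(-1j * mu * m**2 / 2) * psi               # apply e^{-i mu Jz^2/2}
    J = (Jx, Jy, Jz)
    mean = np.array([np.real(psi.conj() @ (A @ psi)) for A in J])
    n0 = mean / np.linalg.norm(mean)                      # mean-spin direction (~x)
    e1 = np.cross(n0, [0.0, 0.0, 1.0]); e1 /= np.linalg.norm(e1)
    e2 = np.cross(n0, e1)                                 # orthogonal-plane basis
    ops = [sum(c * A for c, A in zip(e, J)) for e in (e1, e2)]
    proj = [mean @ e for e in (e1, e2)]
    G = np.zeros((2, 2))
    for a in range(2):
        for b in range(2):
            sym = (ops[a] @ ops[b] + ops[b] @ ops[a]) / 2
            G[a, b] = np.real(psi.conj() @ (sym @ psi)) - proj[a] * proj[b]
    var_min = np.linalg.eigvalsh(G)[0]                    # minimal J_perp variance
    return np.sqrt(N * var_min) / np.linalg.norm(mean)

for N in (20, 40, 80):
    mu = 24**(1 / 6) * (N / 2)**(-2 / 3)                  # optimal twisting angle
    print(N, xi_R_oat(N, mu), 3**(-1 / 6) * (N / 2)**(-1 / 3))
```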
Spin-squeezed states reflect the absolute superiority of the entanglement-enhanced sensitivity in quantum metrology. The distinctive role of atomic spin squeezing in improving the sensitivity makes it valuable for applications in quantum gyroscope [114; 115], atomic clocks [116; 117; 118; 119], magnetometers [120; 121; 26], and gravimetry [5].
### Quantum criticality
Quantum criticality can also act as a resource to enhance the sensitivity in metrology [123; 124]. It takes advantage of the criticality associated with continuous quantum phase transitions, for which the energy gap above the ground state closes in the thermodynamic limit, to realize high-precision measurements of physical quantities. Consider a Hamiltonian \(\hat{H}(g)\) with the decomposition \(\hat{H}(g)=\sum_{n}E_{n}(g)|\psi_{n}(g)\rangle\langle\psi_{n}(g)|\). The QFI of the ground state \(|\psi_{0}(g)\rangle\) with respect to \(g\), which drives the quantum phase transition, reads [125; 126]
\[\mathcal{F}_{g}=4\sum_{n\neq 0}\frac{|\langle\psi_{n}(g)|\partial_{g}\hat{H}(g)| \psi_{0}(g)\rangle|^{2}}{[E_{n}(g)-E_{0}(g)]^{2}}. \tag{48}\]
It is clear that, if the energy gap above the ground state closes, the QFI diverges due to the vanishing denominator. This property leads to an arbitrarily high estimation precision in the thermodynamic limit. In general, the lowest excitation energy tends to zero as \((E_{1}-E_{0})\sim\xi^{-z}\) near the critical point \(g_{c}\), where \(z\) is the dynamical critical exponent and \(\xi\sim\Lambda^{-1}|g-g_{c}|^{-\nu}\) is the correlation length. Here \(\Lambda\) is the momentum cutoff determined by the inverse lattice spacing and \(\nu\) is the critical exponent [127]. Combined with Eq. (48), the QFI diverges as \(\mathcal{F}_{g}\sim|g-g_{c}|^{-2z\nu}\) near the critical point. A commonly used quantum critical metrology protocol is to adiabatically prepare a ground state with the parameters being near the critical point, then measure the observable to estimate the quantity which drives the occurrence of the quantum phase transition.
First, we consider applying the one-dimensional quantum transverse-field Ising model [128; 129] in quantum metrology. Its Hamiltonian reads
\[\hat{H}=-J\sum_{i=1}^{L-1}\hat{\sigma}_{i}^{x}\hat{\sigma}_{i+1}^{x}-h\sum_{i =1}^{L}\hat{\sigma}_{i}^{z}, \tag{49}\]
where \(\hat{\sigma}_{i}^{x/z}\) are Pauli operators and \(L\) is the size of spin chain. Using the standard Jordan-Wigner, Fourier, and Bogoliubov transformations, Eq. (49) is rewritten as \(\hat{H}=\sum_{k>0}\Lambda_{k}(\hat{\eta}_{k}^{\dagger}\hat{\eta}_{k}-1)\), where \(k=(2n+1)\pi/L\), with \(n=0,\cdots,L/2-1\), \(\hat{\eta}_{k}\) is the fermion annihilation operator, and \(\Lambda_{k}=\sqrt{(J\cos k+h)^{2}+(J\sin k)^{2}}\) is the excitation energy. It has a quantum phase transition from an ordered phase to a paramagnetic phase at the critical point \(h=J\). The QFI for the parameter \(J\) in the ground state is
\[\mathcal{F}_{J}=\sum_{k}\frac{(J+\frac{z_{c}}{L})^{2}\sin^{2}k}{[\frac{z_{c}^{ 2}}{L^{2}}+4J(J+\frac{z_{c}}{L})\cos^{2}\frac{k}{2}]^{2}}, \tag{50}\]
where \(z_{c}\equiv L(h-J)\) is the scaling variable. Since \(\partial_{z_{c}}\mathcal{F}_{J}|_{z_{c}\to 0}=0\), \(\mathcal{F}_{J}\) has a maximum at \(z_{c}=0\) for all values of \(L\). It scales with \(L\) as the HL, i.e., \(\mathcal{F}_{J}\approx\frac{L^{2}}{8J^{2}}\), in the critical region, while it scales only linearly in \(L\) in the off-critical region [129; 130].
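Equation (50) can be evaluated directly, as in the sketch below (our own illustration). Summing over the allowed momenta \(k=(2n+1)\pi/L\) at the scaling variable \(z_{c}=0\) reproduces the HL behavior \(\mathcal{F}_{J}\approx L^{2}/(8J^{2})\), while a value of \(z_{c}\) far from zero (chosen arbitrarily here) exhibits the linear-in-\(L\) off-critical scaling.

```python
import numpy as np

def qfi_ising(L, J, zc):
    """Ground-state QFI of Eq. (50) for the transverse-field Ising chain."""
    k = (2 * np.arange(L // 2) + 1) * np.pi / L
    num = (J + zc / L)**2 * np.sin(k)**2
    den = (zc**2 / L**2 + 4 * J * (J + zc / L) * np.cos(k / 2)**2)**2
    return np.sum(num / den)

J = 1.0
for L in (64, 256, 1024):
    print(L, qfi_ising(L, J, zc=0.0), L**2 / (8 * J**2),  # HL scaling at zc = 0
          qfi_ising(L, J, zc=5.0 * L))                    # ~linear in L off criticality
```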
Second, quantum critical metrology can be used to develop quantum thermometry. The precise measurement of temperature at the quantum level plays an increasingly important role in the emerging fields of quantum thermodynamics and quantum technologies [132]. Although much effort has been made to enhance the precision of temperature measurement by using different quantum features, precisely measuring low temperature is still extremely challenging because the measured temperature errors in the existing quantum thermometry schemes are commonly divergent with decreasing temperature [133; 134; 135; 136; 137]. Ref. [131] presents non-Markovian quantum thermometry to measure the temperature of a quantum reservoir. The reservoir is at equilibrium initially, i.e., \(\rho_{\text{R}}(0)=\prod_{k}e^{-\beta\omega_{k}\hat{b}_{k}^{\dagger}\hat{b}_{k}}/\text{Tr}[e^{-\beta\omega_{k}\hat{b}_{k}^{\dagger}\hat{b}_{k}}]\), where \(\beta=(K_{B}T)^{-1}\) and \(K_{B}\) is the Boltzmann constant. A continuous-variable system is used as the quantum thermometer. The encoding dynamics are governed by the Hamiltonian
\[\hat{H}=\omega_{0}\hat{a}^{\dagger}\hat{a}+\sum_{k}[\omega_{k}\hat{b}_{k}^{ \dagger}\hat{b}_{k}+g_{k}(\hat{a}^{\dagger}\hat{b}_{k}+\hat{b}_{k}^{\dagger}\hat{a })], \tag{51}\]
where \(\hat{a}\) and \(\hat{b}_{k}\) are the annihilation operators of the thermometer with frequency \(\omega_{0}\) and of the \(k\)th mode with frequency \(\omega_{k}\) of the measured reservoir, and \(g_{k}\) is their coupling strength. Their coupling is further characterized by the spectral density \(J(\omega)=\sum_{k}g_{k}^{2}\delta(\omega-\omega_{k})\). The Ohmic-family spectral density \(J(\omega)=\eta\omega^{s}\omega_{c}^{1-s}e^{-\omega/\omega_{c}}\), where \(\eta\) is a dimensionless coupling constant, \(\omega_{c}\) is a cutoff frequency, and \(s\) is an Ohmicity index, is used. The ground state of Eq. (51) has a first-order phase transition at the critical point \(\omega_{0}/\omega_{c}=\eta\underline{\Gamma}(s)\), where \(\underline{\Gamma}(s)\) is Euler's gamma function. Taking the Ohmic spectral density as an example, Fig. 4(a) plots the non-Markovian evolution of \(\mathcal{F}_{T}(t)\) for different cutoff frequencies \(\omega_{c}\). It can be seen that \(\mathcal{F}_{T}(t)\) gradually increases with time from zero to \(\omega_{c}\)-dependent stable values, which are larger than the Markovian approximate one \(\tilde{F}_{T}(\omega_{0})\) in the full-parameter regime. Another interesting feature is that an obvious maximum of the QFI is present at \(\omega_{0}=\eta\omega_{c}\), where the quantum phase transition occurs. Figure 4(b) shows the steady-state QFI \(\mathcal{F}_{T}(\infty)\) for different \(T\) and \(\omega_{c}\). It clearly demonstrates that, at the critical point, the QFI scales with the temperature as \(\mathcal{F}_{T}\sim T^{-2}\) in the full-temperature regime, which means that the performance of the non-Markovian quantum thermometry improves as the temperature decreases. This successfully solves the problem of the conventional schemes, where the QFI tends to zero in the low-temperature regime. Besides the temperature of the reservoir, the criticality can also be used to sense the quantities in the spectral density [138; 74], which plays an important role in understanding the reservoir-induced decoherence of quantum systems.
The "super-HL scaling" in quantum critical metrology is another interesting topic. It was reported that the criticality can boost the QFI to \(\mathcal{F}_{\lambda}\sim L^{2\alpha}\) with \(2<\alpha<3\) in Refs. [139; 140; 141; 142]. This contradicts with the result that the sensitivity would be bounded by the HL \(\mathcal{F}_{\lambda}\sim L^{2}\) if no \(k\)-order nonlinear terms, with \(k\geq 2\), are involved in the encoding Hamiltonian [143; 144; 145; 146; 147; 148; 149; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100]. This contradiction was studied by considering the preparation time of the critical ground state [98]. The ground state is generally adiabatically prepared, i.e., the initial ground state adiabatically evolves to the desired critical one. However, the energy gap near the critical point becomes smaller and smaller, and hence any change needs to be infinitely slow to keep the adiabaticity. This is known as the critical slowing down. Therefore, quantum-critical metrology inevitably requires a divergent preparation time. It was found that if the scaling of time is considered, the HL \(\mathcal{F}_{\lambda}\sim L^{2}\) would be recovered and the "super-HL" in quantum critical metrology with local Hamiltonian would not present anymore [151; 98; 125].
To overcome the long time cost of adiabatically preparing the critical ground state, a dynamic framework for critical metrology was proposed [152]. Consider a general parametric dynamical evolution \(|\psi_{\lambda}(t)\rangle=e^{-i\hat{H}_{\lambda}t}|\psi\rangle\), where \(|\psi\rangle\) is a general initial state and \(\hat{H}_{\lambda}\) is a family of Hamiltonians whose nearest energy-level spacings all take the equal value \(\sqrt{\Delta}\) near the critical point [152; 153]. It has been proven that the QFI for estimating \(\lambda\) scales as \(\mathcal{F}_{\lambda}(t)\sim[\sin(\sqrt{\Delta}t)-\sqrt{\Delta}t]^{2}/\Delta^{3}\) [152]. Under the condition \(\sqrt{\Delta}t\sim O(1)\), the QFI scales with \(\Delta\) as \(\mathcal{F}_{\lambda}\sim\Delta^{-2}t^{2}\) and is divergent near the critical point \(\Delta\to 0\). Due to the nonanalytical nature of the whole spectrum, this scaling behavior holds for any initial state. It efficiently avoids the time cost of adiabatically preparing the critical ground state. We take the quantum Rabi model as an example to illustrate this. Its Hamiltonian reads
\[\hat{H}_{\lambda}=\omega\hat{a}^{\dagger}\hat{a}+\frac{\omega_{0}}{2}\hat{ \sigma}_{z}-\lambda\hat{\sigma}_{x}(\hat{a}+\hat{a}^{\dagger}), \tag{52}\]
where \(\hat{a}\) is the annihilation operator of the bosonic field with frequency \(\omega\), \(\hat{\sigma}_{x,z}\) are the Pauli operators of the qubit with frequency \(\omega_{0}\), and \(\lambda\) is their coupling strength. Setting \(g\equiv 2\lambda/\sqrt{\omega\omega_{0}}\), this model has a critical point \(g_{c}=1\), where a normal-to-superradiant phase transition occurs and the energy gap scales as \(\varepsilon\sim|1-g^{2}|^{\nu z}\), with the critical exponent \(\nu z=1/2\) [154; 155]. Defining \(\Delta_{g}\equiv 4(1-g^{2})\) and choosing the initial state \(|\psi\rangle=|\downarrow\rangle\otimes|\psi\rangle_{b}\), the QFI for estimating \(g\) with the evolved state is derived as
\[\mathcal{F}_{g}(t)\approx 16g^{2}\frac{[\sin(\sqrt{\Delta_{g}}\omega t)-\sqrt{ \Delta_{g}}\omega t]^{2}}{\Delta_{g}^{3}}\text{Var}[\hat{P}]_{|\psi\rangle_{b}}, \tag{53}\]
with \(\hat{P}=i(\hat{a}^{\dagger}-\hat{a})/\sqrt{2}\) and \(|\psi\rangle_{b}\) being the bosonic part of the initial state. It can be directly found that \(\mathcal{F}_{g}(t)\) diverges when \(g\to 1\). Notice that the evolution time needs to satisfy \(\sqrt{\Delta_{g}}t\sim O(1)\); hence the dynamical protocol also requires a long evolution time near the critical
point. However, compared with ground-state critical quantum metrology, in which the adiabatic evolution time scales as \(t\sim 1/\Delta_{g}\) [124; 151], such a dynamical protocol greatly reduces the time cost.
In recent years, quantum critical metrology has been extended to many other transition types, such as the topological phase transition [156], the temperature-driven phase transition [157], first-order quantum phase transitions [158; 159], the nonlinear critical model [160], dynamical quantum phase transitions [161; 162; 163], and Floquet-induced criticality [164].
### Quantum Chaos
It was pointed out in Refs. [165; 166] that quantum chaos can also be used to enhance the sensitivity of quantum metrology. Consider a collection of periodically kicked spins as the probe to detect a classical magnetic field. Its Hamiltonian reads
\[\hat{H}_{\rm KT}(t)=\alpha\hat{J}_{z}+\frac{k}{2j+1}\hat{J}_{y}^{2}\sum_{n=- \infty}^{\infty}\tau\delta(t-n\tau), \tag{54}\]
where \(\hat{J}_{y/z}\) are the collective spin operators, \(j=N/2\) is the quantum number of \(\hat{\bf J}^{2}\), and \(\alpha\) and \(k\) are the strengths of the magnetic field to be estimated and of the kicking field, respectively. The \(\hat{J}_{y}^{2}\) term introduces the nonlinearity and leads to chaotic dynamics when \(k\geq 3.0\) [167]. To use the chaotic resource instead of initial entanglement, the metrology protocol is as follows. First, we prepare the probe in a spin-coherent state \(|\psi(0)\rangle=|\theta,\phi\rangle\), which is a product state of the individual spins. Second, we encode \(\alpha\) into the probe state via the evolution governed by Eq. (54), i.e., \(|\psi_{\alpha}(t)\rangle=\hat{\mathcal{T}}e^{-i\int_{0}^{t}\hat{H}_{\rm KT}(\tau)d\tau}|\psi(0)\rangle\). Finally, we estimate the parameter \(\alpha\) from \(|\psi_{\alpha}(t)\rangle\). For comparison, when \(k=0\), the system is totally integrable and such a metrology protocol reduces to the standard Ramsey spectroscopy of Sec. III.1.
There are two important time scales in chaotic systems [168]. One is the Ehrenfest time \(t_{\rm E}=\frac{1}{\lambda}\ln\frac{\Omega}{h^{d}}\), where \(\lambda\) is the Lyapunov exponent, \(\Omega\) and \(h\) are the volumes of the phase space and of a Planck cell, respectively, and \(d\) is the number of degrees of freedom. It is the time scale at which the correspondence between classical and quantum dynamics begins to break down. The other is the Heisenberg time \(t_{\rm H}=\hbar/\Delta\), where \(\Delta\) is the mean energy-level spacing. It is the time scale after which the dynamics resolves the discrete nature of the energy spectrum and a quasi-periodic behavior may be found. For Eq. (54) in the chaotic regime with \(k=30\) under the initial condition \(|\psi(0)\rangle=|\pi/2,\pi/2\rangle\), a detailed analysis shows that \(t_{\rm E}=0.4\ln(2j+1)\) and \(t_{\rm H}=(2j+1)/6\).
It was found that
\[\mathcal{F}_{\alpha}\propto\begin{cases}tj^{2},&t_{\rm E}<t<t_{\rm H}\\ t^{2}j,&t\gg t_{\rm H}\end{cases}. \tag{55}\]
The numerical results are shown in Fig. 5. Figure 5(a) reveals that a transition from the \(t\)-scaling when \(t\approx t_{\rm E}\) to the \(t^{2}\)-scaling when \(t\geq t_{\rm H}\) occurs for different \(j=N/2\). Figure 5(b) confirms that the scaling relation of \(\mathcal{F}_{\alpha}\) can go beyond the SNL when \(t<t_{\rm H}\) even without the initial entanglement. It reveals the sensitivity enhancement caused by the chaotic dynamics. A similar protocol that uses chaotic dynamics to estimate magnetic fields was discussed in Ref. [166]. From the close relations between chaos, thermalization, and entanglement widely studied in Refs. [169; 170; 171; 172], a rough understanding of this enhancement is that the entanglement as the quantum resource is not prepared in the initial state, but is generated during the encoding dynamics governed by quantum chaos.
## V Effects of decoherence on quantum metrology
From the foregoing sections, we see that quantum metrology works by using quantum effects, such as quantum coherence, squeezing, and entanglement, to develop revolutionary techniques for enhancing measurement precision. The main idea is to actively control the quantum states of the relevant systems in the desired manner to realize more precise measurements of physical quantities than the achievable precision limit in classical physics. Although much exciting progress has been made, quantum metrology is still in the proof-of-principle phase. It has not yet shown superiority in absolute sensitivity over its classical counterparts. This is because the practical realization of quantum metrology is challenged, in its stability and scalability, by different kinds of noise-induced decoherence. As a ubiquitous phenomenon in the microscopic world, decoherence is caused by the inevitable interaction of quantum systems with their environments. It degrades the quantum resources and makes the states deviate from the desired behavior. Depending on whether the open system exchanges energy with the environment or not, the decoherence can
Figure 5: (a) \(t\) scaling of the QFI \(I_{\alpha}\equiv\mathcal{F}_{\alpha}\) of \(\alpha\) in the strongly chaotic case. Dashed and dash-dotted lines indicate the Ehrenfest and Heisenberg times, respectively. (b) \(j\) scaling of the QFI. Fits have slopes \(1.96\), \(1.88\), \(1.46\), \(1.16\), and \(1.08\) in increasing order of \(t\). Kicking strength \(k=30\) and initial coherent state at \((\theta,\phi)=(\pi/2,\pi/2)\) in all plots. Figure cited from [165].
be classified into pure dephasing [173; 174] and dissipation [175]. When a quantum system couples to an environment in a dephasing way, the information or the quantum coherence of the quantum system leaks to the environment while its energy stays unchanged. In cold-atom systems, the random recoils of the atoms cause the dephasing process [176; 177]. By heating the atomic gas, the enhanced random collisions among atoms give rise to a severe dephasing process. Dissipation occurs in the spontaneous emission of a two-level atom [178; 179]. In electron-nuclear spin systems, the central electron spin generally experiences a dissipative process induced by the surrounding phonons and a flip-flop interaction with the nuclear spins [180; 181]. Compared with dephasing, the dissipative noise leads not only to the exchange of information but also to the relaxation of energy.
A widely used approximation is the Born-Markovian approximation. It was found that the Born-Markovian dephasing noise forces the HL precision in Ramsey spectroscopy not only to return to the SNL at an optimal encoding time but also to become divergent in the long-time condition [182]. In the optical Mach-Zehnder-interferometer-based quantum metrology scheme, photon dissipation also makes the metrology precision using entanglement [36; 37; 90] and squeezing [34; 38] return to, or become even worse than, the SNL. Having been proven to be universal for any Born-Markovian noises, these two destructive consequences are called the no-go theorem of noisy quantum metrology [183; 50]. The no-go theorem is the main obstacle to achieving high-precision quantum metrology in practice. Thus, determining whether this no-go theorem is ostensible or fundamental, and whether it can be overcome, is highly desirable from both theoretical and experimental perspectives.
In this section, we discuss the effects of two types of environmental noises, namely dephasing and dissipative noises, on quantum metrology schemes.
### Noisy Ramsey spectroscopy
Many previous works have studied the behavior of disentanglement under the influence of decoherence [184; 185; 186; 187; 188; 189]. In the Markovian case, it was shown that the quantum entanglement between two qubits interacting independently with either quantum or classical noise vanishes completely in a finite time [190]. Such a disentanglement time scale is of the same order as the usual spontaneous lifetime [190]. However, in the non-Markovian case, Ref. [191] reported that non-Markovian effects influence the entanglement dynamics and may give rise to a revival of entanglement even after complete disentanglement has persisted for finite time periods. Moreover, Ref. [192] demonstrated that non-Markovianity can be used to enhance the steady-state entanglement in a coherently coupled dimer system subject to dephasing noises. These results suggest that the non-Markovian memory effect may be used to improve the performance of noisy quantum metrology. Possible non-Markovianity-assisted metrological schemes have been discussed in Refs. [193; 194; 195; 196; 74].
We first study the effects of the local dephasing noises on the Ramsey spectroscopy. Influenced by the local dephasing noises, the encoding dynamics of the Ramsey spectroscopy is governed by the Hamiltonian
\[\hat{H}=\sum_{j}\{\Delta\hat{\sigma}_{j}^{\dagger}\hat{\sigma}_{j}+\sum_{k}[ \omega_{k}\hat{b}_{j,k}^{\dagger}\hat{b}_{j,k}+g_{k}\hat{\sigma}_{j}^{z}(\hat {b}_{jk}^{\dagger}+\hat{b}_{jk})]\}, \tag{56}\]
where \(\hat{b}_{j,k}\) is the annihilation operator of the \(k\)th mode of the noise felt by the \(j\)th atom and \(g_{k}\) is their coupling strength. The coupling can be further characterized by the spectral density \(J(\omega)=\sum_{k}|g_{k}|^{2}\delta(\omega-\omega_{k})\). It generally takes the Ohmic-family form \(J(\omega)=\eta\omega(\omega/\omega_{c})^{s-1}e^{-\omega/\omega_{c}}\), where \(\eta\) is a dimensionless coupling constant, \(\omega_{c}\) is a cutoff frequency, and \(s\) is the so-called Ohmicity parameter. The environment is classified as sub-Ohmic for \(0<s<1\), Ohmic for \(s=1\), and super-Ohmic for \(s>1\). The initial state of the total system is \(\rho_{\rm T}(0)=\rho(0)\otimes e^{-\sum_{j,k}\beta\omega_{k}\hat{b}_{j,k}^{\dagger}\hat{b}_{j,k}}/Z\), with \(Z\) being the partition function. After tracing out the noisy degrees of freedom, we obtain the exact non-Markovian master equation satisfied by the probe as
\[\dot{\rho}(t)=\sum_{j}\{-i\frac{\Delta}{2}[\hat{\sigma}_{j}^{z},\rho(t)]+\frac {\gamma(t)}{2}[\hat{\sigma}_{j}^{z}\rho(t)\hat{\sigma}_{j}^{z}-\rho(t)]\}, \tag{57}\]
where \(\gamma(t)=4\int_{0}^{t}d\tau\int_{0}^{\infty}d\omega J(\omega)\coth(\frac{ \beta\omega}{2})\cos[\omega(t-\tau)]\). Solving Eq. (57) under the initial condition \(\rho(0)=|\text{GHZ}\rangle\langle\text{GHZ}|\), we obtain
\[\rho(t)=\frac{1}{2}\big{[}|g\rangle\langle g|^{\otimes N}+|e\rangle\langle e|^{\otimes N}+\big{(}e^{-N[i\Delta t+\Gamma(t)]}|e\rangle\langle g|^{\otimes N}+\text{h.c.}\big{)}\big{]}, \tag{58}\]
where \(\Gamma(t)=\int_{0}^{t}\gamma(\tau)d\tau\). Repeating the process of Ramsey spectroscopy in Sec. III.1, we obtain the measurement results 1 with \(P_{1}=\frac{1}{2}[1-\cos(N\Delta t)e^{-N\Gamma(t)}]\) and 0 with \(P_{0}=\frac{1}{2}[1+\cos(N\Delta t)e^{-N\Gamma(t)}]\). Then the maximal CFI is calculated as
\[F_{\omega_{0}}(t)=(Nt)^{2}e^{-2N\Gamma(t)} \tag{59}\]
when \(\Delta t=k\pi/2\) with \(k\) being odd numbers. Then the corresponding frequency sensitivity under the repetitive measurement with the time duration \(T\) reads
\[\delta\omega_{0}(t)=\Big{[}\frac{e^{2N\Gamma(t)}}{TN^{2}t}\Big{]}^{1/2}. \tag{60}\]
The minimal \(\delta\omega_{0}\) is determined at the encoding time satisfying \(\frac{d}{dt}\frac{e^{2N\Gamma(t)}}{t}=0\).
In the special case that the time scale of the environmental correlation function is much smaller than the one
in the system, we can make the Markovian approximation by extending the \(t\) in \(\gamma(t)\) to infinity. After choosing the Ohmic spectral density, we have
\[\lim_{t\rightarrow\infty}\gamma(t)=\frac{8\pi\eta}{\beta}\equiv\bar{\gamma}. \tag{61}\]
Substituting Eq. (61) into Eq. (60), we find that, although decreasing with time in the short-time regime, \(\delta\omega_{0}(t)\) tends to be divergent in the long-time regime. It means that the encoding time as a resource to enhance the sensitivity is destroyed by the dephasing noises. The short-time optimal \(\delta\omega_{0}\) is
\[\min\delta\omega_{0}=\Big{(}\frac{2\bar{\gamma}e}{TN}\Big{)}^{1/2} \tag{62}\]
when \(t=\frac{1}{2N\bar{\gamma}}\). It means that the Markovian dephasing noises force the HL in Ramsey spectroscopy not only to return to the SNL at an optimal encoding time but also to become divergent in the long-time condition [182]. Being universal for any Markovian noises [197; 198; 36; 200; 37], these two destructive consequences are called the no-go theorem of noisy quantum metrology [183; 50].
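The optimal encoding time and Eq. (62) follow from a one-line minimization, which can be verified symbolically, e.g. with the sketch below (our own check).

```python
import sympy as sp

t, N, g, T = sp.symbols('t N gamma T', positive=True)
delta2 = sp.exp(2 * N * g * t) / (T * N**2 * t)        # square of Eq. (60), Markovian
t_opt = sp.solve(sp.diff(delta2, t), t)[0]
print(t_opt)                                            # 1/(2*N*gamma)
print(sp.simplify(sp.sqrt(delta2.subs(t, t_opt))))      # sqrt(2*e*gamma/(T*N)): Eq. (62)
```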
In the general non-Markovian case, the damping function \(\Gamma(t)\) for the Ohmic spectral density at the zero temperature reads
\[\Gamma(t)=4\eta\ln(1+\omega_{c}^{2}t^{2}). \tag{63}\]
Then, it is easy to derive from \(\frac{d}{dt}\frac{e^{2N\Gamma(t)}}{t}=0\) that the minimal \(\delta\omega_{0}\) is achieved at \(t\simeq(\omega_{c}\sqrt{8N\eta})^{-1}\) as [93]
\[\min\delta\omega_{0}\simeq\Big{(}\frac{e\sqrt{8\eta}\,\omega_{c}}{TN^{3/2}}\Big{)}^{1/2}. \tag{64}\]
The scaling relation \(\delta\omega_{0}\propto N^{-3/4}\) is called the Zeno limit. Equation (64) reveals that, differently from the Markovian case, the non-Markovian effect of the dephasing noises partially retrieves the quantum superiority of Ramsey spectroscopy; see Fig. 6. However, the divergence of the precision in the long-time condition, i.e., \(\lim_{t\rightarrow\infty}\delta\omega_{0}(t)=\infty\), does not change.
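The Zeno-limit scaling can be verified numerically by minimizing Eq. (60) with the non-Markovian \(\Gamma(t)\) of Eq. (63) over the encoding time, as in the sketch below (our own illustration; \(\eta\), \(\omega_{c}\), and \(T\) are arbitrary). A log-log fit of the minimal error against \(N\) returns a slope close to \(-3/4\).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def min_dw0(N, eta=0.1, wc=1.0, T=1.0):
    """Minimize Eq. (60) with Gamma(t) = 4*eta*log(1 + wc^2 t^2), Eq. (63)."""
    # log of delta^2 omega0(t), minimized over log(t) to avoid overflow
    logf = lambda lt: 8 * N * eta * np.log1p((wc * np.exp(lt))**2) \
                      - np.log(T * N**2) - lt
    res = minimize_scalar(logf, bounds=(-15, 5), method='bounded')
    return np.exp(res.fun / 2)

Ns = np.array([1e1, 1e2, 1e3, 1e4])
dws = np.array([min_dw0(N) for N in Ns])
print(np.polyfit(np.log(Ns), np.log(dws), 1)[0])   # ~ -0.75: the Zeno limit N^(-3/4)
```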
A similar result, namely that the non-Markovian effect of the dephasing noises can reduce the HL to the Zeno limit \(N^{-3/4}\) at an optimal encoding time, has been confirmed in Refs. [202; 203; 204].
Thus the metrology precision using the GHZ-type entanglement becomes exactly the same as the SNL. It means that the advantage of using entanglement in quantum metrology entirely disappears in Markovian dissipative environments.
In the general non-Markovian case, a Laplace transform can linearize Eq. (67) into \(\tilde{u}(z)=[z+i\Delta+\int_{0}^{\infty}\frac{J(\omega)}{z+i\omega}d\omega]^{-1}\). The solution of \(u(t)\) is obtained by the inverse Laplace transform of \(\tilde{u}(z)\), which can be done by finding its pole from [184; 206]
\[y(E)\equiv\Delta-\int_{0}^{\infty}\frac{J(\omega)}{\omega-E}d\omega=E,\ (E=iz). \tag{71}\]
Note that the roots \(E\) of Eq. (71) are just the eigenenergies of the total system (65) in the single-excitation space. Specifically, expanding the eigenstate as \(|\Psi\rangle=(x\hat{\sigma}^{\dagger}+\sum_{k}y_{k}\hat{b}_{k}^{\dagger})|g,\{ 0_{k}\}\rangle\) and substituting it into \(\hat{H}|\Psi\rangle=E|\Psi\rangle\) with \(E\) being the eigenenergy, we have \((E-\omega_{0})x=\sum_{k}g_{k}y_{k}\) and \(y_{k}=g_{k}x/(E-\omega_{k})\). They readily lead to Eq. (71). Since \(y(E)\) is a decreasing function in the regime \(E<0\), Eq. (71) has one isolated root \(E_{b}\) in this regime provided \(y(0)<0\). While \(y(E)\) is ill-defined when \(E>0\), Eq. (71) has infinite roots in this regime forming a continuous energy band. We call the eigenstate of the isolated eigenenergy \(E_{b}\) bound state. After the inverse Laplace transform, we obtain
\[u(t)=Ze^{-iE_{b}t}+\int_{0}^{\infty}\frac{J(E)e^{-iEt}dE}{[E-\Delta-\delta(E)] ^{2}+[\pi J(E)]^{2}}, \tag{72}\]
where \(\delta(E)=\mathcal{P}\int_{0}^{\infty}\frac{J(\omega)}{E-\omega}d\omega\) with \(\mathcal{P}\) being the Cauchy principal value. The first term with \(Z=[1+\int_{0}^{\infty}\frac{J(\omega)d\omega}{(E_{b}-\omega)^{2}}]^{-1}\) is from the bound state, and the second one is from the band energies. Oscillating with time in continuously changing frequencies, the integral tends to zero in the long-time condition due to out-of-phase interference. Thus, if the bound state is absent, then \(\lim_{t\rightarrow\infty}u(t)=0\) characterizes a complete dissipation, while if the bound state is formed, then \(\lim_{t\rightarrow\infty}u(t)=Ze^{-iE_{b}t}\) implies a dissipation suppression. It can be evaluated that the bound state is formed for the Ohmic-family spectral density when \(\omega_{0}<\eta\omega_{c}\underline{\Gamma}(s)\), where \(\underline{\Gamma}(s)\) is the Euler's gamma function.
By substituting the form \(Ze^{-iE_{b}t}\) for the large-time \(u(t)\) in the presence of the bound state into Eq. (69), we have
\[\min(\delta\omega_{0})=Z^{-(N+1)}(N^{2}Tt)^{-1/2}, \tag{73}\]
where the dependence of \(E_{b}\) on \(\omega_{0}\) has been considered via \(\partial_{\omega_{0}}E_{b}=Z\). It is found that the bound state causes the encoding time as a metrology resource to be completely recovered. The precision asymptotically approaches the ideal HL (20) for \(N\ll[-1/\ln Z]\) when \(Z\) reaches unity. Therefore, whether the superiority of the quantum metrology under dissipative noises exists or not highly depends on the formation of the bound state and the non-Markovian effect. When the decoherence is Markovian or the bound state is absent, the quantum superiority is destroyed; whereas when the decoherence is non-Markovian and the bound states are formed, the quantum superiority is retrieved. The result suggests a guideline for experimentation to implement the ultrasensitive measurement in the practical noise situation by engineering the formation of the bound state. This could be realized by the technique of quantum reservoir engineering [207]. the bound state and its role in the non-Markovian dynamics have been observed in circuit QED [208] and ultracold atom [209; 210] systems.
### Noisy Mach-Zehnder interferometer
Photon dissipation is the main decoherence in the Mach-Zehnder interferometer. Previous work dealt with photon dissipation by eleplantine introduction of incomplete transmission coefficients in beam splitters [34; 35; 36; 38; 211], which is equivalent to the Born-Markov approximation description. Although the calculation under the Born-Markov approximation description is much more convenient, it may lose important physical phenomena. To reveal the actual performance of the scheme, the effect of photon dissipation on the scheme is studied by discarding the widely adopted Born-Markovian approximation and focusing specifically on non-Markovian effects.
Taking the local dissipative noise of the encoding optical path into account, the encoding dynamics of the Mach-Zehnder interferometer is governed by
\[\dot{\rho}(t) = -i[\omega_{0}\hat{a}_{1}^{\dagger}\hat{a}_{1}+\Omega(t)\hat{a}_ {1}^{\dagger}\hat{a},\rho(t)] \tag{74}\] \[+\kappa(t)[2\hat{a}_{2}\rho(t)\hat{a}_{2}^{\dagger}-\{\rho(t), \hat{a}_{2}^{\dagger}\hat{a}_{2}\}],\]
where \(\kappa(t)-i\Omega(t)=-\dot{c}(t)/c(t)\). \(c(t)\) is determined by
\[\dot{c}(t)+i(\gamma+\omega_{0})c(t)+\int_{0}^{t}f(t-\tau)c(\tau)d\tau=0 \tag{75}\]
under \(c(0)=1\). Repeating the similar procedure as Sec. III.2 leads to
\[\bar{M} = \mathrm{Re}[c(t)e^{i\omega_{0}t}](\sinh^{2}r-|\alpha|^{2}),\] \[\delta M^{2} = \mathrm{Im}[c(t)e^{i\omega_{0}t}]^{2}[|\alpha\cosh r-\alpha^{*} \sinh re^{i\phi}|^{2}+\sinh^{2}r]\] \[+\mathrm{Re}[c(t)e^{i\omega_{0}t}]^{2}[|\alpha|^{2}+\frac{1}{2} \sinh^{2}(2r)]+\frac{1-|c(t)|^{2}}{2}N.\]
Applying the Markovian solution \(c_{\mathrm{MA}}(t)=e^{-[\kappa+i(\omega_{0}+\gamma+\Delta)]t}\) with \(\kappa=\pi J(\omega_{0}+\gamma)\) and \(\Delta=\mathcal{P}\int_{0}^{\infty}\frac{J(\omega)}{\omega_{0}+\gamma-\omega}d\omega\), we obtain \(\min\delta\gamma\simeq(\frac{e^{2\delta t}-1}{2Nt^{2}})^{1/2}\) when \(\beta=(2N)^{-1}\) and \(\varphi=2\phi\). Getting divergent in the long-time limit, it's minimum at \(t=\kappa^{-1}\) returns the SNL \(e\kappa(2N)^{-1/2}\). Thus, the quantum superiority of the scheme in the Markovian noise disappears completely, which is consistent with the result in the Ramsey spectroscopy.
In the general non-Markovian dynamics, similar analysis as Sec. V.1 results in that, as long as \(\omega_{0}+\gamma-\eta\omega_{c}\underline{\Gamma}(s)\leq 0\), a bound state between the second optical field and its environment is formed and thus \(\lim_{t\rightarrow\infty}c(t)=Ze^{-i\varpi_{b}t}\). Focusing on the case in the presence of the bound state and substituting the asymptotic solution \(Ze^{-i\varpi_{b}t}\) into the result of \(\delta\gamma\), one can obtain
\[\left.\min\delta\gamma\right|_{\iota=(2\sqrt{N})^{-1}}=\frac{(tN^{3/4})^{-1}}{ Z}\bigg{(}1+\frac{1-Z^{2}}{2Z^{2}}N^{\frac{1}{2}}\bigg{)}^{\frac{1}{2}}, \tag{76}\]
when \(t=\frac{(2m+1)\pi}{2|\omega_{0}-\varpi_{b}|}\) and \(\varphi=2\phi\). Equation (76) remarkably reveals that, even in the long-time condition, \(\delta\gamma\) asymptotically tends to the ideal ZL with \(Z\) approaching 1, which is controllable by manipulating the spectral density \(J(\omega)\) and the working frequency \(\omega_{0}\) of the probe. It is in sharp contrast to the phenomenological and the Markovian approximate one, where \(\delta\gamma\) gets divergent with time increasing. Figures 7(a) and 7(b) confirm that the formation of the bound state makes \(|c(\infty)|\) occur an abrupt change from zero to finite values exactly coinciding with \(Z\). Figure 7(c) indicates that, as long as the bound state is formed, the profile of the local minima gets to be a decreasing function with time. Thus, the bound state makes the superiority of the encoding time as a resource in the ideal metrology case recovered. With the increase of \(Z\) accompanying the increase of \(\omega_{c}\), see Figs. 7(d) and 7(e), \(\min(\delta\gamma)\) gets nearer and nearer the ZL. All the results confirm that the precision asymptotically matching the analytical scaling (76) approaches the ZL with the formation of the bound state. Thus, the formation of a bound state between the quantum probe and its environment are two essential reasons for retrieving the ZL: the bound state supplies the intrinsic ability and the non-Markovian effect supplies the dynamical way.
### Noisy quantum critical metrology
Quantum criticality has been presented as a novel and efficient resource to improve the performances of quantum sensing and quantum metrology. Generally, protocols of criticality-based quantum metrology often work without decoherence. However, the many-body systems, which experience quantum phase transitions, usually interact with their surrounding environments. In this sense, the effect of decoherence should be taken into account. In Ref. [212], the authors discussed the influence of photon relaxations on the inverted variance of quantum-Rabi-model-based quantum metrology around the quantum phase transition. They found that the achieved precision still diverges when approaching the criticality, but the power-law exponent is smaller than that for the noise-free case. Recently, it has been reported that the \(p\)-body Markovian dephasing dynamics of \(N\)-spin GHZ states evolving under a \(k\)-body Hamiltonian showed an \(N^{-k+p/2}\) scaling of the estimation error [147].
In Ref. [213], the authors investigated the influences of dephasing on quantum critical metrology. In their scheme, a one-dimensional transverse-field Ising model
\[\hat{H}_{0}=-J\sum_{i=1}^{N-1}\hat{\sigma}_{i}^{z}\hat{\sigma}_{i+1}^{z}-B\sum _{i=1}^{N}\hat{\sigma}_{i}^{x} \tag{77}\]
is prepared in the ground state at the quantum critical point. Then a small field \(h\) is applied as \(h\sum_{i=1}^{N}\hat{\sigma}_{i}^{z}\). After the free evolution of time \(t\), a quantum measurement is performed and the result is compared with that obtained without applying the field \(h\). The sensitivity is defined as the smallest \(h\) that yields a measurement difference greater than the quantum fluctuation for an evolution time \(t\), i.e., \(\eta_{h}=h_{\min}\sqrt{t}\). The sensitivity has a theoretical lower bound \(\eta_{h}\geq 1/\sqrt{\mathcal{F}_{h}t}\), where the QFI \(\mathcal{F}_{h}\) is related to the spectral function \(\chi^{\prime\prime}(\omega)=\pi N\sum_{n\neq 0}|\langle 0|\hat{M}|n\rangle|^{2} \delta(\omega-E_{n})\) by
\[\mathcal{F}_{h}=\frac{8N}{\pi}\int d\omega\chi^{\prime\prime}(\omega)\frac{1- \cos(\omega t)}{\omega^{2}}. \tag{78}\]
Here \(\tilde{M}=\frac{1}{N}\sum_{i=1}^{N}\hat{\sigma}_{i}^{z}\). In the ideal noiseless case, one can find \(\mathcal{F}_{h}\propto N^{15/4}\) in the long-time limit \(t>\xi\) with \(\xi\) being the correlation length. If \(t<\xi\), the scaling relation becomes \(\mathcal{F}_{h}\propto t^{2}N^{7/4}\). Taking the effect of decoherence into account, the authors couple each spin of the Ising chain to an independent bosonic environment. The environment temperature is set to be zero and the noise spectrum is chosen as the Ohmic spectral density. They found
\[F(h)\propto Nt^{2}. \tag{79}\]
This result means that the local dephasing forces the ideal super-HL scaling back to the SNL. Such a negative effect still holds for the non-Ohmic spectral densities.
Figure 7: (a) \(|e(\infty)|\) (dark cyan circles) by solving Eq. (75), which coincides with \(Z\) (red solid line) from the bound state. The inset shows \(|c(t)|\). (b) The energy spectrum of the whole system of the optical field and its environment. \(\gamma=\pi\omega_{0}\) and \(\eta=0.02\) are used. (c) Evolution of \(\delta\gamma(t)\) without (purple dot-dashed line) and with (cyan solid line) the bound state, where the local minima match with the curve (red dashed line) of Eq. (76). Local minima of \(\delta\gamma(t)\) as a function of time (d) and \(N\) (e) in different \(\omega_{c}\). \(\iota=\left(2\sqrt{N}\right)^{-1}\), \(N=100\) in (d), and \(t=10\omega_{0}^{-1}\) in (e) are used. Figure cited from Ref. [67].
## VI Decoherence control
In this section, we give a brief review of the decoherence control schemes to minimize the unwanted effects of decoherence in metrological schemes.
### Dynamical control
Dynamical control is mainstream to suppress the decoherence in quantum metrology. Under the Markovian approximation, Refs. [214; 215; 216; 217; 218] proposed a gradient ascent pulse engineering scheme to beat the noisy effect on quantum metrology. It resorts to applying a series of control fields \(\sum_{k}V_{k}(t)\hat{H}_{k}\) on the probe to overcome the decoherence effect. The amplitudes \(V_{k}(t)\) of the control fields are optimized by the algorithm depicted in Fig. 8. It has been demonstrated in the single-atom system that the precision limit under the controlled scheme can go beyond the constraints put by the coherent time. How to generalize this scheme to \(N\)-body quantum metrology is an open question. Going beyond the Markovian approximation, the dynamical decoupling method is a popular way to fight against the dephasing noise. Ref. [64] used a dynamical decoupling method to beat the dephasing noises. It successfully revived the scaling of the measurement precision as \(N^{-k}\) with \(5/6\leq k\leq 11/12\). Such scaling is beyond the SNL and very close to the HL. A similar scheme was reported to beat the dephasing in quantum metrology using spin squeezing [219]. These results convincingly demonstrated that the dynamical decoupling method is a powerful tool to regain a high-precision sensitivity in the dephasing environment [220; 221; 222; 223; 224; 225; 226]. The dynamical decoupling method also was used to beat the dissipative noises. Reference [63] proposed a scheme to enhance the precision of parameter estimation in dissipative systems by employing dynamical decoupling pulses. It was found that the \(N\)-qubit dissipative systems can be preserved at the HL by applying a series of ideal \(\pi\) pulses on the qubits. A Floquet engineering strategy was proposed to overcome the no-go theorem of noisy quantum metrology [227]. It is found that, by applying a periodic driving on the atoms of the Ramsey spectroscopy, the ultimate sensitivity to measure their frequency characterized by the QFI returns to the ideal \(t^{2}\) scaling with the encoding time \(t\) whenever a Floquet bound state is formed by the system consisting of each driven atom and its local noise. Combined with the optimal control, this mechanism also makes the ideal HL scaling with the atom number \(N\) retrieved by optimizing the driving parameters. Simultaneously retrieving the quantum superiority of the encoding time and the number of entangled particles, the scheme supplies an efficient way to overcome the no-go theorem of noisy quantum metrology.
### Quantum error correction
Quantum error correction plays an important role in the realization of high-precision quantum metrology in the presence of noises [56; 57; 58; 59; 60; 61; 62]. To beat the local dephasing noises of Ramsey spectroscopy, Ref. [57] proposed a scheme of using \(m\) physical qubits to encode a logical qubit \(|0_{L}\rangle=(|+\rangle^{\otimes m}+|-\rangle^{\otimes m})/\sqrt{2}\) and \(|1_{L}\rangle=(|+\rangle^{\otimes m}-|-\rangle^{\otimes m})/\sqrt{2}\), where \(\hat{\sigma}_{x}|\pm\rangle=\pm|\pm\rangle\). The encoding dynamics under the influence of dephasing is governed by
\[\dot{\rho}(t)=-i\lambda[\hat{H},\rho(t)]+\sum_{j=1}^{m}\frac{\gamma}{2}[\hat{ \sigma}_{j}^{z}\rho(t)\hat{\sigma}_{j}^{z}-\rho(t)], \tag{80}\]
which causes a phase-flip error to any one of the physical qubits in a probability \(1-p\), with \(p=(1+e^{-\gamma t})/2\). Such kind of phase-flip error can be corrected as follows if the qubits occurring the error is fewer than \((m-1)/2\). First, we make a measurement, which projects \(\rho(t)\) to the basis \(\{\sigma_{z}^{\mathbf{k}}|+\rangle^{\otimes m},\sigma_{z}^{\mathbf{k}}|- \rangle^{\otimes m}\}\), with \(\sigma_{z}^{\mathbf{k}}=\hat{\sigma}_{z}^{k_{1}}\otimes\cdots\otimes\hat{ \sigma}_{z}^{k_{m}}\) and \(k_{j}\in\{0,1\}\), to diagnose which qubits occur the error. Without changing \(\rho(t)\), the measurement gives a result \(1\) for \(k_{j}=1\), while keeping other \(k_{j^{\prime}\neq j}\) zero, called error syndrome if the \(j\)th qubit occurs the error. Second, the error is completely corrected by applying an operation \(\hat{\sigma}_{z}^{\mathbf{k}}\) obtained from the error syndrome \(\mathbf{k}\). On the other hand, if the qubits occurring the phase-flip error are more than \((m-1)/2\), we judge that a phase-flip error occurs at the logic-qubit level and performs a correction operation on the logic qubit with probability \(1-p_{L}\). Then the state is described \(\mathcal{E}^{L}(p_{L})\rho(t)=p_{L}\rho(t)+(1-p_{L})\hat{\sigma}_{z}^{(L)}\rho (t)\hat{\sigma}_{z}^{(L)}\), where \(\hat{\sigma}_{z}^{(L)}\) acting on the states of the logic qubits and
\[p_{L}=\sum_{k=0}^{(m-1)/2}\binom{m}{k}p^{m-k}(1-p)^{k}. \tag{81}\]
The expansion of \(p_{L}\) for small \(1-p\) results in \(1-p_{L}\simeq\binom{m}{(m+1)/2}(1-p)^{(m+1)/2}\), which reveals an exponential suppression to the noise rate at the logic-qubit level.
Figure 8: Flow chart of the optimal control algorithm. Figure cited from [214].
Now, consider a metrology scheme using \(Nm\) physical qubits. After the free evolution, the state becomes \(|\psi_{\lambda}^{L}\rangle=(e^{-iN\theta_{\lambda}/2}|0_{L}\rangle^{\otimes N}+e ^{iN\theta_{\lambda}/2}|1_{L}\rangle^{\otimes N})/\sqrt{2}\). Then, the state is subjected to phase noise acting on all \(Nm\) physical qubits. After performing the above error corrections to each block of \(m\) qubits, the phase noise at the logic-qubit level is reduced and the state becomes \(\rho_{\lambda}^{L}=[\mathcal{E}^{L}(p_{L})]^{\otimes N}|\psi_{\lambda}^{L} \rangle\langle\psi_{\lambda}^{L}|\). It is readily calculated that the QFI of \(\rho_{\lambda}^{L}\) is
\[\mathcal{F}_{\theta}=(2p_{L}-1)^{2N}N^{2}. \tag{82}\]
By using the approximation \({m\choose(m+1)/2}<2^{m}\), it can be shown that \((2p_{L}-1)^{2N}\to 1\) and \(\mathcal{F}_{\theta}\approx N^{2}\) for \(N\rightarrow\infty\) as long as \(4N(2\sqrt{1-p})^{m}\ll 1\), i.e., \(m\sim O(\log N)\)[57]. Thus, the QFI of Ramsey spectroscopy under the influence of local noises is stabilized and the HL is attained, with only a logarithmic overhead.
Without resorting to measurements and operations, Ref. [60] developed an always-on dissipative quantum error correction to both the spin- and phase-flip errors and applied it in noisy quantum sensing in a trapped-ion system. To correct the spin-flip error, the logic qubit is encoded in three physical qubits, i.e., \(|\psi\rangle=a|0_{L}\rangle+b|1_{L}\rangle=a|000\rangle+b|111\rangle\). The spin-flip error is caused by \(\mathcal{L}_{\rm noise}(\rho)=\sum_{k=1}^{3}\mathcal{D}[\hat{L}_{k}](\rho)\) with \(\hat{L}_{k}=\sqrt{\Gamma}\hat{\sigma}_{k}^{x}\). Via interrogating the two-body stabilizer operators \(\hat{S}_{ij}=\hat{\sigma}_{i}^{z}\hat{\sigma}_{j}^{z}\), one can find which qubit occurs the single-flip error. If the second physical qubit occurs a spin-flip error, then \(\hat{S}_{12}|\psi\rangle=\hat{S}_{23}|\psi\rangle=-|\psi\rangle\). The error in the first and the second qubits can be found in an analogous fashion. The recovery protocol is realized in a continuous manner by the dissipative dynamics \(\mathcal{L}_{\rm qce}=\sum_{k=1}^{3}\mathcal{D}[\hat{L}_{\rm qce}^{(k)}]\), where \(\hat{L}_{\rm qce}^{(2)}=\sqrt{\Gamma_{\rm qce}}\hat{\sigma}_{2}^{x}\frac{1}{2} \frac{1-\hat{S}_{23}}{2}\), \(\hat{L}_{\rm qce}^{(1)}=\sqrt{\Gamma_{\rm qce}}\hat{\sigma}_{1}^{x}\frac{1- \hat{S}_{21}}{2}\frac{1-\hat{S}_{13}}{2}\), and \(\hat{L}_{\rm qce}^{(3)}=\sqrt{\Gamma_{\rm qce}}\hat{\sigma}_{3}^{x}\frac{1- \hat{S}_{13}}{2}\frac{1-\hat{S}_{32}}{2}\). Then the total dynamics is governed by
\[\dot{\rho}(t)=-i[\hat{H},\rho(t)]+\mathcal{L}_{\rm noise}(\rho(t))+\mathcal{L }_{\rm qce}(\rho(t)). \tag{83}\]
By replacing \(|0\rangle\rightarrow|+\rangle\), \(|1\rangle\rightarrow|-\rangle\), and \(\hat{\sigma}_{z}\leftrightarrow\hat{\sigma}_{x}\), the above scheme is applicable to correct the phase-flip error. Thus, both the phase- and spin-flip errors can be well corrected by this continuous-performing scheme without the disturbance of the measurements and feedback operations.
Up to now, all the schemes on the quantum error correction are established under the Markovian approximation. How to develop the quantum error correction for non-Markovian noises is still an open question.
### Nondemolition measurement
It has kept to be attractive to avoid the destructive impacts of the noises on quantum metrology via carefully designing measurement schemes. Reference [228] proposed an asymmetric Ramsey technique to protect the atoms in Ramsey spectroscopy from spontaneous emissions. It complements the normal \(\pi/2\) pulse with a phase distribution pulse to prepare the states protected from collective decoherence. Reference [51] proposed an adaptive scheme with nondemolition (weak) measurements to avoid the noisy effect on Ramsey spectroscopy using spin squeezing. A general adaptive quantum metrological scheme to beat the Markovian noises was presented in Ref. [52]. The total evolution time is divided into a number of steps interleaved with general unitary controls. The collective measurement performed in the end allows us to regard the scheme as a general adaptive protocol where measurement results at some stage of the protocol influence the control actions applied at later steps.
References [53, 183] proposed a continuous quantum nondemolition measurement of an atomic ensemble to eliminate the destructive impacts of independent dephasing acting on each atom. Consider that an ensemble of \(N\) two-level atoms is rotating around the \(z\) axis with an angular frequency \(\omega\). The aim is to estimate \(\omega\). Each atom is equally and independently subjected to a Markovian dephasing. The atoms are prepared in a spin coherent initially. Continuous monitoring of the collective spin operator \(\hat{J}_{y}\) is performed such that the conditional dynamics of the ensemble is described by the stochastic master equation
\[d\rho_{c} = -i\omega[\hat{J}_{z},\rho_{c}]dt+\frac{\kappa}{2}\sum_{j=1}^{N}[ \hat{\sigma}_{j}^{z}\rho_{c}\hat{\sigma}_{j}^{z}-\rho_{c}]dt+\Gamma[\hat{J}_{y }\rho_{c}\hat{J}_{y} \tag{84}\] \[-\{\hat{J}_{y}^{2},\rho_{c}\}/2]dt+\sqrt{\eta}\Gamma[\{\hat{J}_{y },\rho_{c}\}-2\text{Tr}(\rho_{c}\hat{J}_{y})\rho_{c}]dw,\]
conditioned by the measurement photocurrent \(dy_{t}=2\sqrt{\eta}\Gamma\text{Tr}(\rho_{c}\hat{J}_{y})dt+dw\). Here, \(\Gamma\) is the \(\hat{J}_{y}\)-measurement strength, \(\eta\) is the measurement efficiency, and \((dw)^{2}=dt\) is a Wiener process. The QFI is contributed by the continuous photocurrent \(dy_{t}\) and the final measurement on \(\rho_{c}\), i.e.,
\[\mathcal{F}_{\omega}=F_{y_{t}}+\sum_{\rm traj}p_{\rm traj}\mathcal{F}_{\omega}( \rho_{c}^{\rm(traj)}), \tag{85}\]
where \(F_{y_{t}}\) is the CFI of \(y_{t}\) and \(\mathcal{F}_{\omega}(\rho_{c}^{\rm(traj)})\) is the QFI of \(\rho_{c}^{\rm traj}\) corresponding to different trajectories. It was found that, even without an initial entanglement and in the presence of the local dephasing noises, the maximal \(\mathcal{F}_{\omega}\) at an optimal time can surpass the SNL. In this scheme, the monitoring-induced dynamics generates the resourceful state simultaneously with the frequency encoding. It reveals that continuous monitoring by quantum nondemolition measurements is a practical and relevant tool to obtain a quantum enhancement in spite of decoherence.
These schemes are only applicable to the Markovian noises. Being similar to the quantum error correction to suppress decoherence, how to develop general measurement schemes to control the non-Markovian noises of quantum metrology is still an open question. On the other hand, although partially recovering the quantum superiority in its scaling with \(N\), the divergence fate of
the precision in the long-time condition does not change. Therefore, how to retrieve the quantum superiority of both the encoding time and atom number as resources simultaneously is also an open question.
## VII Conclusions and outlook
Quantum metrology is a rapidly developing discipline in the second revolution of quantum science and technology. Various quantum resources and decoherence control schemes in noisy quantum metrology have been widely investigated. These studies supplied possible ways to overcome the no-go theorem of noisy quantum metrology and paved the way to its realization in practical decoherence situations. However, there are still many open problems. First, how to design the optimal measurement scheme which saturates the value of QFI for different quantum resources and encoding protocols? Second, how to generalize the well-established single-parameter quantum metrology to the multiparameter case? Third, how to construct universal control methods to suppress the decoherence, especially for the non-Markovian one, of noisy quantum metrology? Are there more quantum states of matter that could bring quantum superiority in quantum metrology? The efficient exploration of these problems from fundamental physics is hopefully to prompt the further development of quantum metrology in the near future. It also might supply basic physical ideas to reinforce the application of quantum metrology in innovating different kinds of advanced technologies.
###### Acknowledgements.
The work is supported by the National Natural Science Foundation of China (Grants No. 12275109, No. 12205128, No. 11834005, and No. 12247101).
|
2305.06500 | InstructBLIP: Towards General-purpose Vision-Language Models with
Instruction Tuning | Large-scale pre-training and instruction tuning have been successful at
creating general-purpose language models with broad competence. However,
building general-purpose vision-language models is challenging due to the rich
input distributions and task diversity resulting from the additional visual
input. Although vision-language pretraining has been widely studied,
vision-language instruction tuning remains under-explored. In this paper, we
conduct a systematic and comprehensive study on vision-language instruction
tuning based on the pretrained BLIP-2 models. We gather 26 publicly available
datasets, covering a wide variety of tasks and capabilities, and transform them
into instruction tuning format. Additionally, we introduce an instruction-aware
Query Transformer, which extracts informative features tailored to the given
instruction. Trained on 13 held-in datasets, InstructBLIP attains
state-of-the-art zero-shot performance across all 13 held-out datasets,
substantially outperforming BLIP-2 and larger Flamingo models. Our models also
lead to state-of-the-art performance when finetuned on individual downstream
tasks (e.g., 90.7% accuracy on ScienceQA questions with image contexts).
Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over
concurrent multimodal models. All InstructBLIP models are open-sourced at
https://github.com/salesforce/LAVIS/tree/main/projects/instructblip. | Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi | 2023-05-11T00:38:10Z | http://arxiv.org/abs/2305.06500v2 | # InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
###### Abstract
Large-scale pre-training and instruction tuning have been successful at creating general-purpose language models with broad competence. However, building general-purpose vision-language models is challenging due to the rich input distributions and task diversity resulting from the additional visual input. Although vision-language pretraining has been widely studied, vision-language instruction tuning remains under-explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pretrained BLIP-2 models. We gather 26 publicly available datasets, covering a wide variety of tasks and capabilities, and transform them into instruction tuning format. Additionally, we introduce an instruction-aware Query Transformer, which extracts informative features tailored to the given instruction. Trained on 13 held-in datasets, InstructBLIP attains state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and larger Flamingo models. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA questions with image contexts). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models. All InstructBLIP models are open-sourced.
## 1 Introduction
A longstanding aspiration of Artificial Intelligence (AI) research is to build a single model that can solve arbitrary tasks specified by the user. In natural language processing (NLP), instruction tuning [46; 7] proves to be a promising approach toward that goal. By finetuning a large language model (LLM) on a wide range of tasks described by natural language instructions, instruction tuning enables the model to follow arbitrary instructions. Recently, instruction-tuned LLMs have also been leveraged for vision-language tasks. For example, BLIP-2 [20] effectively adapts frozen instruction-tuned LLMs to understand visual inputs and exhibits preliminary capabilities to follow instructions in image-to-text generation.
Compared to NLP tasks, vision-language tasks are more diverse in nature due to the additional visual inputs from various domains. This poses a greater challenge to a unified model that is supposed to generalize to diverse vision-language tasks, many unseen during training. Most previous work can be grouped into two approaches. The first approach, multitask learning [6; 27], formulates various vision-language tasks into the same input-output format. However, we empirically find multitask learning without instructions (Table 4) does not generalize well to unseen datasets and tasks. The
Figure 1: A few qualitative examples generated by our InstructBLIP Vicuna model. Here, a range of its diverse capabilities are demonstrated, including complex visual scene understanding and reasoning, knowledge-grounded image description, multi-turn visual conversation, etc.
second approach [20; 4] extends a pre-trained LLM with additional visual components, and trains the visual components with image caption data. Nevertheless, such data are too limited to allow broad generalization to vision-language tasks that require more than visual descriptions.
To address the aforementioned challenges, this paper presents InstructBLIP, a vision-language instruction tuning framework that enables general-purpose models to solve a wide range of visual-language tasks through a unified natural language interface. InstructBLIP uses a diverse set of instruction data to train a multimodal LLM. Specifically, we initialize training with a pre-trained BLIP-2 model consisting of an image encoder, an LLM, and a Query Transformer (Q-Former) to bridge the two. During instruction tuning, we finetune the Q-Former while keeping the image encoder and LLM frozen. Our paper makes the following key contributions:
* We perform a comprehensive and systematic study on vision-language instruction tuning. We transform 26 datasets into the instruction tuning format and group them into 11 task categories. We use 13 held-in datasets for instruction tuning and 13 held-out datasets for zero-shot evaluation. Moreover, we withhold four entire task categories for zero-shot evaluation at the task level. Exhaustive quantitative and qualitative results demonstrate the effectiveness of InstructBLIP on vision-language zero-shot generalization.
* We propose instruction-aware visual feature extraction, a novel mechanism that enables flexible and informative feature extraction according to the given instructions. Specifically, the textual instruction is given not only to the frozen LLM, but also to the Q-Former, so that it can extract instruction-aware visual features from the frozen image encoder. Also, we propose a balanced sampling strategy to synchronize learning progress across datasets.
* We evaluate and open-source a suite of InstructBLIP models using two families of LLMs: 1) FlanT5 [7], an encoder-decoder LLM finetuned from T5 [34]; 2) Vicuna [2], a decoder-only LLM finetuned from LLaMA [41]. The InstructBLIP models achieve state-of-the-art zero-shot performance on a wide range of vision-language tasks. Furthermore, InstructBLIP models lead to state-of-the-art finetuning performance when used as the model initialization on individual downstream tasks.
## 2 Vision-Language Instruction Tuning
InstructBLIP aims to address the unique challenges in vision-language instruction tuning and provide a systematic study on the models' improved generalization ability to unseen data and tasks. In this section, we first introduce the construction of instruction-tuning data, followed by the training and evaluation protocols. Next, we delineate two techniques to improve instruction-tuning performance from the model and data perspectives, respectively. Lastly, we present the implementation details.
### Tasks and Datasets
To ensure the diversity of instruction tuning data while considering their accessibility, we gather comprehensive set of publicly available vision-language datasets, and transform them into the instruction tuning format. As shown in Figure 2, the final collection covers 11 task categories and 26 datasets, including image captioning [23; 3; 51], image captioning with reading comprehension [38], visual reasoning [16; 24; 29], image question answering [11; 12], knowledge-grounded image question answering [30; 36; 28], image question answering with reading comprehension [31; 39], image question generation (adapted from the QA datasets), video question answering [47; 49], visual conversational question answering [8], image classification [18], and LLaVA-Instruct-150K [25]. We include detailed descriptions and statistics of each dataset in Appendix C.
For every task, we meticulously craft 10 to 15 distinct instruction templates in natural language. These templates serve as the foundation for constructing instruction tuning data, which articulates the task and the objective. For public datasets inherently favoring short responses, we use terms such as _short_ and _briefly_ into some of their corresponding instruction templates to reduce the risk of the model overfitting to always generating short outputs. For the LLaVA-Instruct-150K dataset, we do not incorporate additional instruction templates since it is naturally structured in the instruction format. The full list of instruction templates can be found in Appendix D.
### Training and Evaluation Protocols
To ensure sufficient data and tasks for training and zero-shot evaluation, we divide the 26 datasets into 13 held-in datasets and 13 held-out datasets, indicated by yellow and white respectively in Figure 2. We employ the training sets of the held-in datasets for instruction tuning and their validation or test sets for held-in evaluation.
For held-out evaluation, our aim is to understand how instruction tuning improves the model's zero-shot performance on unseen data. We define two types of held-out data: 1) datasets not exposed to the model during training, but whose tasks are present in the held-in cluster; 2) datasets and their associated tasks that remain entirely unseen during training. Addressing the first type of held-out evaluation is nontrivial due to the data distribution shift between held-in and held-out datasets. For the second type, we hold out several tasks completely, including visual reasoning, video question answering, visual conversational QA, and image classification.
To avoid data contamination, datasets are selected carefully so that no evaluation data appear in the held-in training cluster across different datasets. During instruction tuning, we mix all the held-in training sets and sample instruction templates uniformly for each dataset. The models are trained with the standard language modeling loss to directly generate the response given the instruction. Furthermore, for datasets that involve scene texts, we add OCR tokens in the instruction as supplementary information.
### Instruction-aware Visual Feature Extraction
Existing zero-shot image-to-text generation methods, including BLIP-2, take an instruction-agnostic approach when extracting visual features. That results in a set of static visual representations being fed into the LLM, regardless of the task. In contrast, an instruction-aware vision model can adapt to the task instruction and produce visual representations most conducive to the task at hand. This is clearly advantageous if we expect the task instructions to vary considerably for the same input image.
We show the architecture of InstructBLIP in Figure 3. Similarly to BLIP-2 [20], InstructBLIP utilizes a Query Transformer, or Q-Former, to extract visual features from a frozen image encoder. The input to the Q-Former contains a set of \(K\) learnable query embeddings, which interact with the image encoder's output through cross attention. The output of the Q-Former consists of \(K\) encoded visual vectors, one per query embedding, which then go through a linear projection and are fed to the frozen LLM. As in BLIP-2, the Q-Former is pretrained in two stages using image-caption data
Figure 2: Tasks and their corresponding datasets used for vision-language instruction tuning. The held-in datasets are indicated by yellow and the held-out datasets by white.
before instruction tuning. The first stage pretrains the Q-Former with the frozen image encoder for vision-language representation learning. The second stage adapts the output of Q-Former as soft visual prompts for text generation with a frozen LLM. After pretraining, we finetune the Q-Former with instruction tuning, where the LLM receives as input the visual encodings from the Q-Former and the task instruction.
Extending BLIP-2, InstructBLIP proposes an instruction-aware Q-former module, which takes in the instruction text tokens as additional input. The instruction interacts with the query embeddings through self-attention layers of the Q-Former, and encourages the extraction of task-relevant image features. As a result, the LLM receives visual information conducive to instruction following. We demonstrate empirically (Table 2) that instruction-aware visual feature extraction provides substantial performance improvements for both held-in and held-out evaluations.
### Balancing Training Datasets
Due to the large number of training datasets and the significant differences in the size of each dataset, mixing them uniformly could cause the model to overfit smaller datasets and underfit larger datasets. To mitigate the problem, we propose to sample datasets with probabilities proportional to the square root of their sizes, or the numbers of training samples. Generally, given \(D\) datasets with sizes \(\{S_{1},S_{2},\dots,S_{D}\}\), the probability of a data sample being selected from a dataset \(d\) during training is \(p_{d}=\frac{\sqrt{S_{d}}}{\sum_{i=1}^{D}\sqrt{S_{i}}}\). On top of this formula, we make manual adjustments to the weights of certain datasets to improve optimization. This is warranted by inherent differences in the datasets and tasks that require varying levels of training intensity despite similar sizes. To be specific, we lower the weight of A-OKVQA, which features multiple-choice questions, and increase the weight of OKVQA, which requires open-ended text generation. In Table 2, we show that the balanced dataset sampling strategy improves overall performance for both held-in evaluation and held-out generalization.
### Inference Methods
During inference time, we adopt two slightly different generation approaches for evaluation on different datasets. For the majority of datasets, such as image captioning and open-ended VQA, the instruction-tuned model is directly prompted to generate responses, which are subsequently compared to the ground truth to calculate metrics. On the other hand, for classification and multi-choice VQA tasks, we employ a vocabulary ranking method following previous works [46; 22; 21]. Specifically, we still prompt the model to generate answers, but restrict its vocabulary to a list of candidates. Then, we calculate log-likelihood for each candidate and select the one with the highest value as the final prediction. This ranking method is applied to ScienceQA, IconQA, A-OKVQA (multiple-choice), HatefulMemes, Visual Dialog, MSVD, and MSRVTT datasets. Furthermore, for binary classification,
Figure 3: Model architecture of InstructBLIP. The Q-Former extracts instruction-aware visual features from the output embeddings of the frozen image encoder, and feeds the visual features as soft prompt input to the frozen LLM. We instruction-tune the model with the language modeling loss to generate the response.
we expand the positive and negative labels into a slightly broader set of verbalizers to exploit word frequencies in natural text (e.g., _yes_ and _true_ for the positive class; _no_ and _false_ for the negative class).
For the video question-answering task, we utilize four uniformly-sampled frames per video. Each frame is processed by the image encoder and Q-Former individually, and the extracted visual features are concatenated before being fed into the LLM.
### Implementation Details
Architecture.Thanks to the flexibility enabled by the modular architectural design of BLIP-2, we can quickly adapt the model to a wide range of LLMs. In our experiments, we adopt four variations of BLIP-2 with the same image encoder (ViT-g/14 [10]) but different frozen LLMs, including FlanT5-XL (3B), FlanT5-XXL (11B), Vicuna-7B and Vicuna-13B. FlanT5 [7] is an instruction-tuned model based on the encoder-decoder Transformer T5 [34]. Vicuna [2], on the other hand, is a recently released decoder-only Transformer instruction-tuned from LLaMA [41]. During vision-language instruction tuning, we initialize the model from pre-trained BLIP-2 checkpoints, and only finetune the parameters of Q-Former while keeping both the image encoder and the LLM frozen. Since the original BLIP-2 models do not include checkpoints for Vicuna, we perform pre-training with Vicuna using the same procedure as BLIP-2.
Training and Hyper-parameters.We use the LAVIS library [19] for implementation, training, and evaluation. All models are instruction-tuned with a maximum of 60K steps and we validate model's performance every 3K steps. For each model, a single optimal checkpoint is selected and used for evaluations on all datasets. We employ a batch size of 192, 128, and 64 for the 3B, 7B, and 11/13B models, respectively. The AdamW [26] optimizer is used, with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and a weight decay of 0.05. Additionally, we apply a linear warmup of the learning rate during the initial 1,000 steps, increasing from \(10^{-8}\) to \(10^{-5}\), followed by a cosine decay with a minimum learning rate of 0. All models are trained utilizing 16 Nvidia A100 (40G) GPUs and are completed within 1.5 days.
## 3 Experimental Results and Analysis
### Zero-shot Evaluation
We first evaluate InstructBLIP models on the set of 13 held-out datasets with instructions provided in Appendix E. We compare InstructBLIP with the previous SOTA models BLIP-2 and Flamingo. As demonstrated in Table 1, we achieve new zero-shot SOTA results on all datasets. InstructBLIP consistently surpasses its original backbone, BLIP-2, by a significant margin across all LLMs,
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c} \hline \hline & \multirow{2}{*}{NoCapps} & Flickr 30K & \multirow{2}{*}{GQA} & \multirow{2}{*}{VSR} & \multirow{2}{*}{IconQA} & \multirow{2}{*}{TextVQA} & \multirow{2}{*}{Visidal} & \multirow{2}{*}{HM} & \multirow{2}{*}{ViVi2Wiz} & SciQA & MSVD & MSRV & \multirow{2}{*}{iVQA} \\ & & 30K & & & & & & & & & & & \\ \hline Flamingo-3B [4] & - & 60.6 & - & - & - & 30.1 & - & 53.7 & 28.9 & - & 27.5 & 11.0 & 32.7 \\ Flamingo-9B [4] & - & 61.5 & - & - & - & 31.8 & - & 57.0 & 28.8 & - & 30.2 & 13.7 & 35.2 \\ Flamingo-80B [4] & - & 67.2 & - & - & - & 35.0 & - & 46.4 & 31.6 & - & 35.6 & 17.4 & 40.7 \\ \hline BLIP-2 (FlanT5\({}_{\text{QA}}\)) [20] & 104.5 & 76.1 & 44.0 & 60.5 & 45.5 & 43.1 & 45.7 & 53.0 & 29.8 & 54.9 & 33.7 & 16.2 & 40.4 \\ BLIP-2 (FlanT5\({}_{\text{XQL}}\)) [20] & 98.4 & 73.7 & 44.6 & 68.2 & 45.4 & 44.1 & 46.9 & 52.0 & 29.4 & 64.5 & 34.4 & 17.4 & 45.8 \\ BLIP-2 (Vicuna-7B) & 107.5 & 74.9 & 38.6 & 50.0 & 39.7 & 40.1 & 44.9 & 50.6 & 25.3 & 53.8 & 18.3 & 9.2 & 27.5 \\ BLIP-2 (Vicuna-13B) & 103.9 & 71.6 & 41.0 & 50.9 & 40.6 & 42.5 & 45.1 & 53.7 & 19.6 & 61.0 & 20.3 & 10.3 & 23.5 \\ \hline InstructBLIP (FlanT5\({}_{\text{XL}}\)) & 119.9 & **84.5** & 48.4 & 64.8 & 50.0 & 46.6 & 46.6 & 56.6 & 32.7 & 70.4 & 43.4 & 25.0 & 53.1 \\ InstructBLIP (FlanT5\({}_{\text{XL}}\)) & 120.0 & 83.5 & 47.9 & **65.6** & **51.2** & 46.6 & **48.5** & 54.1 & 30.9 & **70.6** & **44.3** & **25.6** & **53.8** \\ InstructBLIP (Vicuna-7B) & **123.1** & 82.4 & 49.2 & 54.3 & 43.1 & 50.1 & 45.2 & **59.6** & **34.5** & 60.5 & 41.8 & 22.1 & 52.2 \\ InstructBLIP (Vicuna-13B) & 121.9 & 82.8 & **49.5** & 52.1 & 44.8 & **50.7** & 45.4 & 57.5 & 33.4 & 63.1 & 41.2 & 24.8 & 51.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Zero-shot results on the held-out datasets. Here, Visidal, HM and SciQA denote the Visual Dialog, HatefulMemes and ScienceQA datasets, respectively. For ScienceQA, we only evaluate on the set with image context. Following previous works [49; 32; 4], we report the CIDEr score [42] for NoCapps and Flickr30K, iVQA accuracy for iVQA, AUC score for HatefulMemes, and Mean Reciprocal Rank (MRR) for Visual Dialog. For all other datasets, we report the top-1 accuracy (%).
demonstrating the effectiveness of vision-language instruction tuning. For instance, InstructBLIP FlanT5XL yields an average relative improvement of 15.0% when compared to BLIP-2 FlanT5XL. Furthermore, instruction tuning boosts zero-shot generalization on unseen task categories such as video QA. InstructBLIP achieves up to 47.1% relative improvement on MSRVTT-QA over the previous SOTA despite having never been trained with temporal video data. Finally, our smallest InstructBLIP FlanT5XL with 4B parameters outperforms Flamingo-80B on all six shared evaluation datasets with an average relative improvement of 24.8%.
For the Visual Dialog dataset, we choose to report the Mean Reciprocal Rank (MRR) over the Normalized Discounted Cumulative Gain (NDCG) metric. This is because NDCG favors generic and uncertain answers while MRR prefers certain responses [32], making MRR better aligned with the zero-shot evaluation scenario.
### Ablation Study on Instruction Tuning Techniques
To investigate the impact of the instruction-aware visual feature extraction (Section 2.3) and the balanced dataset sampling strategy (Section 2.4), we conduct ablation studies during the instruction tuning process. As illustrated in Table 2, the removal of instruction awareness in visual features downgrades performance significantly across all datasets. The performance drop is more severe in datasets that involve spatial visual reasoning (e.g., ScienceQA) or temporal visual reasoning (e.g., iVQA), where the instruction input to the Q-Former can guide visual features to attend to informative image regions. The removal of the data balancing strategy causes unstable and uneven training, as different datasets achieve peak performance at drastically different training steps. The lack of synchronized progress over multiple datasets harms the overall performance.
### Qualitative Evaluation
Besides the systematic evaluation on public benchmarks, we further qualitatively examine InstructBLIP with more diverse images and instructions. As illustrated in Figure 1, InstructBLIP demonstrates its capacity for complex visual reasoning. For example, it can reasonably infer from the visual scene what could have happened and deduce the type of disaster from the location of the scene, which it extrapolates based on visual evidence like the palm trees. Moreover, InstructBLIP is capable of connecting visual input with embedded textual knowledge and generate informative responses, such as introducing a famous painting. Furthermore, in descriptions of the overall atmosphere, InstructBLIP exhibits the ability to comprehend metaphorical implications of the visual imagery. Finally, we show that InstructBLIP can engage in multi-turn conversations, effectively considering the dialog history when making new responses.
In Appendix B, we qualitatively compare InstructBLIP with concurrent multimodal models (GPT-4 [33], LLaVA [25], MiniGPT-4 [52]). Although all models are capable of generating long-form responses, InstructBLIP's outputs generally contains more proper visual details and exhibits logically coherent reasoning steps. Importantly, we argue that long-form responses are not always preferable. For example, in Figure 2 of the Appendix, InstructBLIP directly addresses the user's intent by adaptively adjusting the response length, while LLaVA and MiniGPT-4 generate long and less
\begin{table}
\begin{tabular}{l|c|c c c c c} \hline \hline Model & \multirow{2}{*}{Held-in Avg.} & \multirow{2}{*}{GQA} &
\begin{tabular}{c} ScienceQA \\ (image-context) \\ \end{tabular} & \multirow{2}{*}{IconQA} & \multirow{2}{*}{Viz/Wiz} & \multirow{2}{*}{iVQA} \\ \hline InstructBLIP (FlanT5x.) & 94.1 & 48.4 & 70.4 & 50.0 & 32.7 & 53.1 \\ w/o Instruction-aware Visual Features & 89.8 & 45.9 (\(\downarrow\)2.5) & 63.4 (\(\downarrow\)7.0) & 45.8 (\(\downarrow\)4.2) & 25.1 (\(\downarrow\)7.6) & 47.5 (\(\downarrow\)5.6) \\ w/o Data Balancing & 92.6 & 46.8 (\(\downarrow\)1.6) & 66.0 (\(\downarrow\)4.4) & 49.9 (\(\downarrow\)0.1) & 31.8 (\(\downarrow\)0.9) & 51.1 (\(\downarrow\)2.0) \\ \hline InstructBLIP (Vicuna-7B) & 100.8 & 49.2 & 60.5 & 43.1 & 34.5 & 52.2 \\ w/o Instruction-aware Visual Features & 98.9 & 48.2 (\(\downarrow\)1.0) & 55.2 (\(\downarrow\)5.3) & 41.2 (\(\downarrow\)1.9) & 32.4 (\(\downarrow\)2.1) & 36.8 (\(\downarrow\)15.4) \\ w/o Data Balancing & 98.8 & 47.8 (\(\downarrow\)1.4) & 59.4 (\(\downarrow\)1.1) & 43.5 (\(\uparrow\)0.4) & 32.3 (\(\downarrow\)2.2) & 50.3 (\(\downarrow\)1.9) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of ablation studies that remove the instruction-aware Visual Features (Section 2.3) and the balanced data sampling strategy (Section 2.4). For held-in evaluation, we compute the average score of four datasets, including COCO Caption, OKVQA, A-OKVQA, and TextCaps. For held-out evaluation, we show five datasets from different tasks.
relevant sentences. These advantages of InstructBLIP are a result of the diverse instruction tuning data and an effective architectural design.
### Instruction Tuning vs. Multitask Learning
A direct analogue to instruction tuning is multitask learning, a widely used method that involves the simultaneous training of multiple datasets with the goal of improving the performance of each individual dataset. To investigate whether the improvement in zero-shot generalization observed in instruction tuning is mainly from the formatting of instructions or merely from multitasking, we conduct a comparative analysis between these two approaches under identical training settings.
Following [46], we consider two multitask training approaches. In the first approach, the model is trained using the vanilla input-output format of the training datasets without instructions. During evaluation, instructions are still provided to the model, indicating the specific task to be performed. However, an exception is made for image captioning, as the model achieves better scores when only receiving the image as input. For the second approach, we take a step towards instruction tuning by prepending a [Task:Dataset] identifier to the text input during training. For example, we prepend [Visual question answering:VQAv2] for the VQAv2 dataset. During evaluation, we explore both instructions and this identifier. Particularly, for the identifier of held-out datasets, we only use the task name since the model never sees the dataset name.
The results are shown in Figure 4, including BLIP-2 zero-shot, multitask training, and instruction tuning. All of these models are based on the BLIP-2 FlanT5XL backbone and adhere to the identical training configurations delineated in Section 2. Overall, we can conclude two insights from the results. Firstly, instruction tuning and multitask learning exhibit similar performance on the held-in datasets. This suggests that the model can fit these two different input patterns comparably well, as long as it has been trained with such data. On the other hand, instruction tuning yields a significant improvement over multitask learning on unseen held-out datasets, whereas multitask learning still performs on par with the original BLIP-2. This indicates that instruction tuning is the key to enhance the model's zero-shot generalization ability.
### Finetuning InstructBLIP on Downstream Tasks
We further finetune the InstructBLIP models to investigate its performance on learning a specific dataset. Compared to most previous methods (e.g., Flamingo, BLIP-2) which increase the input image resolution and finetune the visual encoder on downstream tasks, InstructBLIP maintains the same image resolution (224\(\times\)224) during instruction tuning and keeps the visual encoder frozen during finetuning. This significantly reduces the number of trainable parameters from 1.2B to 188M, thus greatly improves finetuning efficiency.
Figure 4: Comparison of instruction tuning and multitask training based on BLIP-2 FlanT5XL backbone. For held-in evaluation, we compute the average score across all held-in datasets. For held-out evaluation, we compute the average score across GQA, TextVQA, VSR, HatefulMemes, IconQA, ScienceQA, iVQA, VizWiz.
The results are shown in Table 3. Compared to BLIP-2, InstructBLIP leads to better finetuning performance on all datasets, which validates InstructBLIP as a better weight initialization model for task-specific finetuning. InstructBLIP sets new state-of-the-art finetuning performance on ScienceQA (IMG), OCR-VQA, A-OKVQA, and is outperformed on OKVQA by PaLM-E [9] with 562B parameters.
Additionally, we observe that the FlanT5-based InstructBLIP is superior at multi-choice tasks, whereas Vicuna-based InstructBLIP is generally better at open-ended generation tasks. This disparity can be primarily attributed to the capabilities of their frozen LLMs, as they both employ the same image encoder. Although FlanT5 and Vicuna are both instruction-tuned LLMs, their instruction data significantly differ. FlanT5 is mainly finetuned on NLP benchmarks containing many multi-choice QA and classification datasets, while Vicuna is finetuned on open-ended instruction-following data.
## 4 Related Work
Instruction tuning aims to teach language models to follow natural language instructions, which has been shown to improve their generalization performance to unseen tasks. Some methods collect instruction tuning data by converting existing NLP datasets into instruction format using templates [46; 7; 35; 45]. Others use LLMs (e.g., GPT-3 [5]) to generate instruction data [2; 13; 44; 40] with improved diversity.
Instruction-tuned LLMs have been adapted for vision-to-language generation tasks by injecting visual information to the LLMs. BLIP-2 [20] uses frozen FlanT5 models, and trains a Q-Former to extract visual features as input to the LLMs. MiniGPT-4 [52] uses the same pretrained visual encoder and Q-Former from BLIP-2, but uses Vicuna [2] as the LLM and performs training using ChatGPT [1]-generated image captions longer than the BLIP-2 training data. LLaVA [25] directly projects the output of a visual encoder as input to a LLaMA/Vinuca LLM, and finetunes the LLM on vision-language conversational data generated by GPT-4 [33]. mPLUG-owl [50] performs low-rank adaption [14] to a LLaMA [41] model using both text instruction data and vision-language instruction data from LLaVA. A separate work is MultiInstruct [48], which performs vision-language instruction tuning without a pretrained LLM, leading to less competitive performance.
Compared to existing methods, InstructBLIP uses a much wider range of vision-language instruction data, covering both template-based converted data and LLM-generated data. Architecture wise, InstructBLIP proposes an instruction-aware visual feature extraction mechanism. Furthermore, our paper provides a comprehensive analysis on various aspects of vision-language instruction tuning, validating its advantages on generalizing to unseen tasks.
## 5 Conclusion
In this paper, we present InstructBLIP, a simple yet novel instruction tuning framework towards generalized vision-language models. We perform a comprehensive study on vision-language instruction tuning and demonstrate the capability of InstructBLIP models to generalize to a wide range of unseen tasks with state-of-the-art performance. Qualitative examples also exhibit InstructBLIP's various
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline & \begin{tabular}{c} ScienceQA \\ IMG \\ \end{tabular} & OCR-VQA & OKVQA & \multicolumn{2}{c}{\begin{tabular}{c} A-OKVQA \\ Direct Answer \\ \end{tabular}} & \multicolumn{2}{c}{\begin{tabular}{c} A-OKVQA \\ Multi-choice \\ \end{tabular}} \\ & & & & Val & Test & Val & Test \\ \hline \multirow{2}{*}{Previous SOTA} & LLaVA [25] & GIT [43] & PaLM-E (562B) [9] & [15] & [37] & [15] & [37] \\ & 89.0 & 70.3 & **66.1** & 56.3 & 61.6 & 73.2 & 73.6 \\ \hline BLIP-2 (FlanT5XXL) & 89.5 & 72.7 & 54.7 & 57.6 & 53.7 & 80.2 & 76.2 \\ InstructBLIP (FlanT5XXL) & **90.7** & **73.3** & 55.5 & 57.1 & 54.8 & **81.0** & **76.7** \\ \hline BLIP-2 (Vicuna-7B) & 77.3 & 69.1 & 59.3 & 60.0 & 58.7 & 72.1 & 69.0 \\ InstructBLIP (Vicuna-7B) & 79.5 & 72.8 & 62.1 & **64.0** & **62.1** & 75.7 & 73.4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of finetuning BLIP-2 and InstructBLIP on downstream datasets. Compared to BLIP-2, InstructBLIP provides a better weight initialization model and achieves SOTA performance on three out of four datasets.
capabilities on instruction following, such as complex visual reasoning, knowledge-grounded image description, and multi-turn conversations. Furthermore, we show that InstructBLIP can serve as an enhanced model initialization for downstream task finetuning, achieving state-of-the-art results. We hope that InstructBLIP can spur new research in general-purpose multimodal AI and its applications.
|
2308.09817 | Accelerating force calculation for dislocation dynamics simulations | Discrete dislocation dynamics (DDD) simulations offer valuable insights into
the plastic deformation and work-hardening behavior of metals by explicitly
modeling the evolution of dislocation lines under stress. However, the
computational cost associated with calculating forces due to the long-range
elastic interactions between dislocation segment pairs is one of the main
causes that limit the achievable strain levels in DDD simulations. These
elastic interaction forces can be obtained either from the integral of the
stress field due to one segment over the other segment, or from the derivatives
of the elastic interaction energy. In both cases, the results involve a
double-integral over the two interacting segments. Currently, existing DDD
simulations employ the stress-based approach with both integrals evaluated
either from analytical expressions or from numerical quadrature. In this study,
we systematically analyze the accuracy and computational cost of the
stress-based and energy-based approaches with different ways of evaluating the
integrals. We find that the stress-based approach is more efficient than the
energy-based approach. Furthermore, the stress-based approach becomes most
cost-effective when one integral is evaluated from analytic expression and the
other integral from numerical quadrature. For well-separated segment pairs
whose center distances are more than three times their lengths, this
one-analytic-integral and one-numerical-integral approach is more than three
times faster than the fully analytic approach, while the relative error in the
forces is less than $10^{-3}$. Because the vast majority of segment pairs in a
typical simulation cell are well-separated, we expect the hybrid
analytic/numerical approach to significantly boost the numerical efficiency of
DDD simulations of work hardening. | Rasool Ahmad, Wei Cai | 2023-08-18T20:59:00Z | http://arxiv.org/abs/2308.09817v1 | # Accelerating force calculation for dislocation dynamics simulations
###### Abstract
Discrete dislocation dynamics (DDD) simulations offer valuable insights into the plastic deformation and work-hardening behavior of metals by explicitly modeling the evolution of dislocation lines under stress. However, the computational cost associated with calculating forces due to the long-range elastic interactions between dislocation segment pairs is one of the main causes that limit the achievable strain levels in DDD simulations. These elastic interaction forces can be obtained either from the integral of the stress field due to one segment over the other segment, or from the derivatives of the elastic interaction energy. In both cases, the results involve a double-integral over the two interacting segments. Currently, existing DDD simulations employ the stress-based approach with both integrals evaluated either from analytical expressions or from numerical quadrature. In this study, we systematically analyze the accuracy and computational cost of the stress-based and energy-based approaches with different ways of evaluating the integrals. We find that the stress-based approach is more efficient than the energy-based approach. Furthermore, the stress-based approach becomes most cost-effective when one integral is evaluated from analytic expression and the other integral from numerical quadrature. For well-separated segment pairs whose center distances are more than three times their lengths, this one-analytic-integral and one-numerical-integral approach is more than three times faster than the fully analytic approach, while the relative error in the forces is less than \(10^{-3}\). Because the vast majority of segment pairs in a typical simulation cell are well-separated, we expect the hybrid analytic/numerical approach to significantly boost the numerical efficiency of DDD simulations of work hardening.
keywords: Dislocation dynamics simulations; Stress; Peach-Koehler force; Dislocation interaction; Automatic differentiation +
Footnote †: journal:
## 1 Introduction
Metals and alloys, such as copper, iron, and steel, have always played crucial roles in providing the tools and infrastructures necessary for the continued development of human civilization. The technologically important structural properties of these crystalline materials, including strength, ductility, formability, fracture toughness, creep are directly connected to their plastic deformation behavior under load [1; 2]. Fundamentally, plastic deformation of crystalline materials is governed by motion and evolution of dislocations, linear defects in the crystal lattice, and their interactions with other defects [3; 4]. Establishing a quantitative connection between microscopic dislocation evolution and macroscopic mechanical properties has been a long-standing goal of computational materials science.
Atomistic simulations are a widely-used computational tool to probe the fundamental mechanisms in materials behavior [5]. Despite their generality and fidelity, the computational cost of atomistic simulations becomes prohibitively high for simulation cell sizes approaching one micron. However, dislocations are known to self-organize into structures with characteristic lengths over several microns. Hence, it is generally expected that a much larger simulation cell (e.g. \(>10\,\mu\)m) is needed to capture the essential physics of plastic deformation in metals. These considerations have led to the development of Discrete Dislocation Dynamics (DDD) approach that ignores the atomic-level details and only keeps track of the evolution of dislocations as a line network embedded in an elastic medium [6; 7; 8; 9]. Since its origin more than three decades ago, DDD simulation has now emerged as a powerful tool to reveal the fundamental dislocation mechanisms of plastic deformation of crystalline materials.
In DDD simulations, the dislocation lines are discretized into a set of nodes, which are the fundamental degrees of freedom, connected by straight line segments [9] or curved splines [10]. Here, we focus our attention on straight
segment discretization of dislocation network as implemented in ParaDiS [11] and other DDD programs [9; 7; 12; 13]. The force on each node is calculated by considering the long-range elastic interactions between all pairs of dislocation segments and any externally applied load. The nodal velocities are next computed from the nodal forces using an appropriate dislocation mobility law. The nodes are evolved by numerically integrating the equation of motion. A set of topological operations are then performed to account for atomic-scale dislocation mechanisms such as junction formation, annihilation, and cross-slip. A remeshing step is also applied to maintain good-quality discretization of the dislocation lines as their lengths change. The above steps are repeated until the desired strain level is achieved [9; 14; 15].
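Schematically, one DDD timestep has the following control flow (a sketch with trivial stand-in kernels; production codes such as ParaDiS implement each stub):

```python
import numpy as np

def compute_nodal_forces(x, segs, sigma_app):  # O(N^2) segment-pair sum + PK force
    return np.zeros_like(x)                    # stand-in
def mobility_law(f):                           # material-specific drag relation
    return f                                   # stand-in: overdamped, v proportional to f
def topological_ops(x, segs):                  # junctions, annihilation, cross-slip
    return x, segs                             # stand-in
def remesh(x, segs):                           # keep segment lengths well-conditioned
    return x, segs                             # stand-in

def ddd_step(x, segs, sigma_app, dt):
    f = compute_nodal_forces(x, segs, sigma_app)
    x = x + dt * mobility_law(f)               # explicit time integration
    x, segs = topological_ops(x, segs)
    return remesh(x, segs)
```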
One particular feature of DDD simulations is the continuous and steady increase of the degrees of freedom (number of nodes) because of the increase in dislocation density with continued plastic deformation. This increase in degrees of freedom is accompanied by a corresponding increase in the computational cost. The computational cost associated with force computation is further exacerbated by the fact that for numerical stability the simulation timestep becomes shorter with increasing dislocation density. This means more computational cycles are needed to reach a given increment of physical time (i.e. a given increment of strain for a constant strain-rate simulation) as the simulation proceeds. The continuous increase in the computational cost with deformation is a major bottleneck that limits the plastic strain accessible by DDD simulations.
Previous attempts to increase the amount plastic deformation accessible by DDD simulations have focused mainly on increasing the timestep taken during one integration step. This is achieved primarily by using an efficient subcycling time integrator [16; 17]. Furthermore, an implementation of these algorithms on graphical processing units (GPUs) has made it possible to achieve a strain of \(\sim 1\%\) in one day wall-clock time for single crystal Cu under a strain rate of \(10^{3}\) s\({}^{-1}\)[18]. Even in these algorithms and implementations, the force calculation remains the most computationally expensive step. Thus, an efficient force calculation algorithm will further enhance the capability of DDD simulations to reach higher strain levels and at lower strain rates.
In DDD simulations, as implemented in ParaDiS, forces arising from the long-range elastic interactions between a pair of segments are described by a non-singular continuum theory of dislocations [19]. The interaction forces on the end nodes of these segments can be obtained using two approaches: (1) from the integral of the Peach-Koehler (PK) force over one segment due to the stress field of the other, and (2) from the derivative of the elastic interaction energy between the two segments with respect to nodal positions. In the following, we shall refer to the first approach as the stress-based formulation, and the second approach as the energy-based formulation. The nodal forces from these two formulations do not match each other for a given pair of segments. But once the contribution from all segment pairs in a dislocation configuration consisting of complete loops are summed together, the total forces on any node obtained from these two formulations agree with each other.
In both the stress-based and energy-based formulations, the nodal force expressions involve a double-integral over the two interacting dislocation segments. These integrals can either be performed analytically, yielding a close-form expression that can then be evaluated, or be performed numerically (e.g. by Gaussian quadrature). The current implementation in ParaDiS follows the stress-based formulation in which both integrals have been carried out analytically [9]. On the other hand, Zbib et al. [7; 20], Zbib and de la Rubia [8] implement a purely numerical scheme to calculate the forces from stress-based formulation.
In this work, we compare the accuracy and efficiency of various methods to compute the nodal forces using both the stress-based and energy-based formulations. We confirm that the stress-based formulation leads to more efficient implementations than the energy-based formulation. Furthermore, we find that the stress-based formulation becomes most efficient when one integral is carried out analytically while the other integral is obtained by numerical quadrature. For well-separated segment pairs whose center distances are more than three times their lengths, this one-analytic-integral and one-numerical-integral approach is more than three times faster than the fully analytic approach, with the relative error in the forces below \(10^{-3}\). Therefore, we propose to use this hybrid analytic/numerical approach to evaluate the interaction forces for the vast majority of segment pairs beyond a cut-off distance and use the existing fully analytic approach to evaluate forces between segment pairs within the cut-off. This method should lead to a substantial increase of computational efficiency of DDD simulations with negligible loss of accuracy.
The rest of the paper is organized as follows. Section 2 briefly presents the theory behind the interaction forces between two straight dislocation segments. Section 3 presents the results on the accuracy and computational efficiency of the various numerical implementations. Finally, Section 4 presents some discussions and conclusive remarks.
## 2 Force between two straight dislocation segments
In this section, we briefly present the theoretical framework for computing the pair forces between two straight, finite-length, dislocation segments. In the non-singular continuum elasticity theory of dislocations [19], the Burgers vectors are distributed over a finite region of space instead of concentrated at the dislocation line. A specific isotropic distribution is chosen so that analytic expressions for the stress field of straight dislocation segments and their interaction energies can be obtained, similar to the classical singular elasticity theory of dislocations. The non-singular theory contains an additional core parameter \(a\) that characterizes the length-scale of Burgers vector distribution (the singular theory is recovered in the limit of \(a\to 0\)). For a finite value of core parameter \(a\), the stress field and interaction energies remain finite, and the nodal forces arising from the PK forces due to the stress field are consistent with those from the derivatives of the interaction energy, as long as complete dislocation loops are considered. As we shall see below, there are multiple approaches to compute the nodal forces from two interacting straight dislocation segments. Although mathematically equivalent, these approaches will lead to implementations with different accuracy and efficiency characteristics.
### Energy-based formulation
We now present the energy-based formulation of nodal forces. First, let us consider two dislocation loops \(C\) and \(C^{\prime}\) with Burgers vector, \(\mathbf{b}\) and \(\mathbf{b}^{\prime}\), respectively, in an infinite elastic medium with shear modulus \(\mu\) and Poisson's ratio
Figure 1: (a) A pair of finite dislocation segments interacting with each other through their elastic fields. The first segment starts at \(\mathbf{x}_{1}\) and ends at \(\mathbf{x}_{2}\) with Burgers vector \(\mathbf{b}\), and the second segment starts at \(\mathbf{x}_{3}\) and ends at \(\mathbf{x}_{4}\) with Burgers vector \(\mathbf{b}^{\prime}\). A generic point on segment 1-2 is denoted by \(\mathbf{x}\) and on segment 3-4 by \(\mathbf{x}^{\prime}\). (b) Schematic illustration of the numeric/analytic hybrid approach to compute force between interacting pairs. Thick solid black dislocation segment in the middle is the one whose interaction is considered with every other dislocation segment. The interactions with dislocation segments (shown in gray) lying within \(r_{\text{cut}}\) (three times the segment length) as well as the interaction with itself is treated with analytic expressions. The interactions with far-away dislocation segments (shown in thin black line) are treated by numerical methods.
\(\nu\). The interaction energy between the two loops is given in terms of a double-integral over the two loops as
\[E_{\text{loop}}= -\frac{\mu}{4\pi}\oint_{C}\oint_{C^{\prime}}\nabla^{2}R_{a}\left( \mathbf{b}\times\mathbf{b}^{\prime}\right)\cdot(d\mathbf{x}\times d\mathbf{x}^{\prime})+\frac{ \mu}{8\pi}\oint_{C}\oint_{C^{\prime}}\nabla^{2}R_{a}\left(\mathbf{b}\cdot d\mathbf{x} \right)(\mathbf{b}^{\prime}\cdot d\mathbf{x}^{\prime})\] \[+\frac{\mu}{4\pi(1-\nu)}\oint_{C}\oint_{C^{\prime}}(\mathbf{b}\times d \mathbf{x})\cdot\mathbf{T}\cdot(\mathbf{b}^{\prime}\times d\mathbf{x}^{\prime}),\] (1) where \[R_{a}= \sqrt{\|\mathbf{x}-\mathbf{x}^{\prime}\|^{2}+a^{2}},\quad\nabla^{2}R_{a} =\frac{2}{R_{a}}+\frac{a^{2}}{R_{a}^{3}},\quad\mathbf{T}=\frac{\partial^{2}R_{a}} {\partial\mathbf{x}\partial\mathbf{x}}=\frac{\partial^{2}R_{a}}{\partial\mathbf{x}^{ \prime}\partial\mathbf{x}^{\prime}}.\]
Here, \(\mathbf{x}\) is point on loop \(C\) and \(\mathbf{x}^{\prime}\) on loop \(C^{\prime}\). We note that there exists an alternate form for the last term in Equation 1 as derived by DeWit and Koehler [21] (in the classical singular continuum theory but easily generalizable to the non-singular theory). The alternate form gives the same result as Equation 1 as long as the integrals are carried over two closed dislocation loops. For the rest of this paper, we will continue to use Equation 1.
We now consider two straight dislocation segments of finite lengths. As shown in Figure 1(a), the first dislocation segment with Burgers vector \(\mathbf{b}\) starts at \(\mathbf{x}_{1}\) and ends at \(\mathbf{x}_{2}\); the second dislocation segment with Burgers vector \(\mathbf{b}^{\prime}\) starts at \(\mathbf{x}_{3}\) and ends at \(\mathbf{x}_{4}\). The interaction energy between the two segments is expressed by simply changing the closed line integrals in Equation 1 into open line integrals as
\[E_{\text{int}}= -\frac{\mu}{4\pi}\int_{\mathbf{x}_{1}}^{\mathbf{x}_{2}}\int_{\mathbf{x}_{3}} ^{\mathbf{x}_{4}}\nabla^{2}R_{a}\left(\mathbf{b}\times\mathbf{b}^{\prime}\right)\cdot(d \mathbf{x}\times d\mathbf{x}^{\prime})+\frac{\mu}{8\pi}\int_{\mathbf{x}_{1}}^{\mathbf{x}_{2}} \int_{\mathbf{x}_{3}}^{\mathbf{x}_{4}}\nabla^{2}R_{a}\left(\mathbf{b}\cdot d\mathbf{x}\right) (\mathbf{b}^{\prime}\cdot d\mathbf{x}^{\prime}) \tag{2}\] \[+\frac{\mu}{4\pi(1-\nu)}\int_{\mathbf{x}_{1}}^{\mathbf{x}_{2}}\int_{\mathbf{ x}_{3}}^{\mathbf{x}_{4}}(\mathbf{b}\times d\mathbf{x})\cdot\mathbf{T}\cdot(\mathbf{b}^{\prime} \times d\mathbf{x}^{\prime}),\]
The double-integral in Equation (2) can be carried out analytically, yielding closed-form expressions for the interaction energy between two straight dislocation segments [19]. The forces on the four nodes can then be obtained by taking the negative gradient of the interaction energy with respect to the nodal coordinates as
\[\mathbf{F}_{i}=-\frac{\partial E_{\text{int}}}{\partial\mathbf{x}_{i}},\quad i=1,2,3,4. \tag{3}\]
Given that the analytic expression of \(E_{\text{int}}\) is already very complicated, the closed-form expression of \(\mathbf{F}_{i}\) would be tedious to write down and to implement. Instead, we can use the automatic differentiation (autograd) tools, widely implemented in modern machine learning packages such as PyTorch, JAX, TensorFlow, etc., to carry out the spatial derivative. In this work, we use the PyTorch package [22] to perform the automatic differentiation of the analytic energy expression to obtain nodal forces. We shall call this approach the Energy-Based fully Analytic approach (EB-A), see Table 1.
Alternatively, we can perform the double-integral in Equation (2) numerically using Gaussian-Legendre quadrature. The nodal force can then be obtained using autograd. We shall call this approach the Energy-Based fully Numerical approach (EB-N2), where N2 means both integrals are carried out numerically. In principle, we can imagine a method in which one of the integrals is carried out analytically and the other one numerically (EB-N1), but we will not examine the performance of this possible implementation in this paper.
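For concreteness, the following is a minimal self-contained sketch of EB-N2 (our illustration, not the paper's code): Equation (2) is evaluated on a tensor-product Gauss-Legendre grid using \(\nabla^{2}R_{a}=2/R_{a}+a^{2}/R_{a}^{3}\) and \(T_{ij}=\delta_{ij}/R_{a}-r_{i}r_{j}/R_{a}^{3}\) with \(\mathbf{r}=\mathbf{x}-\mathbf{x}^{\prime}\), and the nodal forces of Equation (3) are then obtained by automatic differentiation; the material constants follow the test values used later in Section 3.

```python
import math
import numpy as np
import torch

mu, nu, a = 50.0, 0.3, 0.01   # shear modulus, Poisson ratio, core width (test values)

def interaction_energy(x1, x2, x3, x4, b, bp, nq=3):
    """EB-N2: evaluate Eq. (2) with an nq-point Gauss-Legendre rule per segment."""
    u, w = np.polynomial.legendre.leggauss(nq)               # rule on [-1, 1]
    s, ws = torch.tensor((u + 1) / 2), torch.tensor(w / 2)   # mapped to [0, 1]
    dx, dxp = x2 - x1, x4 - x3                               # chord vectors (dx = chord*ds)
    xs, xps = x1 + s[:, None] * dx, x3 + s[:, None] * dxp    # quadrature points
    r = xs[:, None, :] - xps[None, :, :]                     # separations, shape (nq, nq, 3)
    Ra = torch.sqrt((r * r).sum(-1) + a**2)                  # non-singular distance
    lap = 2 / Ra + a**2 / Ra**3                              # laplacian of R_a
    W = ws[:, None] * ws[None, :]                            # tensor-product weights
    t1 = -mu / (4 * math.pi) * torch.dot(torch.linalg.cross(b, bp),
                                         torch.linalg.cross(dx, dxp)) * (W * lap).sum()
    t2 = mu / (8 * math.pi) * torch.dot(b, dx) * torch.dot(bp, dxp) * (W * lap).sum()
    bxdx, bpxdxp = torch.linalg.cross(b, dx), torch.linalg.cross(bp, dxp)
    # (b x dx) . T . (b' x dx')  with  T_ij = delta_ij/R_a - r_i r_j / R_a^3
    quad = torch.dot(bxdx, bpxdxp) / Ra - (r @ bxdx) * (r @ bpxdxp) / Ra**3
    t3 = mu / (4 * math.pi * (1 - nu)) * (W * quad).sum()
    return t1 + t2 + t3

nodes = [torch.tensor(v, dtype=torch.float64, requires_grad=True)
         for v in ([0.0, 0, 0], [2.0, 0, 0], [0.0, 6, 0], [1.0, 6, 2])]
b = torch.tensor([1.0, 0, 0], dtype=torch.float64)
bp = torch.tensor([0.0, 1, 0], dtype=torch.float64)
E = interaction_energy(*nodes, b, bp)
F1, F2, F3, F4 = torch.autograd.grad(-E, nodes)              # F_i = -dE/dx_i (Eq. 3)
```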
### Stress-based formulation
We now present the stress-based formulation of nodal forces. In the non-singular elasticity theory, the stress field at a point \(\mathbf{x}\) due to a dislocation loop \(C^{\prime}\) with Burgers vector \(\mathbf{b}^{\prime}\) is
\[\sigma_{\alpha\beta}(\mathbf{x})=\frac{\mu}{8\pi}\oint_{C^{\prime}}\frac{\partial^{3}R_{a}}{\partial x_{i}\partial x_{p}\partial x_{p}}\left[b^{\prime}_{m}\varepsilon_{im\alpha}dx^{\prime}_{\beta}+b^{\prime}_{m}\varepsilon_{im\beta}dx^{\prime}_{\alpha}\right]+\frac{\mu}{4\pi(1-\nu)}\oint_{C^{\prime}}b^{\prime}_{m}\varepsilon_{imk}\left(\frac{\partial^{3}R_{a}}{\partial x_{i}\partial x_{\alpha}\partial x_{\beta}}-\delta_{\alpha\beta}\frac{\partial^{3}R_{a}}{\partial x_{i}\partial x_{p}\partial x_{p}}\right)dx^{\prime}_{k} \tag{4}\]
where \(\mathbf{x}^{\prime}\) is point on the dislocation loop, \(\varepsilon_{ijk}\) is the Levi-Civita symbol, and \(\delta_{ij}\) is the Kronecker delta. The stress field due to a finite dislocation segment between \(\mathbf{x}_{3}\) and \(\mathbf{x}_{4}\) is obtained simply by converting the closed line integral in Equation (4) to an open line integral as
\[\sigma_{\alpha\beta}^{3-4}(\mathbf{x})=\frac{\mu}{8\pi}\int_{\mathbf{x}_{3}}^{\mathbf{x}_{4}}\frac{\partial^{3}R_{a}}{\partial x_{i}\partial x_{p}\partial x_{p}}\left[b^{\prime}_{m}\varepsilon_{im\alpha}dx^{\prime}_{\beta}+b^{\prime}_{m}\varepsilon_{im\beta}dx^{\prime}_{\alpha}\right]+\frac{\mu}{4\pi(1-\nu)}\int_{\mathbf{x}_{3}}^{\mathbf{x}_{4}}b^{\prime}_{m}\varepsilon_{imk}\left(\frac{\partial^{3}R_{a}}{\partial x_{i}\partial x_{\alpha}\partial x_{\beta}}-\delta_{\alpha\beta}\frac{\partial^{3}R_{a}}{\partial x_{i}\partial x_{p}\partial x_{p}}\right)dx^{\prime}_{k} \tag{5}\]
The line integral in Equation (5) can be carried out analytically, resulting in a closed-form expression for the segment stress [19] (see A). The nodal forces on the other dislocation segment with endpoints on \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) and Burgers vector \(\mathbf{b}\) are then computed by integrating the local PK force due to stress of the segment 3-4 over the segment 1-2 as
\[\mathbf{F}_{1}=\int_{\mathbf{x}_{1}}^{\mathbf{x}_{2}}\left(\mathbf{\sigma}^{3-4}( \mathbf{x})\cdot\mathbf{b}\times\mathbf{t}\right)N_{1}(\mathbf{x})d\mathbf{x};\qquad\mathbf{F}_{2}= \int_{\mathbf{x}_{1}}^{\mathbf{x}_{2}}\left(\mathbf{\sigma}^{3-4}(\mathbf{x})\cdot\mathbf{b}\times \mathbf{t}\right)N_{2}(\mathbf{x})d\mathbf{x}, \tag{6}\]
where \(\mathbf{t}=(\mathbf{x}_{2}-\mathbf{x}_{1})/\|\mathbf{x}_{2}-\mathbf{x}_{1}\|\) is the unit tangent vector and \(N_{1}(\mathbf{x})\) and \(N_{2}(\mathbf{x})\) are the linear shape functions of the dislocation segment 1-2. \(N_{1}(\mathbf{x}_{1})=1\), \(N_{1}(\mathbf{x}_{2})=0\) and \(N_{2}(\mathbf{x}_{1})=0\), \(N_{2}(\mathbf{x}_{2})=1\). The line integral in Equation (6) can also be integrated analytically, resulting in a closed-form expression for the nodal forces [9]. This is the approach implemented in ParaDiS, and we shall call it the Stress-Based fully Analytic approach (SB-A).
Alternatively, we can use the analytic expression of the segment stress, but evaluate the integral in Equation (6) using Gaussian-Legendre quadrature. We shall call this hybrid approach SB-N1, where N1 indicates that one of the two line integrals is evaluated numerically. Furthermore, we can also evaluate both line integrals in Equations (5) and (6) numerically, and we shall call the approach SB-N2. The different methods described above are summarized in Table 1, and their accuracy and computational efficiency will be compared in the next section.
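As an illustration of the SB-N1 step, the sketch below integrates the shape-function-weighted PK force of Equation (6) by Gauss-Legendre quadrature, interpreting \(d\mathbf{x}\) as the arc-length element; the closed-form stress of segment 3-4 is assumed to be available as a callable `stress_34` (the name is illustrative):

```python
import numpy as np

def nodal_forces_quadrature(x1, x2, b, stress_34, nq=3):
    """SB-N1 sketch: nodal forces on segment 1-2 from Eq. (6).
    `stress_34(x)` is assumed to return the 3x3 stress tensor of segment 3-4
    at point x, e.g. from the closed-form non-singular expressions."""
    u, w = np.polynomial.legendre.leggauss(nq)
    s, ws = (u + 1) / 2, w / 2                    # quadrature rule mapped to [0, 1]
    L = x2 - x1
    t = L / np.linalg.norm(L)                     # unit tangent of segment 1-2
    F1, F2 = np.zeros(3), np.zeros(3)
    for si, wi in zip(s, ws):
        x = x1 + si * L
        f = np.cross(stress_34(x) @ b, t) * np.linalg.norm(L)  # PK density * |dx|
        F1 += wi * (1 - si) * f                   # N1(x(s)) = 1 - s
        F2 += wi * si * f                         # N2(x(s)) = s
    return F1, F2
```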
## 3 Results
In this section, we compare the accuracy and computational efficiency of the various methods for calculating the nodal forces due to elastic interaction between two straight dislocation segments of finite lengths. All methods are implemented in Python. The autograd tool in the PyTorch library is used for differentiation of the energy function to obtain nodal forces in energy-based approaches.
To construct the test cases, we randomly generate 8,000 pairs of dislocation segments. Each segment has a randomly chosen Burgers vector and a random line orientation, but a fixed length of 2 nm. The separation between midpoints of the two segments is a random number uniformly distributed between 6.0 and 30.0 nm. The elastic medium has a shear modulus of \(\mu=50\) GPa and Poisson's ratio \(\nu=0.3\). The dislocation core parameter is chosen to be \(a=0.01\) nm.
### Energy-based methods: EB-A vs EB-N2
Here we compare the interaction energy and forces between two straight segments using the EB-A and EB-N2 methods. The results from the EB-A method are considered to be exact, based on which the error of the EB-N2 method is computed. Figure 2(a) shows that the error in the interaction energy computed by the EB-N2 method decreases exponentially fast with the number of quadrature points on each segment. The boxplot shows that at each chosen number of quadrature points, there is a significant spread of relative errors among the randomly generated segment pairs.
Figure 2(b) shows the relative error in the forces on the four nodes as a function of the number of quadrature points. The relative error in forces is computed from the magnitude of the force difference between EB-N2 and EB-A methods divided by the magnitude of the force computed by the EB-A method. The nodal forces computed by the EB-N2 method also converge to the exact values exponentially fast (in most cases) with an increasing number of quadrature points. Furthermore, only 3 quadrature points are enough to bring the maximum relative error in force down to below \(10^{-4}\), i.e. 0.01%. In addition, as shown by the whiskers and the 99\({}^{\text{th}}\) percentile marks, in the vast
\begin{table}
\begin{tabular}{l|c|c} \hline & Energy-based formulation & Stress-based formulation \\ \hline Both integrals analytic & EB-A & SB-A \\ \hline One integral analytic, one integral numeric & -- & SB-N1 \\ \hline Both integrals numeric & EB-N2 & SB-N2 \\ \hline \end{tabular}
\end{table}
Table 1: Descriptions of the various methods considered here for evaluating nodal forces due to elastic interaction between dislocation segment pairs. The methods are characterized as being either energy-based (EB) or stress-based (SB), and how the two line-integrals are carried out: both integrals analytic (A), one integral analytic and one integral numeric (N1), or both integrals numeric (N2).
majority of cases, the errors are orders of magnitude lower than the maximum error. For instance, using 3 quadrature points, in 75% of the cases, the relative errors in force lie below \(10^{-7}\), and in 99% of the cases, the relative errors lie below \(10^{-5}\).
We now compare the computation times of the EB-A and EB-N2 methods to calculate the interaction forces between a pair of dislocation segments. The computational times for the EB-A and EB-N2 methods (the latter using 3 quadrature points) are listed in Table 2. These data are also plotted in Figure 3, together with the computational time for the EB-N2 method as a function of the number of quadrature points on each segment. All these time data are the averages of 10000 different calculations on a CPU machine. The computation time for the EB-N2 method scales quadratically with the number of quadrature points due to the double line integral involved in Equation 2. As shown in Figure 3, even when 10 quadrature points are used on each segment, the EB-N2 method is still more efficient than the EB-A method. On the other hand, Figure 2(b) shows that 3 quadrature points are already sufficient to bring the relative error of the EB-N2 method to below \(10^{-4}\). Therefore, for dislocation pairs where the separation between the midpoints of the segments is more than three times the length of the segments, the EB-N2 method (using 3 quadrature points) is considered to be sufficiently accurate and is 30 times faster than the EB-A method (see Table 2).
### Stress-based methods: SB-A vs SB-N1 vs SB-N2
Here we compare the interaction forces between two straight segments using the SB-A, SB-N1 and SB-N2 methods. The tests are performed on the same set of segment pairs as those used in Section 3.1. The results from the SB-A method are considered to be exact, from which the errors of the SB-N1 and SB-N2 methods are computed.
Figure 4 (a) shows that the results from the SB-N1 method converge exponentially fast with an increasing number of quadrature points. Using 3 quadrature points, the maximum relative error of the SB-N1 method is already less than
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline Method & EB-A & EB-N2 & SB-A & SB-N1 & SB-N2 \\ & & (3 quadrature points) & & (3 quadrature points) & (3 quadrature points) \\ \hline Time (s) & \(4.2\times 10^{-1}\) & \(1.4\times 10^{-2}\) & \(4.7\times 10^{-4}\) & \(1.5\times 10^{-4}\) & \(2.4\times 10^{-4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Time (in seconds) taken by the different methods to evaluate the force due to the elastic interaction between a pair of dislocation segments.
Figure 2: Boxplot of the relative errors of the EB-N2 method in (a) interaction energies and (b) nodal forces between two dislocation segments as a function of the number of quadrature points. The orange dots on each box whisker denotes the \(99^{\text{th}}\) percentile of the relative error data.
\(10^{-3}\), i.e. 0.1%, which is considered sufficiently small. In fact, the vast majority of the relative errors are orders of magnitude lower than the maximum error. For instance, using 3 quadrature points, in 75% of the cases, the relative errors in force lie below \(10^{-6}\), and in 99% of the cases, the relative errors lie below \(10^{-4}\). Figure 4 (b) presents the results for SB-N2. We again see that the error decreases exponentially with the number of quadrature points. The error values are almost the same as those of SB-N1, indicating that for far-enough segments, the error is dominated by the numerical integration of PK force, and the contribution of numerical evaluation of stress field does not significantly increase the total error.
The computational times for force evaluation for one pair of segments taken by the SB-A, SB-N1 (3 quadrature points) and SB-N2 (3 quadrature points) methods are given in Table 2. Figure 3 also plots the computational time for the SB-N1 and SB-N2 methods as a function of the number of quadrature points. The computational time for the SB-N1 method scales linearly with the number of quadrature points, given that only one integral is evaluated numerically. The SB-N1 method is more efficient than the SB-A method for up to 8 quadrature points. However, Figure 4 shows that 3 quadrature points are already sufficient to bring the relative error of the SB-N1 method to below \(10^{-3}\). The computational time of the SB-N2 method scales quadratically with the number of quadrature points due to numerical evaluation of both the stress integral, Equation (5), and the integral of the PK force, Equation (6). Although the computation time of SB-N2 is similar to that of SB-N1 for 1 quadrature point, the quadratic scaling of the SB-N2 method makes it less efficient than the SB-N1 method for 2 or more quadrature points. Thus, the SB-N1 method is more computationally efficient than the SB-N2 method. Therefore, for dislocation pairs where the separation between the midpoints of the segments is more than three times the length of the segments, the SB-N1 method (using 3 quadrature points) is considered to be sufficiently accurate and is more than three times faster than the SB-A method (see Table 2).
Based on Table 2, Figure 2 and Figure 4, we conclude that the most efficient way to evaluate the forces from a pair of straight dislocation segments is to use the SB-N1 method (with 3 quadrature points) when the two segments are well-separated, i.e. the distance between their midpoints is more than three times the segment lengths. For segments that are closer than this cut-off distance, the SB-A method is a good choice. Given that in a DDD simulation, the vast majority of segment pairs are well separated, this combined approach using SB-N1 and SB-A methods based on distances is expected to be much more efficient than using the SB-A method alone.
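A minimal sketch of this distance-based dispatch (the two force routines are passed in as callables; names and the cut-off convention are illustrative):

```python
import numpy as np

def pair_forces(x1, x2, x3, x4, force_analytic, force_quadrature, r_cut):
    """Use SB-A within the cut-off and the ~3x faster SB-N1 beyond it."""
    d = np.linalg.norm(0.5 * (x1 + x2) - 0.5 * (x3 + x4))  # midpoint separation
    if d <= r_cut:                                # e.g. r_cut = 3 * segment length
        return force_analytic(x1, x2, x3, x4)     # exact closed form (SB-A)
    return force_quadrature(x1, x2, x3, x4, nq=3)  # SB-N1, rel. error < 1e-3
```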
Figure 3: Wall-clock time to determine nodal forces for one segment pair as a function of number of quadrature points for methods SB-N1, SB-N2 and EB-N2. The computational time is computed by averaging over 10000 dislocation segment pairs. Data presented in blue correspond to SB-N1 (diamond marker) and SB-N2 (circular marker) and that in orange correspond to EN2. The time taken by analytic evaluation of energy-based (EB-A) and stress-based (SB-A) forces are also shown in solid horizontal lines for comparison.
Figure 4: Boxplot of the relative errors in numerically computed nodal forces between two dislocation segments as a function of the number of quadrature points: (a) SB-N1 method, (b) SB-N2 method. Orange dots in each whisker denote 99\({}^{\text{th}}\) percentile of relative error data. The relative error in forces is computed by subtracting the analytically computed force from the numerical value and then dividing its magnitude by the corresponding analytic force magnitude.
Figure 5: Configuration of ten circular dislocation loops randomly oriented in a three-dimensional infinite linear isotropic elastic medium. Each dislocation loop is discretized into 45 segments, and has Burgers vector of unit magnitude and random orientation.
### Dislocation loops
We now compare the accuracy and efficiency of different methods in computing the total nodal forces in a scenario that resembles that of a DDD simulation. To this end, we consider ten circular dislocation loops, each with a radius of 10 nm, randomly oriented in an infinite isotropic linear-elastic medium, as shown in Figure 5. Each dislocation loop is discretized into 45 segments. The length of each segment is \(\approx 1.4\) nm. The Burgers vectors of the ten loops are chosen to be of unit magnitude and randomly oriented. Other parameters (\(\mu\), \(\nu\), \(a\)) are the same as those in the previous sections. The force on every node is the result of the interaction of every segment with all segments (including itself). Since the configuration contains only closed dislocation loops, the forces computed by the energy-based methods are expected to agree with those computed by the stress-based methods.
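For reference, a configuration of this kind can be generated with a few lines of NumPy (a sketch; the QR-based random-orientation recipe and the seed are our assumptions, not necessarily those used for Figure 5):

```python
import numpy as np
rng = np.random.default_rng(0)

def random_loop(radius=10.0, nseg=45):
    """One circular loop discretized into nseg straight segments (length ~1.4 nm
    for radius 10 nm), randomly oriented, with a random unit Burgers vector."""
    theta = 2 * np.pi * np.arange(nseg) / nseg
    nodes = radius * np.column_stack([np.cos(theta), np.sin(theta), np.zeros(nseg)])
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
    b = rng.standard_normal(3)
    return nodes @ Q.T, b / np.linalg.norm(b)         # node i connects to node i+1 mod nseg

loops = [random_loop() for _ in range(10)]
```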
The forces computed entirely from the SB-A method are considered to be exact, based on which the errors of other methods are computed. (The maximum relative error of the EB-A method is less than \(10^{-3}\).) In order to balance accuracy and efficiency, we consider combined analytic/numerical approaches in which EB-N2, SB-N1 or SB-N2 is used for well-separated segment pairs (with center distances greater than three times the segment length), while EB-A or SB-A is used for the remaining segment pairs. For this test case, this means that essentially the analytic method (EB-A or SB-A) is used only for the interactions of a segment with itself and with its four (nearest and next nearest) neighbors.
Figure 6(a) plots the forces per segment length on all the nodes using the SB-A method and the combined SB-N1/SB-A method (with 3 quadrature points). The differences between the two methods are too small to be seen in this figure. Figure 6 (b) shows the relative error in nodal forces using the combined EB-N2/EB-A method as a function of
Figure 6: Comparison of analytic and analytic/numerical total nodal force per unit length for the ten dislocation loops shown in Figure 5. (a) Variation of total nodal forces per unit length as a function of angular position of nodes in the dislocation loops. Ten different curves are shown, one for each dislocation loop. Solid orange curves are results from the SB-A method; blue dots are obtained from the analytic/numeric hybrid where nearby segments (with separation less than three times of the segment length) are handled by the SB-A method and far-away segments by the SB-N1 method (using 3 quadrature points). Both stress and energy formulations lead to the same plot as shown here. Boxplots of distribution of relative errors in forces as a function of quadrature points for the cases of (b) EB-N2/EB-A, (c) SB-N1/SB-A hybrid, and (d) SB-N2/SB-A schemes. Relative error in forces is defined as the ratio of the magnitude of the difference between hybrid and analytic forces to the magnitude of the analytic forces.
the number of quadrature points. The maximum relative error is less than \(10^{-2}\) even with 1 quadrature point. For the vast majority of the nodes, the relative error in the nodal forces decays rapidly with the number of quadrature points.
Figure 6 (c) shows the relative error in nodal forces using the combined SB-N1/SB-A method as a function of the number of quadrature points. The relative error also decreases exponentially with increasing number of quadrature points for the vast majority of the nodes. The maximum relative error seems to remain stagnant with increasing number of quadrature points when it is 2 or more, at a value below \(10^{-4}\), which is considered small enough. Figure 6 (d) shows the relative error in nodal forces using the combined SB-N2/SB-A method as a function of the number of quadrature points. The overall behavior is quite similar to that of the SB-N1/SB-A method shown in Figure 6 (c).
The times to evaluate the nodal forces using the different methods are given in Table 3. The analytic evaluation of forces by the SB-A method takes around 50 seconds, while the combined SB-N1/SB-A method with 3-point quadrature is the fastest and takes 17 seconds. The combined SB-N1/SB-A method is thus both highly accurate and about three times as fast as the SB-A method.
## 4 Discussion and conclusion
In this work, we compare different ways of computing nodal forces in a DDD simulation using straight dislocation segments. The methods differ in their theoretical formulation, i.e. either energy-based (EB) or stress-based (SB), as well as how the line-integrals are carried out, i.e. either analytically or by numerical quadrature. We observe that a combined approach, where interaction forces due to well-separated segments (with center distances more than three times segment length) are computed using SB-N1 and forces due to other segments are computed using SB-A, can be both highly accurate (relative error less than \(10^{-3}\)) and significantly faster than using SB-A alone. Therefore, we recommend using such a combined approach for nodal force calculations in DDD simulations. In a DDD simulation in which the interactions between \(N\) segments need to be explicitly accounted for, the number of well-separated segment pairs scales as \(\mathcal{O}(N^{2})\), while the closely-spaced segment pairs scale as \(\mathcal{O}(N)\). Therefore, the faster SB-N1 method would be used in most cases in place of the exact but slower SB-A method, in the limit of large \(N\). The energy-based methods, unfortunately, are significantly slower than the stress-based methods, most likely due to the need of taking autograd of relatively complicated energy functions.
We note that all benchmark tests in this work are performed using Python codes running on CPUs. The observed numerical accuracy of the methods (e.g. convergence rate with respect to number of quadrature points) is expected to be generally applicable to implementations using other programming languages and computing platforms. While we expect the tests here also provide general insights into the relative efficiency of different methods, the exact ratio of computational time between methods can change if they are implemented in a different language (e.g. C language) or running on Graphical Processing Units (GPUs). More work is needed to determine how much speedup can be gained in DDD simulations of work hardening, by applying the methods developed here to C/CPU and Cuda/GPU implementations of ParaDiS.
In conclusion, this work shows that the most computationally intensive part of DDD simulations can be sped up by using more efficient methods for force evaluations between well-separated segment pairs while maintaining high accuracy. This finding is likely to significantly expand the capability of large-scale DDD simulations of work hardening in metals at reaching higher strains and under lower strain rates.
## Appendix A Stress field of a straight dislocation segment
We present the stress field of a finite straight dislocation segment in the framework of the non-singular elasticity theory of dislocations [19]. The expressions are presented in a coordinate-dependent form. We assume that the dislocation segment with Burgers vector \(\mathbf{b}^{\prime}\) lies along the \(\mathbf{z}\)-axis and extends from \((0,0,z_{1})\) to \((0,0,z_{2})\) as shown in Figure A.7. We determine the stress at the field point \(\mathbf{x}=(x,0,z)\), which lies in the \(\mathbf{x}\)-\(\mathbf{z}\) plane. The vector \(\mathbf{R}=(x,0,z-z^{\prime})\) connects a point on the dislocation segment \((0,0,z^{\prime})\) to the field point \((x,0,z)\).
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline \multirow{2}{*}{Method} & EB-A & EB-N2 & SB-A & SB-N1 & SB-N2 \\ & & (3 quadrature points) & & (3 quadrature points) & (3 quadrature points) \\ \hline Time (s) & \(4.1\times 10^{4}\) & \(2.1\times 10^{3}\) & \(4.9\times 10^{1}\) & \(1.7\times 10^{1}\) & \(2.8\times 10^{1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Time (in seconds) taken by the different methods to evaluate the nodal forces for the ten dislocation loops.
The stress field, Equation (5), in this special coordinate system can be expressed as
\[\begin{split}\frac{\sigma_{xx}}{\sigma_{0}}&=b^{\prime}_{y}\int_{z_{1}}^{z_{2}}\left[\frac{3x^{3}}{R_{a}^{5}}-\frac{x}{R_{a}^{3}}\left(1-3\left[\frac{a}{R_{a}}\right]^{2}\right)\right]dz^{\prime},\\ \frac{\sigma_{yy}}{\sigma_{0}}&=b^{\prime}_{y}\int_{z_{1}}^{z_{2}}\frac{x}{R_{a}^{3}}\left(1+3\left[\frac{a}{R_{a}}\right]^{2}\right)dz^{\prime},\\ \frac{\sigma_{zz}}{\sigma_{0}}&=b^{\prime}_{y}\int_{z_{1}}^{z_{2}}\left[\frac{3x(z-z^{\prime})^{2}}{R_{a}^{5}}-\frac{x}{R_{a}^{3}}\right]dz^{\prime}-b^{\prime}_{y}(1-\nu)\int_{z_{1}}^{z_{2}}\frac{x}{R_{a}^{3}}\left(2+3\left[\frac{a}{R_{a}}\right]^{2}\right)dz^{\prime},\\ \frac{\sigma_{xy}}{\sigma_{0}}&=b^{\prime}_{x}\int_{z_{1}}^{z_{2}}\frac{x}{R_{a}^{3}}dz^{\prime},\\ \frac{\sigma_{xz}}{\sigma_{0}}&=b^{\prime}_{y}\int_{z_{1}}^{z_{2}}\left[\frac{3(z-z^{\prime})x^{2}}{R_{a}^{5}}-\frac{z-z^{\prime}}{R_{a}^{3}}\right]dz^{\prime}+\frac{b^{\prime}_{y}(1-\nu)}{2}\int_{z_{1}}^{z_{2}}\frac{z-z^{\prime}}{R_{a}^{3}}\left(2+3\left[\frac{a}{R_{a}}\right]^{2}\right)dz^{\prime},\\ \frac{\sigma_{yz}}{\sigma_{0}}&=b^{\prime}_{x}\int_{z_{1}}^{z_{2}}\frac{z-z^{\prime}}{R_{a}^{3}}\left[1+\frac{1-\nu}{2}\left(2+3\left[\frac{a}{R_{a}}\right]^{2}\right)\right]dz^{\prime}-\frac{b^{\prime}_{z}(1-\nu)}{2}\int_{z_{1}}^{z_{2}}\frac{x}{R_{a}^{3}}\left(2+3\left[\frac{a}{R_{a}}\right]^{2}\right)dz^{\prime},\end{split}\] (A.1)
where
\[\sigma_{0}=\frac{\mu}{4\pi(1-\nu)},\qquad R_{a}=\sqrt{x^{2}+(z-z^{\prime})^{2 }+a^{2}}\] (A.2)
The following identities are used to derive the above equations, (A.1), from Equation (5)
\[\begin{split}\frac{\partial^{3}R_{a}}{\partial x_{i}\partial x_ {j}\partial x_{k}}&=\frac{3x_{i}x_{j}x_{k}}{R_{a}^{5}}-\frac{x_{i} \delta_{jk}+x_{j}\delta_{ik}+x_{k}\delta_{ij}}{R_{a}^{3}}\\ \frac{\partial}{\partial x_{i}}\nabla^{2}R_{a}&=- \frac{x_{i}}{R_{a}^{3}}\left(2+3\left[\frac{a}{R_{a}}\right]^{2}\right)\end{split}\] (A.3)
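The second identity in (A.3) is straightforward to verify numerically, for instance with PyTorch autograd:

```python
import torch

a = 0.01
x = torch.tensor([0.3, -1.2, 0.7], dtype=torch.float64, requires_grad=True)
Ra = torch.sqrt((x * x).sum() + a**2)
lap = 2 / Ra + a**2 / Ra**3                        # laplacian of R_a
g, = torch.autograd.grad(lap, x)                   # d(lap R_a)/dx_i by autograd
Rd = Ra.detach()
rhs = -x.detach() / Rd**3 * (2 + 3 * (a / Rd)**2)  # closed form from (A.3)
assert torch.allclose(g, rhs)
```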
The integrals in Equation (A.1) can be evaluated either exactly [19] and used in SB-N1 method in the main text, or by numerically using Gauss-Legendre quadrature scheme which is used in SB-N2 method in the main text.
The closed form integral of Equation (A.1) is expressed as the difference
\[\sigma_{ij}=\bar{\sigma}_{ij}(z^{\prime}=z_{2})-\bar{\sigma}_{ij}(z^{\prime}= z_{1}).\] (A.4)
The stress field \(\bar{\sigma}_{ij}(z^{\prime})\) can be expressed in several equivalent forms, and for numerical stability a particular form should be used depending on the position of the field point relative to the dislocation segment [19]. If the field point is
Figure A.7: Coordinate system used to describe the stress field at a field point \((x,0,z)\) due to a straight dislocation segment lying along the \(z\)-axis from \((0,0,z_{1})\) to \((0,0,z_{2})\).
located left to the dislocation segment, i.e. \(z<z_{1}<z_{2}\), the following form 1 should be used
\[\begin{split}\frac{\bar{\sigma}_{xx}}{\sigma_{0}}&= \frac{b_{y}^{\prime}x}{R_{a}(R_{a}+\lambda)}\left[1-\frac{x^{2}+a^{2}}{R_{a}^{2 }}-\frac{x^{2}+a^{2}}{R_{a}(R_{a}+\lambda)}\right],\\ \frac{\bar{\sigma}_{yy}}{\sigma_{0}}&=-\frac{b_{y} ^{\prime}x}{R_{a}(R_{a}+\lambda)},\\ \frac{\bar{\sigma}_{zz}}{\sigma_{0}}&=-b_{y}^{\prime }\left\{\frac{2vx}{R_{a}(R_{a}+\lambda)}\left[1+\frac{a^{2}}{2R_{a}^{2}}+\frac{ a^{2}}{2R_{a}(R_{a}+\lambda)}\right]+\frac{x\lambda}{R_{a}^{3}}\right\},\\ \frac{\bar{\sigma}_{xy}}{\sigma_{0}}&=-\frac{b_{x} ^{\prime}x}{R_{a}(R_{a}+\lambda)},\\ \frac{\bar{\sigma}_{xz}}{\sigma_{0}}&=b_{y}^{\prime }\left[-\frac{\nu}{R_{a}}+\frac{x^{2}}{R_{a}^{3}}+(1-\nu)\frac{a^{2}}{2R_{a}^{ 3}}\right],\\ \frac{\bar{\sigma}_{yz}}{\sigma_{0}}&=b_{x}^{\prime }\left[\frac{\nu}{R_{a}}-(1-\nu)\frac{a^{2}}{2R_{a}^{3}}\right]-\frac{b_{z}^{ \prime}(1-\nu)x}{R_{a}(R_{a}+\lambda)}\left[1+\frac{a^{2}}{2R_{a}^{2}}+\frac{ a^{2}}{2R_{a}(R_{a}+\lambda)},\right],\end{split}\] (A.5)
When the field point is located right to the dislocation segment, i.e. \(z_{1}<z_{2}<z\), the following form 2 should be used
\[\begin{split}\frac{\bar{\sigma}_{xx}}{\sigma_{0}}& =-\frac{b_{y}^{\prime}x}{R_{a}(R_{a}-\lambda)}\left[1-\frac{x^{2}+a ^{2}}{R_{a}^{2}}-\frac{x^{2}+a^{2}}{R_{a}(R_{a}+\lambda)}\right],\\ \frac{\bar{\sigma}_{yy}}{\sigma_{0}}&=\frac{b_{y}^{ \prime}x\lambda}{\rho_{a}^{2}R_{a}},\\ \frac{\bar{\sigma}_{zz}}{\sigma_{0}}&=b_{y}^{\prime }\left\{\frac{2vx}{R_{a}(R_{a}-\lambda)}\left[1+\frac{a^{2}}{2R_{a}^{2}}+ \frac{a^{2}}{2R_{a}(R_{a}-\lambda)}\right]+\frac{x\lambda}{R_{a}^{3}}\right\}, \\ \frac{\bar{\sigma}_{xy}}{\sigma_{0}}&=\frac{b_{x}x}{ R_{a}(R_{a}-\lambda)},\\ \frac{\bar{\sigma}_{xz}}{\sigma_{0}}&=b_{y}^{\prime }\left[-\frac{\nu}{R_{a}}+\frac{x^{2}}{R_{a}^{3}}+(1-\nu)\frac{a^{2}}{2R_{a}^ {3}}\right],\\ \frac{\bar{\sigma}_{yz}}{\sigma_{0}}&=b_{x}^{\prime }\left[\frac{\nu}{R_{a}}-(1-\nu)\frac{a^{2}}{2R_{a}^{3}}\right]+\frac{b_{z}^{ \prime}(1-\nu)x}{R_{a}(R_{a}-\lambda)}\left[1+\frac{a^{2}}{2R_{a}^{2}}+\frac{a ^{2}}{2R_{a}(R_{a}-\lambda)}\right],\end{split}\] (A.6)
where
\[\rho_{a}=\sqrt{x^{2}+y^{2}+a^{2}}.\] (A.7)
Finally, when the field point is located between the end points of the dislocation segment, i.e. \(z_{1}\leq z\leq z_{2}\), the following form 3 should be used
\[\begin{split}\frac{\bar{\sigma}_{xx}}{\sigma_{0}}&= -\frac{b_{y}^{\prime}x\lambda}{\rho_{a}^{2}R_{a}}\left[1-\frac{2(x^{2}+a^{2})}{ \rho_{a}^{2}}-\frac{x^{2}+a^{2}}{R_{a}^{2}}\right],\\ \frac{\bar{\sigma}_{yy}}{\sigma_{0}}&=\frac{b_{y}^{ \prime}x\lambda}{\rho_{a}^{2}R_{a}},\\ \frac{\bar{\sigma}_{zz}}{\sigma_{0}}&=b_{y}^{\prime }\left\{\frac{2vx\lambda}{\rho_{a}^{2}R_{a}}\left[1+\frac{a^{2}}{\rho_{a}^{2}}+ \frac{a^{2}}{2R_{a}^{2}}\right]+\frac{x\lambda}{R_{a}^{3}}\right\},\\ \frac{\bar{\sigma}_{xy}}{\sigma_{0}}&=\frac{b_{x}x \lambda}{\rho_{a}^{2}R_{a}}\left[1-\frac{2y^{2}}{\rho_{a}^{2}}-\frac{y^{2}}{R_{ a}^{2}}\right],\\ \frac{\bar{\sigma}_{xz}}{\sigma_{0}}&=b_{y}^{\prime }\left[-\frac{\nu}{R_{a}}+\frac{x^{2}}{R_{a}^{3}}+(1-\nu)\frac{a^{2}}{2R_{a}^ {3}}\right],\\ \frac{\bar{\sigma}_{yz}}{\sigma_{0}}&=b_{x}^{\prime }\left[\frac{\nu}{R_{a}}-(1-\nu)\frac{a^{2}}{2R_{a}^{3}}\right]+\frac{b_{z}^{ \prime}(1-\nu)x}{\rho_{a}^{2}R_{a}}\left[1+\frac{a^{2}}{\rho_{a}^{2}}+\frac{ a^{2}}{2R_{a}^{2}}\right],\end{split}\] (A.8)
Several typos in the stress expressions in [19] have been corrected in the above. |
2304.06629 | Jack Derangements | For each integer partition $\lambda \vdash n$ we give a simple combinatorial
expression for the sum of the Jack character $\theta^\lambda_\alpha$ over the
integer partitions of $n$ with no singleton parts. For $\alpha = 1,2$ this
gives closed forms for the eigenvalues of the permutation and perfect matching
derangement graphs, resolving an open question in algebraic graph theory. A
byproduct of the latter is a simple combinatorial formula for the immanants of
the matrix $J-I$ where $J$ is the all-ones matrix, which might be of
independent interest. Our proofs center around a Jack analogue of a hook
product related to Cayley's $\Omega$--process in classical invariant theory,
which we call the principal lower hook product. | Nathan Lindzey | 2023-04-13T15:54:13Z | http://arxiv.org/abs/2304.06629v1 | # Jack Derangements
###### Abstract
For each integer partition \(\lambda\vdash n\) we give a simple combinatorial expression for the sum of the Jack character \(\theta_{\alpha}^{\lambda}\) over the integer partitions of \(n\) with no singleton parts. For \(\alpha=1,2\) this gives closed forms for the eigenvalues of the permutation and perfect matching derangement graphs, resolving an open question in algebraic graph theory. A byproduct of the latter is a simple combinatorial formula for the immanants of the matrix \(J-I\) where \(J\) is the all-ones matrix, which might be of independent interest. Our proofs center around a Jack analogue of a hook product related to Cayley's \(\Omega\)-process in classical invariant theory, which we call _the principal lower hook product_.
## 1 Introduction
Let \(x:=x_{1},x_{2},\cdots\) be an infinite set of indeterminates and let \(\alpha\in\mathbb{R}\) be a real parameter. The _(integral form) Jack polynomials_\(J_{\lambda}:=J_{\lambda}(x;\alpha)\) are defined as the unique basis \(\{J_{\lambda}\}\) for the ring of symmetric functions that satisfies the following properties.
* _Orthogonality:_\(\langle J_{\lambda},J_{\mu}\rangle_{\alpha}=0\) if \(\lambda\neq\mu\) where \(\langle\cdot,\cdot\rangle_{\alpha}\) is the _deformed Hall inner product_ defined on the _power sum basis_\(\{p_{\lambda}\}\) such that \(\langle p_{\lambda},p_{\mu}\rangle_{\alpha}:=\delta_{\lambda,\mu}\alpha^{\ell( \lambda)}z_{\lambda}\).
* _Triangularity:_\(J_{\lambda}=\sum_{\mu\unlhd\lambda}c_{\lambda\mu}m_{\mu}\) where \(\{m_{\mu}\}\) is the _monomial basis_ and \(\unlhd\) denotes the _dominance ordering_ on integer partitions \(\lambda\vdash n\).
* _Normalization:_\([m_{1^{n}}]J_{\lambda}=n!\).
We refer the reader to [32, Ch. IV SS10] and [46] for a detailed treatment of these polynomials. In this work, we restrict our attention to the power sum expansion of the Jack polynomials
\[J_{\lambda}=\sum_{\mu\vdash n}\theta_{\alpha}^{\lambda}(\mu)p_{\mu}\quad\text{ for all }\lambda\vdash n.\]
The \(\theta_{\alpha}^{\lambda}\)'s are called the _Jack characters_ because they are a deformation of a normalization of the _irreducible characters_\(\chi^{\lambda}\) of _the symmetric group_\(S_{n}\). In particular, the Jack polynomials at \(\alpha=1,2\) recover the integral forms of the _Schur_ and _Zonal polynomials_ respectively. These specializations have been widely studied in algebraic combinatorics due to their connections with \(S_{n}\) and the set \(\mathcal{M}_{2n}\) of _perfect matchings of the complete graph_\(K_{2n}\), but for arbitrary \(\alpha\in\mathbb{R}\) many open questions remain [2, 46, 32]. This state of affairs has led to an investigation of the Jack characters since they provide _dual_ information about Jack polynomials that may shed light on these open questions; however, the dual path towards understanding Jack polynomials is paved with its own conjectures [18, 25, 27]. We make modest progress in this direction by considering _sums_ of \(\theta_{\alpha}^{\lambda}(\mu)\)'s rather than single \(\theta_{\alpha}^{\lambda}(\mu)\)'s.
Let \(\operatorname{fp}(\mu)\) be the number of singleton parts of \(\mu\). Define the \(\lambda\)_-Jack derangement sum_
\[\eta_{\alpha}^{\lambda}:=\sum_{\begin{subarray}{c}\mu\vdash n\\ \operatorname{fp}(\mu)=0\end{subarray}}\theta_{\alpha}^{\lambda}(\mu)\]
to be the sum of the Jack character \(\theta_{\alpha}^{\lambda}\) over the _derangements_, i.e., partitions \(\mu\vdash n\) with no singleton parts. To motivate this definition, recall that if \(\lambda\vdash n\) is the cycle type of a permutation \(\pi\in S_{n}\), then \(\pi\) is a _derangement_ if and only if \(\operatorname{fp}(\lambda)=0\). Let \(D_{n}\subseteq S_{n}\) be the set of derangements of \(S_{n}\). One can show that \(\eta_{1}^{\lambda}\) is a scaled character sum over \(D_{n}\), i.e.,
\[\eta_{1}^{\lambda}=\sum_{\begin{subarray}{c}\mu\vdash n\\ \operatorname{fp}(\mu)=0\end{subarray}}\theta_{1}^{\lambda}(\mu)=\sum_{ \begin{subarray}{c}\mu\vdash n\\ \operatorname{fp}(\mu)=0\end{subarray}}\frac{|C_{\mu}|}{\chi^{\lambda}(1)} \chi^{\lambda}(\mu)=\frac{1}{\chi^{\lambda}(1)}\sum_{\pi\in D_{n}}\chi^{ \lambda}(\pi)\]
where \(C_{\mu}\subseteq S_{n}\) is the conjugacy class corresponding to \(\mu\vdash n\). For \(\alpha=2\), an analogous result holds for the so-called _perfect matching derangements_ of \(\mathcal{M}_{2n}\) (see [28], for example). We are unaware of any combinatorial models for \(\alpha\neq 1,2\), but it is natural to view \(\eta_{\alpha}^{\lambda}\) as the \(\alpha\)-analogue of the character sum over derangements, which is the main focus of this paper.
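For a concrete toy example, take \(n=2\). The two Jack polynomials are
\[J_{(2)}=p_{1}^{2}+\alpha\,p_{2},\qquad J_{(1,1)}=p_{1}^{2}-p_{2},\]
so \(\theta_{\alpha}^{(2)}((2))=\alpha\) and \(\theta_{\alpha}^{(1,1)}((2))=-1\). Since \((2)\) is the only partition of \(2\) with no singleton parts, we get \(\eta_{\alpha}^{(2)}=\alpha\) and \(\eta_{\alpha}^{(1,1)}=-1\); at \(\alpha=1\) these are precisely the eigenvalues \(1\) and \(-1\) of the Cayley graph of \(S_{2}\) generated by its unique derangement.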
While little is known about the Jack derangement sums for arbitrary \(\alpha\in\mathbb{R}\), the \(\alpha=1,2\) cases have received special attention in algebraic graph theory because they are in fact the eigenvalues of the so-called _derangement graphs_.
* The set \(\{\eta_{1}^{\lambda}\}_{\lambda\vdash n}\) is the spectrum of the _permutation derangement graph_ \[\Gamma_{n,1}=(S_{n},E)\text{ where }\pi\sigma\in E\Leftrightarrow\sigma\pi^{-1} \in D_{n},\] i.e., the normal Cayley graph of \(S_{n}\) generated by \(D_{n}\). See [13, Ch. 14] or [41] for more details on the permutation derangement graph.
* The set \(\{\eta_{2}^{\lambda}\}_{\lambda\vdash n}\) is the spectrum of the _perfect matching derangement graph_ \[\Gamma_{n,2}=(\mathcal{M}_{2n},E)\text{ where }mm^{\prime}\in E \Leftrightarrow m\cap m^{\prime}=\emptyset.\] See [13, Ch. 15] or [28] for more details on the perfect matching derangement graph.
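For small \(n\) these spectra can be checked by brute force; for instance, the following sketch builds \(\Gamma_{n,1}\) directly from its definition and computes its distinct eigenvalues:

```python
import itertools
import numpy as np

n = 4
perms = list(itertools.permutations(range(n)))
N = len(perms)
A = np.zeros((N, N))
for i, p in enumerate(perms):
    for j, q in enumerate(perms):
        # pi ~ sigma  iff  sigma pi^{-1} is a derangement,
        # i.e. pi and sigma disagree in every position
        if all(p[k] != q[k] for k in range(n)):
            A[i, j] = 1

print(np.unique(np.round(np.linalg.eigvalsh(A), 6)))  # n = 4: [-3.  1.  3.  9.]
```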
These graphs made their debut in _Erdos-Ko-Rado combinatorics_, a branch of extremal combinatorics that studies how large families of combinatorial objects can be subject to the restriction that any two of its members intersect. By design, the _independent sets_ (sets of vertices that are pairwise non-adjacent) of \(\Gamma_{n,\alpha}\) are in one-to-one correspondence with the so-called _intersecting families_ of permutations and perfect matchings for \(\alpha=1,2\), and the spectra of these graphs have been used to give tight upper bounds and characterizations of the largest intersecting families of \(S_{n}\) and \(\mathcal{M}_{2n}\). We refer the reader to [13] for a comprehensive account of algebraic techniques in Erdos-Ko-Rado combinatorics.
The derangement graphs are interesting in their own right since they are natural analogues of the celebrated _Kneser graph_1, a cornerstone of algebraic graph theory [15, Ch. 7]. Because the algebraic combinatorics of permutations and perfect matchings are more baroque than that of subsets, the eigenvalues of the derangement graphs have proven to be far more challenging to understand. The following is a brief overview of the results in this area.
Footnote 1: Recall that the _Kneser graph_ is the graph defined on \(k\)-sets of \(\{1,2,\cdots,n\}\) such that two \(k\)-sets are adjacent if they are disjoint.
The first non-trivial recursion for the eigenvalues of the permutation derangement graph was derived by Renteln [41] using determinantal formulas for the _shifted Schur functions_[37],
which he used to calculate the minimum eigenvalue of the permutation derangement graph. Using different techniques, Ellis [9] later computed the minimum eigenvalue of the permutation derangement graph. Deng and Zhang [7] determined the second largest eigenvalue. In [22], Ku and Wales investigated some interesting properties of the eigenvalues of the permutation derangement graph. In particular, they proved _The Alternating Sign Theorem_, namely, that \(\operatorname{sgn}\,\eta_{1}^{\lambda}=(-1)^{|\lambda|-\lambda_{1}}\) for all \(\lambda\), and they offered a conjecture on the magnitudes of the eigenvalues known as the _Ku-Wales Conjecture_. In [23], Ku and Wong proved this conjecture by deriving another recursive formula using shifted Schur functions that also led to a simpler proof of the Alternating Sign Theorem.
It was soon noticed that the algebraic properties of the perfect matching derangement graph parallel those of the permutation derangement graph. The minimum eigenvalue of the perfect matching derangement graph was computed by Godsil and Meagher [14] and later by Lindzey [29, 30]. An analogue of the Alternating Sign Theorem was conjectured in [28, 13] which was recently proven by both Renteln [42] and Koh et al [21]. In an earlier effort to prove this conjecture, Ku and Wong [24] give recursive formulas for \(\eta_{2}^{\lambda}\) and a few closed forms for select shapes. In [44], Srinivasan gives more computationally efficient formulas for the eigenvalues of the perfect matching derangement graph. Godsil and Meagher ask whether an analogue of the Ku-Wales conjecture holds for the perfect matching derangement graph [13, pg. 316]. The latter has remained open since the eigenvalues of the perfect matching derangement graph have defied nice recursive expressions akin to permutation derangement graph. This is because the aforementioned determinantal formulas for shifted Schur functions do not exist for shifted Zonal polynomials or shifted Jack polynomials.
The main shortcoming of the known eigenvalue formulas for the derangement graphs is that they cannot be evaluated efficiently, i.e., they lack "good formulas". Indeed, finding closed forms for these eigenvalues was deemed a difficult open problem [13, pg. 316], perhaps due to the formal hardness of evaluating the irreducible characters of the symmetric group [39, 19, 38]. Our results show that good formulas for these eigenvalues exist.
To state our main results we need a few definitions. Let \(h_{*}^{1}(i,j):=\alpha a_{\lambda}(i,j)+l_{\lambda}(i,j)+1\) be the _lower hook length_ of the cell \((i,j)\in\lambda\) where \(a_{\lambda}(i,j)\) and \(l_{\lambda}(i,j)\) denote _arm length_ and _leg length_ respectively (see Section 3 for definitions). We define
\[H_{*}^{1}(\lambda):=h_{*}^{\lambda}(1,1)h_{*}^{\lambda}(1,2)\cdots h_{*}^{ \lambda}(1,\lambda_{1})\]
to be the _principal lower hook product_ of the integer partition \(\lambda\). For \(\alpha=1\), the lower hook length is just the usual notion of hook length, in which case we call \(H_{*}^{1}(\lambda)\) the _principal hook product_. Note that the principal hook product for \(\lambda=(n)\) is simply \(n!\).
It turns out that the principal hook product for arbitrary \(\lambda\) arises naturally in classical invariant theory, namely, in the evaluation of a differential operator known as _Cayley's \(\Omega\)-process_ (see [3]). Independently, Filmus and Lindzey [11] observe a similar phenomenon in their study of harmonic polynomials on perfect matchings, wherein they show that the principal lower hook product appears in the evaluation of a family of differential operators acting polynomial spaces associated with perfect matchings. From the results of [11], we show in Section 4 that the principal hook product \(H_{*}^{1}(\lambda)\) counts an interesting class of colored permutations \(\mathcal{S}_{\lambda}\), defined as follows.
For each \(i\in[n]:=\{1,2,\ldots,n\}\), we assign a list of colors \(L(i)\subseteq[m]\) for some \(m\in\mathbb{N}\). We define a _colored permutation_\((c,\sigma)\) to be an assignment of colors \(c=c_{1},c_{2},\ldots,c_{n}\) such that \(c_{i}\in L(i)\) and a permutation \(\sigma\in\operatorname{Sym}([n])\) such that \(\sigma(i)=j\Rightarrow c_{i}=c_{j}\), i.e., each cycle of the permutation is monochromatic. Any partition \(\lambda\) defines a color list on each element \(i\) of the symbol set \([\lambda_{1}]\) by setting \(L(i):=[\lambda_{i}^{\prime}]\) where \(\lambda^{\prime}\) denotes the _transpose_ or _conjugate_ partition of \(\lambda\). We define \(\mathcal{S}_{\lambda}\) to be the set of all such colored permutations, formally,
\[\mathcal{S}_{\lambda}:=\{(c\in[\lambda_{1}^{\prime}]\times\cdots\times[ \lambda_{\lambda_{1}}^{\prime}],\sigma\in S_{\lambda_{1}}):\sigma(i)=j \Rightarrow c_{i}=c_{j}\text{ for all }i\in[\lambda_{1}]\}.\]
We say that a colored permutation \((c,\sigma)\in\mathcal{S}_{\lambda}\) is a _derangement_ if \(\sigma(i)=i\Rightarrow c_{i}\neq 1\) for all \(1\leqslant i\leqslant\lambda_{1}\). In other words, these are the colored permutations that have no colored cycles in common with \((1,\ldots,1,())\in\mathcal{S}_{\lambda}\). Let \(\mathcal{D}^{\lambda}\) be the set of derangements of \(\mathcal{S}_{\lambda}\), and let \(\mathcal{D}^{\lambda}_{k}\) be the set of derangements of \(\mathcal{S}_{\lambda}\) with exactly \(k\) disjoint cycles. We define \(D^{\lambda}:=|\mathcal{D}^{\lambda}|\) and \(d^{\lambda}_{k}:=|\mathcal{D}^{\lambda}_{k}|\), so that
\[D^{\lambda}=d^{\lambda}_{1}+d^{\lambda}_{2}+\cdots+d^{\lambda}_{\lambda_{1}}.\]
For \(\lambda=(n)\), the \(d^{\lambda}_{k}\)'s recover the _(unsigned) associated Stirling numbers of the first kind_, i.e., the number of derangements of \(S_{n}\) that have precisely \(k\) disjoint cycles (see [5, pg. 256]). _Colored perfect matchings_\(\mathcal{M}_{\lambda}\) and their derangements \(\mathcal{D}^{\prime}_{\lambda}\) can be defined in a similar but slightly more complicated manner, which we defer to Section 4.
For any \(\alpha\in\mathbb{R}\), we define
\[D^{\lambda}_{\alpha}:=\sum_{k=1}^{\lambda_{1}}d^{\lambda}_{k}\alpha^{\lambda_ {1}-k}\]
to be the _\(\lambda\)-Jack derangement number_.
Our first main result is that the Jack derangement sums equal the Jack derangement numbers (up to sign).
**Theorem 1**.: _For any shape \(\lambda\) and \(\alpha\in\mathbb{R}\), we have_
\[\eta^{\lambda}_{\alpha}=(-1)^{|\lambda|-\lambda_{1}}D^{\lambda}_{\alpha}\]
Theorem 1 gives simpler, unified, and more general proofs of all the aforementioned results on the derangement graphs, which we list below.
**Corollary 2** (Alternating Sign Theorem).: _For any shape \(\lambda\) and \(\alpha\geqslant 0\), we have_
\[\operatorname{sgn}\ \eta^{\lambda}_{\alpha}=(-1)^{|\lambda|-\lambda_{1}}.\]
**Corollary 3** (Ku-Wales Theorem).: _For all \(\mu,\lambda\vdash n\) such that \(\mu_{1}=\lambda_{1}\) and \(\alpha\geqslant 0\), we have_
\[\mu\trianglelefteq\lambda\Rightarrow|\eta^{\mu}_{\alpha}|\leqslant|\eta^{ \lambda}_{\alpha}|.\]
Setting \(\alpha=2\) in Corollary 3 answers Godsil and Meagher's question on the Ku-Wales conjecture for the perfect matching derangement graph [13, pg. 316].
**Corollary 4**.: _For all \(\alpha\geqslant 1\) and \(n\geqslant 6\), we have_
\[(n)=\operatorname*{arg\,max}_{\lambda\vdash n}\ \eta^{\lambda}_{\alpha},\quad(n-1,1 )=\operatorname*{arg\,min}_{\lambda\vdash n}\ \eta^{\lambda}_{\alpha},\quad\text{ and }\quad(n-1,1)= \operatorname*{arg\,max}_{\begin{subarray}{c}\lambda\vdash n\\ \lambda\neq(n)\end{subarray}}\ |\eta^{\lambda}_{\alpha}|.\]
Finally, we note that colored permutations have appeared before in the study of the character theory of the symmetric group. In [45] Stanley conjectures a formula for \(\theta^{\lambda}_{1}(\mu)\) in terms of colored permutations, which was later proven by Feray [12] and reformulated by Feray and Sniady [10]. The combinatorics involved in their reformulation makes the expression more amenable to asymptotic analysis, leading to sharper results on the asymptotic character theory of \(S_{n}\)[10]. Moreover, Lasselle [26] conjectures that an analogue of the Stanley-Feray formula holds for the Jack characters. Although these works all feature colored permutatons, the main results center around their factorizations and asymptotics, which we do not consider. It is also not clear how to use the Stanley-Feray-Sniady formula to recover our main results for \(\alpha=1\). At any rate, these works and the present show that colored permutations have an understated role in the character theory of the symmetric group that seems worthy of future investigation.
#### Organization
The paper is organized as follows. In Section 2 we overview basic terminology and definitions in the theory of symmetric functions. We introduce the shifted Jack polynomials in Section 3 and show that the Jack derangement sums can be written as an alternating sum of shifted Jack polynomials (Theorem 6). The expression we obtain is difficult to work with, so in Section 4 we cover some combinatorial results of [1, 11] that lead to a simpler combinatorial formulation of the expression (Corollary 7). This combinatorial expression is used to prove the main result for \(\alpha=0\) in Section 4, but it is still not explicit enough to obtain closed-form expressions for \(\alpha\neq 0\). In Section 5 we prove a few technical lemmas about so-called _minors_ of principal lower hook products, which leads us to a more explicit formulation of a result of Alexandersson and Feray [1, Theorem 5.12] in the language of finite differences. With these lemmas in hand, we prove our first main result (Theorem 1) in Section 6.
The remainder focuses on various corollaries and specializations of \(\alpha\) and \(\lambda\). In Section 7, we give short proofs for Corollary 2, Corollary 3, and Corollary 4. In Sections 8 and 9 we take a closer look at the \(\alpha=1,2\) case and prove our second main result, namely, closed tableau-theoretic expressions for the eigenvalues of the derangement graphs (Theorem 18 and Theorem 26). For \(\alpha=1\), this extends a result of Okazaki [35, Corollary 1.3] and Stanley [48, Ex. 7.63b] for hooks to arbitrary shapes, which can be reformulated as a result on immanants of the matrix \(J-I\) (see Section 8). For \(\alpha=1\), we connect our closed form to Renteln's determinantal formula [41, Theorem 4.2] for the eigenvalues of the permutation derangement graph. We conclude with some open questions and directions for future work.
## 2 Preliminaries
The reader familiar with the theory of symmetric functions may skip this section, as our notation is completely standard, following Macdonald [32] and Stanley [46].
A _shape_ is a collection of cells \((i,j)\) such that \(i,j\in\mathbb{N}_{+}\). We let \(\lambda\) be an integer partition and we refer to it as a shape when we are appealing to its tableau interpretation. Let \(\lambda/\mu\) denote the _skew shape_ obtained by deleting the cells of \(\lambda\) that correspond to the partition \(\mu\). Let \(|\lambda|\) denote the _size_ of \(\lambda\), i.e., the number of cells of \(\lambda\). Let \(\ell(\lambda)\) denote the _length_ of \(\lambda\), i.e., the number of parts of \(\lambda\). A _Young tableau_\(t\) of shape \(\lambda\) is a tableau whose cells are labeled with the integers \([|\lambda|]\). For any Young tableau \(t\) let \(t_{i,j}\) denote the entry of the \((i,j)\) cell of \(t\). A Young tableau with entries strictly increasing along rows and columns is called _standard_. Let \(z_{\lambda}:=1^{m_{1}}2^{m_{2}}\cdots m_{1}!m_{2}!\cdots\) where \(m_{i}\) is the number of parts of \(\lambda\) equal to \(i\).
The _elementary symmetric functions_ are defined such that
\[e_{\lambda}:=e_{\lambda_{1}}\cdots e_{\lambda_{\ell(\lambda)}}\quad\text{ where }\quad e_{k}(x_{1},x_{2},\cdots)=\sum_{i_{1}<i_{2}<\cdots<i_{k}}x_{i_{1}}\cdots x _{i_{k}}.\]
The _power sum symmetric functions_ are defined such that
\[p_{\lambda}:=p_{\lambda_{1}}\cdots p_{\lambda_{\ell(\lambda)}}\quad\text{ where }\quad p_{k}(x_{1},x_{2},\cdots)=\sum_{i=1}x_{i}^{k}.\]
For a more detailed discussion of these polynomials, we refer the reader to [32, Ch. I].
We now review a well-known tableau-theoretic definition of the dominance ordering \(\leq\) on partitions. A cell \(\square\in\lambda\) is an _outer corner_ if the diagram obtained by removing \(\square\) is a partition of \(|\lambda|-1\). A non-cell \((i,j)\) is an _inner corner_ if the diagram obtained by adding the cell \(\square:=(i,j)\) to \(\lambda\) is a partition of \(|\lambda|+1\). For example, in the figure below, the outer
corners are labeled "\(+\)" and the inner corners are labeled "\(-\)":
We write \(\mu\nearrow\nu\) if \(\nu\) can be obtained from \(\mu\) by removing a single outer corner \(\square\in\mu\) and placing it on an inner corner of \(\mu\) that lies in a row above \(\square\) in \(\mu\). The following proposition is well-known.
**Proposition 5**.: _For any partitions \(\mu,\lambda\vdash n\), we have \(\mu\trianglelefteq\lambda\) if and only if there exists a sequence of partitions \(\nu^{1},\nu^{2},\ldots,\nu^{k}\) such that_
\[\mu=\nu^{1}\nearrow\nu^{2}\nearrow\cdots\nearrow\nu^{k}=\lambda.\]
## 3 Shifted Jack Polynomials
We briefly review some of the standard terminology associated with Jack polynomials defined in the introduction. For any cell \((i,j)\in\lambda\), the _leg length_\(l_{\lambda}(i,j)\) of \((i,j)\) is the number of cells below \((i,j)\) in the same column of \(\lambda\), and the _arm length_\(a_{\lambda}(i,j)\) of \((i,j)\) is the number of cells to the right of \((i,j)\) in the same row of \(\lambda\), i.e.,
\[a_{\lambda}(i,j)=|\{(i,k)\in\lambda:k>j\}|\quad\text{ and }\quad l_{\lambda}(i,j)=| \{(k,j)\in\lambda:k>i\}|.\]
Note that arm length and leg length remain well-defined even when \(\lambda\) is replaced by a set of cells that does not form an integer partition. Let
\[h_{*}^{\lambda}(i,j):=\alpha a_{\lambda}(i,j)+l_{\lambda}(i,j)+1\quad\text{ and }\quad h_{\lambda}^{*}(i,j):=\alpha(a_{\lambda}(i,j)+1)+l_{\lambda}(i,j)\]
be the _lower hook length_ and _upper hook length_ of \((i,j)\in\lambda\), respectively. Let
\[H_{*}^{\lambda}=\prod_{(i,j)\in\lambda}h_{*}^{\lambda}(i,j)\quad\text{ and }\quad H_{\lambda}^{*}=\prod_{(i,j)\in\lambda}h_{\lambda}^{*}(i,j)\]
be the _lower hook product_ and _upper hook product_ of \(\lambda\), respectively. Note that the lower and upper hook product remain well-defined even when \(\lambda\) is replaced by a set of cells that does not form an integer partition.
A _reverse semistandard Young tableau_ is a tableau on \(n\) cells with entries in \([n]\) that are weakly decreasing along rows and strictly decreasing down columns. Let \(\operatorname{RSSYT}(\mu)\) be the set of all reverse semistandard Young tableau of shape \(\mu\). For any reverse semistandard Young tableau \(t\) of shape \(\mu\), define
\[\psi_{t}(\alpha):=\prod_{i=1}^{|\mu|}\psi_{\rho^{j}/\rho^{j-1}}(\alpha);\quad \psi_{\lambda/\mu}(\alpha):=\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
where \(\bar{a}_{\mu}(i,j)=|\{(i,k)\in\lambda:k<j\}|\) and \(\bar{l}_{\mu}(i,j)=|\{(k,j)\in\lambda:k<i\}|\) denote the _co-arm length_ and _co-leg length_ of \((i,j)\in\mu\), respectively. The polynomials \(P_{\lambda}^{\star}\) are sometimes referred to as the _normalized shifted Jack polynomials_.
Theorem 6 is a simple but opaque expression for \(\eta_{\alpha}^{\lambda}\) in terms of shifted Jack polynomials. These expressions are already known for \(\eta_{1}^{\lambda}\) and \(\eta_{2}^{\lambda}\) in terms of the determinantal formula for the shifted Schur polynomials [41] and recently for the shifted Zonal polynomials [42]. Theorem 6 is simply the Jack analogue of these results. Henceforth, we let \(J_{k}^{\star}:=J_{(k)}^{\star}\).
**Theorem 6**.: _For all \(\lambda\) and \(\alpha\in\mathbb{R}\), we have_
\[\eta_{\alpha}^{\lambda}=\sum_{k=0}^{|\lambda|}\frac{(-1)^{|\lambda|-k}}{k!}J_ {k}^{\star}(\lambda).\]
Proof.: We begin with the Cauchy formula for Jack polynomials [46] and its expansion:
\[\sum_{\lambda}\frac{J_{\lambda}(x;\alpha)J_{\lambda}(y;\alpha)}{H_{\lambda}^{ \star}H_{\ast}^{\lambda}}=\prod_{i,j}(1-x_{i}y_{j})^{1/\alpha}=\prod_{i\geqslant 1 }\exp\left(\frac{p_{i}(x)p_{i}(y)}{\alpha i}\right).\]
Since \(\eta_{\alpha}^{\lambda}=J_{\lambda}|_{p_{1}=0,p_{2}=p_{3}=\cdots=1}\), setting \(p_{1}(x)=0\) and the remaining \(p_{i}(x)=1\) gives
\[\sum_{\lambda}\frac{\eta_{\alpha}^{\lambda}J_{\lambda}(y;\alpha)}{H_{\lambda} ^{\star}H_{\ast}^{\lambda}}=\prod_{i\geqslant 2}\exp\left(\frac{p_{i}(y)}{ \alpha i}\right).\]
Recall the basic fact that the generating function \(H(t)\) of the homogeneous complete symmetric polynomials \(h_{i}\) can be written as follows:
\[H(t)=\sum_{i=0}h_{i}t^{i}=\prod_{i}\frac{1}{1-x_{i}t}=\prod_{i\geqslant 1} \exp\left(\frac{p_{i}}{i}t^{i}\right).\]
This implies that
\[\sum_{\lambda}\frac{\eta_{\alpha}^{\lambda}J_{\lambda}}{H_{\lambda}^{\star}H_ {\ast}^{\lambda}}=\prod_{i\geqslant 2}\exp\left(\frac{p_{i}}{\alpha i} \right)=e^{-h_{1}/\alpha}\prod_{i}(1-x_{i})^{-1/\alpha}.\]
Following Stanley [46], we have \(\sum_{k}J_{k}/(\alpha^{k}k!)=\prod_{i}(1-x_{i})^{-1/\alpha}\). This, along with the fact that \(h_{1}=J_{1}\) gives us
\[\sum_{\lambda}\frac{\eta_{\alpha}^{\lambda}J_{\lambda}}{H_{\lambda}^{\star}H_ {\ast}^{\lambda}}=e^{-J_{1}/\alpha}\sum_{k}\frac{J_{k}}{\alpha^{k}k!}=\sum_{j, k}\frac{(-1)^{j}}{\alpha^{j}j!\alpha^{k}k!}J_{1}^{j}J_{k}.\]
The Pieri rule for Jack polynomials [46] implies that
\[J_{1}^{j}J_{k}=\sum_{\lambda=(k)+j}d^{\lambda/k}\left(\frac{H_{\ast}^{(k)}}{H_ {\ast}^{\lambda}}\right)J_{\lambda}.\]
where the sum ranges over all shapes \(\lambda\) obtained by adding \(j\) inner corners to \((k)\) in succession. Equating coefficients and then reindexing gives
\[\frac{\eta_{\alpha}^{\lambda}}{H_{\lambda}^{\star}H_{\ast}^{\lambda}}=\sum_{ \lambda=(k)+j}\frac{(-1)^{j}}{\alpha^{j}j!\alpha^{k}k!}\left(\frac{H_{\ast}^{(k )}}{H_{\ast}^{\lambda}}\right)d^{\lambda/k}=\sum_{k=0}^{|\lambda|}\frac{(-1)^{ |\lambda|-k}}{\alpha^{|\lambda|-k}(|\lambda|-k)!\alpha^{k}k!}\left(\frac{H_{ \ast}^{(k)}}{H_{\ast}^{\lambda}}\right)d^{\lambda/k}.\]
Let \(d^{\lambda}=|\lambda|!/H_{\lambda}^{*}\). By [36, Proposition 5.2], we have
\[d^{\lambda/\mu}=\frac{d^{\lambda}P_{\mu}^{*}(\lambda)}{|\lambda|(|\lambda|-1) \cdots(|\lambda|-|\mu|+1)}.\]
Multiplying both sides by \(H_{\lambda}^{*}H_{\lambda}^{*}\) and applying [36, Proposition 5.2] gives us
\[\eta_{\alpha}^{\lambda} =\sum_{k=0}^{|\lambda|}\frac{(-1)^{|\lambda|-k}H_{\lambda}^{*}H_{ *}^{(k)}}{\alpha^{|\lambda|-k}(|\lambda|-k)!\alpha^{k}k!}\left(\frac{d^{\lambda }P_{k}^{*}(\lambda)}{|\lambda|(|\lambda|-1)\cdots(|\lambda|-k+1)}\right)\] \[=\sum_{k=0}^{|\lambda|}\frac{(-1)^{|\lambda|-k}\alpha^{|\lambda|} |\lambda|!H_{*}^{(k)}}{\alpha^{|\lambda|-k}(|\lambda|-k)!\alpha^{k}k!}\left( \frac{P_{k}^{*}(\lambda)}{|\lambda|(|\lambda|-1)\cdots(|\lambda|-k+1)}\right)\] \[=\sum_{k=0}^{|\lambda|}\frac{(-1)^{|\lambda|-k}}{k!}H_{*}^{(k)}P_ {k}^{*}(\lambda).\]
By definition, we have \(H_{*}^{(k)}P_{k}^{*}(\lambda)=J_{k}^{*}(\lambda)\), which completes the proof.
## 4 Tableau Transversals and Principal Hook Products
We now leverage some combinatorial results of [1, 11] to give a more tractable combinatorial formulation of Theorem 6, which we use to prove Theorem 1 for \(\alpha=0,1,2\).
A _\(k\)-transversal_\(T\) of \(\lambda\) is a set of \(k\) cells of \(T\) which forms a partial transversal of the columns of \(\lambda\), that is, no two cells of \(T\) lie in the same column of \(\lambda\). Define the _\(\alpha\)-weight_ of a \(k\)-transversal \(T\) to be the lower hook product of \(T\), i.e., \(w_{\alpha}(T)=H_{*}^{T}\), with the convention that \(w_{\alpha}(\emptyset)=1\) (see Figure 1 for examples). Let \(\mathcal{T}_{\lambda}^{k}\) be the collection of \(k\)-transversals of \(\lambda\).
In [1, Theorem 5.12], Alexandersson and Feray show that
\[\frac{J_{k}^{*}(\lambda)}{k!}=\sum_{T\in\mathcal{T}_{\lambda}^{k}}w_{\alpha}( T). \tag{1}\]
Independently, Filmus and Lindzey [11] prove the following combinatorial identity
\[\frac{J_{\lambda_{1}}^{*}(\lambda)}{\lambda_{1}!}=\sum_{T\in\mathcal{T}_{ \lambda}^{\lambda_{1}}}w_{\alpha}(T)=H_{*}^{1}(\lambda). \tag{2}\]
For \(\alpha=1\), we note that Equation (2) can also be observed from Naruse's hook-length formula for standard skew-tableaux [33]. We write \(\mu\preceq_{k}\lambda\) if \(\mu\) is a subshape \(\lambda\) obtained by removing \(k\) columns of \(\lambda\). There are \(\binom{\lambda_{1}}{k}\) such subshapes, and we let the sigma notation \(\sum_{\mu\preceq_{k}\lambda}\) denote the sum over all \(\binom{\lambda_{1}}{k}\) subshapes \(\mu\) of \(\lambda\) obtained by removing \(k\) columns.
**Theorem 7**.: _For any shape \(\lambda\) and \(\alpha\in\mathbb{R}\), we have_
\[\eta_{\alpha}^{\lambda}=(-1)^{|\lambda|-\lambda_{1}}\sum_{k=0}^{\lambda_{1}}(- 1)^{k}\sum_{\mu\preceq_{k}\lambda}H_{*}^{1}(\mu).\]
Proof.: We claim that
\[\frac{J_{k}^{*}(\lambda)}{k!}=\sum_{T\in\mathcal{T}_{\lambda}^{k}}w_{\alpha}(T )=\sum_{\mu\preceq_{\lambda_{1}-k}\lambda}H_{*}^{1}(\mu).\]
The first equality follows from Equation (1), and the second equality follows from applying Equation (2) to each shape \(\mu\) obtained by removing \(\lambda_{1}-k\) columns from \(\lambda\), so that \(\mu_{1}=k\). Note for all \(\ell>\lambda_{1}\) that there exists no \(\mu\) such that \(\mu\preceq_{\ell}\lambda\). Reindexing Theorem 6 and applying the identity above gives
\[\eta^{\lambda}_{\alpha}=\sum_{k=0}^{|\lambda|}\frac{(-1)^{|\lambda|-k}}{k!}J^{*} _{k}(\lambda)=\sum_{k=0}^{\lambda_{1}}\frac{(-1)^{|\lambda|-k}}{k!}J^{*}_{k}( \lambda)=(-1)^{|\lambda|-\lambda_{1}}\sum_{k=0}^{\lambda_{1}}(-1)^{k}\sum_{ \mu\preceq_{k}\lambda}H^{1}_{*}(\mu),\]
as desired.
We are now ready to give an elementary combinatorial proof of Theorem 1 for \(\alpha=1,2\) using the Principle of Inclusion-Exclusion. This is due to the fact that \(\lambda\)-colored permutations \(\mathcal{S}_{\lambda}\) (defined in Section 1) and \(\lambda\)_-colored perfect matchings_\(\mathcal{M}_{\lambda}\) (defined below) are _bona fide_ combinatorial objects, and their cardinalities are counted by principal lower hook products.
**Theorem 8**.: [11] _For any shape \(\lambda\), we have_
\[|\mathcal{S}_{\lambda}|,|\mathcal{M}_{\lambda}|=H^{1}_{*}(\lambda)\]
_for \(\alpha=1,2\), respectively._
For each \(i\in[2n]:=\{1,2,\ldots,n\}\), we assign a list of colors \(L(i)\) such that \(L(i)=L(i+1)\) for all odd \(i\). A _colored perfect matching_\((c,m)\) is an assignment of colors \(c=c_{1},c_{2},\ldots,c_{n}\) such that \(c_{i}\in L(i)\) and a perfect matching \(m\in\mathcal{M}_{2n}\) such that \(m(i)=j\Rightarrow c_{i}=c_{j}\), where \(m(i)\) denotes the partner of \(i\) in the perfect matching \(m\). Any partition \(\lambda\) defines a color list on each element \(i\) of the symbol set \([2\lambda_{1}]\) by setting \(L(i)=L(i+1)=[\lambda^{\prime}_{i}]\). Let \(\mathcal{M}_{\lambda}\) to be the set of all such colored perfect matchings, formally,
\[\mathcal{M}_{\lambda}:=\{(c\in[\lambda^{\prime}_{1}]\times\cdots\times[ \lambda^{\prime}_{\lambda_{1}}],m\in\mathcal{M}_{2\lambda_{1}}):m(i)=j \Rightarrow c_{i}=c_{j}\text{ for all }i\in[2\lambda_{1}]\}.\]
We say that a colored perfect matching \((c,m)\in\mathcal{M}_{\lambda}\) is a _derangement_ if \(m(i)=i+1\Rightarrow c_{i}\neq 1\) for all odd \(1\leqslant i<2\lambda_{1}\). These are the colored perfect matchings that have no edges in common with \((1,\ldots,1,\{\{1,2\},\ldots,\{2\lambda_{1}-1,2\lambda_{1}\}\})\in\mathcal{M }_{\lambda}\). Let \(\mathcal{D}^{\prime}_{\lambda}\) be the set of derangements of \(\mathcal{M}_{\lambda}\).
Proof of Theorem 1 for \(\alpha=1,2\).: Let \(\alpha=1\). A _fixed point_ of a colored permutation is a symbol \(i\) such that \(c(i)=1\) and \(\sigma(i)=i\). Consider the summation \(\sum_{\mu\preceq_{k}\lambda}H^{1}_{*}(\mu)\) which ranges over each shape \(\mu\) obtained by removing \(k\) columns from \(\lambda\). For each \(\mu\) in this summation, the indices \(I\subseteq[\lambda_{1}]\) of the \(k\) columns removed from \(\lambda\) to obtain \(\mu\) correspond to \(k\) fixed points of a \(\lambda\)-colored permutation, and the number of colored permutations on the remaining columns is counted by \(H^{1}_{*}(\mu)\). Thus it counts the number of \(\lambda\)-colored
Figure 1: Let \(\mu=(4,3,2)\vdash 9\). The colored cells \(S=\{(2,1),(1,2),(2,3),(1,4)\}\) on the left is a \(4\)-transversal of \(\mu\) with \(\alpha\)-weight \(w_{\alpha}(S)=(\alpha+1)^{2}\). The colored cells \(S^{\prime}=\{(1,1),(3,2)\}\) on the right is a \(2\)-transversal of \(\mu\) with \(\alpha\)-weight \(w_{\alpha}(S^{\prime})=1\). Each colored cell is labeled with its lower hook length with respect to \(S\) and \(S^{\prime}\).
permutations that have each \(i\in I\) as a fixed point. This overcounts the number of \(\lambda\)-colored permutations for which \(I\) is the exactly the set of fixed points, so we must exclude those \(\lambda\)-colored permutations for which \(I\) is a proper subset of its set of fixed points. Thus by the Principle of Inclusion-Exclusion, the alternating sum in Theorem 7 for \(\alpha=1\) counts the number of \(\lambda\)-colored permutations with exactly \(0\) fixed points, as desired.
The proof for \(\alpha=2\) is identical _mutatis mutandis_ and shows \(\eta_{2}^{\lambda}=(-1)^{|\lambda|-\lambda_{1}}|\mathcal{D}_{\lambda}^{ \prime}|=(-1)^{|\lambda|-\lambda_{1}}D_{2}^{\lambda}\), where the last equality is a combinatorial exercise left to the reader.
In Section 6 we give a generalization of the proof above to all \(\alpha\in\mathbb{R}\), but along the way we collect several results concerning principal lower hook products, perhaps of independent interest, that allow us to give a closed-form expression of Theorem 1. Moreover, the specialization to \(\alpha=1,2\) leads to nice expressions for the eigenvalues of the derangement graphs. The reader uninterested in such closed-form expressions may skip to Section 6.
For didactical reasons, we conclude this section with a proof of the first main result for \(\alpha=0\), as it is simple and provides some insight into the general \(\alpha\in\mathbb{R}\) case.
**Theorem 9**.: _For all \(\lambda\), we have_
\[\eta_{0}^{\lambda}=(-1)^{|\lambda|-\lambda_{1}}\prod_{i=1}^{\ell(\lambda^{ \prime})}(\lambda^{\prime}_{i}-1).\]
Proof.: Let \(x=x_{1},\ldots,x_{n}\) be the roots of a polynomial \(p(z)\). Recall _Vieta's formula_
\[p(z)=\prod_{i=1}^{n}(z-x_{i})=\sum_{k=0}^{n}(-1)^{k}e_{k}(x_{1},\cdots,x_{n})z ^{n-k}.\]
By Theorem 6, we have
\[\eta_{0}^{\lambda}=\sum_{k=0}^{|\lambda|}(-1)^{|\lambda|-k}J_{k}^{\star}( \lambda)/k!.\]
For \(\alpha=0\), Theorem 7 implies that \(J_{k}^{\star}(\lambda)/k!=|\mathcal{T}_{\lambda}^{k}|\), which gives us
\[\eta_{0}^{\lambda}=\sum_{k=0}^{|\lambda|}(-1)^{|\lambda|-k}J_{k}^{\star}( \lambda)/k!=(-1)^{|\lambda|}\sum_{k=0}^{|\lambda|}(-1)^{k}|\mathcal{T}_{ \lambda}^{k}|=(-1)^{|\lambda|}\sum_{k=0}^{|\lambda|}(-1)^{k}e_{k}(\lambda^{ \prime}).\]
Setting \(z=1\) and \(x=\lambda^{\prime}\) in Vieta's formula gives us
\[\eta_{0}^{\lambda}=(-1)^{|\lambda|}\prod_{i=1}^{\lambda_{1}}(1-\lambda^{ \prime}_{i})=(-1)^{|\lambda|-\lambda_{1}}\prod_{i=1}^{\ell(\lambda^{\prime})}( \lambda^{\prime}_{i}-1),\]
as desired.
To see that this proves Theorem 1 for \(\alpha=0\), first note that the effect of setting \(\alpha=0\) is that the arm lengths of cells in \(T\in\mathcal{T}_{\lambda}^{k}\) are ignored, thus we associate the identity permutation \(()\) to each \(T\). Let
\[\mathcal{D}_{0}^{\lambda}=\{(c\in[\lambda^{\prime}_{1}]\times\cdots\times[ \lambda^{\prime}_{m}],()):c(i)\neq 1\text{ for all }i\}\subseteq\mathcal{D}^{\lambda}\]
be the derangements that move no symbols of \([\lambda_{1}]\). Clearly, \(|\mathcal{D}_{0}^{\lambda}|=\prod_{i=1}^{\ell(\lambda^{\prime})}(\lambda^{ \prime}_{i}-1)\), as desired. Evidently, we may define the Jack derangements at \(\alpha=0\) to be the words of \([\lambda^{\prime}_{1}]\times\cdots\times[\lambda^{\prime}_{m}]\) that avoid the symbol \(1\). For \(\alpha\neq 0\) we must take into account the arm lengths of the cells, which requires a more detailed examination of the principal lower hook product.
Minors of the Principal Hook Product
In this section we prove a few technical lemmas concerning the principal hook product that are needed for closed-form expressions of Theorem 1. Let \(\lambda^{-i}\) be the shape obtained by removing the \(i\)th column of \(\lambda\). Let \(\lambda^{-i_{1}-i_{2}-\cdots-i_{k}}\) be the shape obtained by removing (distinct) columns \(i_{1},i_{2},\ldots,i_{k}\) of \(\lambda\). It is useful to think of the \(H^{1}_{*}(\lambda^{-i})\)'s as the _first minors_ of \(\lambda\), and the \(H^{1}_{*}(\lambda^{-i_{1}-\cdots-i_{k}})\)'s as \(k\)-_minors_ of \(\lambda\). The ordering of the \(i_{j}\)'s is immaterial, i.e.,
\[\lambda^{-i_{1}-i_{2}-\cdots-i_{k}}=\lambda^{-i_{\sigma(1)}-i_{\sigma(2)}- \cdots-i_{\sigma(k)}}\quad\text{ for all }\sigma\in S_{k}.\]
Let \(\lambda^{\underline{k}}\) be the shape obtained by removing the last \(k\) columns of \(\lambda\). We adopt the shorthand \(h_{j}:=h_{*}^{\lambda}(1,j)\) henceforth. Lemma 10 gives a Laplace-like expansion that relates the principal lower hook product to its first minors.
**Lemma 10** (Laplace Expansion).: _For all \(\lambda\), we have_
\[\sum_{i=1}^{\lambda_{1}}H^{1}_{*}(\lambda^{-i})=\frac{1}{\alpha}\left(H^{1}_{ *}(\lambda)+(\alpha-h_{\lambda_{1}})H^{1}_{*}(\lambda^{\underline{1}})\right),\text{ equivalently,}\]
\[H^{1}_{*}(\lambda)=\sum_{i=1}^{\lambda_{1}-1}\alpha H^{1}_{*}(\lambda^{-i})+h_ {\lambda_{1}}H^{1}_{*}(\lambda^{-\lambda_{1}}).\]
Proof.: Let \(h^{\prime}_{j}:=h_{j}-\alpha\). We can write any first minor of \(\lambda\) in terms of hook products of \(\lambda\):
\[H^{1}_{*}(\lambda^{-i})=\prod_{j<i}h^{\prime}_{j}\prod_{j>i}h_{j},\]
which implies that
\[\sum_{i=1}^{\lambda_{1}}\frac{H^{1}_{*}(\lambda^{-i})}{H^{1}_{*}(\lambda)}= \sum_{i=1}^{\lambda_{1}}\frac{1}{h_{i}}\prod_{j<i}\frac{h^{\prime}_{j}}{h_{j}} =\frac{1}{h_{1}}+\cdots+\left[\prod_{j=1}^{\lambda_{1}-2}\frac{h^{\prime}_{j}} {h_{j}}\right]\frac{1}{h_{\lambda_{1}-1}}+\left[\prod_{j=1}^{\lambda_{1}-2} \frac{h^{\prime}_{j}}{h_{j}}\right]\frac{h^{\prime}_{\lambda_{1}-1}}{h_{ \lambda_{1}-1}}\frac{1}{h_{\lambda_{1}}}.\]
Note that this sum telescopes to \(1\) if and only if \(\alpha=1\) and \(h_{\lambda_{1}}=1\). In general, we have
\[\sum_{i=1}^{\lambda_{1}}\frac{H^{1}_{*}(\lambda^{-i})}{H^{1}_{*}( \lambda)} =\frac{1}{h_{1}}+\cdots+\left[\prod_{j=1}^{\lambda_{1}-2}\frac{h^{ \prime}_{j}}{h_{j}}\right]\frac{1}{h_{\lambda_{1}-1}}+\left[\prod_{j=1}^{ \lambda_{1}-1}\frac{h^{\prime}_{j}}{h_{j}}\right]\frac{1}{h_{\lambda_{1}}}\] \[=1-\frac{(h_{\lambda_{1}}-1)}{h_{\lambda_{1}}}\prod_{j=1}^{ \lambda_{1}-1}\frac{h^{\prime}_{j}}{h_{j}}-(\alpha-1)\sum_{i=1}^{\lambda_{1}-1 }\frac{h^{\prime}_{1}}{h_{1}}\cdots\frac{h^{\prime}_{\lambda_{1}-i-1}}{h_{ \lambda_{1}-i-1}}\cdot\frac{1}{h_{\lambda_{1}-i}}.\]
Multiplying both sides by \(H^{1}_{*}(\lambda)\) gives
\[\sum_{i=1}^{\lambda_{1}}H^{1}_{*}(\lambda^{-i})=H^{1}_{*}(\lambda)-(h_{\lambda _{1}}-1)H^{1}_{*}(\lambda^{-\lambda_{1}})-(\alpha-1)\sum_{i=1}^{\lambda_{1}-1 }H^{1}_{*}(\lambda^{-i}). \tag{3}\]
After rearranging terms and noting that \(\lambda^{-\lambda_{1}}=\lambda^{\underline{1}}\), we have
\[\sum_{i=1}^{\lambda_{1}}H^{1}_{*}(\lambda^{-i})=\frac{1}{\alpha}(H^{1}_{*}( \lambda)+(\alpha-h_{\lambda_{1}})H^{1}_{*}(\lambda^{\underline{1}})),\]
as desired. Rearranging once more finishes the proof.
For \(\alpha\geqslant 1\), we are now in a position to give a short proof of both the Alternating Sign Theorem and a useful upper bound on the magnitudes of the Jack derangement sums.
**Proposition 11**.: _For all \(\alpha\geqslant 1\), we have \(\operatorname{sgn}\,\eta_{\alpha}^{\lambda}=(-1)^{|\lambda|-\lambda_{1}}\). Moreover, \(|\eta_{\alpha}^{\lambda}|\leqslant H_{*}^{1}(\lambda)\)._
Proof.: Since \(\alpha,h_{\lambda_{1}}\geqslant 1\), applying Equation (3) repeatedly shows that
\[\sum_{\mu\preceq_{\lambda_{1}}\lambda}H_{1}^{*}(\mu)\leqslant\cdots\leqslant \sum_{\mu\preceq_{1}\lambda}H_{1}^{*}(\mu)\leqslant H_{1}^{*}(\lambda). \tag{4}\]
If \(|\lambda|-\lambda_{1}\) is even, then by Corollary 7 we have
\[0\leqslant H_{1}^{*}(\lambda)-\sum_{\mu\preceq_{1}\lambda}H_{1}^{*}(\mu) \leqslant\eta_{\alpha}^{\lambda};\quad\text{ otherwise, }\quad 0\geqslant-H_{1}^{*}(\lambda)+\sum_{\mu\preceq_{1} \lambda}H_{1}^{*}(\mu)\geqslant\eta_{\alpha}^{\lambda},\]
i.e., \(\operatorname{sgn}\,\eta_{\alpha}^{\lambda}=(-1)^{|\lambda|-\lambda_{1}}\). That \(|\eta_{\alpha}^{\lambda}|\leqslant H_{*}^{1}(\lambda)\) follows from Equation (4) and Theorem 7.
For any \(\lambda\) and integer \(0\leqslant j\leqslant\lambda_{1}-1\), let
\[f_{\lambda}^{*}(j):=\prod_{i=0}^{j}((j+1)\alpha-h_{\lambda_{1}-i}),\]
and define \(f_{\lambda}^{*}(j):=1\) for all negative integers \(j\). For the proof of the next lemma, it will be useful to define the following related quantity:
\[f_{\lambda}^{*}(j,i):=\prod_{l=0}^{j-i}((j+1)\alpha-h_{\lambda_{1}-l})\prod_{ l=j-i+1}^{j}((j+2)\alpha-h_{\lambda_{1}-l-1}).\]
In other words, \(f_{\lambda}^{*}(j,i)\) is the function obtained by both incrementing the \(\alpha\)-coefficient by \(1\) and decrementing the hook index by \(1\) in the last \(i\) factors \(f_{\lambda}^{*}(j)\). Lemma 12 is a generalization of Lemma 10 that will lead to a more explicit version of [1, Theorem 5.12].
**Lemma 12**.: _For all shapes \(\lambda\) and \(0\leqslant j\leqslant\lambda_{1}-1\), we have_
\[\sum_{i=1}^{\lambda_{1}}f_{\lambda^{-i}}^{*}(j-1)\ H_{*}^{1}((\lambda^{-i})^{ \underline{j}})=\frac{1}{\alpha}\left(f_{\lambda}^{*}(j-1)\ H_{*}^{1}(\lambda ^{\underline{j}})+f_{\lambda}^{*}(j)\ H_{*}^{1}(\lambda^{\underline{j+1}}) \right).\]
Proof.: We begin by listing a few combinatorial facts that are easily verified.
1. For all \(j\), we have \(h_{(\lambda^{-i})_{1}-j}=h_{\lambda_{1}-j}\) if \(i\leqslant\lambda_{1}-j\); otherwise, \(h_{(\lambda^{-i})_{1}-j}=h_{\lambda_{1}-1-j}-\alpha\).
2. For all \(i\leqslant\lambda_{1}-j\), we have \((\lambda^{-i})^{\underline{j}}=(\lambda^{\underline{j}})^{-i}\).
3. For all \(i>\lambda_{1}-j\) we have \((\lambda^{-i})^{\underline{j}}=\lambda^{\underline{j+1}}\).
4. For all \(j\), we have \(h_{(\lambda^{\underline{j}})_{1}}=h_{\lambda_{1}-j}-j\alpha\).
The first three facts allows us to split the summation as follows:
\[\sum_{i=1}^{\lambda_{1}}f_{\lambda^{-i}}^{*}(j-1)\ H_{*}^{1}((\lambda^{-i})^{ \underline{j}})=f_{\lambda}^{*}(j-1)\sum_{i=1}^{\lambda_{1}-j}H_{*}^{1}(( \lambda^{\underline{j}})^{-i})+\sum_{i=1}^{j}f_{\lambda}^{*}(j-1,i)\ H_{*}^{1} (\lambda^{\underline{j+1}}).\]
By Lemma 10 and the last fact, we have
\[=\frac{1}{\alpha}\left(f_{\lambda}^{*}(j-1)H_{*}^{1}(\lambda\dot{ \underline{\lambda}})+(\alpha-h_{\lambda_{1}-j})f_{\lambda}^{*}(j-1,0)+\alpha \sum_{i=1}^{j}f_{\lambda}^{*}(j-1,i)H_{*}^{1}(\lambda\dot{\underline{\lambda}} )\right).\]
It suffices to show that the bracketed factor equals \(f_{\lambda}^{*}(j)\). We may write the summation as
\[\sum_{i=1}^{j}f_{\lambda}^{*}(j-1,i)=(j\alpha-h_{\lambda_{1}}) \cdots(j\alpha-h_{\lambda_{1}-j+2}) \cdot \left((j+1)\alpha-h_{\lambda_{1}-j}\right)+\] \[\vdots\] \[(j\alpha-h_{\lambda_{1}})(j\alpha-h_{\lambda_{1}-1}) \cdot \left((j+1)\alpha-h_{\lambda_{1}-3}\right) \cdot \cdots \left((j+1)\alpha-h_{\lambda_{1}-j}\right)+\] \[(j\alpha-h_{\lambda_{1}}) \cdot \left((j+1)\alpha-h_{\lambda_{1}-2}\right) \cdot \cdots \left((j+1)\alpha-h_{\lambda_{1}-j}\right)+\] \[((j+1)\alpha-h_{\lambda_{1}-1}) \cdot \cdots \left((j+1)\alpha-h_{\lambda_{1}-j}\right).\]
which we may write as
\[\sum_{i=1}^{j}f_{\lambda}^{*}(j-1,i) =((j+1)\alpha-h_{\lambda_{1}-j})\ [\ (j\alpha-h_{\lambda_{1}}) \cdots(j\alpha-h_{\lambda_{1}-j+2})\] \[+((j+1)\alpha-h_{\lambda_{1}-(j-1)})\ [\ (j\alpha-h_{ \lambda_{1}})\cdots(j\alpha-h_{\lambda_{1}-j+3})\] \[+((j+1)\alpha-h_{\lambda_{1}-(j-2)})\ [\ (j\alpha-h_{ \lambda_{1}})\cdots(j\alpha-h_{\lambda_{1}-j+4})\] \[\vdots\] \[+((j+1)\alpha-h_{\lambda_{1}-1})\ ]\cdots].\]
We may factor out \(((j+1)\alpha-h_{\lambda_{1}-j})\), leaving
\[(j\alpha-h_{\lambda_{1}})\cdots(j\alpha-h_{\lambda_{1}-j+2}) +((j+1)\alpha-h_{\lambda_{1}-j+1})\ [\ (j\alpha-h_{\lambda_{1}}) \cdots(j\alpha-h_{\lambda_{1}-j+3})\] \[+((j+1)\alpha-h_{\lambda_{1}-j+2})\ [\ (j\alpha-h_{\lambda_{1}}) \cdots(j\alpha-h_{\lambda_{1}-j+4})\] \[\vdots\] \[+((j+1)\alpha-h_{\lambda_{1}-1})\ ]\cdots].\]
We have
\[\alpha(j\alpha-h_{\lambda_{1}})\cdots(j\alpha-h_{\lambda_{1}-j+2})+f_{\lambda }^{*}(j-1,0)=(j\alpha-h_{\lambda_{1}})\cdots(j\alpha-h_{\lambda_{1}-j+2})((j+ 1)\alpha-h_{\lambda_{1}-j+1}),\]
so we may factor out \(((j+1)\alpha-h_{\lambda_{1}-j+1})\). Continuing in this manner gives us
\[=\frac{1}{\alpha}\left(f_{\lambda}^{*}(j-1)H_{*}^{1}(\lambda\dot{ \underline{\lambda}})+f_{\lambda}^{*}(j)H_{*}^{1}(\lambda\dot{\underline{ \lambda}})\right),\]
as desired.
Theorem 13 is a more explicit form for [1, Theorem 5.12], perhaps of independent interest.
**Theorem 13**.: _For all \(\lambda\) and \(\alpha\in\mathbb{R}\), we have_
\[\frac{J^{\star}_{\lambda_{1}-k}(\lambda)}{(\lambda_{1}-k)!}=\sum_{\mu\preceq_{k} \lambda}H^{1}_{*}(\mu)=\frac{1}{\alpha^{k}}\sum_{j=0}^{k}(-1)^{j}\frac{\prod_{i =1}^{\lambda_{1}}(h_{i}-j\alpha)}{(k-j)!j!},\text{ equivalently,}\]
\[\frac{H^{*}_{k}}{(\lambda_{1}-k)!}J^{\star}_{\lambda_{1}-k}(\lambda)=\sum_{j=0 }^{k}(-1)^{j}\binom{k}{j}\prod_{i=1}^{\lambda_{1}}(h_{i}-j\alpha).\]
Proof.: First, note that
\[k!\sum_{\mu\preceq_{k}\lambda}H^{1}_{*}(\mu)=\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
where \(\Delta^{k}[f](x):=\sum_{i=0}^{k}(-1)^{k-i}\binom{k}{i}f(x+i)\) for any function \(f(x)\). Forward differences of this kind are connected to polynomial interpolation in the falling factorial basis
\[x^{\underline{k}}:=x(x-1)(x-2)\cdots(x-k+1),\]
in particular, the _Newton (interpolation) polynomial_\(N(x)\) of a set of points \(S=\{(x_{i},p(x_{i}))\}_{i=0}^{d}\):
\[N(x):=[p(x_{0})]x^{\underline{0}}+[p(x_{0}),p(x_{1})]x^{\underline{1}}+\cdots+ [p(x_{0}),p(x_{1}),\ldots,p(x_{d})]x^{\underline{d}}\]
where \([p(x_{0}),\ldots,p(x_{j})]\) is the notation for the so-called _\(j\)th divided difference_. Note that if \(p(x)\) is a degree-\(d\) polynomial and \(|S|>d+1\), then \([p(x_{0}),\ldots,p(x_{j})]=0\) for all \(j>d\).
Finally, we recall the well-known fact that if \(x_{i}=i\) for all \(0\leqslant i\leqslant d\), then
\[[p(x_{0}),p(x_{1}),\ldots,p(x_{j})]=\frac{\Delta^{j}[p](0)}{j!},\]
and the Newton interpolation polynomial is of the form
\[N(x)=\frac{p(0)}{0!}x^{\underline{0}}+\frac{\Delta^{1}[p](0)}{1!}x^{ \underline{1}}+\cdots+\frac{\Delta^{d}[p](0)}{d!}x^{\underline{d}}. \tag{6}\]
See Stanley [47, Ch. 1.9] for a more in-depth discussion of the calculus of finite differences and its connections to combinatorics. In the next section, we show that each Jack derangement number is the sum of the coefficients of a Newton polynomial.
## 6 Proof of Theorem 1
Building off the results of the previous sections, we give a proof of Theorem 1 in this section. For all \(j>0\), define
\[H_{*}^{1}(\lambda,j):=\prod_{i=1}^{\lambda_{1}}(h_{i}-j\alpha)\]
to be the _\(j\)-shifted principal lower hook product_. It will be convenient to think of the shifted principal lower hook product as a univariate polynomial in \(x\), i.e.,
\[\mathbf{H}_{*}^{1}(\lambda,x):=\prod_{i=1}^{\lambda_{1}}(h_{i}-x\alpha).\]
We let \(d_{n,k}^{(\alpha)}\) denote the \(\alpha\)-generalization of the _rencontres numbers_, that is,
\[d_{n,k}^{(\alpha)}:=\frac{\alpha^{n}n!}{\alpha^{k}k!}\sum_{i=0}^{n-k}\frac{(- 1)^{i}}{\alpha^{i}i!}.\]
For \(\alpha=1\), the rencontres numbers \(d_{n,k}:=d_{n,k}^{(1)}\) count the number of permutations of \(S_{n}\) that have precisely \(k\) fixed points.
**Theorem 14**.: _For all \(\lambda\), \(\alpha\in\mathbb{R}\), and \(n\geqslant\lambda_{1}\), we have_
\[\eta_{\alpha}^{\lambda}=(-1)^{|\lambda|-\lambda_{1}}\frac{1}{\alpha^{n}n!} \sum_{j=0}^{n}d_{n,j}^{(\alpha)}H_{*}^{1}(\lambda,j).\]
Proof.: By Theorem 7 we have
\[\eta_{\alpha}^{\lambda}=(-1)^{|\lambda|-\lambda_{1}}\sum_{k=0}^{\lambda_{1}}(-1)^ {k}\sum_{\mu\preceq_{k}\lambda}H_{*}^{1}(\mu).\]
By Theorem 13, we have
\[=(-1)^{|\lambda|-\lambda_{1}}\sum_{k=0}^{\lambda_{1}}\frac{(-1)^{k}}{\alpha^{k }}\sum_{j=0}^{k}(-1)^{j}\frac{H_{*}^{1}(\lambda,j)}{(k-j)!j!}.\]
Interchanging summations gives us
\[=(-1)^{|\lambda|-\lambda_{1}}\sum_{j=0}^{\lambda_{1}}\sum_{k=j}^{ \lambda_{1}}\frac{(-1)^{k-j}}{\alpha^{k}}\frac{H_{*}^{1}(\lambda,j)}{(k-j)!j!}\] \[=(-1)^{|\lambda|-\lambda_{1}}\sum_{j=0}^{\lambda_{1}}\frac{H_{*}^ {1}(\lambda,j)}{\alpha^{j}j!}\sum_{k=j}^{\lambda_{1}}\frac{(-1)^{k-j}}{\alpha ^{k-j}(k-j)!}\] \[=(-1)^{|\lambda|-\lambda_{1}}\frac{1}{\alpha^{\lambda_{1}} \lambda_{1}!}\sum_{j=0}^{\lambda_{1}}d_{\lambda_{1}j}^{(\alpha)}H_{*}^{1}( \lambda,j),\]
which proves the result for \(n=\lambda_{1}\). Since \(\mathbf{H}_{*}^{1}(\lambda,x)\) has degree \(\lambda_{1}\), the \(n\)th order forward difference \(\Delta^{n}\) of \(\mathbf{H}_{*}^{1}(\lambda,x)\) at the origin vanishes for all \(n>\lambda_{1}\). Therefore, we have
\[\sum_{k=0}^{\lambda_{1}}\frac{1}{\alpha^{k}}\sum_{j=0}^{k}(-1)^{k-j}\frac{H_{* }^{1}(\lambda,j)}{(k-j)!j!}=\sum_{k=0}^{n}\frac{1}{\alpha^{k}}\sum_{j=0}^{k}(- 1)^{k-j}\frac{H_{*}^{1}(\lambda,j)}{(k-j)!j!}\]
for all \(n\geqslant\lambda_{1}\), thus
\[\eta_{\alpha}^{\lambda}=(-1)^{|\lambda|-\lambda_{1}}\frac{1}{\alpha^{n}n!} \sum_{j=0}^{n}d_{n,j}^{(\alpha)}H_{*}^{1}(\lambda,j),\]
as desired.
Theorem 14 allows us to connect the Jack derangement sums to the Poisson distribution. For all \(\alpha\in\mathbb{R}\), a simple induction shows that \(\sum_{j=0}^{n}d_{n,j}^{(\alpha)}/\alpha^{n}n!=1\), and moreover, that
\[\lim_{n\to\infty}\frac{d_{n,k}^{(\alpha)}}{\alpha^{n}n!}=\frac{e^{-1/\alpha}} {\alpha^{k}k!}.\]
For \(\alpha>0\), the limiting distribution is the Poisson distribution with expected value \(1/\alpha\). After taking limits, for all \(\alpha\in\mathbb{R}\), we have
\[\eta_{\alpha}^{\lambda}=(-1)^{|\lambda|-\lambda_{1}}e^{-1/\alpha}\sum_{x=0}^{ \infty}\frac{H_{*}^{1}(\lambda,x)}{\alpha^{x}x!}. \tag{7}\]
For \(\alpha>0\), we may interpret the Jack derangement sum as some type of "generalized factorial moment" of the Poisson distribution (up to sign), i.e.,
\[\eta_{\alpha}^{\lambda}=(-1)^{|\lambda|-\lambda_{1}}\mathbb{E}[\mathbf{H}_{*} ^{1}(\lambda,x)].\]
A combinatorial interpretation of these moments will follow as a corollary of Theorem 1. It is well-known that the factorial moments of the Poisson distribution have a remarkably simple form. For all \(\alpha\in\mathbb{R}\), we have
\[\lim_{x\to\infty}\frac{x^{\underline{k}_{\alpha}}}{\alpha^{x}x!}=e^{1/\alpha} \tag{8}\]
where \(x^{\underline{k}_{\alpha}}:=\alpha^{k}x^{\underline{k}}\). In light of Equation (7), the foregoing suggests that we should express the polynomial \(\mathbf{H}^{1}_{*}(\lambda,x)\) in the \(\alpha\)_-falling factorial basis_\(\{x^{\underline{k}_{\alpha}}\}\), which we determine below for \(\lambda\) such that \(\lambda_{1}=1,2,3\).
If \(\lambda_{1}=1\), then we have \(\mathbf{H}^{1}_{*}(\lambda,x)=-x^{\underline{1}_{\alpha}}+\lambda^{\prime}_{ 1}x^{\underline{0}_{\alpha}}\). If \(\lambda_{1}=2\), then we have
\[\mathbf{H}^{1}_{*}(\lambda,x)=x^{2_{\alpha}}-(\lambda^{\prime}_{2}+\lambda^{ \prime}_{1})x^{\underline{1}_{\alpha}}+\lambda^{\prime}_{2}(\alpha+\lambda^{ \prime}_{1})x^{\underline{0}_{\alpha}}.\]
If \(\lambda_{1}=3\), then we may write \(\mathbf{H}^{1}_{*}(\lambda,x)\) as
\[-x^{\underline{3}_{\alpha}}+(\lambda^{\prime}_{3}+\lambda^{\prime}_{2}+ \lambda^{\prime}_{1})x^{\underline{2}_{\alpha}}-((\alpha+\lambda^{\prime}_{1 })\lambda^{\prime}_{3}+(\alpha+\lambda^{\prime}_{1})\lambda^{\prime}_{2}+( \alpha+\lambda^{\prime}_{2})\lambda^{\prime}_{3})x^{\underline{1}_{\alpha}}+ \lambda^{\prime}_{3}(\alpha+\lambda^{\prime}_{2})(2\alpha+\lambda^{\prime}_{1 }).\]
Indeed, the following proposition shows that each coefficient of \(\mathbf{H}^{1}_{*}(\lambda,x)\) expressed in the \(\alpha\)-falling factorial basis is a polynomial \(c^{\lambda}_{k}(\alpha)\) that admits a combinatorial interpretation.
**Proposition 15**.: _Let \(\hat{\lambda}\) be the partition obtained by removing the first column of \(\lambda\), and let \(\#\mathrm{cyc}(\sigma)\) denote the number of cycles of a permutation \(\sigma\). For all \(\alpha\in\mathbb{R}\), we have_
\[\mathbf{H}^{1}_{*}(\lambda,x)=\sum_{k=0}^{\lambda_{1}}c^{\lambda}_{k}(\alpha)x ^{\underline{k}_{\alpha}}\]
_where \(c^{\lambda}_{k}(\alpha)=(\alpha(\lambda_{1}-1-k)+\lambda^{\prime}_{1})c^{ \hat{\lambda}}_{k}(\alpha)-c^{\hat{\lambda}}_{k-1}(\alpha)\), \(c^{\lambda}_{k}(\alpha):=0\) if \(k>\lambda_{1}\), \(c^{\lambda}_{-1}(\alpha):=0\). Moreover, we have_
\[(-1)^{k}[\alpha^{\lambda_{1}-k-j}]c^{\lambda}_{k}(\alpha)=\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
which proves the first statement. To prove the second statement, note that the recurrence relation shows that \(\operatorname{sgn}\,c_{k}^{\lambda}(\alpha)=(-1)^{k}\). The parameter \(\alpha\) records the \(\lambda_{1}-1-k\) ways to join a cycle of \(\hat{\lambda}\)-colored permutation that is not one of \(k\) singleton cycles \(I\subseteq[\lambda_{1}]\setminus\{1\}\). There are \(\lambda_{1}^{\prime}\) ways of not joining a \(\hat{\lambda}\)-colored permutation. Of the latter, the choice \((1,1)\in\lambda\) results in a fixed point \(1\in I\), leaving are \(k-1\) choices for the remaining elements of \(I\subseteq[\lambda_{1}]\setminus\{1\}\). Therefore, we add \(|c_{k-1}^{\hat{\lambda}}(\alpha)|\), which completes the proof.
Proof of Theorem 1.: By Equation (7), it suffices to show that
\[e^{-1/\alpha}\sum_{x=0}^{\infty}\frac{H_{*}^{1}(\lambda,x)}{\alpha^{x}x!}= \sum_{j=0}^{\lambda_{1}}d_{j}^{\lambda}\alpha^{\lambda_{1}-j}=D_{\alpha}^{ \lambda}.\]
Recall that \(c_{k}^{\lambda}(\alpha)=[x^{\underline{k}_{\alpha}}]H_{*}^{1}(\lambda,x)\) is the \(x^{\underline{k}_{\alpha}}\)-coefficient of \(H_{*}^{1}(\lambda,x)\) expressed in the \(\alpha\)-falling factorial basis. By Proposition 15 and Equation (8), we have
\[e^{-1/\alpha}\sum_{x=0}^{\infty}\frac{H_{*}^{1}(\lambda,x)}{ \alpha^{x}x!} =e^{-1/\alpha}\sum_{x=0}^{\infty}\sum_{k=0}^{\lambda_{1}}\frac{c _{k}^{\lambda}(\alpha)x^{\underline{k}_{\alpha}}}{\alpha^{x}x!}\] \[=e^{-1/\alpha}\sum_{k=0}^{\lambda_{1}}\sum_{x=0}^{\infty}\frac{c _{k}^{\lambda}(\alpha)x^{\underline{k}_{\alpha}}}{\alpha^{x}x!}.\] \[=\sum_{k=0}^{\lambda_{1}}c_{k}^{\lambda}(\alpha).\]
By the Principle of Inclusion-Exclusion, we have
\[=\sum_{j=0}^{\lambda_{1}}d_{j}^{\lambda}\alpha^{\lambda_{1}-j}=D_{\alpha}^{ \lambda},\]
which completes the proof of our first main result.
## 7 Proofs of Corollaries 2, 3, and 4
With Theorem 1 in hand, we now give short proofs of the corollaries stated in Section 1.
Proof of Corollary 2.: Clearly \(D_{\alpha}^{\lambda}\geqslant 0\) for all \(\alpha\geqslant 0\), so the proof follows from Theorem 1.
Proof of Corollary 3.: By Proposition 5 and induction, it suffices to prove the result for \(\mu,\lambda\) such that \(\mu\nearrow\lambda\) (see Section 2 for a review of the dominance ordering \(\trianglelefteq\)). Let \(i\) be the column of the outer corner and let \(j\) be the column of the inner corner. Note that \(i<j\).
By Theorem 1, for all \(\nu\), we have \(|\eta_{\alpha}^{\nu}|=\sum_{k=1}^{\nu_{1}}d_{k}^{\nu}\alpha^{\nu_{1}-k}\), so it suffices to show that \(d_{k}^{\mu}\leqslant d_{k}^{\lambda}\) for all \(k\), i.e., that the number of colored derangements \((c,\sigma)\) with precisely \(k\) disjoint cycles does not decrease when the symbol \(i\) loses the color \(b:=\mu_{i}^{\prime}\) and the symbol \(j\) gains the color \(a:=\lambda_{j}^{\prime}=\mu_{j}^{\prime}+1\). To show this, we give an injective map \(\phi_{k}:\mathcal{D}_{k}^{\mu}\to\mathcal{D}_{k}^{\lambda}\) for all \(k\) as follows.
First, since \(i<j\), we have \(b>a\). If \(c_{i}\neq b\), then \(\phi_{k}(c,\sigma)=(c,\sigma)\in\mathcal{D}_{k}^{\lambda}\). If \(c_{i}=b\), then \(\phi_{k}(c,\sigma)=(c^{\prime},(i\;j)\sigma(i\;j))\) where the coloring \(c^{\prime}\) is defined below (note that \(\phi_{k}\) is indeed well-defined since we have \(\mu_{1}=\lambda_{1}\), i.e., \(\mu\)-colored and \(\lambda\)-colored permutations are defined on the same symbol set \([\lambda_{1}]=[\mu_{1}]\)).
Since \(c_{i}=b\), the symbols \(i\) and \(j\) do not belong to the same cycle of \(\sigma\), thus \(c_{i}\neq c_{j}\). Also, recall that \((i\;j)\sigma(i\;j)\) relabels the symbols of \(\sigma\) so that \(i\) becomes \(j\) and vice versa. Let \(I\) be the set of symbols of the cycle of \(\sigma\) that contains \(i\). Define \(c_{i^{\prime}}^{\prime}:=a\) for all \(i^{\prime}\in(I\cup\{j\})\setminus\{i\}\)
so that all the symbols of \(j\)'s cycle in \((i\ j)\sigma(i\ j)\) have the same color. Define \(c^{\prime}_{i}:=c_{j}\) so that all symbols in \(i\)'s cycle of \((i\ j)\sigma(i\ j)\) have the same color. Finally, let \(c^{\prime}_{l}:=c_{l}\) for all remaining symbols \(l\notin I\cup\{j\}\). Clearly \((i\ j)\sigma(i\ j)\) has the same cycle type as \(\sigma\), and so it follows that \(\phi_{k}(c,\sigma)\in\mathcal{D}_{k}^{\lambda}\). It is also clear that every \((c^{\prime},\sigma^{\prime})\) in the image of \(\phi_{k}\) has a unique preimage; therefore, \(\phi_{k}\) is injective for all \(k\), as desired.
Before we prove Corollary 4, which characterizes the extrema of the Jack derangements for \(\alpha\geqslant 1\), we require a proposition that is essentially the Jack generalization of the well-known fact that the probability of drawing a derangement uniformly at random from \(S_{n}\) is greater than \(1/3\) for \(n\geqslant 4\).
**Proposition 16**.: _For all \(\alpha\geqslant 1\) and \(n\geqslant 4\), we have \(D_{\alpha}^{(n)}>H_{*}^{(n)}/3\)._
Proof.: By our main result, we have
\[D_{\alpha}^{(n)} =\sum_{k=0}^{n}(-1)^{k}\frac{n(n-1)\cdots(n-k+1)}{k!}H_{*}^{(n-k)}\] \[=H_{*}^{(n)}\sum_{k=0}^{n}\frac{(-1)^{k}}{k!}\frac{n(n-1)\cdots(n -k+1)}{(\alpha(n-1)+1)(\alpha(n-2)+1)\cdots(\alpha(n-k)+1)}\]
For \(\alpha\geqslant 1\) and \(k>0\), we have
\[\frac{n(n-1)\cdots(n-k+1)}{k!(\alpha(n-1)+1)\cdots(\alpha(n-k)+1)}-\frac{n(n- 1)\cdots(n-(k+1)+1)}{(k+1)!(\alpha(n-1)+1)\cdots(\alpha(n-(k+1))+1)}>0.\]
Iteratively applying this fact to the \(k\geqslant 4\) terms of the summation gives us
\[>H_{*}^{(n)}[1-\frac{n}{\alpha(n-1)+1}+\frac{n(n-1)}{2(\alpha(n-1 )+1)(\alpha(n-2)+1)}\] \[\qquad-\frac{n(n-1)(n-2)}{6(\alpha(n-1)+1)(\alpha(n-2)+1)(\alpha (n-3)+1)}]\] \[\geqslant H_{*}^{(n)}/3,\]
where the last inequality follows from the fact that \(\alpha\geqslant 1\).
Proof of Corollary 4.: Let \(\mu:=(n-1,1)\). By Theorem 1 and Proposition 16, we have
\[\eta_{\alpha}^{(n)}=D_{\alpha}^{(n)},\quad\text{ and }\quad|\eta_{\alpha}^{\mu}| =D_{\alpha}^{(n)}/(\alpha(n-1))>H_{*}^{(n)}/3(\alpha(n-1)).\]
Since \(\alpha\geqslant 1\), we have \(|\eta_{\alpha}^{\lambda}|\leqslant H_{*}^{1}(\lambda)\) by Proposition 11; therefore, it suffices to show that
\[H_{*}^{1}(\mu)/3\geqslant H_{*}^{1}(\lambda)\quad\text{ for all }\lambda\neq(n),\mu.\]
Recall that \(h_{\lambda}(i,j)=a_{\lambda}(i,j)+l_{\lambda}(i,j)+1\) is the hook length of \((i,j)\in\lambda\). Define
\[A:=\{h_{*}^{\lambda}(1,j)\}_{j=1}^{\lambda_{1}}\quad\text{ and }\quad B:=\{h_{*}^{\mu}(1,j)\}_{j=1}^{n-1}.\]
Note that \(|A|<|B|\). Now define the injective map \(\phi\) on lower hook lengths of the first row
\[\phi:A\to B\quad\text{ such that }\quad h_{*}^{\lambda}(1,j)\mapsto h_{*}^{\mu}(1, j^{\prime})\]
where \(j^{\prime}\) is the greatest column index of \(\mu\) for which \(h_{\lambda}(1,j)\leqslant h_{\mu}(1,j^{\prime})\). Due to the fact that \(a_{\lambda}(1,j)\leqslant a_{\mu}(1,j^{\prime})\), we have \(h_{*}^{\lambda}(1,j)\leqslant h_{*}^{\mu}(1,j^{\prime})\). Let \(\text{im}\phi\subseteq B\) be the image of \(\phi\). By the definition of \(\phi\), we have \(\prod_{a\in A}a\leqslant\prod_{b\in\text{im}\phi}b\). Since \(n\geqslant 6\), we have \(3\leqslant\prod_{b\notin\text{im}\phi}b\), thus
\[H_{*}^{1}(\lambda)=\prod_{a\in A}a\leqslant 3\prod_{b\in\text{im}\phi}b\leqslant H_{*} ^{1}(\mu)/3,\]
as required.
It may be interesting to explore these corollaries for other ranges of \(\alpha\in\mathbb{R}\). Computational experiments show that the Jack derangements behave quite differently when \(\alpha<0\), but perhaps there is still an elegant characterization of the sign, relative magnitude, and extrema in this range. Using Corollary 3, one could also try to extend Corollary 4 to a total ordering of all the Jack derangement sums.
## 8 Eigenvalues of the Permutation Derangement Graph
All of the recursive expressions mentioned in Section 1 for the eigenvalues of the permutation derangement graph embark from [48, Ex. 7.63a], where Stanley considers the sum
\[d_{\lambda}:=\sum_{\pi\in D_{n}}\chi^{\lambda}(\pi)\]
and shows it can be expressed in terms of the complete homogeneous symmetric functions:
\[\sum_{\lambda\vdash n}d_{\lambda}s_{\lambda}=\sum_{k=0}^{n}(-1)^{n-k}n^{k}\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
they are at least as difficult as the associated Stirling numbers of the first kind. Theorem 14 offers a more concrete but less combinatorial form, which for arbitrary \(\alpha\) seems to be as good as it gets; however, for \(\alpha=1,2\), we show that Theorem 14 can be massaged into an explicit combinatorial closed form in terms of what we call _extended hook products_. In addition, we recover Renteln's determinantal formula for \(\eta_{1}^{\lambda}\)[41, Theorem 4.2] for \(\alpha=1\). Before we begin, we require a few simple but unconventional tableau-theoretic definitions.
Let \(\lambda^{c}\) be the _complement_ of \(\lambda\), defined such that
\[\lambda^{c}:=(\lambda_{1}-\lambda_{1},\lambda_{1}-\lambda_{2},\cdots,\lambda_ {1}-\lambda_{\ell(\lambda)}).\]
In other words, the complement of \(\lambda\) is the subset of cells of the shape \((\lambda_{1})^{\ell(\lambda)}\) that do not lie in \(\lambda\). For \(\lambda=(10,6,3,1)\), the complement \(\lambda^{c}=(0,4,7,9)\) is the set of dots below:
Let \(\operatorname{rev}(\lambda^{c})\) be the partition obtained by reversing the order of the rows of \(\lambda^{c}\). We also let \(\operatorname{rev}:\lambda^{c}\to\operatorname{rev}(\lambda^{c})\) denote the natural bijection defined on their cells, e.g.,
\[\operatorname{rev}\left(\begin{array}{ccccc}&u&t&s&r\\ &q&p&o&nm&l&k\\ j&i&h&g&f&e&d&c&b&a\end{array}\right)\quad=\quad\begin{array}{ccccc}a&b&c&d&e &f&g&h&i&j\\ &k&l&mn&o&p&q\\ &r&s&t&u\end{array}\right..\]
For any cell \(\square\in\lambda^{c}\), we define its _upper hook length_ to be \(h^{*}_{\lambda^{c}}(\square)=h^{*}_{\operatorname{rev}(\lambda^{c})}( \operatorname{rev}(\square))\), and similarly for lower hook lengths. For example, we have the following upper hook lengths for \(\alpha=1\) and \(\mu=(10,6,3,1)\):
\[\begin{array}{|c|c|c|c|c|c|c|c|c|}\hline 13&11&10&8&7&6&4&3&2&1\\ \hline 8&6&5&3&2&1&1&2&3&4\\ \hline 4&2&1&1&2&3&5&6&7&8\\ \hline 1&1&2&4&5&6&8&9&10&11\\ \hline\end{array}.\]
We define the _extended ith principal upper hook product_ as follows:
\[H_{i}^{+}(\lambda):=H_{i}^{*}(\lambda)H_{i}^{*}(\lambda^{c}).\]
Continuing the example above, we see that \(H_{3}^{+}(\mu)=4\cdot 2\cdot 1\cdot 8!/4=80640\). Note that \(H_{1}^{*}(\lambda)=H_{1}^{+}(\lambda)\) for all \(\lambda\) since \((\lambda^{c})_{1}=0\).
Let \(p(\lambda)=p_{0},p_{1},\ldots,p_{\lambda_{1}}\) be the sequence of the first \(\lambda_{1}+1\) edges along the NE-SW lattice path induced by \(\lambda\).
Let \(\nu(\lambda):=\nu_{1},\nu_{2},\ldots,\nu_{l}\) be the indices of the subsequence of vertical edges of \(p\). It is not difficult to see that
\[\nu(\lambda)=(\lambda_{1}-\lambda_{i}+i-1:i-1\leqslant\lambda_{i}).\]
Continuing the example, we have \(\nu(\mu)=0,5,9\). Note that \(\nu(\lambda)_{1}=0\) for all \(\lambda\).
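The vertical-edge indices are straightforward to compute directly from the parts of \(\lambda\). As a minimal Python sketch (ours, not from the text), the following implements the formula above and reproduces the worked example:

```python
def nu(lam):
    """nu(lam) = (lam_1 - lam_i + i - 1 : i - 1 <= lam_i), the indices of the
    vertical edges of the NE-SW lattice path induced by the partition lam."""
    return [lam[0] - lam[i - 1] + (i - 1) for i in range(1, len(lam) + 1)
            if i - 1 <= lam[i - 1]]

assert nu([10, 6, 3, 1]) == [0, 5, 9]   # the worked example nu(mu) = 0, 5, 9
```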
Recall from Section 6 that \(d_{n,k}\) is the \(k\)_th rencontres number_, i.e., the number of permutations of \(S_{n}\) with precisely \(k\) fixed points. Let \(p_{n,k}=d_{n,k}/n!\) be the probability of drawing a permutation (uniformly at random) from \(S_{n}\) with precisely \(k\) fixed points. The _Frobenius coordinates_ of \(\lambda\) are given by \(\lambda=(a_{1},\ldots,a_{d}\mid b_{1},\ldots,b_{d})\) where \(a_{i}:=\lambda_{i}-i\) is the number of boxes to the right of the diagonal in row \(i\), and \(b_{i}:=\lambda_{i}^{\prime}-i\) is the number of boxes below the diagonal in column \(i\). By default, we define \(a_{d+1}:=-1\). We are now ready to give a nice closed form for the eigenvalues of the permutation derangement graph \(\Gamma_{n,1}\).
**Theorem 18** (Eigenvalues of \(\boldsymbol{\Gamma_{n,1}}\)).: _For all \(\lambda=(\lambda_{1},\ldots,\lambda_{\ell})=(a_{1},\ldots,a_{d}\mid b_{1}, \ldots,b_{d})\vdash n\), we have_
\[\eta_{1}^{\lambda}=(-1)^{n}\sum_{i\leqslant\lambda_{i}+1}(-1)^{\lambda_{i}}p_{\lambda_{1},a_{1}-a_{i}}\ H_{i}^{+}(\lambda).\]
Proof.: The product \(\prod_{k=1}^{\lambda_{1}}(h_{k}-i)\) vanishes if \(h_{k}=i\) for some \(k\), which happens if and only if \(i\notin\nu(\lambda)\) (see Figure 2 for an illustration). Otherwise, we have \(H_{i}^{+}(\lambda)=|\prod_{k=1}^{\lambda_{1}}(h_{k}-i)|\) and \(\operatorname{sgn}\ \prod_{k=1}^{\lambda_{1}}(h_{k}-i)=(-1)^{\lambda_{1}- \lambda_{i}}\). The proof now follows from Theorem 14.
**Corollary 19**.: [7] _For all two-row shapes \(\lambda=(n-k,k)\), we have_
\[\eta_{1}^{\lambda}=\frac{(-1)^{k}d_{n-k+1,1}+(-1)^{n-k}d_{k,1}}{(n-2k+1)}.\]
Figure 2: The shifted principal lower hook products for \(\lambda=(10,6,3,1)\). Row \(j\) shows the product \(H_{*}^{1}(\lambda,j)\). For \(\alpha=1\), the product of the hook lengths along any uncolored row is \(0\), the product of the hook lengths along row \(2\) of \(\lambda\) equals the product of the green cells in the \(j=5\) row, and the product of the hook lengths along row \(3\) of \(\lambda\) equals the product of the blue cells in the \(j=9\) row. Theorem 18 shows \(\eta_{1}^{\lambda}=p_{10,0}[13!/(12\cdot 9\cdot 5)]+p_{10,5}[8!4!/(7\cdot 4)]-p_{10, 9}H_{3}^{+}(\lambda)=4242315\) (note \(p_{10,9}=0\)). For \(\alpha=2\), we have \(H_{*}^{1}(\lambda,j)=0\) for \(j=5,6,7\).
Proof.: \[\eta_{1}^{(n-k,k)} =\frac{1}{(n-k)!}\left((-1)^{k}d_{n-k,0}H_{1}(\lambda)+(-1)^{n-k}d_{ n-k,n-2k+1}H_{2}^{+}(\lambda)\right)\] \[=\frac{1}{(n-k)!}\left((-1)^{k}d_{n-k,0}\frac{(n-k+1)!}{(n-2k+1)} +(-1)^{n-k}d_{n-k,n-2k+1}k!(n-2k)!\right)\] \[=(-1)^{k}d_{n-k,0}\frac{(n-k+1)}{(n-2k+1)}+(-1)^{n-k}\frac{d_{n-k, n-2k+1}}{{n-k\choose k}}\] \[=(-1)^{k}\frac{d_{n-k+1,1}}{(n-2k+1)}+(-1)^{n-k}\frac{{n-k\choose k -1}d_{k-1,0}}{{n-k\choose k}}\] \[=\frac{(-1)^{k}d_{n-k+1,1}+(-1)^{n-k}d_{k,1}}{(n-2k+1)}.\]
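Both results are mechanical to check numerically. The Python sketch below (ours, not the authors' code) evaluates Theorem 18 at \(\alpha=1\), computing \(H_{i}^{+}(\lambda)\) via the scaled ratio of factorials established in the proof of Theorem 21 below, and confirms agreement with Corollary 19 on two-row shapes as well as with the value \(\eta_{1}^{\lambda}=4242315\) for \(\lambda=(10,6,3,1)\) quoted in the caption of Figure 2:

```python
from fractions import Fraction
from math import comb, factorial, prod

def derangements(m):                       # classical derangement numbers D_m
    d = [1, 0] + [0] * max(0, m - 1)
    for i in range(2, m + 1):
        d[i] = (i - 1) * (d[i - 1] + d[i - 2])
    return d[m]

def d_renc(n, k):                          # rencontres number d_{n,k}
    return comb(n, k) * derangements(n - k) if 0 <= k <= n else 0

def H_plus(lam, k):                        # extended k-th principal hook product (alpha = 1)
    l = len(lam)
    den = prod(lam[i] - lam[j] + j - i
               for i in range(l) for j in range(i + 1, l) if k - 1 in (i, j))
    return factorial(lam[k - 1] - k + l) * factorial(lam[0] - lam[k - 1] + k - 1) // den

def eta1(lam):                             # Theorem 18
    n, l1 = sum(lam), lam[0]
    total = Fraction(0)
    for i in range(1, len(lam) + 1):
        if i <= lam[i - 1] + 1:
            fixed = (l1 - 1) - (lam[i - 1] - i)              # a_1 - a_i
            p = Fraction(d_renc(l1, fixed), factorial(l1))   # p_{lambda_1, a_1 - a_i}
            total += (-1) ** lam[i - 1] * p * H_plus(lam, i)
    return (-1) ** n * total

assert eta1([10, 6, 3, 1]) == 4242315      # value quoted in the caption of Figure 2
for n in range(4, 13):                     # Corollary 19 on all two-row shapes
    for k in range(1, n // 2 + 1):
        rhs = Fraction((-1) ** k * d_renc(n - k + 1, 1)
                       + (-1) ** (n - k) * d_renc(k, 1), n - 2 * k + 1)
        assert eta1([n - k, k]) == rhs
```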
For ease of notation, let \(d^{\prime}_{n,k}:=k!d_{n,k}\) (c.f. _the shifted derangement number_[41, SS4]).
**Corollary 20**.: _For all three-row shapes \(\lambda=(\lambda_{1},\lambda_{2},\lambda_{3})\), we have_
\[\eta_{1}^{\lambda}=\frac{(-1)^{|\lambda|-\lambda_{1}}d^{\prime}_{\lambda_{1}+2,2}}{(\lambda_{1}-\lambda_{3}+2)(\lambda_{1}-\lambda_{2}+1)}+\frac{(-1)^{|\lambda|-\lambda_{2}}d^{\prime}_{\lambda_{2}+1,2}}{(\lambda_{1}-\lambda_{2}+1)(\lambda_{2}-\lambda_{3}+1)}+\frac{(-1)^{|\lambda|-\lambda_{3}}d^{\prime}_{\lambda_{3},2}}{(\lambda_{1}-\lambda_{3}+2)(\lambda_{2}-\lambda_{3}+1)}.\]
Proof.: \[\eta_{1}^{\lambda} =\frac{(-1)^{|\lambda|}}{\lambda_{1}!}\left((-1)^{\lambda_{1}}d_{\lambda_{1},0}H_{1}(\lambda)+(-1)^{\lambda_{2}}d_{\lambda_{1},\lambda_{1}-\lambda_{2}+1}H_{2}^{+}(\lambda)+(-1)^{\lambda_{3}}d_{\lambda_{1},\lambda_{1}-\lambda_{3}+2}H_{3}^{+}(\lambda)\right)\] \[=\frac{(-1)^{|\lambda|-\lambda_{1}}d_{\lambda_{1},0}(\lambda_{1}+2)!}{\lambda_{1}!(\lambda_{1}-\lambda_{2}+1)(\lambda_{1}-\lambda_{3}+2)}+\frac{(-1)^{|\lambda|-\lambda_{2}}d_{\lambda_{1},\lambda_{1}-\lambda_{2}+1}(\lambda_{2}+1)!(\lambda_{1}-\lambda_{2})!}{\lambda_{1}!(\lambda_{2}-\lambda_{3}+1)}\] \[\qquad\qquad+\frac{(-1)^{|\lambda|-\lambda_{3}}d_{\lambda_{1},\lambda_{1}-\lambda_{3}+2}\lambda_{3}!(\lambda_{1}-\lambda_{3}+1)!}{\lambda_{1}!(\lambda_{2}-\lambda_{3}+1)}\] \[=\frac{(-1)^{|\lambda|-\lambda_{1}}d^{\prime}_{\lambda_{1}+2,2}}{(\lambda_{1}-\lambda_{2}+1)(\lambda_{1}-\lambda_{3}+2)}+\frac{(-1)^{|\lambda|-\lambda_{2}}d^{\prime}_{\lambda_{2}+1,2}}{(\lambda_{1}-\lambda_{2}+1)(\lambda_{2}-\lambda_{3}+1)}\] \[\qquad\qquad+\frac{(-1)^{|\lambda|-\lambda_{3}}d^{\prime}_{\lambda_{3},2}}{(\lambda_{1}-\lambda_{3}+2)(\lambda_{2}-\lambda_{3}+1)}.\]
Continuing in this manner, the expression above becomes exceedingly more cumbersome to explicitly write down for partitions with more parts; however, it does suggest a compact expression as a determinant. Let \(\ell:=\ell(\lambda)\) and define the following \(\ell\times\ell\) matrices:
\[W(\lambda):=\begin{bmatrix}(-1)^{\lambda_{1}-\lambda_{1}+1-1}d^{\prime}_{ \lambda_{1}+\ell-1,\ell-1}&(\lambda_{1}-1)^{\ell-2}&(\lambda_{1}-1)^{\ell-3}& \cdots&1\\ (-1)^{\lambda_{1}-\lambda_{2}+2-1}d^{\prime}_{\lambda_{2}+\ell-2,\ell-1}&( \lambda_{2}-2)^{\ell-2}&(\lambda_{2}-2)^{\ell-3}&\cdots&1\\ (-1)^{\lambda_{1}-\lambda_{3}+3-1}d^{\prime}_{\lambda_{3}+\ell-3,\ell-1}&( \lambda_{3}-3)^{\ell-2}&(\lambda_{3}-3)^{\ell-3}&\cdots&1\\ \vdots&\vdots&\vdots&\ddots\end{bmatrix},\]
and \(V(\lambda):=((\lambda_{i}-i)^{j-1})^{\ell}_{i,j=1}\). Clearly \(V(\lambda)\) is Vandermonde in the variables \(x_{i}=\lambda_{i}-i\), and any submatrix of \(W(\lambda)\) obtained by removing the first column and then removing any row is also Vandermonde. We are now ready to show that \(\eta_{1}^{\lambda}\) is a determinant.
**Theorem 21**.: [41, Theorem 4.2] _For all shapes \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{\ell})\), we have_
\[\eta_{1}^{\lambda}=\det W(\lambda)V(\lambda)^{-1}\]
Proof.: First, we claim that \(H_{+}^{k}(\lambda)\) can be written as a scaled ratio of Vandermonde determinants, i.e.,
\[H_{+}^{k}(\lambda) =(\lambda_{k}-k+\ell)!(\lambda_{1}-\lambda_{k}+k-1)!\frac{\prod \limits_{\begin{subarray}{c}i<j\\ i,j\neq k\end{subarray}}(\lambda_{i}-\lambda_{j}+j-i)}{\prod\limits_{\begin{subarray} {c}i<j\\ i<j\end{subarray}}(\lambda_{i}-\lambda_{j}+j-i)}\] \[=\frac{(\lambda_{k}-k+\ell)!(\lambda_{1}-\lambda_{k}+k-1)!}{\prod \limits_{\begin{subarray}{c}i<j\\ i=k\text{ or }j=k\end{subarray}}(\lambda_{i}-\lambda_{j}+j-i)}.\]
It is clear that the \(k\)th extended principal hook product of any shape cannot be larger than \((\lambda_{k}-k+\ell)!(\lambda_{1}-\lambda_{k}+k-1)!\). Here, we think of \((\lambda_{k}-k+\ell)!\) as representing all possible hook lengths in the cells of \(\lambda_{k}\), i.e., \(H_{k}^{*}(\lambda)\leqslant(\lambda_{k}-k+\ell)!\), and \((\lambda_{1}-\lambda_{k}+k-1)!\) as representing all possible hook lengths in the non-cells to the right of \(\lambda_{k}\), i.e., \(H_{k}^{*}(\lambda^{c})\leqslant(\lambda_{1}-\lambda_{k}+k-1)!\). The denominator corrects for the hook lengths that do not appear in \(H_{k}^{*}(\lambda)\) and \(H_{k}^{*}(\lambda^{c})\). Indeed, when \(i<j=k\), the values \((\lambda_{i}-\lambda_{k}+k-i)\) are the only hook lengths that do not appear in \(H_{k}^{+}(\lambda)\) for all \(i\). Similarly, when \(k=i<j\), the values \((\lambda_{k}-\lambda_{j}+j-k)\) are the only hook lengths that do not appear in \(H_{k}^{+}(\lambda^{c})\) for all \(j\), which proves the claim.
By Theorem 14, we have
\[\eta_{1}^{\lambda} =\frac{(-1)^{\lambda_{1}}}{\lambda_{1}!}\sum\limits_{k\leqslant \lambda_{k}+1}(-1)^{\lambda_{k}}d_{\lambda_{1},\lambda_{1}-\lambda_{k}+k-1}\ \frac{(\lambda_{k}-k+\ell)!(\lambda_{1}-\lambda_{k}+k-1)!}{\prod\limits_{ \begin{subarray}{c}i<j\\ i=k\text{ or }j=k\end{subarray}}(\lambda_{i}-\lambda_{j}+j-i)}\] \[=\frac{(-1)^{\lambda_{1}}}{\lambda_{1}!}\sum\limits_{k\leqslant \lambda_{k}+1}(-1)^{\lambda_{k}}\binom{\lambda_{1}}{\lambda_{1}-\lambda_{k}+k -1}d_{\lambda_{k}-k+1,0}\ \frac{(\lambda_{k}-k+\ell)!(\lambda_{1}-\lambda_{k}+k-1)!}{\prod \limits_{\begin{subarray}{c}i<j\\ i=k\text{ or }j=k\end{subarray}}(\lambda_{i}-\lambda_{j}+j-i)}\] \[=(-1)^{\lambda_{1}}\sum\limits_{k\leqslant\lambda_{k}+1}(-1)^{ \lambda_{k}}\frac{d_{\lambda_{k}-k+1,0}}{(\lambda_{k}-k+1)!}\ \frac{(\lambda_{k}-k+\ell)!}{\prod\limits_{ \begin{subarray}{c}i<j\\ i=k\text{ or }j=k\end{subarray}}(\lambda_{i}-\lambda_{j}+j-i)}\] \[=(-1)^{\lambda_{1}}\sum\limits_{k\leqslant\lambda_{k}+1}(-1)^{ \lambda_{k}}\binom{\lambda_{k}-k+\ell}{\ell-1}(\ell-1)!\ \frac{d_{\lambda_{k}-k+1,0}}{\prod\limits_{ \begin{subarray}{c}i<j\\ i=k\text{ or }j=k\end{subarray}}(\lambda_{i}-\lambda_{j}+j-i)}\] \[=\sum\limits_{k\leqslant\lambda_{k}+1}(-1)^{k}\ \left((-1)^{\lambda_{1}- \lambda_{k}+k}d^{\prime}_{\lambda_{k}-k+\ell,\ell-1}\right)\left(\frac{1}{\prod \limits_{\begin{subarray}{c}i<j\\ i=k\text{ or }j=k\end{subarray}}(\lambda_{i}-\lambda_{j}+j-i)}\right).\]
Let \(W(\lambda)^{k,1}\) be the submatrix of \(W(\lambda)\) obtained by removing the first column and \(k\)th row. Then we have
\[=\frac{1}{\det V(\lambda)}\sum\limits_{k\leqslant\lambda_{k}+1}(-1)^{k+1}\ \left((-1)^{\lambda_{1}-\lambda_{k}+k-1}d^{\prime}_{\lambda_{k}-k+\ell,\ell-1} \right)\ \det W(\lambda)^{k,1}.\]
Since \(W(\lambda)_{k,1}=(-1)^{\lambda_{1}-\lambda_{k}+k-1}d^{\prime}_{\lambda_{k}-k+ \ell,\ell-1}\), the summation is simply the Laplace expansion of \(W(\lambda)\) along the first column, that is,
\[=\frac{\det W(\lambda)}{\det V(\lambda)}=\det W(\lambda)V(\lambda)^{-1},\]
which completes the proof.
A standard result in the theory of symmetric functions is that \(f^{\lambda}\) can be expressed as a determinant via the Jacobi-Trudi identity (see [48, Cor. 7.16.3]), which gives a determinantal expression for [48, Ex. 7.63a].
**Corollary 22**.: _[_41_]_ _For all shapes \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{\ell})\), we have_
\[d_{\lambda}=|\lambda|!\;\det\left(\frac{1}{(\lambda_{i}-i+j)!}\right)_{i,j=1}^{\ell}\;\det W(\lambda)V(\lambda)^{-1}.\]
However, the fact that \(d_{\lambda}\) can be written as the determinant of a \(\ell\times\ell\) matrix actually goes back to a result of Goulden and Jackson concerning determinantal expressions of _immanants_, a \(\lambda\)-generalization of the permanent and determinant defined as follows:
\[\operatorname{Imm}_{\lambda}(A):=\sum_{\pi\in S_{n}}\chi^{\lambda}(\pi)\prod_{i=1}^{n}A_{i,\pi(i)}\]
where \(A\) is any \(n\times n\) matrix. Indeed, if we consider the adjacency matrix of the complete graph \(K_{n}=J_{n}-I_{n}\) where \(J_{n}\) is the \(n\times n\) all-ones matrix, then we have
\[\operatorname{Imm}_{\lambda}(K_{n})=\sum_{\pi\in S_{n}}\chi^{\lambda}(\pi) \prod_{i=1}^{n}(K_{n})_{i,\pi(i)}=\sum_{\pi\in D_{n}}\chi^{\lambda}(\pi)=d_{ \lambda}.\]
Via the MacMahon master theorem, Goulden and Jackson [16, Theorem 2.1] produce a \(\ell\times\ell\) matrix \(A^{\prime}\) for which \(\operatorname{Imm}_{\lambda}(K_{n})=\det A^{\prime}\). The foregoing shows that this determinant has a natural combinatorial interpretation that can also be evaluated efficiently.
We note that the original Ku-Wales theorem (see Section 1) appears to be somewhat related to a dominance result of Pate [40] on normalized immanant inequalities of positive semi-definite Hermitian matrices, a classical subject initiated by Schur. In particular, Pate shows that \(\operatorname{Imm}_{\mu}(B)/f^{\mu}\leqslant\operatorname{Imm}_{\lambda}(B) /f^{\lambda}\) for all \(\mu\nearrow\lambda\) such that \(\mu_{\ell(\mu)}=1\) and \(B\) is positive semi-definite. James [20] shows for any \(\mu,\lambda\vdash n\), that if \(\operatorname{Imm}_{\mu}(B)/f^{\mu}\leqslant\operatorname{Imm}_{\lambda}(B) /f^{\lambda}\), then \(\mu\trianglelefteq\lambda\), thus Pate's result is a partial converse (the full converse is known to be false). We are unsure how exactly the Ku-Wales theorem fits into this literature, as \(K_{n}\) is not positive semi-definite; nevertheless, it is curious that the absolute values of its immanants still obey an immanant dominance property with respect to the dominance ordering on partitions.
For each \(\lambda\vdash n\), let \(\operatorname{Imm}_{\lambda}(xI-A)/f^{\lambda}\) be the _(normalized) immanantal polynomial_ of \(A\). Without much additional effort we can derive explicit formulas for the coefficients of \(\operatorname{Imm}_{\lambda}(xI-K_{n})\), which are also of combinatorial significance. Indeed, for each \(0\leqslant k\leqslant n\), it is not difficult to show that the coefficient of \((-1)^{n-k}x^{k}\) is the \(\lambda\)-eigenvalue of the \(k\)_-derangement graph_, i.e., the Cayley graph of \(S_{n}\) generated by all permutations with precisely \(k\) fixed points, which has been the subject of many papers in algebraic graph theory.
**Theorem 23**.: _For all \(\lambda\vdash n\), we have_
\[\operatorname{Imm}_{\lambda}(xI-K_{n})/f^{\lambda}=\sum_{k=0}^{n}(-1)^{n-k}\left[\sum_{\mu\nearrow^{k}\lambda}f^{\mu}\frac{s_{\mu}^{*}(\lambda)}{(n-k)!}\eta_{1}^{\mu}\right]x^{k}.\]
Proof.: Recall that the \(s_{\star}^{\star}\)'s are the shifted Schur polynomials, i.e., the unnormalized shifted Jack polynomials at \(\alpha=1\), and that \(f^{\lambda}\) is the number of standard Young tableaux of shape \(\lambda\). By the definition of the immanantal polynomial, we have
\[\operatorname{Imm}_{\lambda}(xI-K_{n})/f^{\lambda}=\frac{1}{f^{\lambda}}\sum_ {\pi\in S_{n}}\chi^{\lambda}(\pi)\prod_{i=1}^{n}(xI-K_{n})_{i,\pi(i)}\]
For any \(k\)-set \(I\in\binom{[n]}{k}\), let \(S_{n}^{I}\subseteq S_{n}\) be the set of permutations such that \(\sigma(i)=i\) for all \(i\in I\) and \(\sigma(j)\neq j\) for all \(j\notin I\).
\[=\frac{1}{f^{\lambda}}\sum_{k=0}^{n}\sum_{I\subseteq{[n]\choose k}}\sum_{ \sigma\in S_{n}^{I}}x^{k}(-1)^{n-k}\chi^{\lambda}(\sigma)\]
For any character \(\chi\) of \(S_{n}\), let \(\chi\!\!\downarrow_{S_{n-k}}\) denote the restriction to the subgroup \(S_{n-k}\).
\[=\frac{1}{f^{\lambda}}\sum_{k=0}^{n}{n\choose k}\sum_{\pi\in D_{n-k}}x^{k}(-1) ^{n-k}\chi^{\lambda}\!\!\downarrow_{S_{n-k}}(\pi)\]
To compute this restriction we iterate the branching rule \(k\) times (see [43], for example). It is well-known that the multiplicity of \(\mu\vdash(n-k)\) in the restriction of \(\lambda\) to \(S_{n-k}\) is \(f^{\lambda/\mu}\), the number of standard skew tableaux of skew shape \(\lambda/\mu\), equivalently, the number of distinct ways of successively adding \(k\) outer corners to obtain \(\lambda\) starting from \(\mu\). This gives
\[=\frac{1}{f^{\lambda}}\sum_{k=0}^{n}x^{k}(-1)^{n-k}{n\choose k} \sum_{\mu\nearrow^{k}\lambda}f^{\lambda/\mu}\sum_{\pi\in D_{n-k}}\chi^{\mu}(\pi)\] \[=\frac{1}{f^{\lambda}}\sum_{k=0}^{n}(-1)^{n-k}{n\choose k}\sum_{ \mu\nearrow^{k}\lambda}f^{\lambda/\mu}f^{\mu}\eta_{1}^{\mu}x^{k}\]
where \(\mu\) ranges over all shapes on \(n-k\) cells obtained by removing \(k\) outer corners successively from \(\lambda\). Note that \(\sum_{\mu\nearrow^{k}\lambda}f^{\lambda/\mu}f^{\mu}=f^{\lambda}\), thus the coefficients are convex combinations of \(\mu\)-eigenvalues of \(\Gamma_{n-k,1}\). By [36, Proposition 5.2], we may write
\[=\sum_{k=0}^{n}(-1)^{n-k}\sum_{\mu\nearrow^{k}\lambda}f^{\mu}\frac{s_{\mu}^{* }(\lambda)}{(n-k)!}\ \eta_{1}^{\mu}x^{k},\]
which completes the proof.
For small \(k\) we obtain reasonable expressions as positive linear combinations of eigenvalues of \(\Gamma_{n-k,1}\); however, these formulas quickly become unwieldy as \(k\) increases. On the other hand, when \(k\) is close to \(n\), these coefficients can also be efficiently computed through other means, as \(|\mu|\) is small. For example, the coefficient of \(x^{n-2}\) is the \(\lambda\)-eigenvalue of the well-known _transposition graph_, i.e., the Cayley graph of \(S_{n}\) generated by all its transpositions. It would be interesting to obtain more explicit expressions for \([x^{k}]\)\(\operatorname{Imm}_{\lambda}(xI-K_{n})/f^{\lambda}\). One barrier is that nice expressions for \(s_{\mu}^{*}(\lambda)/(n-k)!\) are only known in special cases, not for arbitrary \(\mu\subseteq\lambda\), which itself is an open question. For more details on the \(k\)-derangement graphs, we refer the reader to the recent survey [31].
## 9 Eigenvalues of the Perfect Matching Derangement Graph
We now move onto the perfect matching derangement graph, i.e., the case where \(\alpha=2\). The situation here is complicated by the fact that the upper and lower hook lengths do not coincide. We first consider the two row case, which has an interesting connection to derangements of the _hyperoctahedral group_\(B_{n}=\mathbb{Z}_{2}\wr S_{n}\leqslant S_{2n}\), that is, the automorphism group of the hypercube \(\{\pm 1\}^{n}\). Below we recall some results of Chen and Stanley [4] that will give a combinatorial interpretation of the two-row Jack derangement sums for \(\alpha=2\).
The elements \(w\) of \(B_{n}\) can be represented as _signed permutations_, i.e., a permutation of \([n]\) along with a plus or minus sign attached to each symbol. To represent the signing,
we adopt the shorthand \(\bar{i}:=i^{-}\) and \(i:=i^{+}\), e.g., \((2,4,\bar{5})(\bar{3})(1,\bar{6})\in B_{6}\). Following Chen and Stanley [4], we say that \(w\in B_{n}\) is _balanced_ if each of its cycles has an even number of minus signs. For example, the element \((\bar{2},4,\bar{5})(3)(\bar{1},\bar{6})\) is balanced, whereas \((\bar{2},4,\bar{5})(\bar{3})(1,6)\) is unbalanced. By [4, Cor. 2.4], the number of balanced elements of \(B_{n}\) equals \((2n-1)!!\). We say \(w\in B_{n}\) is _totally unbalanced_ if each of its cycles is unbalanced. By [4, Prop. 3.1], the number of totally unbalanced elements of \(B_{n}\) also equals \((2n-1)!!\). Chen and Stanley define an element \(w\in B_{n}\) to be \(k\)_-separable_ if the cycles of \(w\) can be partitioned into two parts \(A,B\) such that every cycle of \(A\) is balanced and the sum of the cycle lengths of the cycles in \(B\) equals \(k\). Moreover, they show that these are precisely the elements of \(B_{n}\) that fix some \(k\)-dimensional subcube \(\{\pm 1\}^{k}\) of \(\{\pm 1\}^{n}\)[4, Prop. 2.2]. Note that if \(k=0\), then a \(0\)-dimensional subcube is taken to be a vertex of the hypercube.
Let \(\mathcal{E}_{n}\) be the set of derangements (fixed-point-free elements) of \(B_{n}\), i.e.,
\[\mathcal{E}_{n}=\{w\in B_{n}:w(j)\neq j\text{ for all }j\in[n]\}.\]
It is well-known that \(|\mathcal{E}_{n}|=d_{n,0}^{\langle 2\rangle}\). Every totally unbalanced element of \(B_{n}\) belongs to \(\mathcal{E}_{n}\). The combinatorial proof given on [4, pg. 70] extends to a bijection between balanced signed permutations of \(B_{n}\) and \(\mathcal{M}_{2n}\) that also maps fixed-point-free balanced signed permutations of \(B_{n}\) to _perfect matching derangements_ of \(\mathcal{M}_{2n}\):
\[\mathcal{D}^{\prime}_{2n}:=\{m\in\mathcal{M}_{2n}:m\cap m^{*}=\emptyset\}\]
where \(m^{*}=\{\{1,\bar{1}\},\{2,\bar{2}\},\cdots\{n,\bar{n}\}\}\). Finally, recall that our combinatorial proof in Section 4 of Theorem 1 at \(\alpha=2\) shows \(|\eta_{2}^{\lambda}|=|\mathcal{D}^{\prime}_{\lambda}|\) where \(\mathcal{D}^{\prime}_{\lambda}\) is the set of derangements of \(\lambda\)-colored perfect matchings \(\mathcal{M}_{\lambda}\).
The foregoing observations give an interesting interpretation of \(\eta_{2}^{\lambda}\) for two-row shapes \((n-k,k)\) in terms of \(B_{n-k}\)-derangements that stabilize a fixed hypercube \(\{\pm 1\}^{k}\subseteq\{\pm 1\}^{n-k}\).
**Theorem 24**.: _For all two-row shapes \(\lambda=(n-k,k)\), we have_
\[\eta_{2}^{\lambda} =(-1)^{k}\sum_{i=0}^{k}\binom{k}{i}(2i-1)!!\ |\mathcal{D}^{\prime}_{2(n-k-i)}|\] \[=(-1)^{k}\ |\{\sigma\in\mathcal{E}_{n-k}:\sigma\text{ fixes }\{\pm 1 \}^{k}\}|.\]
Note that when \(n\) is even and \(k=n/2\), we have \(\eta_{2}^{(n/2,n/2)}=(-1)^{n/2}|\mathcal{E}_{n/2}|\), hence the two-row shapes interpolate between the derangements that stabilize a fixed vertex of the hypercube and the derangements that stabilize the whole hypercube.
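Theorem 24 also lends itself to a quick numerical check. In the Python sketch below (ours; the inclusion-exclusion counts used for \(|\mathcal{D}^{\prime}_{2m}|\) and \(|\mathcal{E}_{m}|\) are standard facts rather than formulas from the text), the first sum of the theorem reproduces the \(k=n/2\) endpoint of the interpolation noted above:

```python
from math import comb, factorial

def dfact(m):                               # double factorial, with (-1)!! = 1
    return 1 if m <= 0 else m * dfact(m - 2)

def pm_derangements(m):                     # |D'_{2m}|: matchings of K_{2m} sharing no edge with m*
    return sum((-1) ** j * comb(m, j) * dfact(2 * (m - j) - 1) for j in range(m + 1))

def B_derangements(m):                      # |E_m|: fixed-point-free signed permutations in B_m
    return sum((-1) ** j * comb(m, j) * 2 ** (m - j) * factorial(m - j) for j in range(m + 1))

def eta2_two_row(n, k):                     # first equality in Theorem 24, lambda = (n - k, k)
    return (-1) ** k * sum(comb(k, i) * dfact(2 * i - 1) * pm_derangements(n - k - i)
                           for i in range(k + 1))

for n in range(2, 13, 2):                   # eta_2^{(n/2, n/2)} = (-1)^{n/2} |E_{n/2}|
    assert eta2_two_row(n, n // 2) == (-1) ** (n // 2) * B_derangements(n // 2)
```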
Recall that for \(\alpha=1\) we ignored all indices \(j\) corresponding to \(\leftarrow\) moves in the lattice path induced by \(\lambda\), since \(H^{1}_{*}(\lambda,j)=0\) in these cases. This is no longer the case for \(\alpha=2\); however, we can still identify the non-vanishing terms via lattice paths. Here, instead of each vertical move \(\downarrow\) descending by a single row, each vertical move descends by two rows, and as before we ignore horizontal moves \(\leftarrow\) if they border a row of \(\lambda\). For example, if \(\lambda=(10,6,3,1)\), then we ignore the indices \(j=5,6,7\) corresponding to arrows that border the second row (see Figure 2 at \(\alpha=2\)).
**Corollary 25**.: _For all \(\lambda\vdash n\) such that each part of \(\lambda^{\prime}\) is even, let \(\mu\) be the partition obtained from \(\lambda\) by removing all rows of even index, and let \(\mu=(a_{1},\ldots,a_{d}\ |\ b_{1},\ldots,b_{d})\). Then_
\[\eta^{\lambda}_{2}=(-1)^{|\mu|}\sum_{i\leqslant\mu_{i}+1}(-1)^{\mu_{i}}p^{(2)}_{\mu_{1},a_{1}-a_{i}}\ H^{+}_{i}(\mu),\]
_where \(p^{(2)}_{m,k}\) is the probability that an element of \(B_{m}\) has precisely \(k\) fixed points._
As an aside, we note that similar results can be shown for all \(\alpha\in\mathbb{N}_{+}\) by considering the rencontres numbers associated with the group \(S_{\alpha}\wr S_{n}\), and in these cases one can derive determinantal formulas as we did in the previous section _mutatis mutandis_.
We conclude this section with a somewhat more complicated formula for the eigenvalues of the perfect matching derangement graph in terms of extended lower hook products. To see why we should not immediately expect an expression as simple as the \(\alpha=1\) case for all \(\lambda\), it is instructive to consider the one-row case:
\[\eta^{(n)}_{2}=|\mathcal{D}^{\prime}_{2n}|=\sum_{k=0}^{n}(-1)^{k}p^{(2)}_{n,k }(2k-1)!!(2(n-k)-1)!!.\]
Recall that for \(\alpha=1\), this summation is just a single term \(\eta^{(n)}_{1}=p_{n,0}\,n!=d_{n,0}\). Indeed, one expects a more involved expression for \(\alpha=2\), due to the fact that even though we have \(|\mathcal{D}^{\prime}_{2n}|=(2n-1)!!(1/\sqrt{e}+o(1))\), the \(o(1)\) term does not converge to \(0\) nearly as fast as in the case of permutation derangements. In particular, the \(n\)th partial sum of the expansion of \(e^{-1/2}\) is just an approximation of the probability of drawing a derangement from \(\mathcal{M}_{2n}\).
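The one-row sum above can likewise be confirmed against a direct inclusion-exclusion count of \(\mathcal{D}^{\prime}_{2n}\). In the sketch below (ours; the expression for \(p^{(2)}_{n,k}\) assumes the standard count of fixed-point-free elements of \(B_{n-k}\), consistent with Corollary 25, and the helpers repeat those of the previous sketch for self-containment):

```python
from fractions import Fraction
from math import comb, factorial

def dfact(m):
    return 1 if m <= 0 else m * dfact(m - 2)

def B_derangements(m):
    return sum((-1) ** j * comb(m, j) * 2 ** (m - j) * factorial(m - j) for j in range(m + 1))

def p2(n, k):          # p^{(2)}_{n,k}: probability an element of B_n has exactly k fixed points
    return Fraction(comb(n, k) * B_derangements(n - k), 2 ** n * factorial(n))

def pm_derangements(m):
    return sum((-1) ** j * comb(m, j) * dfact(2 * (m - j) - 1) for j in range(m + 1))

for n in range(1, 10):  # eta_2^{(n)} = |D'_{2n}|
    lhs = sum((-1) ** k * p2(n, k) * dfact(2 * k - 1) * dfact(2 * (n - k) - 1)
              for k in range(n + 1))
    assert lhs == pm_derangements(n)
```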
In light of the lattice path interpretation given above, it will be useful to think of the parts of \(\lambda\) as being grouped into consecutive pairs \(\lambda_{2i-1},\lambda_{2i}\) where the difference \(\lambda_{2i-1}-\lambda_{2i}\) between consecutive rows gives the order of the approximation, roughly speaking. By default, if \(\lambda\) has less than \(2i\) parts, then we set \(\lambda_{2i}:=0\). We define a shifted analogue of the extended lower hook products as follows
\[H^{i}_{+}(\lambda,j):=H^{i}_{*}(\lambda,j)H^{i}_{*}(\lambda^{c},j),\]
i.e., the product obtained by subtracting \(\alpha j\) from each factor of \(H^{i}_{+}(\lambda)\).
**Theorem 26** (Eigenvalues of \(\boldsymbol{\Gamma_{n,2}}\)).: _For all \(\lambda\vdash n\), we have_
\[\eta^{\lambda}_{2}=(-1)^{n}\sum_{\begin{subarray}{c}i=1\\ 2i-1\leqslant\lambda_{2i-1}+1\end{subarray}}(-1)^{\lambda_{2i-1}}\sum_{ \begin{subarray}{c}j=0\\ 2i-1+j\leqslant\lambda_{1}\end{subarray}}(-1)^{j}\ p^{(2)}_{\lambda_{1},a_{1}-a _{i}+j}\ H^{2i-1}_{+}(\lambda,j).\]
One of the main obstacles towards getting an expression identical to the \(\alpha=1\) case is that the probability distribution \(\{p^{(2)}_{n,i}\}\) is defined over the \((2n)!!\) elements of \(B_{n}\), not the \((2n-1)!!\) elements of \(S_{2n}/B_{n}\cong\mathcal{M}_{2n}\). Ideally, we seek a probability distribution \(\{p^{\prime}_{n,i}\}\) over perfect matchings such that \(p^{\prime}_{n,i}\) is the probability of drawing uniformly at random a perfect matching from \(\mathcal{M}_{2n}\) that has \(i\) edges in common with \(m^{*}\). It seems that exact formulas for these probabilities cannot be expressed as succinctly as in the case of permutations (see the discussion above as well as the proof of Proposition 16, for example). We leave it as an open question whether there is a more elegant formula for the \(\alpha=2\) case; nevertheless, we have given a closed form that is suitable for calculation and applications, as we have demonstrated in the previous sections.
Future Work and Open Questions
It may be worthwhile to study the Jack derangements, colored permutations, and colored perfect matchings from a purely combinatorial point of view. Indeed, one can verify that many of the well-known identities for derangements admit Jack analogues, and a closer study of their combinatorics may give more elegant formulas for the Jack derangements.
In [50], Sniady studies the Jack characters from the viewpoint of _asymptotic representation theory_. Like the classical derangements, our expressions for the Jack derangements are quite amenable to asymptotic analysis, so it seems natural to consider the asymptotics of the Jack derangements and how they relate to Sniady's results (see also [6, 8]).
As discussed earlier, a byproduct of our main results at \(\alpha=1\) is a simple combinatorial form for \(\operatorname{Imm}_{\lambda}(J-I)\), which begs the question of whether other adjacency matrices have immanants with nice combinatorial properties. In particular, can one find nice combinatorial expressions for the immanants of adjacency matrices \(A(G_{n})\) of graph families \(\{G_{n}\}\) besides \(A(K_{n})=J_{n}-I_{n}\)? We refer the reader to [49] for more details on combinatorial interpretations of immanants. Along these lines, it would be quite interesting to find other unions of conjugacy classes \(S\subseteq S_{n}\) such that the \(\lambda\)-eigenvalues of the normal Cayley graph \(\operatorname{Cay}(S_{n},S)\) are counted by some "\(\lambda\)-colored variant" of \(S\).
Let \(GL(n,q)\) be the group of \(n\times n\) invertible matrices of \(\mathbb{F}_{q}^{n\times n}\). We say that \(g\in G\) is _eigenvalue-free_ if \(\det(\lambda I-g)\neq 0\) for all \(\lambda\in\mathbb{F}_{q}\) (see [34] for a more detailed discussion). One can view such elements as a \(q\)-analogue of the derangements of \(S_{n}\), and since the set of eigenvalue-free elements is a union of conjugacy classes of \(GL(n,q)\), the eigenvalues of the normal Cayley graph generated by eigenvalue-free elements can be understood via the character theory of \(GL(n,q)\). Like the symmetric group, there exists a characteristic map from the class algebra onto a Hopf algebra that allows one to get concrete (albeit extremely complicated) expressions for the irreducible characters of \(GL(n,q)\) via symmetric function manipulations (see [17], for example). In particular, a basis for this Hopf algebra can be defined in terms of _Macdonald polynomials_, which can be seen as a \(q\)-analogue of the Jack polynomials [32, Ch. VI]. A first step towards a full \(q\)-analogue of our main results would be to generalize what we have done here to Macdonald polynomials. Perhaps the combinatorics that arise in this work may give some insight as to what the right generalization should be.
|
2306.10632 | Embedding quantum optimization problems using AC driven quantum
ferromagnets | Analog quantum optimization methods, such as quantum annealing, are promising
and at least partially noise tolerant ways to solve hard optimization and
sampling problems with quantum hardware. However, they have thus far failed to
demonstrate broadly applicable quantum speedups, and an important contributing
factor to this is slowdowns from embedding, the process of mapping logical
variables to long chains of physical qubits, enabling arbitrary connectivity on
the short-ranged 2d hardware grid. Beyond the spatial overhead in qubit count,
embedding can lead to severe time overhead, arising from processes where
individual chains ``freeze" into ferromagnetic states at different times during
evolution, and once frozen the tunneling rate of this single logical variable
decays exponentially in chain length. We show that this effect can be
substantially mitigated by local AC variation of the qubit parameters as in the
RFQA protocol (Kapit and Oganesyan, Quant. Sci. Tech. \textbf{6}, 025013
(2021)), through a mechanism we call Symphonic Tunneling. We provide general
arguments and substantial numerical evidence to show that AC-driven multi-qubit
tunneling is dramatically faster than its DC counterpart, and since ST is not a
1d-specific mechanism, this enhancement should extend to clusters of coupled
chains as well. And unlike a uniform transverse field, in higher dimensions
this method cannot be efficiently simulated classically. We explore schemes to
synchronize the AC tones within chains to further improve performance.
Implemented at scale, these methods could significantly improve the prospects
for achieving quantum scaling advantages in near-term hardware. | Gianni Mossi, Vadim Oganesyan, Eliot Kapit | 2023-06-18T19:52:24Z | http://arxiv.org/abs/2306.10632v1 | # Embedding quantum optimization problems using AC driven quantum ferromagnets
###### Abstract
Analog quantum optimization methods, such as quantum annealing, are promising and at least partially noise tolerant ways to solve hard optimization and sampling problems with quantum hardware. However, they have thus far failed to demonstrate broadly applicable quantum speedups, and an important contributing factor to this is slowdowns from embedding, the process of mapping logical variables to long chains of physical qubits, enabling arbitrary connectivity on the short-ranged 2d hardware grid. Beyond the spatial overhead in qubit count, embedding can lead to severe time overhead, arising from processes where individual chains "freeze" into ferromagnetic states at different times during evolution, and once frozen the tunneling rate of this single logical variable decays exponentially in chain length. We show that this effect can be substantially mitigated by local AC variation of the qubit parameters as in the RFQA protocol (Kapit and Oganesyan, Quant. Sci. Tech. **6**, 025013 (2021)), through a mechanism we call Symphonic Tunneling. We provide general arguments and substantial numerical evidence to show that AC-driven multi-qubit tunneling is dramatically faster than its DC counterpart, and since ST is not a 1d-specific mechanism, this enhancement should extend to clusters of coupled chains as well. And unlike a uniform transverse field, in higher dimensions this method cannot be efficiently simulated classically. We explore schemes to synchronize the AC tones within chains to further improve performance. Implemented at scale, these methods could significantly improve the prospects for achieving quantum scaling advantages in near-term hardware.
## 1 Introduction
While enormous progress has been made in gate model quantum computing in recent years, quantum advantage for practical, real-world problems with such devices has yet to be demonstrated. This is largely due to the extreme precision requirements and noise sensitivity inherent to the gate model. In contrast, _analog_ quantum optimizers, including quantum annealers (QA) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] and more recent Rydberg atom systems [11, 12, 13], are much more resilient and can routinely solve problems with hundreds or even thousands of variables. While not computationally universal, a huge array of NP-hard and NP-complete problems can be solved using these systems, with the likely mechanism of quantum advantage lying in collective quantum tunneling as a way to escape local minima once the system becomes glassy [14, 15, 16, 17]. Recent experimental results [10, 13, 18] on large problems in both neutral atom and flux qubit analog devices suggest quantum scaling advantage over classical heuristics, for problems which are short ranged and map natively to the underlying hardware graph. However, demonstration of _broadly applicable_ quantum scaling advantage has remained elusive, particularly for problems defined on abstract graphs with long ranged connectivity.
To attack these problems, encodings such as minor embedding are required [19, 20, 21], where single logical variables are mapped to long ferromagnetically coupled chains (similar, antiferromagnetic schemes exist for Rydberg arrays [22, 23]). These methods have an inescapable quadratic overhead in qubit count, but more insidiously, they can introduce severe time penalties as well [21, 24, 25, 26]. The essential mechanism for this in large problems is that individual chains will "freeze" into ferromagnetic states at different times during evolution (note of course that the whole system will eventually freeze as the transverse field driving quantum dynamics is reduced to zero). In the process, transverse field corrections and other effects can tip them into the "wrong" state, and once frozen the tunneling rate of these single variables decays exponentially in chain length. This leads to an ugly tradeoff in the strength of the intra-chain coupling \(J_{c}\). If it is too weak, the system becomes more noise sensitive and can have (logically meaningless) broken chains in its ground state. Larger \(J_{c}\) values however, tend to freeze earlier and have even worse scaling for multiqubit tunneling. In the worst cases, the time to solution can scale exponentially in the total system size, e.g. \(\exp{(N^{2})}\) for \(N\) logical variables [25]. And unlike the spatial overhead, this problem cannot be solved by simply making the system larger; a more fundamental change to the quantum optimization protocol is required.
We have demonstrated previously the use of randomised dynamical protocols to ameliorate exponential scaling, both analytically and numerically in simple 0-dimensional problems of \(N\) coupled qubits, where level structure exhibits an avoided level crossing with a gap to other excited states. The Random Field (or Radio Frequency) Quantum Annealing (RFQA) protocol[27] utilizes this separation of scales to accelerate the mixing across the level crossing with no appreciable heating, thus
reducing the "difficulty exponent"
\[\Upsilon\equiv\frac{1}{N}\log_{2}\Big{(}\frac{1}{\Gamma(N)}\Big{)}, \tag{1}\]
as extracted from the tunnelling rate \(\Gamma(N)\) (more on this in Section 2), which is constrained in those structureless models by the optimal adiabatic value, e.g. \(\Upsilon\geq\frac{1}{2}\) for the Grover problem [28]. While this optimal value is exponentially fragile to control precision, RFQA is not. By contrast, quantum tunnelling of logical qubits based on the two groundstates of the Ising chain of length \(L\) is governed by a highly structured Hamiltonian and we therefore expect to be able to accelerate the dynamics more efficiently. To this end we explored a number of _structured_ protocols inspired both by basic understanding of correlated dynamics of small spin clusters and practical considerations of hardware implementation. Remarkably, we find significant reductions in \(\Upsilon\), seemingly to arbitrarily small values (without appreciable heating or melting of ferromagnetic order - see Section 4), as we vary drive strength and other details, see Fig. 1. We refer to the general scheme of structured/optimized multifrequency protocols as Symphonic Tunnelling.
To demonstrate the potential of RFQA/ST to mitigate the time overhead of embedding, we report the results of an extensive series of numerical simulations, focusing
Figure 1: Difficulty exponent \(\Upsilon\) obtained from the rate of complete magnetization reversal (see Eq. 1 for definition and Sec. 4 for further details) as a function of oscillations’ amplitude \(\alpha\) for three different driving protocols defined in Sec. 3, for three different values of transverse field \(\kappa\). A vanishing or negative \(\Upsilon\) implies that an exponential fit is no longer good (at least for the ranges simulated), indicating a potential crossover to polynomial scaling.
on the tunneling of a single one-dimensional ring in the ferromagnetic (frozen) phase, where the base DC tunneling rate decays exponentially in length \(L\). We focus on single chains because, relying on AC dynamics, RFQA cannot be simulated with quantum Monte Carlo [29, 30, 31, 32, 33], and thus embedded _problems_ are firmly out of reach of classical simulation techniques at any meaningful scales. Likewise, while we expect RFQA methods to accelerate problem solving for native problems, benchmarking that in simulation is problematic because small instances (e.g. \(N\leq 20\)) typically do not show the expected large-scale exponential difficulty scaling and instances numerically mined for very small gaps are generally fragile to perturbations [34]. The exponential scaling of collective tunneling in 1d chains, in contrast, is well-controlled and obvious at small \(L\).
Further, the key slowdown mechanism of freezing can be captured in a single chain, and we average over random detuning as a proxy for the disordered environment- and inability to find precise resonances-of real analog problems. And since ST is fundamentally _not_ a 1d-specific mechanism, it will generalize to clusters of coupled chains in larger systems. Finally, 1d chains are amenable to matrix product simulation techniques (in this case, time-evolving block decimation [35, 36, 37, 38, 39]), which allow us to extend our simulations out to much larger system sizes than are possible in full-wavefunction evolutions. We demonstrate parameter regimes where the chain remains frozen-in that ferromagnetic correlations do not vanish at large separation-but collective tunneling is accelerated to the point that polynomial and exponential scaling cannot be distinguished from the data.
The rest of this paper is organized as follows. In the next section, we introduce the basic 1d collective tunneling problem and the AC protocols we will use to attack it. Following that, we present details of our numerical simulation methods, including full wavefunction simulations and matrix product methods. We then present numerical results for a range of system parameters and protocols. While we do not have analytical arguments to predict performance in arbitrarily large systems, the robustness of our observed speedups (which persist out to the largest system sizes we can simulate) strongly suggest significant performance improvements at the application scale. We finally detail some of the considerations for a superconducting hardware implementation, and offer concluding remarks.
## 2 Ising Model of reverse annealing - quasi-static DC protocol
We consider quantum dynamics of a single ferromagnetic ring, with total Hamiltonian
\[H\left(t\right)=-\sum_{j}\left[\kappa_{j}\left(t\right)X_{j}+h_{j}\left(t \right)Z_{j}+J_{j}\left(t\right)Z_{j}Z_{j+1}\right], \tag{2}\]
with time dependent couplings chosen to accelerate the mixing of two ferromagnetic ground states without any appreciable excitation of excited states. For ease of comparison we will benchmark all protocols using the standard "reverse annealing" quench, whereby we initialize the system in a definite all-down classical groundstate
with no transverse field, turn the transverse field on and off slowly, and examine the probability of complete magnetization reversal
\[P(t_{f})\equiv|\langle\uparrow\cdots\uparrow|U(t)|\downarrow\cdots\downarrow \rangle|^{2}. \tag{3}\]
Importantly, we take great precautions to avoid exciting the problem out of the low energy ferromagnetic doublet, i.e.
\[P_{heat}\equiv 1-|\langle\uparrow\cdots\uparrow|U(t)|\downarrow\cdots\downarrow \rangle|^{2}-|\langle\downarrow\cdots\downarrow|U(t)|\downarrow\cdots \downarrow\rangle|^{2}\ll P(t_{f}). \tag{4}\]
We do not attempt to measure heating during evolution, but since no cooling mechanism is present to remove stray excitations, measuring it at the end is sufficient.
In what follows we focus on the statistically significant average probability of success, whereby several instances of the problem are considered with the resultant \(P(t_{f})\) averaged over the ensemble of static and dynamic variations of potentials. We average \(P\left(t_{f}\right)\) over fixed detuning \(h\) (corresponding to a uniform \(Z\) bias \(h/2L\)) drawn from the uniform range \(\left\{-2/\sqrt{L},2/\sqrt{L}\right\}\) (this scaling choice is justified below), for both the DC and RFQA cases. We calibrate the analysis of complicated protocols defined below against the simplest uniform quasi-static sweep characterized by the maximum value of transverse field \(\kappa_{0}\) - see Fig. 2 and, importantly, the value of detuning \(W\equiv 2\sum_{i}h_{i}\). In this simple case we can accurately estimate (see App. A of [27])
\[P(t_{f})\approx\int dW\mathcal{P}(W)\frac{\Omega_{0}^{2}}{W}\sin^{2}(Wt_{f}) \rightarrow\pi\frac{\Omega_{0}^{2}}{W}t_{f}\equiv\Gamma_{0}t_{f}, \tag{5}\]
which ignores ramp time \(t_{ramp}\) (in practice, we fix \(t_{f}=6(2\pi N)=6t_{ramp}\)) and assumes a relatively smooth distribution of detuning \(\mathcal{P}(W)\) and sufficiently long \(t_{f}\) to sample it efficiently. The most important part of this expression is the many-body matrix element connecting the two magnetization states, which for the uniform field case is well-captured by the scaling form
\[\Omega_{0}\left(L\right)\propto\frac{\kappa_{0}}{L^{\kappa_{0}/J}}\left(\frac{ \kappa_{0}}{J}\right)^{L-1},\ \ \Gamma_{0}\propto\frac{\Omega_{0}^{2}}{W}. \tag{6}\]
This matrix element figures prominently in the standard textbook formulation of symmetry restoration via path integral instantons, but can also be obtained by direct high order perturbation theory; at large \(L\) the polynomial prefactor is largely irrelevant but included for completeness. Since \(\Omega_{0}(L)\) is normally exponentially small in \(L\), the total number of spins flipping, we can use the difficulty exponent \(\Upsilon=(1/L)\log_{2}(1/\Gamma(L))\) to help isolate the scaling advantage of various protocols. Thus, the uniform field protocol discussed thus far corresponds to \(\Upsilon_{0}=2\log_{2}J/\kappa_{0}\), which only vanishes at the quantum critical point, where the order parameter vanishes, and with it the ability to use the chains for embedding logical qubits!
Before turning to the more powerful protocols that appear to deliver dramatic reduction in \(\Upsilon\) without destroying the logical qubits in the process we close this section
by defining two useful quantitative tools. One is the so called "time-to-solution" (TTS) that is a nice proxy[40] for \(\Gamma\)
\[\mathrm{TTS}\equiv t_{f}\frac{\ln(1-0.99)}{\ln(1-\langle P(t_{f})\rangle)} \sim\frac{t_{f}}{\langle P(t_{f})\rangle}\to 1/\Gamma. \tag{7}\]
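For concreteness, Eq. (7) amounts to a one-line helper; the sketch below is ours, with purely illustrative arguments:

```python
import numpy as np

def time_to_solution(t_f, p_avg):
    """Eq. (7): runtime needed to reach 99% cumulative success from repeated runs."""
    return t_f * np.log(1 - 0.99) / np.log(1 - p_avg)

# For small <P(t_f)> this is ln(100) * t_f / <P(t_f)>, i.e. proportional to 1/Gamma:
print(time_to_solution(t_f=6 * (2 * np.pi * 20), p_avg=1e-3))
```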
And finally, we also track the time-averaged order parameter correlation function _during_ the waiting stage at largest separation available
\[C_{ZZ}=\frac{1}{t_{wait}}\int_{t_{ramp}}^{t_{f}-t_{ramp}}\overline{\langle Z_{ 0}(t)Z_{L/2}(t)\rangle}\,\mathrm{d}t. \tag{8}\]
Tracking this quantity allows us to confirm that we do not leave the ferromagnetic phase in AC evolution, as discussed below in the results section.
## 3 Dynamic protocols and other simulation details
This section details three novel modifications to the standard quasi-static approach described above. All these approaches include modulations to the three types of couplings already present in the Hamiltonian that are both weak and slow so as to
Figure 2: The reverse annealing schedule explored in this work. The ferromagnetic coupling \(J\) is held fixed (red line), and the uniform transverse field is ramped up and down to induce collective tunneling (blue curve). In the RFQA cases, these parameters are locally modulated with independent oscillating frequencies; the modulation of two (out of \(L\)) transverse fields \(\kappa_{j}\left(t\right)\) and \(\kappa_{k}\left(t\right)\) is shown in gold and green. Note in other protocols individual coupling strengths \(J_{j}\) can be oscillated, and/or these coupling strengths can be modulated by the transverse field strengths of the coupled qubits (as in real quantum annealers).
avoid exciting (heating) the system while dramatically increasing the rate of mixing of the two ferromagnetic groundstates. The seemingly inevitable success of this program was demonstrated by two of us both analytically and numerically in the context of featureless "oracle" problems[27] where the choice of driving terms was randomized and averaged over, with each qubit assigned its own driver. By contrast, the protocols we employ in this work are much less random, with the total number of independent frequencies scaling as \(\sim\sqrt{L}\), i.e. with many qubits sharing the same driver. This modification appears to dramatically improve the performance of Symphonic Tunnelling, while at the same time precluding a straightforward extension of prior analytic results for the fully random ensemble. Because of its pivotal conceptual importance (certainly in our thinking) and clarity, we first review the basic mechanism of the fully random protocol, RFQA, before turning to the correlated variant employed here in the following paragraph.
Consider an exponentially avoided level crossing or a first order phase transition where \(K\) spins must flip. Now imagine to every spin we add an oscillating term \(O_{j}\sin 2\pi f_{j}t\), where \(O_{j}\) is a local operator. At first order, when the energy difference of the two states approaches any \(\pm f_{j}\), the AC perturbation can resonantly mix the states, with a mean Rabi frequency \(c\Omega_{0}\), for some \(O\left(1\right)\) constant \(c\). This accelerates the off-resonant tunneling rate by a factor of \(K\), but amounts to a dramatic underestimate, since there are also \(4{K\choose 2}\) two-frequency processes at second order, \(8{K\choose 3}\) three-frequency terms at third order, and so on; all are generically nonzero. If we assume that the _average_\(m\)th order process scales as \(\Lambda^{m}\Omega_{0}\) (for \(\Lambda<1\), e.g. exponential decay in \(m\)), then a simple incoherent sum of all contributing processes produces a total energy-averaged tunneling rate
\[\Gamma_{T}\propto\frac{\Omega_{0}^{2}}{W}\sum_{m=0}^{K}2^{m}\Lambda^{2m}{K \choose m}\propto\frac{\Omega_{0}^{2}}{W}\left(1+2\Lambda^{2}\right)^{K}. \tag{9}\]
This can favorably shift the scaling exponent of the problem, and of particular interest to us is the case where \(\left(1+2\Lambda^{2}\right)^{K}\) grows faster than \(\Omega_{0}^{2}\) decays and this simple prediction breaks down. For structureless problems examined previously[27] adiabatic scaling acts as a speedlimit, hence breakdown of perturbation theory is unphysical. However, typical problems of interest are structured and one might expect physical consequence of such a breakdown, e.g. a breakdown of exponential scaling of mixing in favor of a faster process. We _numerically_ demonstrate such crossovers for multiple protocols below, where exponential and polynomial scaling cannot be distinguished from the data, though we emphasize that these are numerical results and we do not have any analytical guarantees this scaling would persist as \(L\rightarrow\infty\). However, the existence of this dynamical regime for \(L\approx 40\) is of keen experimental and practical interest to ongoing efforts to engineer quantum platforms.
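To make the breakdown condition concrete: combining Eq. (9) with the DC scaling \(\Gamma_{0}\propto\left(\kappa_{0}/J\right)^{2K}\) implied by Eq. (6) gives an effective perturbative difficulty exponent \(\Upsilon_{\rm eff}\approx 2\log_{2}\left(J/\kappa_{0}\right)-\log_{2}\left(1+2\Lambda^{2}\right)\), which changes sign at quite modest \(\Lambda\). A minimal numerical sketch (ours; the \(\Lambda\) values are purely illustrative):

```python
import numpy as np

def upsilon_eff(kappa0, J, Lam):
    """Perturbative difficulty exponent implied by Eq. (9) with Gamma_0 ~ (kappa0/J)^(2K)."""
    return 2 * np.log2(J / kappa0) - np.log2(1 + 2 * Lam ** 2)

for Lam in (0.1, 0.2, 0.3, 0.4):
    print(Lam, upsilon_eff(0.9, 1.0, Lam))   # crosses zero between Lam = 0.3 and 0.4
```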
We now compare the standard uniform-transverse-field reverse annealing protocol (the "DC protocol") with three different implementations of the RFQA protocol. These are differentiated by (i) the operators that are being oscillated, and (ii) the
distribution of \(L\) random frequencies \(\{f_{j}\}\) and initial phases \(\{\phi_{j}\}\) that parametrize these oscillations. We define a parameter \(\alpha\), common to all protocols, that fixes the amplitude of the oscillations. Note that this parameter determines the _relative_ magnitude of the oscillations and it multiplies the base parameter (e.g. coupling \(J_{j}\) or transverse field \(\kappa_{j}\)) in the full time-dependent \(H\left(t\right)\). In all cases the longitudinal bias field \(h_{j}\left(t\right)=-h\) is kept fixed throughout evolution. The three RFQA protocols we simulate here are:
* X-RFQA with randomly-paired oscillations: transverse field strengths are locally modulated, so \(J_{j}(t)=1\) while \(\kappa_{j}(t)=\kappa(t)(1+\alpha\sin(f_{j}t+\phi_{j}))\) where \(\kappa(t)\) is the ramp schedule described in the DC protocol. \(O(\sqrt{L})\) random frequencies and phases are generated independently and these are randomly assigned to the spins in the chain (each tone is thus repeated \(O\left(\sqrt{L}\right)\) times). This protocol could be implemented in neutral atoms by locally modulating laser intensities.
* ZZ-RFQA with randomly-paired oscillations: local ferromagnetic coupling strengths are modulated. Here \(\kappa_{j}(t)=\kappa(t)\) is the schedule described in the DC protocol and interactions are oscillated like \(J_{j}(t)=1+\alpha\sin(f_{j}t+\phi_{j})\). We sample the random frequencies and phases in the same way as the X-RFQA case. This protocol can be implemented in quantum annealers through local AC flux control of the coupling terms.
* XZZ-RFQA with randomly-paired oscillations: the transverse fields follow the same schedule \(\kappa_{j}(t)=\kappa(t)(1+\alpha\sin(f_{j}t+\phi_{j}))\) of the X-RFQA protocol, while ferromagnetic interactions are modulated based on the transverse field strengths of the coupled qubits, as \(J_{j}(t)\equiv(1-\alpha\sin(f_{j}t+\phi_{j}))(1-\alpha\sin(f_{j+1}t+\phi_{j+1}))\). This reflects real flux qubit physics [6], where the stronger the transverse field is, the lower the susceptibility to longitudinal flux, so increasing a local transverse field decreases couplings to that qubit. This protocol could be implemented in flux qubit hardware through time-dependent local flux control of transverse field strengths. (A schematic construction of all three schedules is sketched immediately after this list.)
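The following minimal sketch (ours; function and variable names are illustrative, not from any real annealer API) shows how the per-site schedules of the three variants and the randomly-paired tone assignment can be constructed:

```python
import numpy as np

def rfqa_fields(protocol, kappa_dc, f, phi, t, alpha):
    """Instantaneous kappa_j(t) and J_j(t) on a ring of L = len(f) sites;
    kappa_dc is the DC ramp value kappa(t), and f, phi hold one tone per site."""
    s = np.sin(f * t + phi)
    L = len(f)
    if protocol == "X":
        return kappa_dc * (1 + alpha * s), np.ones(L)
    if protocol == "ZZ":
        return kappa_dc * np.ones(L), 1 + alpha * s
    if protocol == "XZZ":   # J_j inherits the drives of the two qubits it couples
        return kappa_dc * (1 + alpha * s), (1 - alpha * s) * (1 - alpha * np.roll(s, -1))
    raise ValueError(protocol)

# Randomly-paired tone assignment: each tone reused roughly sqrt(L)/3 times
# (deterministic rounding here, rather than the probabilistic floor/ceiling rule of the text).
rng = np.random.default_rng(1)
L = 18
n_tones = int(np.ceil(L / (np.sqrt(L) / 3)))            # about 3*sqrt(L) unique tones
tones = rng.uniform(1 / (2 * L), 1 / L, n_tones)        # f_j drawn between 1/2L and 1/L
phases = rng.uniform(0, 2 * np.pi, n_tones)
assign = rng.permutation(np.arange(L) % n_tones)
kappa_j, J_j = rfqa_fields("XZZ", 0.9, tones[assign], phases[assign], t=10.0, alpha=0.15)
```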
Notice that in all cases we repeat each tone \(O\left(\sqrt{L}\right)\) times at random spatial locations-\(\sqrt{L}/3\), to be precise (when this quantity is not an integer, each tone is repeated either \(\mathrm{floor}\left(\sqrt{L}/3\right)\) or \(\mathrm{ceiling}\left(\sqrt{L}/3\right)\) with appropriate probability, and the process halts when all \(L\) sites have frequency/phase pairs assigned). We choose this pairing structure for a few reasons. First, for all three protocols, synchronizing tones causes the corresponding AC terms to interfere constructively, increasing the boost to collective tunneling. However, if the same tone is repeated too many times, it can cause large mean value swings in \(\kappa\) and/or \(J\), leading to phase transitions into the paramagnetic state. This shows up as more significant heating and the vanishing of the ferromagnetic order parameter (defined above), which are both very undesirable in the context of embedded chains for problem solving. We found \(O\left(\sqrt{L}\right)\) pairing to be a kind of "sweet spot" between reaping the benefits of synchronization without significant heating or phase transitions out of the ground state doublet (and further, relevant for experimental implementation, smaller modulation amplitudes \(\alpha\) are required to obtain
the same change in \(\Upsilon\) with tone repetition). Finally, in a real implementation of \(N\) qubits in a 2d lattice, a requirement of only \(O\left(\sqrt{N}\right)\) unique frequencies significantly reduces the signal generation and control complexity.
Each random frequency is drawn from the range \(f_{j}\in\{1/2L,1/L\}\) with random phase \(\phi_{j}\). This choice significantly mitigates heating from the AC drives. The \(1/\sqrt{L}\) scaling of the detuning range ensures that an \(O\left(1\right)\) fraction of the RFQA trials are driving tunneling between states separated by any energy difference less than \(O\left(\sqrt{L}\left\langle f_{j}\right\rangle\right)\); as discussed extensively in [27], multiqubit transitions are only accelerated by RFQA if the energy difference between the competing states falls in the window in which the frequency combinations are dense, consistent with the core mechanism being an exponential proliferation of weak resonances. This slowly decaying "resonance" condition is in stark contrast to DC protocols (the "population transfer" or "reverse annealing" simulated here) where given an exponentially decaying minimum gap \(\Omega_{0}\), \(P\left(t_{f}\right)\) can only reach appreciable values if the detuning \(h\) is \(O\left(\Omega_{0}\right)\) or less. Finally, to regularize behavior at small \(L\)-and consequently provide a larger range of reliable data to fit-we also perform our simulations with a single transverse field, at site 0, weakened by a constant prefactor. This reduces the degeneracy splitting \(\Omega_{0}\) by a prefactor without changing the scaling with \(L\); absent this step, at larger \(\kappa\) values and small \(L\), tunneling occurs too quickly to allow us to employ long enough runtimes to use small frequencies and thus, mitigate heating from the AC drives. For fair comparisons the same single field weakening is used in all simulations.
## 4 Results
### Acceleration of Tunnelling
We study the transverse-field values of \(\kappa=0.7,0.8,0.9\), with AC modulation amplitudes \(\alpha\) between 0 and 0.5 for X-RFQA and ZZ-RFQA, and 0.3 for XZZ-RFQA. These peak amplitudes are chosen based on the observed response of the system's \(L\)-scaling to the various protocol choices. We observe that in all the cases we considered, RFQA oscillations significantly accelerate the global-spin-flip tunneling process at fixed scale \(L\), as confirmed by the increase of the average tunnelling probability compared to the analogous DC protocol shown in Fig. 3 for \(L=18\), and by the exponential fits shown in Fig. 1 for a wide range of parameters.
In order to capture the \(L\)-dependence of this improvement we study the approximate TTS ratio \(t_{f}/\langle P(t_{f})\rangle\) vs \(L\) for a fixed choice of \(\kappa,\alpha\) and RFQA variant (see C1). For all the DC protocols and at least the smaller-\(\kappa\), smaller-\(\alpha\) RFQA protocols we observe what appears _prima facie_ to be an exponential scaling of the TTS with \(L\). We fit the approximate TTS ratio \(t_{f}/\langle P(t_{f})\rangle\) obtained from the numerical data with a two-parameter fitting function \(f(L)=aL2^{\Upsilon L}\). At small \(L\) there is of course some inherent ambiguity in the choice of polynomial prefactor in such fits. We chose this form because in fits where the polynomial prefactor was allowed to vary (e.g. \(L^{c}\) instead of \(L\)) the
best fit exponent \(c\) was always close to 1, and because the empirical scaling form in Eq. 6 combined with \(W\propto L^{-1/2}\) predicts prefactor exponents close to 1 as well. We chose to fit all data using the same functional form for consistency and to make the easiest comparisons. From this fit we obtain the difficulty exponent \(\Upsilon\). For the DC protocol, we used standard exact diagonalization methods to check that the observed TTS scaling exponent closely tracks the prediction \(TTS\propto W/\Delta_{\rm min}^{2}\) obtained from the empirical gap \(\Delta_{\rm min}\) between the two dressed quasi-degenerate ground states of the TFIM in zero longitudinal field, at fixed \(\kappa\) (see Appendix B for a demonstration of this).
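The fitting step itself is a few lines of Python; in the sketch below (ours) the input array is a synthetic placeholder with a known exponent rather than simulation data:

```python
import numpy as np
from scipy.optimize import curve_fit

def tts_model(L, a, upsilon):                 # f(L) = a * L * 2^(Upsilon * L)
    return a * L * 2.0 ** (upsilon * L)

L = np.arange(8, 21, 2, dtype=float)
tts = 40.0 * L * 2.0 ** (0.28 * L)            # placeholder data with Upsilon = 0.28
(a_fit, ups_fit), _ = curve_fit(tts_model, L, tts, p0=(1.0, 0.2))
print(ups_fit)                                 # recovers ~0.28

# Fitting log2(TTS / L) linearly in L weights all sizes evenly and is more stable:
print(np.polyfit(L, np.log2(tts / L), 1)[0])
```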
We observe that for all RFQA protocols, the difficulty exponent \(\Upsilon\) decreases as \(\alpha\) is increased from zero to positive values (Fig. 1), with XZZ-RFQA showing the most dramatic improvements and X-RFQA the most modest. For large enough values of \(\kappa\) and \(\alpha\), the ZZ- and XZZ-RFQA protocols exhibit a crossover from positive to negative values of the exponent, suggesting the end of the exponentially-diverging regime of the TTS. We expect that the X-RFQA variant, being the least performant of the three, will exhibit an analogous crossover at larger \(\kappa,\alpha\) values than the ones we studied here. For all three RFQA protocols we employed Time-Evolving Block Decimation (TEBD) [35] methods in order to extend the calculations of the average tunnelling probability to larger system sizes \(L\geq 20\), for \(\kappa=0.9\) and \(\alpha\) values close to the crossover point, obtaining results largely consistent with the ones at smaller \(L\) (see Fig. 3). The shifts in the fitted difficulty exponents produced by aggregating the additional TEBD data are very small: \(0.048\to 0.062\) for the X-, \(-0.026\to-0.018\) for the ZZ-, and \(-0.0009\to-0.00005\) for the XZZ-RFQA variants respectively.
\begin{table}
\begin{tabular}{|c||c|c||c|c||c|c|}
\hline
Protocol & \(\kappa\) & \(\alpha\) & \(\Upsilon_{ST}\) & \(2\gamma_{dis}\) & \(\left\langle Z_{0}Z_{L/2}\right\rangle_{0}\) & \(\left\langle Z_{0}Z_{L/2}\right\rangle_{dis}\) \\
\hline
X & 0.8 & 0.5 & 0.28 & 0.39 & 0.88 & 0.87 \\
 & 0.9 & 0.5 & 0.05 & 0.15 & 0.81 & 0.8 \\
\hline
ZZ & 0.8 & 0.4 & 0.14 & 0.19 & 0.88 & 0.84 \\
 & 0.8 & 0.5 & 0.01 & 0.12 & 0.88 & 0.81 \\
 & 0.9 & 0.3 & 0.05 & 0.18 & 0.81 & 0.78 \\
 & 0.9 & 0.4 & -0.03 & 0.08 & 0.81 & 0.75 \\
\hline
XZZ & 0.8 & 0.2 & 0.05 & 0.07 & 0.88 & 0.84 \\
 & 0.8 & 0.25 & -0.03 & 0.06 & 0.88 & 0.81 \\
 & 0.9 & 0.1 & 0.11 & 0.19 & 0.81 & 0.79 \\
 & 0.9 & 0.15 & 0.0 & 0.06 & 0.81 & 0.77 \\
\hline
\end{tabular}
\end{table}
Table 1: Evidence for an irreducibly AC nature of tunneling acceleration in this system, obtained by comparing the fitted TTS exponent to the exponential decay of the DC average \(\langle\Omega_{0}^{2}\rangle\) over the instantaneous modulation of Hamiltonian parameters appropriate to that protocol. Two-point correlations are reported for \(L=20\); see text for more details.
### DC parameter fluctuations and ferromagnet-paramagnet phase transitions do not explain the observed speedup
In all three of our simulated protocols, the transverse field and coupling terms are modulated fairly substantially. Consequently, in rare events, one will find appreciable shifts in the mean values of \(\kappa\) or \(J\), and in those cases the instantaneous collective tunnel splitting \(\Omega_{0}\) will be much larger than in the corresponding uniform field case, potentially even large enough to induce a transition to the paramagnetic phase. A skeptical reader could very reasonably ask if rare large fluctuations are sufficient to explain the speedups we observe, and since the challenge of implementing simultaneous AC modulation of all terms is significant, it is important to rule out more prosaic explanations for the fast tunneling we report.
We first consider DC fluctuations in \(\Omega_{0}\). As argued above, in the DC case the
Figure 3: Average tunnelling probability \(\langle P(t_{f})\rangle\) vs system size \(L\) at \(\kappa=0.9\). In the main plot: average tunnelling probability curves for the DC protocol (\(\alpha=0\)) and RFQA protocols with values of \(\alpha\) closest to where the difficulty exponent \(\Upsilon\) changes sign. For X- (blue dots), ZZ- (orange crosses) and XZZ-RFQA (green diamonds) these are respectively \(\alpha=0.5\), \(0.4\) and \(0.15\). Note that the average tunnelling probability for the RFQA protocols shown here is approximately constant in (or very weakly dependent on) the system size \(L\), while for the DC protocol it is clearly exponentially decaying. The data for the larger system sizes \(L\geq 20\) (dashed segments) were obtained using TEBD. Inset: \(\langle P(t_{f})\rangle\) vs \(L\) plot in the positive \(\Upsilon\) regime. Shown here is the case with \(\kappa=0.7\), with a choice of \(\alpha=0.4\) for the X- and ZZ-RFQA protocols, and \(\alpha=0.2\) for the XZZ-RFQA protocol. In this regime all probability curves decay exponentially with \(L\), albeit with different rates which favour the RFQA protocols over the DC-driven one.
detuning-averaged tunneling rate \(\Gamma\propto\Omega_{0}^{2}/W\); this remains true with disorder if we replace \(\Omega_{0}^{2}\rightarrow\left\langle\Omega_{0}^{2}\right\rangle_{dis}\). Here, the disorder average is over random modulations of the appropriate terms, equivalent to taking instantaneous time slices of \(H\left(t\right)\) in Eq. 2 for the appropriate protocol. To check whether this effect is able to explain the speedup, we used exact diagonalization with \(h=0\) to compute \(\left\langle\Omega_{0}^{2}\right\rangle\) for eight protocol/parameter choices (all at or close to the potential scaling crossover regime), with 2000 random disorder realizations for each datapoint and \(L\) running from 8 to 20. To compare with the AC-driven TTS exponent \(\Upsilon\), we numerically fit
\[\left\langle\Omega_{0}^{2}\left(L\right)\right\rangle=A\,\frac{2^{-2\gamma_{dis}L}}{L}. \tag{10}\]
This is the same scaling form used to extract \(\Upsilon\); as shown in Table 1, the resulting exponents are all larger by a shift of 0.05-0.13, well outside any fitting or sampling uncertainty here. DC fluctuations in \(\Omega_{0}\) thus cannot explain the tunneling rates we observe. Further, since \(\Omega_{0}\) is exponentially sensitive to mean value shifts, \(\left\langle\Omega_{0}^{2}\right\rangle\) returns significantly larger shifts (compared to the uniform field) than \(\left\langle\Omega_{0}\right\rangle\), and the _median_ value of \(\Omega_{0}\) at each \(L\) shows very little change compared to the uniform case.
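The following self-contained sketch reproduces the spirit of this check at very small \(L\): it builds the open-chain TFIM at zero longitudinal field, draws random instantaneous field modulations, and averages the squared splitting of the quasi-degenerate doublet. The uniform draw is a crude stand-in for the instantaneous values of the actual sinusoidal modulations, and the sample count is reduced from the 2000 realizations used in the text.

```python
import numpy as np
from functools import reduce

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def op_at(op, j, L):
    """Embed a single-site operator at site j of an L-site chain."""
    return reduce(np.kron, [op if k == j else I2 for k in range(L)])

def tfim(L, kappa, x_fields):
    """Open-chain TFIM, H = -sum_j Z_j Z_{j+1} - kappa * sum_j x_fields[j] X_j."""
    H = np.zeros((2**L, 2**L))
    for j in range(L - 1):
        H -= op_at(Z, j, L) @ op_at(Z, j + 1, L)
    for j in range(L):
        H -= kappa * x_fields[j] * op_at(X, j, L)
    return H

def mean_sq_splitting(L, kappa, alpha, n_disorder=200, seed=1):
    """<Omega_0^2> over random instantaneous slices of the modulated fields."""
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(n_disorder):
        fields = 1.0 + alpha * rng.uniform(-1.0, 1.0, size=L)  # crude proxy
        e = np.linalg.eigvalsh(tfim(L, kappa, fields))
        gaps.append(e[1] - e[0])   # splitting of the ferromagnetic doublet
    return np.mean(np.square(gaps))

print(mean_sq_splitting(L=8, kappa=0.9, alpha=0.5))
```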
We now address the second possibility, transitions into the paramagnetic phase. In the clean limit this transition has a gap that scales as \(1/L\); with disorder this becomes a stretched exponential [41, 42, 43], which is still substantially larger than the exponentially decaying gaps of transitions in the ferromagnetic phase. Fast tunneling driven by such transitions would not suggest utility for embedded problems, since in the paramagnetic phase long-ranged correlation (and thus, energetic awareness of the logical problem) is lost. Such transitions would also likely lead to significant heating through the Kibble-Zurek mechanism. To ensure that this is not happening, as mentioned earlier we tracked the ferromagnetic order parameter \(\left\langle Z_{0}Z_{L/2}\right\rangle\), averaged over the "waiting" phase of AC-driven evolution (and over detuning and frequency distributions). As shown in Fig. 4, and for DC eigenstates in Table 1, this order parameter is modestly reduced by the modulation terms but we see no evidence that it will vanish at large \(L\). Interestingly, the AC averages are slightly _larger_ than the DC averages; this is because the DC average is taken at degeneracy but the AC average includes small detuning terms \(h\) that bias the system further toward ferromagnetism. We thus conclude that global mixing with paramagnetic states does not explain the speedup we report.
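A minimal sketch of the order-parameter diagnostic is given below: it evaluates \(\left\langle Z_{0}Z_{L/2}\right\rangle\) over a set of sampled states and averages, which is the dense-vector analogue of the time average tracked in Fig. 4. The states here are random placeholders standing in for snapshots of the evolved wavefunction; small \(L\) only.

```python
import numpy as np
from functools import reduce

Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def zz_correlator(L):
    """Dense Z_0 Z_{L/2} operator for an L-site chain (small L only)."""
    ops = [I2] * L
    ops[0] = Z
    ops[L // 2] = Z
    return reduce(np.kron, ops)

def time_averaged_czz(states, L):
    """Average <psi|Z_0 Z_{L/2}|psi> over sampled plateau states."""
    C = zz_correlator(L)
    return np.mean([np.real(np.vdot(psi, C @ psi)) for psi in states])

L = 8
rng = np.random.default_rng(0)
states = []
for _ in range(5):       # placeholders; in practice, snapshots of psi(t)
    v = rng.normal(size=2**L) + 1j * rng.normal(size=2**L)
    states.append(v / np.linalg.norm(v))
print(time_averaged_czz(states, L))
```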
## 5 Implementation Prospects and Discussion
Through extensive numerical simulations, we have demonstrated significant acceleration of multiqubit tunneling in transverse field Ising chains through variations of RFQA, assuming \(O\left(1/\sqrt{L}\right)\) energy uncertainty (in comparison to an exponentially decaying minimum gap). In many of our simulations, exponential and polynomial scaling of the average tunneling time could not be distinguished from the data, in contrast to the clear exponential decay of all uniform field cases. While we reported only 1d simulations here, preliminary full state evolution studies of weakly coupled rings showed similar speedups
when compared to single rings with the same total number of qubits; we will present those results in future work. As ST is not a 1d-specific mechanism, this is not surprising, and suggests these results should generalize to embedded _problems_, which are far out of reach of simulation.
Of course, the ultimate goal of these AC methods is to address two of the core physics problems in quantum annealing (embedding overhead and computational weakness of a uniform transverse field), in a simple and scalable manner. While any RFQA implementation is undoubtedly more complex than current uniform transverse field hardware, as we are only varying flux-tunable \(X_{j}/Z_{j}Z_{k}\) terms already present in the system, no direct changes to the qubit or coupler hardware are required and the complexity lies entirely on the control side. Further, we have shown that reusing the same frequencies across the lattice has performance benefits, and \(O\left(\sqrt{N}\right)\) unique tones reused across \(O\left(N\right)\) qubits are sufficient to rapidly accelerate tunneling while avoiding
Figure 4: Time-averaged two-point correlation function \(C_{ZZ}\) in Eq. 8 vs system size \(L\), at fixed \(\kappa=0.9\). The time-averaging is taken over the “plateau” part of the protocol in between the two ramps, when the global transverse field value \(\kappa\) does not depend on time. The main plot shows the curves for the DC protocol (\(\alpha=0\), solid black line) and for each of the RFQA protocols, for the value of \(\alpha\) closest to where the TTS scaling exponent crosses from positive to negative values. For X- (blue dots), ZZ- (orange crosses) and XZZ-RFQA (green diamonds) these values are respectively \(\alpha=0.5\), \(0.4\) and \(0.15\). The data for the larger system sizes \(L\geq 20\) (dashed segments) were obtained using TEBD. In the inset, the curves for the largest values of \(\alpha\) we studied. For X-, ZZ- and XZZ-RFQA respectively these are \(\alpha=0.5,0.5\) and \(0.3\).
local transitions out of the ferromagnetic phase for individual chains. This favorable condition reduces our methods' implementation complexity to a local addressing and/or signal routing problem, which while difficult is still substantially easier than the expected requirements for topological error correction codes [44].
With regard to real implementations, it is likewise important to note that all of these simulations have been noise-free (though they do incorporate random detuning as a proxy for the many energy uncertainty sources in real experiments). This may strike the reader as a significant oversight in a paper making claims about improving near-term hardware. We chose to forgo noise simulation for a few reasons. First, we expect our results to apply (with suitable tuning and modifications) to both quantum annealers and neutral atoms, which have extremely different noise models that are both difficult to simulate in their own ways.
In neutral atoms, the error model (see the supplemental information of [13]) is dominated by laser noise (amplitude and phase fluctuations of the drive beams), decay from the Rydberg state (a leakage error) and atom loss. Amplitude fluctuations\({}^{1}\) in the drive lasers (weak, random modulation of the transverse fields) are going to be negligible compared to the larger modulations used to implement RFQA; phase variations correspond to slowly fluctuating \(Z\) biases that stymie attempts to hit exponentially narrow resonances (at sufficiently large system size) but are less deleterious to transitions active over wider detuning ranges. Leakage and atom loss are more serious, in that they change the problem graph, and they are a common challenge to all neutral atom protocols. Of course, an RFQA implementation cannot solve these problems, though we do not expect it would meaningfully increase their rates, and since they are comparatively slow error sources, _any_ methods capable of finding the solution in reduced quantum evolution time can reduce their impact.
Footnote 1: Here, we assume these fluctuations can be local or global; in previous experiments global beams were used, but obviously local control would be involved in any implementation of the protocols we discuss in this work.
For quantum annealers, the picture is more complex, and one which was reviewed in [27]. The error model in quantum annealers primarily consists of quasistatic control error modeled by small random fluctuations in the problem Hamiltonian parameters and transverse field strengths, \(1/f\)-like noise along \(Z\) for each qubit, and comparatively strong interaction with a cold (but not zero temperature, e.g. \(k_{B}T\ll J\) but \(k_{B}T\gg\Omega_{0}\) at small avoided crossings) bath. RFQA does not _directly_ mitigate control error or \(1/f\) longitudinal noise, the effects of which are at least partially captured by our detuning averaging. It is possible that RFQA could _indirectly_ reduce the impact of both error sources by allowing the user to work with larger intra-chain ferromagnetic coupling \(J\) (which can lead to rapid freezing in the uniform field case), increasing the local gap and overall energy scale of the problem. Interaction with the cold bath is more complex, and likely prohibitively difficult to simulate in the AC-driven regime [45, 46]; in a work by two of us [47], a Hilbert space of five hundred states was used to simulate the bath interaction with two qubits, and this method does not scale to long evolution times.
The original RFQA work established that AC driving accelerates bath-assisted phase transitions as well, in line with the benefits seen in uniform field annealing with a pause in the schedule [17], but it is possible that the AC driving could also amplify harmful bath effects through an as-yet unknown mechanism. Since we cannot rigorously simulate the most important and interesting open system effects in a flux qubit implementation, we found it unnecessary to simulate other forms of noise for this work.
We want to close with an interesting question that this work raises, but certainly does not settle. Namely, the generic expectation (with some notable counterexamples [48]) of first-order quantum phase transitions between ground states is that the collective spin rearrangements defining the transition are exponentially slow in the number of participating degrees of freedom, e.g. \(\Omega_{0}\propto 2^{-cK}\). What we ask is the following: are there physically realistic (and hopefully, application-relevant) first-order phase transitions where, under the influence of AC driving, the nature of the transition does not change (e.g. all order parameters that are finite across the transition remain so, and the system does not meaningfully heat), but the collective tunneling rate crosses over to polynomial scaling even for arbitrarily large system sizes? Our results suggest that this is _possible_ but we make no claims based on numerical evidence alone that such scaling must persist at \(L\rightarrow\infty\). A conclusive answer to this question would be of significant interest for quantum optimization, and dynamical many-body physics more generally.
## 6 Acknowledgements
We would like to thank Steven Dissler, Andrew King, Glen Mbeng, Eleanor Rieffel, Paul Varosy and Steven Weber for useful discussions around this project. We also would like to thank Zhijie Tang for assisting in early simulations of RFQA in 1d chains. EK and VO were jointly supported by DARPA under the Reversible Quantum Machine Learning and Simulations program, contract HR00112190068. EK's research in this area was also funded by NSF grant PHY-1653820. GM would like to acknowledge support from the NASA Ames Research Center and from DARPA under IAA 8839, Annex 128. Resources supporting this work were also provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center.
## Appendix A Linear regime
The main text includes an explanation as to why one should see a linear growth regime \(\left\langle P\left(t_{f}\right)\right\rangle=\Gamma\,t_{f}\) for the average tunnelling probability as a function of the final time of the protocols. By analogy, the prefactor \(\Gamma\) can informally be interpreted as an "average tunnelling rate". This linear-growth prediction is confirmed by way of example in Fig. A1, where we compare \(\left\langle P\left(t_{f}\right)\right\rangle\) vs the rescaled final time \(t_{f}/L\) in the case of X-RFQA for \(\kappa=0.8,\alpha=0.3\). By using this linear-growth expression for \(\left\langle P\left(t_{f}\right)\right\rangle\) in Eq. (7) one can see that in the large-\(L\) limit we have that \(TTS\sim 1/\Gamma\) and it would in
principle be possible to extract the difficulty exponent \(\Upsilon\) by studying the average rate \(\Gamma\) of the linear regime. In practice, however, this quickly becomes very expensive, since \(\Gamma\) typically decays exponentially with \(L\), and extracting its value requires fitting a linear function whose slope, even at moderately large values of \(L\), becomes almost indistinguishable from zero. In order to do so reliably, one would have to perform simulations for times \(t_{f}\) that grow exponentially with \(L\). For this reason we chose to extract the difficulty exponent \(\Upsilon\) by fitting the scaling of the TTS directly.
## Appendix B DC protocol: minimum-gap scaling exponent and difficulty exponent
We studied the scaling of the empirical gap between the two quasi-degenerate dressed ferromagnetic states at given \(\kappa\) using exact diagonalization methods. We extracted the scaling exponent \(\hat{\Upsilon}\) of the quantity \(1/\Delta_{\min}^{2}\) by fitting it with the function \(f(L)=a2^{\hat{\Upsilon}L}\), with fit parameters \(a,\hat{\Upsilon}\). The TTS datapoints obtained from the dynamical simulations of the DC protocol for the same given \(\kappa\) were then fitted with the single-parameter function \(TTS(L)=b\,2^{\hat{\Upsilon}L}\) by using the \(\hat{\Upsilon}\) value extracted before. Fig. B1 shows that the scaling exponent \(\hat{\Upsilon}\) of the inverse square gap \(1/\Delta_{\min}^{2}\) corresponds to the difficulty exponent \(\Upsilon\) defined by the TTS's exponential scaling with \(L\). Formally, \(\lim_{L\to\infty}\frac{1}{L}\log_{2}(1/\Delta_{\min}^{2})=\lim_{L\to\infty}\frac{1}{L}\log_{2}TTS=\Upsilon\).
## Appendix C TTS and difficulty exponent
As explained in the main text, in order to calculate the difficulty exponent \(\Upsilon\) (shown in Fig. 1) for a particular protocol and fixed values of \(\kappa,\alpha\), we plot the large-\(L\) approximation to the TTS, \(t_{f}/\langle P\left(t_{f}\right)\rangle\) vs the system size \(L\) and fit the data with the two-parameter fitting function \(f(L)=aL2^{\Upsilon L}\), obtaining \(\Upsilon\). Fig. C1 (left) shows this for the ZZ-RFQA protocol at \(\kappa=0.8\) and \(\alpha\) between zero (the DC protocol) and \(0.5\). The fitted \(\Upsilon\) decreases monotonically with \(\alpha\). Analogous behaviours are observed for all the other RFQA protocols in all parameter ranges we have studied.
The distribution of the tunneling probabilities \(P(t_{f})\) depends on the random choices of longitudinal fields \(h\), and (for the AC-driven protocols) on the random frequencies and phases. It is undesirable that these distributions (and, as a consequence, the value of the TTS) should be dominated by rare events. If this were the case, then any observed acceleration of tunneling could be indicative of an atypical behaviour of the RFQA protocols. In order to rule out this possibility we computed the median of the tunnelling probability \(P_{med}\left(t_{f}\right)\) and fitted the ratio \(t_{f}/P_{med}\left(t_{f}\right)\) vs \(L\) using the same fitting function as before. The results for the ZZ-RFQA protocol at \(\kappa=0.8\) and \(\alpha=0-0.5\) are shown in Fig. C1 (right). Even though we obtain slightly different scaling exponents, the overall
picture is unchanged: the "difficulty exponent" \(\Upsilon_{med}\) obtained in this way connects to the DC value at \(\alpha=0\) and crosses into negative values for large enough \(\alpha\).
## Appendix D Heating
In this work we use RFQA to accelerate tunnelling from one ferromagnetic state of a quantum Ising chain to the other. Ideally, this should happen without exciting the system out of the quasi-degenerate ferromagnetic doublet. We use the _heating_ quantity \(P_{heat}\) defined through Eq. (4) in order to assess the amount of excitations produced by the various protocols. In particular, we compare \(P_{heat}\) with the tunnelling probability \(P\left(t_{f}\right)\) in order to rule out the possibility that \(P_{heat}\gg P\left(t_{f}\right)\), which was observed in [27] to obscure the possible advantage achieved by RFQA. For all DC protocols, and all RFQA protocols at large enough \(\alpha\), we have that \(\langle P_{heat}\rangle<\langle P(t_{f})\rangle\) for all the system sizes studied. Some RFQA protocols go through an intermediate regime at small \(\alpha\) where \(\langle P\left(t_{f}\right)\rangle\lesssim\langle P_{heat}\rangle\), for the larger values of \(L\). This is due to a comparatively larger amount of heat than what is observed in the DC case for the same \(\kappa\), combined with too modest an increase of the tunnelling probability (See Fig. D1). A qualitatively analogous behaviour is observed by considering the median heat and the median tunnelling probability.
Given the empirically-observed behaviour of \(P(t_{f})\) and \(P_{heat}\), one could reasonably imagine that \(P_{heat}\gg P\left(t_{f}\right)\) would eventually hold in the \(L\rightarrow\infty\) limit. For the DC case, this issue can arguably be solved by using better adiabatic ramps, but for the AC case this prediction seems hard to confirm or deny conclusively without a theoretical description of the AC-induced excitation processes that holds for non-perturbative values
of \(\kappa\) and \(\alpha\), which we currently lack. Nevertheless, we do not expect the heating we observe _at given size_ to significantly affect the results presented in this work.
## Appendix E Considerations for TEBD in this system
For the one-dimensional model studied in this work we use Time-Evolving Block Decimation [35], as implemented by the ITensor library [38, 39], to access larger system sizes. Unlike the exact-state simulation code used for the smaller \(L\), TEBD uses matrix-product states (MPS) with finite bond dimension \(\chi\) in order to approximate the time-evolving wavefunction. Physical results are accessed by increasing \(\chi\) until convergence is reached (up to the desired numerical accuracy). Unlike other MPS-based methods such as the Density Matrix Renormalization Group (DMRG), where the one-dimensional area law allows for an efficient representation of the ground state of a local Hamiltonian by MPSs and _e.g._ for the calculation of intensive quantities (of order \(O(1)\)), the calculation of quantities that exhibit an exponential decay with the system size (such as the tunnelling probability \(P(t_{f})\) of our model) requires increasingly better
approximations of the time-evolving wavefunction, even in cases where the entanglement is not diverging with \(L\).
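The convergence-in-\(\chi\) procedure can be summarized by a short driver loop like the one below. This is a schematic: `run_tebd` is a hypothetical callable wrapping the actual TEBD evolution (e.g. as built with ITensor), assumed to return \(P(t_{f})\) for a given maximum bond dimension.

```python
def converge_in_chi(run_tebd, chis=(32, 64, 128, 256), rtol=1e-2):
    """Increase the bond dimension until successive estimates of P(t_f)
    agree to relative tolerance rtol."""
    prev = None
    for chi in chis:
        p = run_tebd(max_bond_dim=chi)       # hypothetical TEBD wrapper
        if prev is not None and abs(p - prev) <= rtol * max(abs(prev), 1e-12):
            return p, chi                    # converged within tolerance
        prev = p
    raise RuntimeError("P(t_f) not converged at the largest chi tried")
```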
|
2302.13814 | An Independent Evaluation of ChatGPT on Mathematical Word Problems (MWP) | We study the performance of a commercially available large language model
(LLM) known as ChatGPT on math word problems (MWPs) from the dataset DRAW-1K.
To our knowledge, this is the first independent evaluation of ChatGPT. We found
that ChatGPT's performance changes dramatically based on the requirement to
show its work, failing 20% of the time when it provides work compared with 84%
when it does not. Further, several factors about MWPs relating to the number of
unknowns and number of operations lead to a higher probability of failure
when compared with the prior, specifically noting (across all experiments) that
the probability of failure increases linearly with the number of addition and
subtraction operations. We also have released the dataset of ChatGPT's
responses to the MWPs to support further work on the characterization of LLM
performance and present baseline machine learning models to predict if ChatGPT
can correctly answer an MWP. We have released a dataset comprised of ChatGPT's
responses to support further research in this area. | Paulo Shakarian, Abhinav Koyyalamudi, Noel Ngu, Lakshmivihari Mareedu | 2023-02-23T16:06:16Z | http://arxiv.org/abs/2302.13814v2 | # An Independent Evaluation of ChatGPT on Mathematical Word Problems (MWP)
###### Abstract
We study the performance of a commercially available large language model (LLM) known as ChatGPT on math word problems (MWPs) from the dataset DRAW-1K. To our knowledge, this is the first independent evaluation of ChatGPT. We found that ChatGPT's performance changes dramatically based on the requirement to show its work, failing \(20\%\) of the time when it provides work compared with \(84\%\) when it does not. Further, several factors about MWPs relating to the number of unknowns and number of operations lead to a higher probability of failure when compared with the prior, specifically noting (across all experiments) that the probability of failure increases linearly with the number of addition and subtraction operations. We also have released the dataset of ChatGPT's responses to the MWPs to support further work on the characterization of LLM performance and present baseline machine learning models to predict if ChatGPT can correctly answer an MWP. We have released a dataset comprising ChatGPT's responses to support further research in this area.
Large Language Models, Math Word Problems, ChatGPT
## 1 Introduction
Large language models (LLMs) have gained much popularity in recent years. At the time of this writing, some consider OpenAI's GPT 3.5 series models as the state-of-the-art [1]. In particular, a variant tuned for natural dialogue known as ChatGPT [2], released in November 2022 by OpenAI, has gathered much popular interest, gaining over one million users in a single week [3]. However, in terms of accuracy, LLMs are known to have performance issues, specifically when reasoning tasks are involved [1, 4]. This issue, combined with the ubiquity of such models, has led to work on prompt generation and other aspects of the input [5, 6]. Other areas of machine learning, such as meta-learning [7, 8] and introspection [9, 10], attempt to predict when a model will succeed or fail for a given input. An introspective tool, especially for certain tasks, could serve as a front-end to an LLM in a given application.
As a step toward such a tool, we investigate aspects of math word problems (MWPs) that can indicate the success or failure of ChatGPT on such problems. We found that ChatGPT's
performance changes dramatically based on the requirement to show its work, failing \(20\%\) of the time when it provides work compared with \(84\%\) when it does not. Further, several factors about MWPs can lead to a higher probability of failure when compared with the prior, specifically noting that the probability of failure increases linearly with the number of addition and subtraction operations (across all experiments). We also have released the dataset of ChatGPT's responses to the MWPs to support further work on the characterization of LLM performance. While there has been previous work examining LLM performance on MWPs [4], such work did not investigate specific aspects that increase MWP difficulty, nor did it examine the performance of ChatGPT in particular.
The remainder of this paper proceeds as follows. In Section 2, we describe our methodology. Then we describe our results in Section 3. Using these intuitions, we present baseline models to predict the performance of ChatGPT in Section 4. This is followed by a discussion of related work (Section 5) and future work (Section 6).
## 2 Methodology
**MWP Dataset.** In our study, we employed the DRAW-1K dataset [11, 12, 13], which includes not only 1,000 MWPs with associated answers but also the template algebraic equations that one would use to solve such word problems. As a running example, consider the following MWP.
_One whole number is three times a second. If 20 is added to the smaller number, the result is 6 more than the larger._
We show ChatGPT's (incorrect) response to this MWP in Figure 1. The DRAW-1K dataset not only includes the correct answer, which in this case is \(21\) and \(7\), but also includes template equations used to solve the problem. For our running example, this consists of the equations \(m-n=a-b\) and \(c\times m-n=0\). This information provides a symbolic representation of the problem which can potentially be used to identify aspects that make such problems more difficult.
**Entering Problems into ChatGPT at Scale.** At the time of our study, OpenAI, the maker of ChatGPT, had not released an API. However, using the ChatGPT CLI Python Wrapper\({}^{1}\) we interfaced with ChatGPT, allowing us to enter the MWPs at scale. For the first two experiments, we would add additional phrases to force ChatGPT to show only the final answer. We developed these additions to the prompt based on queries to ChatGPT to generate the most appropriate phrase. However, we found in our third experiment that this addition impacted results. We ran multiple experiments to test ChatGPT's ability with these problems; a schematic sketch of such a batching loop follows the experiment list below.
Footnote 1: We used ChatGPT CLI Python Wrapper by Mahmoud Mabrouk, see [https://github.com/mmabrouk/chatgpt-wrapper](https://github.com/mmabrouk/chatgpt-wrapper)
* **January 2023 Experiment (No work).** Our first experiment was run in early January 2023 prior to OpenAI's announcement of improved performance on mathematical tasks on January 30, 20232 and in this experiment we included the following statement as part of the prompt.
Don't provide any work/explanation or any extra text. Just provide the final number of answers for the previous question, with absolutely no other text. if there are two or more answers provide them as a comma separated list of numbers.
* **February 2023 Experiment (No work).** Our second experiment was run in mid-February 2023 after the aforementioned OpenAI announcement and also used a prompt that would cause ChatGPT to show only the answer, however we found that our original prompt led to more erratic behavior, so we modified the prompt for this experiment, and used the following. Don't provide any work/explanation or any extra text. Just provide the final number of answers for the previous question, with absolutely no other text. if there are two or more answers provide them as a comma separated list of numbers like: '10, 3', etc; or if there is only 1 answer provide it like '10'. Absolutely no other text just numbers alone. Just give me the numbers (one or more) alone. No full stops, no spaces, no words, no slashes, absolutely nothing extra except the 1 or more numbers you might have gotten as answers.
* **February 2023 Experiment (Showing Work).** We also repeated the February experiment without the additional prompt, thereby allowing ChatGPT to show all its work. We note that in this experiment we used ChatGPT Plus, which allowed for faster responses. At the time of this writing, ChatGPT Plus is only thought to be an improvement to accessibility and not a different model.\({}^{3}\)
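A schematic of the kind of batching loop described above is shown below. The `ChatGPT` class, its `ask` method, and the import path stand in for the CLI wrapper's interface and should be read as assumptions; the suffix shown is the January "no work" prompt quoted earlier.

```python
import json
from chatgpt_wrapper import ChatGPT   # import path assumed from the wrapper repo

NO_WORK_SUFFIX = (
    "Don't provide any work/explanation or any extra text. Just provide the "
    "final number of answers for the previous question, with absolutely no "
    "other text. if there are two or more answers provide them as a comma "
    "separated list of numbers."
)

def run_experiment(problems, suffix=NO_WORK_SUFFIX, out_path="responses.jsonl"):
    """Send each MWP (optionally with a prompt suffix) and log the reply."""
    bot = ChatGPT()
    with open(out_path, "w") as f:
        for p in problems:                   # each p has "id" and "text" keys
            prompt = p["text"] + "\n" + suffix if suffix else p["text"]
            reply = bot.ask(prompt)          # assumed wrapper call
            f.write(json.dumps({"id": p["id"], "response": reply}) + "\n")
```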
Figure 1: ChatGPT’s response (Jan. 24, 2023) to MWP _One whole number is three times a second. If 20 is added to the smaller number, the result is 6 more than the larger._ In Step A it correctly identifies the set of equations needed to solve the problem and correctly simplifies it in Step B. However, it fails to correctly perform the algebraic operation in Step C (it should state \(2y=14\)). This leads ChatGPT to obtain an incorrect result, returning \(42\) and \(14\) instead of \(21\) and \(7\).
## 3 Results
The key results of this paper are as follows: (1.) the creation of a dataset consisting of ChatGPT responses to the MWPs, (2.) identification of ChatGPT failure rates (\(84\%\) for January and February experiments with no work and \(20\%\) for the February experiment with work), (3.) identification of several factors about MWPs relating to the number of unknowns and number of operations that lead to a higher probability of failure when compared with the prior (Figure 3), (4.) identification that the probability of failure increases linearly with the number of addition and subtraction operations (Figure 5), and (5.) identification of a strong linear relationship between the number of multiplication and division operations and the probability of failure in the case where ChatGPT shows its work.
**Dataset.** We have released ChatGPT's responses to the 1,000 DRAW-1K MWPs for general use at [https://github.com/lab-v2/ChatGPT_MWP_eval](https://github.com/lab-v2/ChatGPT_MWP_eval). We believe that researchers studying this dataset can work to develop models that can combine variables, operate directly on the symbolic template, or even identify aspects of the template from the problem itself in order to predict LLM performance. We note that at the time of this writing, collecting data at scale from ChatGPT is a barrier to such work as APIs are not currently directly accessible, so this dataset can facilitate such ongoing research without the overhead of data collection.
**Overall Performance of ChatGPT on DRAW-1K.** As DRAW-1K provides precise and complete answers for each problem, we classified ChatGPT responses in several different ways; the percentage of responses in each case is shown in Figure 2, and a sketch of how a reply can be scored against the ground truth follows the list below.
1. _Returns all answers correctly._ Here ChatGPT returned all answers to the MWP (though it may round sometimes).
2. _Returns some answers correctly, but not all values._ Here the MWP called for more than one value, but ChatGPT only returned some of those values.
3. _Returns "No Solution."_ Here ChatGPT claims there was no solution to the problem. This was not true for any of the problems.
4. _Returns answers, but none are correct._ Here ChatGPT returned no correct answers (e.g., see Figure 1).
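The sketch below shows one way the four outcome classes can be assigned automatically; the number parsing and rounding tolerance are our own illustrative choices rather than the paper's exact scoring code.

```python
import re

def parse_numbers(text):
    """Pull all decimal numbers out of a ChatGPT reply."""
    return [float(x) for x in re.findall(r"-?\d+(?:\.\d+)?", text)]

def classify(response, truth, tol=0.5):
    """Map a reply onto the four outcome classes; `truth` holds the
    ground-truth values from DRAW-1K, `tol` absorbs rounding (case 1)."""
    if "no solution" in response.lower():
        return "no_solution"                                  # case 3
    got = parse_numbers(response)
    hits = sum(any(abs(g - t) <= tol for g in got) for t in truth)
    if hits == len(truth):
        return "all_correct"                                  # case 1
    if hits > 0:
        return "some_correct"                                 # case 2
    return "none_correct"                                     # case 4

print(classify("x = 21, y = 7", truth=[21, 7]))               # -> all_correct
```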
Throughout this paper, we shall refer to the probability of failure as the probability of cases 3 and 4 above (considered together). In our February experiment, we found that when ChatGPT omitted work, the percentages, as reported in Figure 2, remained the same, though they differed significantly when work was included. We also report actual numbers for all experiments in Table 1. We note that the probability of failure increases significantly when the work is not shown. However, when the work is included, ChatGPT obtains performance in line with state-of-the-art models (i.e. EPT [18, 16]), which has a reported \(59\%\) accuracy, while ChatGPT (when work is shown) has fully correct (or rounded) answers \(51\%\) of the time; this can be viewed as high as \(80\%\) if partially correct answers are included.
**Factors Leading to Incorrect Responses.** We studied various factors from the templated solutions provided for the MWPs in the DRAW-1K dataset; these included the number of equations, the number of unknowns, the number of division and multiplication operations, the number of addition and
subtraction operations, and other variants derived from the metadata in the DRAW-1K dataset. We identified several factors that, when present, cause ChatGPT to fail with a probability greater than the prior (when considering the lower bound of a \(95\%\) confidence interval). These results are shown in Figure 3. One interesting aspect we noticed is that when the system was required to show its work, the number of unknowns present no longer seems to increase the probability of failure (this was true for all quantities of unknowns in addition to what is shown in Figure 3). Additionally, the number of multiplication and division operations, while increasing the probability of failure above the prior in the January experiment, was not significant (based on \(95\%\) confidence intervals) in the February experiment (when work was not shown), possibly a result of OpenAI's improvements made at the end of January. However, there was a significant relationship between the number of multiplication and division operations and failure when work was shown. In fact, we found a strong linear relationship (\(R^{2}=0.802\)) in the case where work was shown.
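As an illustration of the confidence-interval comparison, the snippet below computes a normal-approximation 95% interval for a conditional failure probability; the counts are placeholders, and whether the paper used this particular interval construction is not stated.

```python
import math

def failure_ci(n_fail, n_total, z=1.96):
    """Normal-approximation 95% CI for a conditional failure probability,
    e.g. P(failure | #unknowns >= 3).  Counts here are placeholders."""
    p = n_fail / n_total
    half = z * math.sqrt(p * (1 - p) / n_total)
    return p, (max(0.0, p - half), min(1.0, p + half))

p, (lo, hi) = failure_ci(n_fail=180, n_total=200)   # hypothetical counts
print(f"p_fail = {p:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```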
**Correlation of failure with additions and subtractions.** Previous work has remarked on the failure of LLMs in multi-step reasoning [1, 4]. In our study, we identified evidence of this phenomenon. Specifically, we found a strong linear relationship between the number of addition and subtraction operations and the probability of failure (\(R^{2}=0.821\) for the January experiment, \(R^{2}=0.870\) for the February experiment and \(R^{2}=0.915\) when work was shown).
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
Response Type & Jan. 2023 & Feb. 2023 & Feb. 2023 \\
 & (No work) & (No work) & (Showing work) \\
\hline \hline
Returns answers, but none are correct & 831 & 830 & 186 \\
Returns “No Solution” & 9 & 10 & 14 \\
Returns all answers correctly & 135 & 134 & 513 \\
Returns some answers correctly, but not all values & 25 & 26 & 287 \\
\hline
\end{tabular}
\end{table}
Table 1: Number of responses for each ChatGPT variant.
Figure 2: Overall results on the 1,000 MWPs in DRAW-1K based on ChatGPT’s response.
We show this result in Figure 5. It is noteworthy that the relationship existed in all of our experiments, and seemed to be strengthened when ChatGPT included work in the result.
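The reported \(R^{2}\) values come from simple linear regression of failure probability against operation count; a sketch with made-up aggregated data is shown below.

```python
import numpy as np
from scipy.stats import linregress

# Illustrative aggregated data: number of +/- operations in the template
# equations vs the empirical failure probability at that count.
n_ops = np.array([0, 1, 2, 3, 4, 5])
p_fail = np.array([0.74, 0.80, 0.85, 0.90, 0.94, 0.97])   # placeholder values

fit = linregress(n_ops, p_fail)
print(f"slope = {fit.slope:.3f} per operation, R^2 = {fit.rvalue**2:.3f}")
```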
## 4 Performance Prediction Baselines
The results of the previous section, in particular the factors indicating a greater probability of failure (e.g. Figures 3-5), may indicate that the performance of ChatGPT can be predicted. In this section, we use features obtained from the equations associated with the MWPs to predict performance. Note that here we use ground-truth equations to derive the features, so the models presented in this section are essentially using an oracle; we leave extracting such features from equations returned by ChatGPT or another tool (e.g., EPT [18]) to future work. That said, as these features deal with counts of operations, unknowns, and equations, a high degree of accuracy in creating the equations would not be required to faithfully generate such features.
Following the ideas of machine learning introspection [9, 10], we created performance prediction models using random forest and XGBoost. We utilized scikit-learn 1.0.2 and XGBoost 1.6.2, respectively. In our experiments, we evaluated each model on each dataset using five-fold cross-validation and report average precision and recall in Table 2 (along with F1 computed based on those averages). In general, our models were able to provide higher-than-random precision in predicting incorrect answers for both classifiers. Further, XGBoost was shown to be
Figure 4: Additional finding specific to the February 2023 experiment where ChatGPT displayed its work, relating the number of multiplications to the probability of failure; \(R^{2}=0.802\), \(95\%\) confidence intervals.
Figure 3: Aspects of MWPs that led to ChatGPT failure more often than the prior (\(95\%\) confidence intervals shown).
able to provide high recall for predicting correct responses. While these results are likely not suitable for practical use, they do demonstrate that the features extracted provide some amount of signal to predict performance and provide a baseline for further study.
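A minimal version of the random forest baseline with five-fold cross-validation is sketched below, using placeholder features and labels; the XGBoost model is analogous with `xgboost.XGBClassifier`.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# X: per-problem counts from the ground-truth templates (equations, unknowns,
# +/- operations, *,/ operations); y: 1 if ChatGPT answered incorrectly.
rng = np.random.default_rng(0)
X = rng.integers(0, 6, size=(1000, 4))     # placeholder feature matrix
y = rng.integers(0, 2, size=1000)          # placeholder labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_validate(clf, X, y, cv=5, scoring=("precision", "recall"))
print("mean precision:", scores["test_precision"].mean())
print("mean recall:   ", scores["test_recall"].mean())
```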
## 5 Related Work
The goal of this challenge dataset is to develop methods to introspect a given MWP in order to identify how an LLM (in this case ChatGPT) will perform. Recent research in this area has examined how MWPs can be solved by providing a step-by-step derivation [14, 15, 16, 17]. While these approaches provide insight into potential errors that can lead to incorrect results, predicting such failures in advance has not been studied in this prior work. Further, the methods of the aforementioned research are specific to the algorithmic approach. Work resulting from the use of our challenge dataset could lead to solutions that are agnostic to the underlying MWP solver, as we treat ChatGPT as a black box. We also note that, if such efforts to introspect MWPs are successful, it would likely complement a line of work dealing with "chain of thought reasoning" for LLMs [5, 6]
\begin{table}
\begin{tabular}{|c|c|c c c|c c c|}
\hline
Version of & Model & Incorr. & Incorr. & Incorr. & Corr. & Corr. & Corr. \\
ChatGPT & Type & Prec. & Recall & F1 & Prec. & Recall & F1 \\
\hline \hline
Jan. & RF & 0.90 & 0.88 & 0.89 & 0.34 & 0.41 & 0.37 \\
(No work) & XGBoost & 0.95 & 0.22 & 0.36 & 0.16 & 0.93 & 0.26 \\
\hline
Feb. & RF & 0.94 & 0.89 & 0.91 & 0.47 & 0.63 & 0.54 \\
(No work) & XGBoost & 0.98 & 0.35 & 0.51 & 0.18 & 0.95 & 0.31 \\
\hline
Feb. & RF & 0.78 & 0.69 & 0.73 & 0.74 & 0.82 & 0.78 \\
(Showing work) & XGBoost & 0.77 & 0.59 & 0.67 & 0.69 & 0.83 & 0.75 \\
\hline
\end{tabular}
\end{table}
Table 2: Performance prediction baseline models using ground-truth equations.
Figure 5: Increase in probability of an incorrect response as a function of the number of addition operations (prior probability shown with dashed line, \(95\%\) confidence intervals, linear regression with \(R^{2}=0.821\) for January, \(R^{2}=0.870\) for February without showing work and \(R^{2}=0.915\) for February with showing work).
which may inform better ways to generate MWP input into an LLM (e.g., an MWP with fewer additions may be decomposed into smaller problems). While some of this work also studied LLM performance on Math Word Problems (MWPs), it only looked at how various prompting techniques could improve performance rather than the underlying characteristics of the MWP that lead to degraded performance of the LLM.
## 6 Future Work
Understanding the performance of commercial black-box LLMs will be an important topic as they will likely become widely used for both commercial and research purposes. Further future directions would also include an examination of ChatGPT performance on other MWP datasets [13], investigating ChatGPT's nondeterminism, and exploring these studies on upcoming commercial LLMs to be released by companies such as Alphabet and Meta.
## Acknowledgments
Some of the authors have been funded by the ASU Fulton Schools of Engineering.
|
2307.00708 | Domain control and periodic poling of epitaxial ScAlN | ScAlN is an emerging ferroelectric material that possesses large band gap,
strong piezoelectricity, and holds great promises for enhanced \chi^{(2)}
nonlinearity. In this study, we demonstrate high-fidelity ferroelectric domain
switching and periodic poling of Al-polar ScAlN thin film epitaxially grown on
c-axis sapphire substrate using gallium nitride as a buffer layer. Uniform
poling of ScAlN with periods ranging from 2 um to 0.4 um is realized. The
ability to lithographically control the polarization of epitaxial ScAlN
presents a critical advance for its further exploitation in ferroelectric
storage and nonlinear optics applications. | Fengyan Yang, Ding Wang, Ping Wang, Juanjuan Lu, Zetian Mi, Hong X. Tang | 2023-07-03T02:10:17Z | http://arxiv.org/abs/2307.00708v2 | # Domain control and periodic poling of epitaxial ScAlN
###### Abstract
ScAlN is an emerging ferroelectric material that possesses a large band gap and strong piezoelectricity, and holds great promise for enhanced \(\chi^{(2)}\) nonlinearity. In this study, we demonstrate high-fidelity ferroelectric domain switching and periodic poling of Al-polar ScAlN thin film epitaxially grown on c-axis sapphire substrate using gallium nitride as a buffer layer. Uniform poling of ScAlN with periods ranging from 2 \(\upmu\)m to 0.4 \(\upmu\)m is realized. The ability to lithographically control the polarization of epitaxial ScAlN presents a critical advance for its further exploitation in ferroelectric storage and nonlinear optics applications.
Alloying scandium (Sc) with aluminum nitride (AlN) results in an innovative class of ferroelectric nitride semiconductor, Sc\({}_{x}\)Al\({}_{1-x}\)N, that holds great potential for electronic and photonic applications. Single crystalline ScAlN can be grown on c-axis sapphire substrate with gallium nitride (GaN) as a buffer layer via molecular beam epitaxy [1; 2]. The Sc composition is tunable, which can be leveraged to vary the bandgap and refractive index. ScAlN possesses unique properties such as high dielectric strength, large piezoelectric constants, and low coercive fields in comparison to AlN [3; 4]. These properties make ScAlN attractive for various applications in electronics, including heterostructure field effect transistors (HFETs), piezoelectric and ferroelectric devices, and high-frequency acoustic resonators [5; 6; 7; 8; 9].
In particular, ScAlN promises a high second-order nonlinear optical susceptibility \(\chi^{(2)}\), which was reported to increase at higher Sc concentration and can be nearly two times as large as that of LiNbO\({}_{3}\) when the Sc concentration reaches 20% [10]. This particular property leads to enhanced \(\chi^{(2)}\) nonlinear interaction, for example, in second harmonic generators [11; 12; 13] and optical parametric oscillators [14; 15]. Moreover, the Sc alloying in AlN significantly decreases the coercive field to below the dielectric breakdown field, making it possible to flip the polarization [8]. Therefore, if periodically flipping and maintaining the polarization is achievable on a ScAlN platform, quasi-phase matching for \(\chi^{(2)}\) processes can be realized, further enhancing the effective coupling strength and facilitating numerous nanophotonic applications such as \(\chi^{(2)}\) frequency conversion [12; 13; 15], Pockels combs [16], and cascaded \(\chi^{(2)}\) and \(\chi^{(3)}\) nonlinearities [17; 18; 19; 20; 21].
This article presents a study of the ferroelectric switching properties of epitaxial Al-polar ScAlN thin film grown on a GaN-buffered sapphire template, which is highly compatible with standard III-nitride semiconductor fabrication processes. The ferroelectric switching current was extracted using a custom setup based on Positive-Up-Negative-Down (PUND) measurement [22; 23], and the coercive field was found to be 6 MV/cm. We further demonstrate high-fidelity and uniform ScAlN poling with periods ranging from 2 \(\upmu\)m to 0.4 \(\upmu\)m. Reaching such short poling periods is a prerequisite for second harmonic generation in the deep-visible and even ultraviolet wavelength range, which remains challenging due to the stringent requirements on poling periods [24; 25; 26]. The submicron poling periods demonstrated in this article provide an opportunity for mirrorless OPO [27; 28; 29], a type of optical parametric oscillator that does not require an optical cavity, which simplifies the design and makes the device more robust against environmental disturbance.
The starting wafer consists of a 100 nm-thick Al-polar ScAlN film grown on a c-axis sapphire substrate via Molecular Beam Epitaxy (MBE) using highly Si-doped GaN as a buffer layer [1]. Fig. 1(a) shows the wurtzite crystal structure of Al-polar ScAlN alloyed with a 25% Sc concentration, which was set as our target concentration in growth to allow a better lattice match to GaN and reduce dislocation density [6]. An atomic-force-microscopy (AFM) scan over a 5 \(\upmu\)m\(\times\)5 \(\upmu\)m area reveals a smooth surface with a root-mean-squared roughness as low as 0.6 nm, as indicated in Fig. 1(b). The layer composition of the wafer was characterized by a scanning electron microscope (SEM) equipped with energy-dispersive X-ray spectroscopy (EDS). Fig. 1(c) shows the cross-sectional SEM image of the Sc\({}_{0.25}\)Al\({}_{0.75}\)N/GaN heterostructure. The element distribution was obtained by EDS mapping at 4.088 keV, 1.486 keV, and 1.098 keV, which correspond to emission lines of Sc K\(\alpha\), Al K\(\alpha\), and Ga L\(\alpha\), respectively, indicating well-defined boundaries at all growth interfaces.
The ferroelectric switching property of ScAlN was characterized by PUND measurement, where a series of programmed voltage pulses was applied to the film successively to extract the ferroelectric switching current \(I_{f}\). Electrodes of varying designs were patterned on the ScAlN chip through nickel evaporation, followed by a liftoff process. The underlying GaN layer, which has low resistivity due to the high Si doping concentration, serves as the counter electrode. Silver paste was applied at the edge of the chip, connecting the Si-doped GaN layer with an aluminum holder. Fig. 1(d) illustrates the PUND measurement setup. An arbitrary waveform generator (AWG) was programmed to generate PUND pulses, which were amplified by a linear voltage amplifier, and then applied to the nickel electrodes via an ultra-sharp probe tip. The current induced by the electric field was converted to a voltage signal through a low-noise current amplifier, and an oscilloscope was used to measure the output voltage.
Fig. 2(a) displays the applied voltage pulses and circuit current in the PUND measurement. The triangular-shaped
voltage pulses have a peak voltage of \(\pm 62\) V and a pulse length of 200 \(\upmu\)s. A negative voltage pulse was applied to initialize the polarization of the film, followed by positive ('P') and up ('U') pulses. In addition to the polarization switching current \(I_{f}\), parasitic currents, including the displacement current \(I_{c}=C\frac{dV}{dt}\) and the static leakage current \(I_{l}\), contribute to the measured \(I_{p}\). Since the polarization should be fully reversed during the 'P' pulse, which was chosen with a relatively long duration, no ferroelectric current should be present in \(I_{u}\). Therefore, the net forward switching current \(I_{f}\) can be obtained by subtracting \(I_{u}\) from \(I_{p}\). Similarly, the backward switching current can be extracted from the negative ('N') and down ('D') pulses, where \(I_{b}=I_{n}-I_{d}\).
The ferroelectric current density can be calculated by \(J=I/A\), where \(A\) is the area of the nickel electrode. The electric field applied on the ScAlN film can be approximated as \(E=V/d\), with \(d=100\) nm. The time-dependent polarization \(P\) of the film can be found by numerically integrating \(J\) over time. The resulting \(J-E\) and \(P-E\) relationships are shown in Fig. 2(b), represented by red and purple curves, respectively. The \(J_{f}\) and \(J_{b}\) values exhibit large asymmetry, which is mainly attributed to the asymmetric electrode configurations and the enhanced leakage current \(I_{l}\) during forward switching, as reported in previous work [30].
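The extraction of the switching current and polarization described above amounts to a subtraction and a numerical time integral; a sketch with synthetic traces is given below (the waveforms and the electrode area are placeholders, not measured data).

```python
import numpy as np

def switching_polarization(t, i_p, i_u, area_cm2):
    """Net switching current I_f = I_p - I_u and switched polarization.

    t        : shared time axis (s)
    i_p, i_u : currents (A) recorded during the 'P' and 'U' pulses
    Returns (I_f, P_sw) with P_sw in uC/cm^2 from integrating J = I_f/A.
    """
    i_f = i_p - i_u                       # removes displacement + leakage terms
    j_f = i_f / area_cm2                  # current density (A/cm^2)
    p_sw = np.trapz(j_f, t) * 1e6         # C/cm^2 -> uC/cm^2
    return i_f, p_sw

# Synthetic example traces (placeholder numbers only):
t = np.linspace(0, 200e-6, 2001)
i_p = 1e-3 * np.exp(-((t - 1e-4) / 2e-5) ** 2)     # fake switching bump
i_u = 2e-4 * np.exp(-((t - 1e-4) / 2e-5) ** 2)     # fake non-switching trace
print(switching_polarization(t, i_p, i_u, area_cm2=1e-4)[1])
```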
In order to obtain reliable measurements, we focused on the backward switched polarization \(P_{b}\) and swept the peak voltage with fixed pulse length. The results are shown in Fig. 2(c). The ScAlN film under investigation exhibited a coercive field of approximately 6 MV/cm and a maximum backward switched polarization of 250 \(\upmu\)C/cm\({}^{2}\). Remnant polarization of this film is thus estimated to be \(P_{b,max}/2=125\) \(\upmu\)C/cm\({}^{2}\).
We employed trapezoidal voltage pulses with a peak voltage of 62 V and a pulse width of 500 \(\upmu\)s to perform the poling process, which fully reverses the ferroelectric polarization from Al-polar to N-polar. After poling, the top nickel electrodes were removed by hydrochloric acid, and then Piezoresponse Force Microscopy (PFM) was utilized to map out the N-polar and Al-polar domains, as shown in Fig. 2(d). The significant phase difference between domains confirms the effective poling of the ScAlN thin film.
To verify the polarization retention capability of the ScAlN film, we first applied a poling voltage pulse to a selected device. After certain periods of time, we repeated the application of the same pulse and recorded the current response, as shown in Fig. 3(a). Notably, even after a week, the current response exhibited excellent stability, showing no discernible increase in the probe current. This suggests negligible polarization loss during the testing period and long-term polarization retention.
High fidelity and flexibility of domain engineering are also required for realizing quasi-phase matching in quadratic nonlinear optical devices. We first designed and patterned a "YALE"-shaped top electrode, consisting of the four letters, which are connected through a bus electrode. The bus is connected to a 20 \(\upmu\)m\(\times\)20 \(\upmu\)m square electrode for convenient contact with the probe tip. Multiple voltage pulses with a 62 V peak voltage and 500 \(\upmu\)s length were applied to the contact electrode at room temperature to ensure thorough polarization switching. After poling, the N-polar domains were more susceptible to the HCl solution and could be further exposed via HCl etching for up to 20 minutes. The significant contrast between the original and poled domains can be revealed via scanning electron microscopy (SEM), as shown in Fig. 3(b). The well-defined boundaries and uniformity of the reversed domains provide strong evidence for high-quality poling of the ScAlN film via lithographically defined patterns.
Finally, to demonstrate periodic poling of ScAlN, electrode arrays were patterned with varying, challengingly small periods \(\Lambda\) of 2 \(\upmu\)m, 1 \(\upmu\)m, 0.8 \(\upmu\)m, 0.6 \(\upmu\)m and 0.4 \(\upmu\)m. The same poling voltage pulses used previously were applied to these electrodes, followed by the same post-processing steps. The resulting exposed domains are presented in Fig. 3(c), where the poling periods agree well with our designed electrode patterns. The duty cycles for the electrode arrays were designed and fabricated to be 25% for all the periods, but the achieved poling duty cycles were 31%, 35%, 41%, 54% and 75% as the period decreases, which we believe can be mitigated by optimizing the pulse length and electrode width for smaller periods accordingly. Both the smallest poling period and the poling uniformity are comparable to those achieved in the state-of-the-art LiNbO\({}_{3}\) platform with lithographically defined electrodes [25]. We hereby demonstrate our capability of realizing periodic poling of thin-film ScAlN with arbitrary periods, especially sub-micron periods, which in principle can fulfill quasi-phase matching for \(\chi^{(2)}\) frequency conversion in any wavelength range within ScAlN's transparency window. Importantly, the submicron poling period will unlock the possibility of mirrorless optical parametric oscillation, which, since its first proposal in 1966 [27], has not been demonstrated in any integrated photonic platform.
In conclusion, ScAlN is a promising material for photonic applications due to the flexibility in domain engineering and the high-fidelity periodic poling reported here. The moderate coercive field of 6 MV/cm makes it possible to pole thicker ScAlN films, which provide better optical confinement when patterned into waveguides. A photonic waveguide based on 500 nm sputtered ScAlN has been reported before with a propagation loss of 9\(\pm\)2 dB/cm at 1550 nm [31], which still leaves room for fabrication optimization to achieve losses comparable to AlN photonic circuits [32; 33; 11]. With future development and growth optimization of waveguide-compatible films, it is feasible to realize low-loss ScAlN photonic circuits for a wide range of integrated \(\chi^{(2)}\) nonlinear optics applications, bringing significant advances to III-nitride photonics [34].
## Acknowledgment
This project is supported in part by Semiconductor Research Corporation (SRC) and Defense Advanced Research Projects Agency (DARPA) under the COmpact Front-end Filters at the ElEment-level (COFFEE) program. FY and HXT acknowledge partial funding from National Science Foundation (NSF) Center for Quantum network (CQN) under grant number EEC-1941583. The facilities used for device fabrication were supported by the Yale SEAS Cleanroom and the Yale Institute for Nanoscience and Quantum Engineering (YINQE). The authors would like to express their gratitude to Dr. Yong Sun, Dr. Michael Rooks, Sean Rinehart, and Kelly Woods for their invaluable assistance in device fabrication.
## Data Availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
|
2306.00421 | Introduction to Medical Imaging Informatics | Medical imaging informatics is a rapidly growing field that combines the
principles of medical imaging and informatics to improve the acquisition,
management, and interpretation of medical images. This chapter introduces the
basic concepts of medical imaging informatics, including image processing,
feature engineering, and machine learning. It also discusses the recent
advancements in computer vision and deep learning technologies and how they are
used to develop new quantitative image markers and prediction models for
disease detection, diagnosis, and prognosis prediction. By covering the basic
knowledge of medical imaging informatics, this chapter provides a foundation
for understanding the role of informatics in medicine and its potential impact
on patient care. | Md. Zihad Bin Jahangir, Ruksat Hossain, Riadul Islam, MD Abdullah Al Nasim, Md. Mahim Anjum Haque, Md Jahangir Alam, Sajedul Talukder | 2023-06-01T07:53:11Z | http://arxiv.org/abs/2306.00421v3 | # Introduction to Medical Imaging Informatics
###### Abstract
Medical imaging informatics is a rapidly growing field that combines the principles of medical imaging and informatics to improve the acquisition, management, and interpretation of medical images. This chapter introduces the basic concepts of medical imaging informatics, including image processing, feature engineering, and machine learning. It also discusses the recent advancements in computer vision and deep learning technologies and how they are used to develop new quantitative image markers and prediction models for disease detection, diagnosis, and prognosis prediction. By covering the basic knowledge of medical imaging informatics, this chapter provides a foundation for understanding the role of informatics in medicine and its potential impact on patient care.
## 3 Medical imaging informatics
Medical Imaging Informatics is a revolutionary subset of medical informatics that encompasses image coding, image processing, image distribution, connection to image acquisition devices with analogue or digital output, and the communication of information (data) pivotal to delivering appropriate patient care critical for health and well-being. The revolution in medical imaging and biomedical informatics has changed the confined way of research and the nature of medicine. Medical imaging plays a pivotal role at all major levels of health care and in a variety of medical settings: technologies like X-rays, mammography, computed tomography (CT scans), and ultrasonography are at the cutting edge of medical imaging across clinical specialities. Recent advances in medical imaging technology, e.g. Picture Archive and Communication Systems (PACS), image-guided surgery and therapy, computer-aided diagnosis (CAD), and the electronic Patient Record (ePR) with image distribution, have propelled imaging informatics as a discipline to manage and synthesize knowledge from medical images for effective and efficient patient care as well as outcomes [2]. In contrast, biomedical informatics incorporates a wide range of domain-specific methodologies. For the betterment of human health, biomedical informatics is the integrative field that studies the effective use of biomedical data for scientific inquiry, problem-solving and decision-making. Biomedical informatics integrates computer applications ranging from the processing of very low-level narrations to extremely high-level ones, which are completely and systematically different [3].
Medical imaging informatics is a field that involves the use of computer science, information science, and engineering to manage, analyze, and interpret medical images and data. This includes developing and implementing systems and software to store, retrieve, and analyze medical images and integrating these images with electronic medical records and other health information systems. Artificial intelligence and machine learning are increasingly used in medical image informatics to improve the accuracy, efficiency, and effectiveness of medical imaging [15].
Here are some of the most popular applications for AI in medical image informatics.
1. Image analysis and interpretation: Using AI in image analysis and interpretation can improve the speed and accuracy of diagnosis and help healthcare providers make more informed decisions about treatment.
2. Computer-aided diagnosis: AI can assist radiologists and other healthcare providers in interpreting medical images and identifying abnormalities, which can help improve the accuracy and efficiency of diagnosis.
Figure 1: Medical Imaging Informatics
3. Image-guided surgery: AI can be used to help surgeons navigate during procedures by providing real-time image guidance and information about the location and orientation of surgical instruments.
4. Predictive analytics: AI can analyze large amounts of data from medical images and other sources to make predictions about patient outcomes or identify potential risk factors for specific conditions.
### Types of medical imaging modalities
There are several different types of medical imaging modalities, each of which uses different methods and technologies to produce images of the body. Some of the most common medical imaging modalities include:
1. X-ray: X-ray imaging uses a small amount of ionizing radiation to produce images of the body's internal structures. X-rays are commonly used to visualize bones and can be used to diagnose fractures, osteoporosis, and other bone conditions.
2. CT (computed tomography): CT uses X-rays and a computer to produce detailed images of the body's internal structures. CT scans are often used to diagnose cancer, cardiovascular disease, and other conditions.
3. MRI (magnetic resonance imaging): MRI uses a strong magnetic field and radio waves to produce detailed images of the body's soft tissues, such as muscles, tendons, and organs. MRI is often used to diagnose brain and spinal cord injuries and conditions such as multiple sclerosis and cancer.
4. Ultrasound: Ultrasound uses high-frequency sound waves to produce images of the body's internal structures. Ultrasound is often used to visualize the fetus during pregnancy and the heart, blood vessels, and other organs.
5. PET (positron emission tomography): PET uses small amounts of radioactive material to produce images of the body's metabolic activity. PET scans are often used to diagnose cancer and other conditions.
### Image storage and retrieval
Image storage and retrieval is an essential aspect of medical imaging informatics, as it involves managing and organizing medical images in a way that allows them to be easily accessed by authorized users. Medical images can be stored electronically in various formats, including DICOM (Digital Imaging and Communications in Medicine), a standard format for storing and transmitting medical images. These images can be stored on a central server or in the cloud and accessed by authorized users, such as doctors and technologists, from any location. There are several benefits to the
| **Name** | **Technology** | **Anatomies** | **Dimension** |
| --- | --- | --- | --- |
| X-ray | X-ray imaging uses a small amount of ionizing radiation to create images of the inside of the body, which can be used to visualize bones and some soft tissues [16] | Most organs | 2D, 2D+t |
| CT | CT scans use a series of X-rays to create detailed, cross-sectional images of the body; CT scans visualize organs, bones, and other tissues in great detail [17] | Most organs | 2D, 3D, 4D |
| Ultrasound | Uses high-frequency sound waves to create images of the inside of the body; often used to visualize the abdomen, pelvis, and other internal organs [16] | Most organs | 2D, 2D+t, 3D, 4D |
| MRI | Uses a powerful magnetic field and radio waves to create detailed images of the body's tissues and organs | Most organs | 3D, 4D |
| Nuclear | Utilizes external detectors or gamma cameras to detect the emission of gamma rays from ingested radioisotopes [16] | All organs with radioactive tracer uptake | 2D, 3D, 4D |
| Microscopy | Typically uses an illumination source and lenses to magnify specimens before capturing an image [16] | Primarily biopsies and surgical specimens | 2D, 3D, 4D |

Table 1: SOME CHARACTERISTICS OF MEDICAL IMAGING MODALITIES
electronic storage of medical images: reduced physical storage space, faster access to images, improved organization, and enhanced security.
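As a concrete illustration of how such archives can be accessed programmatically, the sketch below reads DICOM files with the open-source pydicom library and indexes them by instance identifier; the folder layout and field choices are illustrative assumptions, not a prescribed archive design.

```python
# A minimal sketch of retrieving stored studies with pydicom (an assumed
# toolkit choice); real PACS access adds networking, authentication, and audit.
from pathlib import Path

import pydicom

def load_study(folder: str) -> dict:
    """Index every DICOM file in a folder by its SOPInstanceUID."""
    index = {}
    for path in Path(folder).glob("*.dcm"):
        ds = pydicom.dcmread(path)            # parse header + pixel data
        index[ds.SOPInstanceUID] = {
            "patient_id": ds.get("PatientID", "unknown"),
            "modality": ds.get("Modality", "unknown"),
            "pixels": ds.pixel_array,         # image as a numpy array
        }
    return index
```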
### Image analysis and interpretation
Image analysis and interpretation use algorithms and software to analyze and interpret medical images in order to extract meaningful information and insights. This is an important aspect of medical imaging informatics, as it can help doctors and other healthcare professionals more accurately diagnose and treat medical conditions. There are several ways in which image analysis and interpretation can be used in medical imaging:
1. Automated image analysis: Algorithms and software can automatically analyze medical images and identify specific features or abnormalities. For example, an algorithm might be trained to detect tumors in CT scans.
2. Computer-aided diagnosis: Image analysis and interpretation can assist doctors in diagnosing by providing additional information and insights that might not be apparent from the raw images alone. For example, an algorithm might be used to identify patterns in an MRI scan that suggest the presence of a particular condition.
3. Artificial intelligence and machine learning: Machine learning algorithms can improve image analysis and interpretation by learning from large datasets of labeled images. This can improve the accuracy and efficiency of the analysis process.
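To make the automated image analysis idea above concrete, the following toy sketch flags unusually bright regions in a grayscale scan with Otsu thresholding and connected components from scikit-image; it is a didactic illustration, not a clinical CAD tool, and the minimum-area cutoff is an arbitrary assumption.

```python
# A toy illustration of automated image analysis: detect unusually bright
# regions via a data-driven threshold and connected-component grouping.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def find_bright_regions(scan: np.ndarray, min_area: int = 50):
    thresh = threshold_otsu(scan)              # data-driven intensity cutoff
    mask = scan > thresh                       # candidate "abnormal" pixels
    labeled = label(mask)                      # group pixels into regions
    return [r.bbox for r in regionprops(labeled) if r.area >= min_area]

# Example: a synthetic 2D "scan" with one bright blob
scan = np.zeros((128, 128))
scan[40:60, 70:90] = 1.0
print(find_bright_regions(scan))               # -> [(40, 70, 60, 90)]
```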
## 4 Image Processing
The fields of medical imaging (MI) and image processing (IP) have emerged as among the fastest-developing research areas in computer vision. A large array of techniques is employed in the subsection of digital signal processing known as "image processing" to improve or edit digital pictures in order to make them more useful for a variety of purposes. The study of computer vision focuses on how computers can "understand" photos, movies, or 3D volumes by extracting the needed elements and properties from the images using a variety of algorithms and methodologies [4]. It serves as the foundation for model training. Various image editing techniques, such as super-resolution, denoising, dehazing, deraining, and deblurring, are referred to as image processing. One must first understand the concept of a picture before learning how it is processed. Image processing is a technique for applying certain operations to an image in order to produce an improved image or to pull out any meaningful information from it. It is a form of processing in which an image serves as the input, and the output can be either another image or attributes and properties associated with the input image. It is an advanced technology that is developing swiftly and is a major area of study for both engineers and computer scientists. The terms analogue and digital image processing refer to two different categories of image processing. Analogue image processing can be employed for tangible copies like prints and pictures; when interpreting such images, image analysts employ a variety of interpretive fundamentals. Digital image processing techniques enable picture alteration through the use of computers. Historically, the usefulness of computer vision as a tool was quite limited, since most prospective applications had to wait for affordable memory technology and adequate processing power [5]. As image processing is a fundamental part of model training, image noise needs to be removed for better feeding into the model.

Digital image processing (DIP) consists of 11 core phases, each of which may include further steps. The following is a description of the fundamental processes in digital image processing. Image acquisition can be as simple as obtaining a picture that is already digitally stored; pre-processing, involving scaling, etc., usually occurs at the imagery collection stage. One of the easiest and most attractive aspects of digital image processing is picture enhancement. The basic concept underlying enhancement techniques is to either emphasize certain elements of interest within an image or reveal concealed information, for instance by shifting the luminance. The process of photo restoration involves improving the appearance of an image. Picture restoration is objective, in contrast to enhancement, which is intuitive, because restoration procedures often are based on mathematical or probabilistic models of image deterioration. The enormous expansion in the use of digital photographs on the Internet has led to a growing relevance for the field of color image processing, which might involve, among other things, digital color modeling and processing. Wavelets serve as the foundation for expressing pictures at different resolution levels, splitting pictures into progressively smaller areas for data compression and pyramidal depiction.
Compression techniques involve ways to lower the amount of space needed to store or transmit a picture. It is essential to compress data, particularly for web-based applications. The field of morphological processing focuses on methods for separating image elements that help represent and describe shape. Segmentation techniques separate a picture into its elements or objects. Generally speaking, autonomous segmentation is one of the most challenging tasks in digital image processing. Imaging problems that need individual object identification can be successfully solved with the help of a robust segmentation approach.
A recently advanced image processing technique is MAXIM [6], which is built from multi-axis MLPs. It is employed for low-level vision tasks to facilitate improved pictures. The method operates using a UNet-shaped hierarchical composition and focuses on long-distance interactions.
It adopts two MLP-based building blocks: (i) a multi-axis gated MLP and (ii) a cross-gating block. The architecture of MAXIM can be observed in Figure 2: each block (Figure 2b) follows an encoder-decoder design with bottlenecks, residual channel attention blocks (RCAB), and filtered skip connections, while Figure 2a shows the MAXIM backbone with the cross-gating block (Figure 2c). The proposed model achieves performance of the highest calibre on more than ten benchmark datasets for image processing tasks, including denoising, deblurring, deraining, and enhancement, whilst using fewer or comparable numbers of parameters. The complexity of the multi-axis gated MLP is given in equation (1):
\[\Omega(\mathrm{MAB})=d^{2}HWC\,(\text{global gMLP})+b^{2}HWC\,(\text{local gMLP})+10HWC^{2}\,(\text{dense layers})\tag{1}\]
Losses accumulate across stages and scales during MAXIM's end-to-end training, as in the following equation (2):
\[\mathcal{L}=\sum_{s=1}^{S}\sum_{n=1}^{N}\left[\mathcal{L}_{\mathrm{char}}(R_{s,n},T_{n})+\mathcal{L}_{\mathrm{freq}}(R_{s,n},T_{n})\right]\tag{2}\]
where \(T_{n}\) denotes the multi-scale target images, and \(\mathcal{L}_{\mathrm{char}}\) and \(\mathcal{L}_{\mathrm{freq}}\) are the Charbonnier loss and the frequency reconstruction loss, respectively.
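For readers who prefer code, here is a minimal PyTorch sketch of the two loss terms in equation (2), assuming the standard Charbonnier form and an L1 penalty between 2D FFTs; the epsilon value is an illustrative assumption.

```python
# Sketch of the Charbonnier loss and a frequency reconstruction loss
# (L1 distance between 2D FFTs of prediction and target).
import torch

def charbonnier_loss(pred, target, eps: float = 1e-3):
    # Smooth L1-like penalty; eps keeps the gradient finite near zero.
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

def frequency_loss(pred, target):
    # Compare the images in the Fourier domain.
    return (torch.fft.fft2(pred) - torch.fft.fft2(target)).abs().mean()

pred = torch.rand(1, 3, 64, 64)      # toy restored image
target = torch.rand(1, 3, 64, 64)    # toy ground truth
loss = charbonnier_loss(pred, target) + frequency_loss(pred, target)
```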
In a different work, Tolle et al. suggested a novel Bayesian approach to the deep image prior (DIP) by utilizing mean-field variational inference (MFVI) [16]. It has mainly been used in denoising, super-resolution, and inpainting for image processing. This approach permits per-pixel uncertainty quantification through a distribution over the neural weights and omits the necessity of early stopping. Bayesian optimization with Gaussian process regression (GPR) is used to optimize the parameters for reconstruction accuracy. The authors demonstrate that a poorly chosen prior results in inferior accuracy and calibration, and that optimizing the weight prior for each task is adequate. The mathematical notion behind the MFVI DIP is shown below (Figure 3).
Figure 3: Demonstration of the mathematical notion behind MFVI DIP [5]

Figure 2: MAXIM architecture (a. Backbone, b. Encoder/Decoder/Bottleneck, c. Cross-Gating Block) [4]

Since the image processing task is crucial, Chen and his research team built a pre-trained model named the image processing transformer (IPT), a representative of the transformer and its variant architectures. In this paper, the IPT model is applied to low-level computer vision tasks as a new pre-trained model [6]; in essence, it is trained on the popular ImageNet benchmark dataset. The IPT architecture consists of heads, a transformer encoder/decoder, and tails, as shown in Figure 4. This new pre-trained model proves well suited to various image processing tasks and therefore attains the desired scores after tuning its parameters.
In another study, Singh did an empirical analysis that weighed the impact of image segmentation on skin lesion detection. Here, 10 deep-learning-based models to identify and classify the eight existing image segmentation methods. In this study, image enhancement and image morphology were used for picture processing for getting image classification performance and employed a double experimental design. The ResNet50 model has given the best performance with 91.9% accuracy on the ISIC2017 dataset and compared to original photos with producing segmented images [9]. In the subject of the Internet of Medical Things (IoMT), one of the vital aspects is medical image processing. In recent times, deep learning algorithms have produced effective results on problems requiring the identification of medical images. As most of the time, we get some problems when we use typical deep learning algorithms only for causing little training data and domain mismatch. In the recent case which was Covid19, we could not detect covid19 by perusing computed tomography (CT) images. For this reason, Niu et. al. developed a well-known approach called distant domain transfer learning (DDTL) [8]. It also has privacy policies so that training data can not be easily accessed. In addition, It consists of two components: the reduced-size Unet segmentation model and the Distant Feature Fusion (DFF) classification model as shown in Figure 4. It has achieved better accuracy when we have tested unseen data using evaluation metrics.
Zaghl et al. focused on the classification of melanoma skin cancer using only image processing techniques. Their approach comprises four steps: (i) enhancement algorithms, (ii) a segmentation stage, (iii) feature extraction, and finally (iv) measuring the total dermoscopy value (TDV) for cancer classification. Day by day, image processing techniques thus grow more important for obtaining proper results in medical image analysis [9]. For both diagnosis and therapy, a variety of medical imaging modalities are employed; the most popular include MRI, X-ray, ultrasound, radionuclide, and optical imaging. Nagornov et al. suggest a residue number system (RNS) based design for FPGAs (field-programmable gate arrays). For the purpose of processing 2D and 3D medical images, the discrete wavelet transform is one way to apply different fusion, denoising, and compression techniques. With the evolution of scanning technology and digital gadgets, medical imaging systems provide more accurate images [10, 19]. The FPGA accelerators treat wavelet processing (WP) with scaled filter coefficients (SFC) and parallel computing to support high-quality 3D picture systems. This method enhances device performance by 2.89-3.59 times while raising the hardware resource usage by 1.18-3.29 times. Figure 6 shows wavelet processing for medical images.
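As a small illustration of the discrete wavelet transform mentioned above, the sketch below performs a one-level 2D decomposition and a classic soft-threshold denoising step using the PyWavelets library (an assumed dependency); the threshold value and the random stand-in image are illustrative only.

```python
# A minimal sketch of 2D wavelet decomposition for denoising/compression.
import numpy as np
import pywt

image = np.random.rand(256, 256)                      # stand-in for a CT slice
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")           # one-level 2D DWT

threshold = 0.1                                       # illustrative value
cH, cV, cD = (pywt.threshold(c, threshold, mode="soft") for c in (cH, cV, cD))

denoised = pywt.idwt2((cA, (cH, cV, cD)), "haar")     # reconstruct the image
```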
Computerized diagnostic picture segmentation is crucial for facilitating and accelerating healthcare practices' screening and therapeutic processes. Therefore, Karimzadeh and his research team built a novel model with morphology processing methods on a CNN and PCA architecture. When a shape-based loss function was applied in lieu of binary cross-entropy, the Dice scores improved from 0.81\(\pm\)0.03 and 0.74\(\pm\)0.07 to 0.86\(\pm\)0.03 and 0.87\(\pm\)0.05, respectively, for segmentation from MR and CT images. With the suggested PCA-based loss function there were no outliers or patchy, unrealistic results [11].

Figure 4: The suggested schematic of the image processing transformer (IPT). The IPT model includes an encoder and a decoder in addition to many heads and tails for various functions that share a common transformer block [17].
## 5 Feature Engineering
With the process of invention, innovation, and diffusion, scientific advancement has reached a once-inconceivable level. However, feature engineering, one such advancement, has been at the center of attention for a while. Complications arise when a huge amount of data is available for artificial intelligence and machine learning techniques, and this is where the necessity for feature engineering appears. Feature engineering is a multistep process consisting of the creation, extraction, and selection of the most informative variables. This process converts raw data into features, which makes any case easy to analyze.
While defining features: when we work with observations in tabular data, for instance, each observation has attributes that depict something meaningful about the observation, referred to as features. The art of feature engineering varies among data scientists. The steps to perform feature engineering are given below [12]:
Raw data collection: Collecting data from different sources which may encompass unstructured, structured, textual data.
Data Processing: The process consists of raw data manipulation and unification from different sources, which may involve data amplification, data augmentation, fusion, ingestion, stochastic simulation, and sampling error.
Feature Creation: Visualize and plot the data; if adding more data does not help, the data is filtered to create features to be used for modeling. This involves domain knowledge, instinct, and a lengthy process of trial and error. The human attention involved in administering this process significantly shapes the cost of model generation [20, 21].
Figure 5: Distant Feature Extraction (DFF) architecture [8]
Figure 6: Wavelet processing for medical images. [10]
| **Author & Year** | **Dataset** | **Image Processing Techniques** | **Model** | **Result** | **Conclusion** |
| --- | --- | --- | --- | --- | --- |
| Tu et al. (2022) | SIDD, DND, GoPro, HIDE, REDS, RealBlur-R/J, Rain13k, RESIDE (Indoor/Outdoor) | Denoising, deblurring, enhancement | MAXIM-3S, MAXIM-2S | Better PSNR and SSIM with MAXIM | Affordable and productive for image processing |
| Karimzadeh et al. | MR and CT segmentation data | Morphology processing with a shape-based loss | CNN with PCA-based loss | Dice scores improved from 0.81±0.03 and 0.74±0.07 to 0.86±0.03 and 0.87±0.05 | Shape-based loss avoids outlier and unrealistic estimations |
| Nagornov et al. | 2D and 3D medical images | Wavelet processing with scaled filter coefficients | RNS-based FPGA accelerator | Performance improved 2.89-3.59 times | The use of SFC and RNS improved the efficiency of 3D medical imaging |
Feature selection: Algorithms analyze and judge the features to determine which should be taken into account and which are redundant or irrelevant and should be removed.
Modeling: To evaluate the quality of the selected features, models are created, which may involve running learning algorithms with, for instance, cross-validation or wrapper methods.
Benchmark: A standard is set for the error-reduction rate and the improvement of the model's predictive accuracy, against which all variables are compared. This is the stage where data scientists with domain expertise run experiments and tests for benchmarking.
The acceptance or rejection of a predictive model is determined by feature engineering. In machine learning, feature engineering works on the data to create new variables that enhance model accuracy. To provide a prediction, different machine learning models, for instance, decision trees, random forests, neural networks, and gradient boosting machines, accept a feature vector, and new features are engineered from the provided feature set. Primarily, this process is manual and will differ for different kinds of data. The feature vector plays a crucial role in machine learning, as most machine learning performance depends on it [15]. A number of automated feature engineering tools are also available, offering, for instance, deep feature synthesis (for relational and temporal data), precise handling of time (keeping data safe from common label leakage problems; the prediction time can be specified row by row), and reusable feature primitives (custom primitives can be built and shared for reuse on any dataset) [16]. For better analysis, feature engineering can be seen as a generalization of mathematical optimization [18].
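To ground the selection and benchmarking steps above, the sketch below uses scikit-learn to keep the k statistically strongest features and compares cross-validated accuracy with and without selection; the dataset and k=10 are illustrative assumptions.

```python
# Feature selection + benchmarking sketch: compare a model on all features
# against the same model restricted to the 10 most informative features.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

baseline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
selected = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=10),
                         LogisticRegression(max_iter=1000))

print("all features:", cross_val_score(baseline, X, y, cv=5).mean())
print("top-10 only :", cross_val_score(selected, X, y, cv=5).mean())
```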
## 6 Machine Learning
Systems can learn and develop automatically based on experience through a technique called machine learning. Without specialist programming, machine learning is capable of performing tasks. Machine learning refers to the process of developing computer programs that can examine data and decide for themselves, via a set of algorithms, what should be done with that information [22].

Figure 7: Feature Engineering basic

Figure 8: Steps for feature engineering

The most prevalent class of algorithm is supervised machine learning. A supervised machine learning method uses prior information to interpret new input. While the system reads a fresh but comparable data set, supervised algorithms use examples from earlier, related training data sets to identify flaws within them and to predict future problems based on those instances. Often, training data sets are initially labeled by humans, and the system is then trained to recognize patterns associated with each pertinent training data set. After that, the system compares newly obtained data sets to the training data sets. Once the algorithm has access to sufficient training data sets for comparison, it can identify and anticipate certain problems. This type of machine learning can uncover defects that improve the relevant model by further comparing its selected response to the intended, corrected actions.

Unsupervised machine learning methodologies make use of unlabeled "raw" data. This raw data has no known flaws, so the system can only attempt to infer an action based on concealed, unlabeled flaws inside uncategorized data. Because the system lacks a predefined failure mode pattern or any suggestions for possible answers, it cannot determine the appropriate course of action in advance; nonetheless, it examines the information and attempts inferences based on irregularities or flaws found in the data.

Semi-supervised machine learning techniques integrate supervised and unsupervised learning by utilizing both labeled and unlabeled data sets for training. These data sets frequently include substantial volumes of unlabeled, uncategorized information, which the system then combines with smaller, explicitly labeled and classified data points on fault patterns.

Reinforcement machine learning algorithms employ trial-and-error behavior on a data set and reward themselves whenever the correct action is taken. As a consequence, the system uses a reinforcing technique in its decision-making mechanism. Based on clear reward feedback, it can quickly and automatically determine the optimum course of action within a specific data set. To boost the system's performance, this method directly employs reinforcement signals. Figure 9 shows a machine learning process in which these steps are maintained.
Machine learning is a subfield of artificial intelligence that focuses on developing algorithms and models that can learn from data and make predictions or take actions without being explicitly programmed. To perform machine learning, an algorithm is trained on a dataset. The algorithm can then make predictions or decisions based on the patterns it has learned from the data.
To understand it more, we can think of a story. Dr. Sarah was a radiologist at a busy hospital, responsible for interpreting medical images and diagnosing patients based on the images. She spent long hours poring over X-rays, CT scans, and MRIs, trying to identify abnormalities and make accurate diagnoses.
One day, Dr. Sarah was introduced to a new machine-learning model developed to assist with interpreting medical images. At first, she was sceptical, but as she started using the model, she was amazed by its accuracy and speed.
The machine learning model had been trained on thousands of medical images and the corresponding diagnoses made by expert radiologists. As a result, it could identify patterns and abnormalities in the images that were not immediately obvious to the human eye.
Dr. Sarah started using the machine learning model to assist with her diagnoses, which helped her identify issues more quickly and accurately than before. She could also see more patients in a day, thanks to the time the machine learning model saved her.
Figure 9: Machine Learning Process
### How machine learning model learn
A machine learning model learns by being trained on a dataset of labeled examples. The process of training a machine learning model involves providing the model with a large number of examples that have been labeled with the correct output (also known as the ground truth). The model uses these labeled examples to learn how to map input data to the correct output.
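The labeled-training idea above can be shown in a few lines; the following scikit-learn sketch fits a classifier on labeled examples and evaluates it on held-out labels, with the digits dataset standing in for real inputs.

```python
# Train on labeled examples, then check predictions against held-out labels.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                 # inputs + ground truth
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, model.predict(X_te)))
```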
### Types of machine learning
There are three main types of machine learning:
#### 6.2.1 Supervised learning.
In supervised learning, the algorithm is trained on labeled data, where the correct output is provided for each input. Supervised learning aims to make predictions or decisions based on the patterns learned from the data. Examples of supervised learning include predicting whether a customer will churn based on their past behavior or identifying the sentiment of a tweet as positive or negative.
#### 6.2.2 Unsupervised learning.
In unsupervised learning, the algorithm is not provided with labeled data and must discover patterns independently. Unsupervised learning aims to identify patterns or relationships in the data. Examples of unsupervised learning include clustering data points into groups based on their similarities or identifying fraudulent transactions based on unusual patterns in the data.
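As a concrete contrast with the supervised case, this small sketch clusters unlabeled points with k-means; no ground-truth labels are used, and the two synthetic blobs are illustrative stand-ins for real observations.

```python
# Unsupervised learning sketch: k-means discovers groups without labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (100, 2)),       # one blob near the origin
                  rng.normal(5, 1, (100, 2))])      # another blob around (5, 5)

kmeans = KMeans(n_clusters=2, n_init=10).fit(data)
print(kmeans.cluster_centers_)                      # two centers, found unlabeled
```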
#### 6.2.3 Reinforcement learning.
In reinforcement learning, the algorithm learns through trial and error, receiving rewards or penalties for certain actions. The goal of reinforcement learning is to learn the best action to take in a given situation to maximize a reward. Examples of reinforcement learning include training a robot to navigate through a maze by rewarding it for reaching the end and penalizing it for making incorrect turns or training a self-driving car to make decisions based on the environment and traffic conditions.
### Limitations of machine learning
Healthcare is one of the many areas that machine learning has the potential to disrupt. When creating and using these systems, developers and implementers should be aware of several machine-learning restrictions.
The quality and quantity of data available for training is one restriction. Machine learning algorithms need high-quality data to learn and produce precise predictions. Due to the complexity of the data, the need for more consistency in data collection, and the difficulty in getting significant quantities of patient data that are representative of the population, this can be problematic in the healthcare industry.
The models' interpretability is yet another drawback. Although machine learning models may produce precise predictions, it can be challenging to comprehend how they do so. This might make it challenging to pinpoint the mistakes' root causes and modify the model.
Another restriction is that machine learning algorithms are frequently created for specific tasks and may not generalize well to other activities. This can make it challenging to apply a model trained for one job to another.
Furthermore, the quality of machine learning models depends on the data they are trained on. If the data are skewed, the model will pick up on that bias and propagate it through its predictions. This is a prevalent issue in the healthcare industry, where inequalities exist and data collection and representation often lack diversity.
## 7 Deep Learning
Deep learning is a subfield of machine learning that involves using artificial neural networks with many layers (hence the term "deep") to learn and make decisions based on data. Deep learning has achieved remarkable success in many applications, including image and speech recognition, natural language processing, and self-driving cars.
One of the key benefits of deep learning is its ability to learn and make decisions based on raw, unstructured data, such as images or text. Traditional machine learning techniques often require that the data be manually extracted and
processed, which can be time-consuming and error-prone. On the other hand, deep learning can learn directly from the raw data, making it more efficient and accurate.
Deep learning is implemented using artificial neural networks inspired by how the human brain works. An artificial neural network consists of layers of interconnected "neurons," each receiving input and producing an output based on weights and biases. These weights and biases are adjusted during training to learn the relationships between the input data and the desired output [23].
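A bare-bones training loop makes the weights-and-biases description above concrete; the PyTorch sketch below builds a tiny two-layer network on random toy data, with sizes and learning rate chosen purely for illustration.

```python
# Minimal neural-network training: adjust weights and biases by gradient descent.
import torch
from torch import nn

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x, y = torch.rand(32, 4), torch.rand(32, 1)          # toy inputs and targets
for _ in range(100):                                 # training loop
    optimizer.zero_grad()
    loss = loss_fn(net(x), y)                        # compare output to target
    loss.backward()                                  # gradients w.r.t. weights/biases
    optimizer.step()                                 # nudge weights and biases
```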
Deep learning has achieved impressive results in various applications and is expected to play a significant role in the future of machine learning and artificial intelligence. However, it is important to note that deep learning is not a magic bullet and can still be affected by issues such as bias in the training data and overfitting. Noisy labels [24] can also compromise the generalization ability of medical imaging models.
Computer vision first started in the 1970s and was viewed as the visual perception component of an ambitious agenda to mimic human intelligence and endow robots with intelligent behavior. Later, in the 1980s, priority was given to sophisticated mathematical techniques for quantitative image and scene analysis. By the 2000s, data-driven and learning approaches were embraced as core constituents of vision [13].
### How the deep learning model learns
Artificial neural networks with numerous layers, or "deep" neural networks, are the foundation of this powerful machine learning method. Instead of depending on fixed characteristics, these models are built to autonomously learn features and representations from the input. This makes deep learning models especially suitable for applications like audio and picture identification, natural language processing, and other areas where conventional machine learning models may fail. The learning process of a deep learning model is similar to that of a standard machine learning model in that it is trained on one dataset before being tested on another to assess its performance. However, deep learning models learn in a few significantly different ways.

One key advantage is learning hierarchical representations of the data: as the model advances through the network's layers, it can understand increasingly abstract representations of the input. When classifying images, for instance, the network's first layers pick up on essential elements like edges, while its deeper layers pick up on more intricate features like shapes and objects. Another distinction is that deep learning models may learn from unstructured data like text and pictures, while traditional machine learning models often require structured data; this makes deep learning models particularly well-suited for tasks like speech and picture recognition. Furthermore, deep learning models can learn from enormous volumes of data, with which typical machine learning models may struggle, and they can learn from the data without supervision. Finally, deep learning models generalize to new data better than conventional machine learning models, because they acquire more abstract representations of the data that are less reliant on the particulars of the training set. In general, deep learning is a powerful method that can enhance the effectiveness, accuracy, and dependability of services within the medical business and offer new chances for the field of medical imaging informatics to create new paths toward precision medicine.
Figure 10: Some optical illusions tell us about the visual system. (a) Looking at this deceiving picture, the brain thinks there's a black dot inside each white circle until someone focuses on each individual white circle and in the end realizes that it was never there at all. (b) These long diagonal lines are parallel. They sure don't look it, but they are! Removing the smaller "stitch"-like lines shows the truth about this optical illusion.
from the data without supervision. Finally, deep learning models are more generalizable to new data than conventional machine learning models. This is so that deep learning models can acquire more abstract data representations that are less reliant on the particulars of the training set. In general, deep learning is a powerful method that can enhance the effectiveness, accuracy, and dependability of services within the medical business and offer new chances for the area of medical imaging informatics to create new paths toward precision medicine.
### Different types of deep learning models
There are several different types of deep learning models, including:
1. Convolutional neural networks (CNNs): CNNs are used for image and video analysis and are particularly well-suited for tasks such as image classification, object detection, and image segmentation [25]. They are called "convolutional" because they use convolutional layers to extract features from the input data (a minimal definition sketch follows this list).
2. Recurrent neural networks (RNNs): RNNs are used for tasks that involve sequential data, such as natural language processing, speech recognition, and time series analysis. They are called "recurrent" because they have loops that allow them to process data over time.
3. Long short-term memory (LSTM) networks: LSTMs are a type of RNN that is particularly effective at learning long-term dependencies in sequential data. They have been used for tasks such as language translation, language modeling, and speech recognition.
4. Autoencoders: Autoencoders are a type of neural network used for dimensionality reduction and feature learning. They consist of an encoder that maps the input data to a lower-dimensional representation and a decoder that maps the lower-dimensional representation back to the original input space.
5. Generative adversarial networks (GANs): GANs are neural networks that generate new data similar to a training dataset. They consist of two networks: a generator network that generates new data and a discriminator network that distinguishes the generated data from the actual data.
6. Transfer learning: Transfer learning is a technique in which a deep learning model trained on one task is fine-tuned for a different task. This can be useful when more data is needed to train a model from scratch.
These are just a few examples of the many deep-learning models available. The appropriate model for a particular task will depend on the type and complexity of the data, as well as the specific goals of the model.
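As promised in item 1, here is a minimal PyTorch definition sketch of a CNN with convolutional feature extraction followed by a classification head; the layer sizes and the 64x64 grayscale input are illustrative assumptions, not tuned for any real task.

```python
# A tiny CNN: convolution + pooling to extract features, then a linear head.
import torch
from torch import nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = TinyCNN()(torch.rand(8, 1, 64, 64))     # batch of 8 grayscale images
```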
### Limitations of deep learning
Deep learning has made strides in several fields, including self-driving cars, audio and picture identification, and natural language processing. It uses neural network models, layers of linked nodes, to automatically learn from large amounts of data and gradually improve performance. However, it's crucial to remember that deep learning is not a universally applicable solution and has several drawbacks that must be considered before using it for a particular task. One of the significant drawbacks is the need for a lot of labelled data, which can be difficult when there isn't much of it or when it has to be accurately labelled and indicative of real-world settings. This may be especially troublesome in industries like healthcare, where patient data is frequently private and hard to obtain.

Figure 11: A rough record of some of the dynamic topics of research in computer vision
The inability to analyze deep learning models is another drawback. It may be challenging to comprehend the reasoning behind a particular choice or forecast since the models tend to learn complicated correlations between the input and output data. The inability to understand the results makes finding and fixing model flaws challenging. Deep learning models may also be prejudiced if the training data contains any biases. For instance, the model may be biased toward identifying white individuals if the training data contains more photographs of white people than images of people of other races.
Another restriction that might result in worse performance on fresh, untried data is overfitting, which occurs when the model is too complicated and has learned too much from the training data. This is very troublesome in real-world applications where the data is not the same as the training data. Finally, deep learning models need a lot of computer power to train, which might be difficult if there aren't enough. This can be especially troublesome for small and medium-sized businesses or organizations, which might need access to the same resources as big companies.
## 8 Importance of data in machine learning and deep learning
Data is the foundation of machine learning and deep learning algorithms. The algorithms are trained on data, and the quality and quantity of the data used to train the model significantly impact its performance. To build an accurate and effective model, it is essential to have high-quality data relevant to the problem being solved.
The importance of data in machine learning and deep learning can be understood by considering its role. First, the data is used to train the model, which involves feeding it to the algorithm and adjusting its internal parameters to minimize the error between the model's predictions and the true outcomes. The quality and quantity of the data used to train the model directly impact its ability to learn and make accurate predictions.
Second, the data is used to evaluate the model's performance. Once the model has been trained, it is important to evaluate its performance on a separate test set. This allows the model's performance to be assessed and indicates how well the model is likely to perform on new, unseen data.
Finally, the data is used to fine-tune the model. If the model's performance is not satisfactory, several techniques can be used to fine-tune it, such as adjusting the model's hyperparameters or adding additional layers or units to the model. The data used to fine-tune the model can be used to identify areas where the model is performing poorly and help to improve its overall performance.
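The hyperparameter adjustment mentioned above is commonly automated; the scikit-learn sketch below searches over a single SVM hyperparameter on validation folds and then reports performance on held-out test data, with the dataset and grid chosen for illustration.

```python
# Fine-tuning sketch: tune a hyperparameter on training folds, evaluate on test.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5)   # tune on training folds
search.fit(X_tr, y_tr)
print("best C:", search.best_params_)
print("test accuracy:", search.score(X_te, y_te))          # unseen data
```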
## 9 Recent advancements in computer vision
Computer vision is a field that deals with how computers can be made to understand and interpret the visual world. It has made tremendous progress in recent years, thanks to advances in machine learning and hardware capabilities. There have been many significant advancements in the field of computer vision in recent years. Some of the most notable include:
#### 9.0.1 Deep learning
The use of deep neural networks has significantly improved the performance of computer vision systems in tasks such as object recognition, object detection, and image segmentation.
#### 9.0.2 Transfer learning
Pre-trained deep learning models can be fine-tuned for specific tasks, allowing for more efficient training and improved performance on small datasets.
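A minimal transfer-learning sketch along these lines, assuming PyTorch with a recent torchvision, an ImageNet-pre-trained ResNet-18 (downloaded on first use), and a hypothetical 10-class downstream task; the backbone is frozen and only the new head is trained.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")   # pre-trained feature extractor
for p in backbone.parameters():
    p.requires_grad = False                           # freeze learned features

backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # new task-specific head
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

# one illustrative training step on a dummy batch
x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, 10, (4,))
loss = nn.CrossEntropyLoss()(backbone(x), y)
loss.backward()
optimizer.step()
```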
#### 9.0.3 Generative adversarial networks (GANs)
GANs have been used to generate realistic images and applied to tasks such as image-to-image translation and image super-resolution.
#### 9.0.4 Computer vision in robotics
Advances in computer vision have enabled the development of robots that can navigate and interact with their environment using visual information.
#### 9.0.5 Augmented reality (AR)
Computer vision techniques are used in AR systems to detect and track objects in the real world, allowing for the overlay of digital content onto the physical world.
#### 9.0.6 Video analysis
Deep learning models have been applied to action recognition, scene understanding, and video content summarization tasks.
#### 9.0.7 Medical image analysis
Computer vision techniques are being used to analyze medical images, such as CT scans and X-rays, allowing for the automatic detection of abnormalities and the development of assistive tools for physicians.
## 10 Conclusion and Future Direction
For over three decades, medical imaging informatics has been driving clinical research, translation and practice. Already deep in the big medical data era, imaging data availability is only expected to grow, complemented by massive amounts of associated data-rich EMR/EHR and physiological data, climbing to orders of magnitude higher than what is available today. As such, the research community is struggling to harness the full potential of the wealth of data now available at the individual patient level underpinning precision medicine. Keeping up with storage, sharing, and processing while preserving privacy and anonymity has pushed the boundaries of traditional means of doing research. Imaging investigators often have issues with managing, indexing, investigating and querying digital pathology data. One of the fundamental challenges is how to manage comprehensive, multi-layered data sets that will continue to grow over time, since it is impractical to exhaustively compare a query against each sample in a high-dimensional database under realistic storage and processing constraints [33]. How to thoroughly scrutinize the characteristics of such data from multiple perspectives remains an open question. Data analytics approaches, such as machine learning and statistical analysis, can be used to automatically identify and analyze important features in large data sets, including anatomical areas of interest and physiological phenomena. This can help researchers and scientists better understand the underlying physiology and pathophysiology of different tissues and regions in the body; gaining such insights and making such discoveries would be impossible without these approaches. Particularly in the field of artificial intelligence and machine learning, deep learning methods are currently being used in many research endeavors. Although challenges exist, researchers are actively working to address them. Key areas of focus include developing explainable AI methods, which can help clarify how these systems make decisions, and leveraging advanced techniques such as 3D reconstruction and visualization to improve the performance of these systems. In conclusion, advances in medical imaging informatics are anticipated to improve the quality of care witnessed today, once innovative solutions along the lines of the research endeavors presented in this study are adopted in clinical practice, thus potentially transforming precision medicine.
|
2308.11080 | Stress representations for tensor basis neural networks: alternative
formulations to Finger-Rivlin-Ericksen | Data-driven constitutive modeling frameworks based on neural networks and
classical representation theorems have recently gained considerable attention
due to their ability to easily incorporate constitutive constraints and their
excellent generalization performance. In these models, the stress prediction
follows from a linear combination of invariant-dependent coefficient functions
and known tensor basis generators. However, thus far the formulations have been
limited to stress representations based on the classical Rivlin and Ericksen
form, while the performance of alternative representations has yet to be
investigated. In this work, we survey a variety of tensor basis neural network
models for modeling hyperelastic materials in a finite deformation context,
including a number of so far unexplored formulations which use theoretically
equivalent invariants and generators to Finger-Rivlin-Ericksen. Furthermore, we
compare potential-based and coefficient-based approaches, as well as different
calibration techniques. Nine variants are tested against both noisy and
noiseless datasets for three different materials. Theoretical and practical
insights into the performance of each formulation are given. | Jan N. Fuhg, Nikolaos Bouklas, Reese E. Jones | 2023-08-21T23:28:26Z | http://arxiv.org/abs/2308.11080v1 | Stress representations for tensor basis neural networks: alternative formulations to Finger-Rivlin-Ericksen
###### Abstract
Data-driven constitutive modeling frameworks based on neural networks and classical representation theorems have recently gained considerable attention due to their ability to easily incorporate constitutive constraints and their excellent generalization performance. In these models, the stress prediction follows from a linear combination of invariant-dependent coefficient functions and known tensor basis generators. However, thus far the formulations have been limited to stress representations based on the classical Rivlin and Ericksen form, while the performance of alternative representations has yet to be investigated. In this work, we survey a variety of tensor basis neural network models for modeling hyperelastic materials in a finite deformation context, including a number of so far unexplored formulations which use theoretically equivalent invariants and generators to Finger-Rivlin-Ericksen. Furthermore, we compare potential-based and coefficient-based approaches, as well as different calibration techniques. Nine variants are tested against both noisy and noiseless datasets for three different materials. Theoretical and practical insights into the performance of each formulation are given.
## 1 Introduction
Recently, there has been a dramatic increase of interest in machine learning (ML) in the computational sciences. This rise in popularity is due to the ability of machine learning models to directly utilize experimental data in simulation environments, the potential speed-up of ML models in comparison to traditional numerical models and methods, as well as the general utility and open-access ecosystem of ML tools. Nevertheless, many scientific ML (SciML) applications suffer from two interconnected bottlenecks: a lack of generalization capabilities due to poor extrapolations and a lack of trustworthiness due to the opaqueness of the trained models. The main premise in SciML is that the underlying data often comply with physical laws (known or yet to be discovered) or otherwise connect to known mathematical structure, which can help surmount the aforementioned bottlenecks via a physics-informed paradigm. The promise of SciML can lead to myriad benefits
such as: more accurate predictions, reduction of unnecessary human involvement, speed-up of the processing-performance-product development cycle, and minimization of the computational costs of detailed simulations. Particular to the focus of this work, an automated data-driven approach for constitutive modeling can have significant payoffs in material discovery, industrial engineering simulations and research. Many developments have been made in this arena for fluid closure models [1; 2; 3]; in this work we focus on constitutive models for solids.
A number of distinct approaches to forming constitutive models with ML have been investigated. ML tools have been utilized in parameter estimation of known constitutive models [4]. This is a task that becomes more complex as model parameters increase and experimental observations are limited. This is especially true for traditional optimization approaches due to the non-convex nature of the optimization problem at hand. Mixing traditional and ML approaches to representation and calibration via symbolic regression [5; 6; 7; 8; 9; 10] has been widely explored. This approach selects from a library of known models that directly enforce physical and mechanistic constraints (depending on the specific model choices) to distill parsimonious data-driven constitutive models. Notable developments include the approach of Wang _et al_. [11] who used reinforcement learning to turn model building into a competitive game. Also Schmidt _et al_. [12] used symbolic regression to distill constitutive laws from unlabeled data. Later, De Lorentzis and co-workers [13; 14] utilized sparse regression to discover interpretable constitutive laws for a wide array of material classes. An interesting extension to this work was the development of an unsupervised Bayesian framework for discovering hyperelasticity models which accounts for uncertainty [15]. Neural networks and Gaussian process models have been widely employed as replacements for human-selected, traditional model forms. In fact, the use of ML black-box constitutive models has been extensively studied for over 30 years. Starting from the influential works of Ghaboussi and collaborators [16; 17; 18], these tools have been employed for different material models with increasing complexity over the years [19; 20; 21; 22].
A significant current challenge is generating trustworthy models from low data (constrained by experimental/computational cost) and limited data (constrained by experimental design and observation). To this end, efforts have been made to train data-driven constitutive models that do not only train on raw stress-strain data but also incorporate additional physics-based restrictions into the trained model [23; 24; 25; 26; 27; 28]. These models, referred to as _physics-informed_ or _physics-guided data-driven_ constitutive models, try to enforce a variety of physical principles and mechanics-informed assumptions, from objectivity and material symmetries [1] to thermodynamic constraints [29; 30] and polyconvexity [31; 32]; some approaches enforce these conditions weakly through the loss function [33; 34; 35], others strictly in the construction of the ML representation [1; 36; 37; 38]. A large majority of the proposed works in the literature for physics-guided constitutive models are based on neural networks [23; 39; 36; 24; 25; 26; 33] due to the flexibility of this paradigm.
Material frame indifference is a primary concern in developing constitutive models [40]. Ling _et al_. [1] introduced the tensor basis neural network (TBNN) to embed objectivity through an equivariant NN formulation. An anisotropic hyperelastic model was formed from the scalar invariants and tensor basis of the strain using atomistic crystal data, in addition to fluids applications. Later Frankel _et al_. [36] adapted the tensor basis representations to a Gaussian process formalism to represent general tensor functions and hyperelastic data. This was extended by Fuhg and Bouklas [37] to anisotropic materials, strictly enforcing known symmetries up to transverse isotropy; this work showed that this simplified learning approach led to significant generalization capabilities when the physics do not radically change outside of the training region. This approach was also utilized in Kalina _et al_. [41] integrated in a multiscale framework with automated data-mining. In
Fuhg _et al._[32], tensor basis NNs were utilized to discover the character of the anisotropy of the material through labeled data. Even though several works have focused on utilizing tensor basis representation theorems in learning of hyperelastic responses from labeled data pairs, there has not been an extensive study aimed at discovering the most efficient tensor basis representations for the learning tasks at hand in the context of finite deformation and hyperelasticity.
In the context of hyperelasticity, strict enforcement of polyconvexity requirements [42] for the strain energy density has also proven extremely useful towards generalization, discovery, and robustness. Input convex neural networks have been utilized for the enforcement of polyconvexity towards learning hyperelastic responses [31, 32], and in some cases even interpretability can be achieved [43] due to the non-parametric nature of the specific implementation. Alternately, neural ordinary differential equations have also been utilized towards strict enforcement of polyconvexity [44]. More recently Linden _et al._[38] presents a thorough review of techniques to enforce physical constraints and mechanistic assumptions towards learning hyperelasticity with NNs. Such approaches are crucial for the efficient utilization of the data and the development of robust material models that can efficiently generalize.
This work provides a limited survey of the wide variety of tensor basis techniques and contrasts their performance on representative data in the low-data regime (100 training points). We focus on stress representations for hyperelastic materials since they are the fundamental basis for finite deformation mechanics. The contributions of this work are: novel formulations based on the variety that the tensor basis framework affords, exploration of different methods of calibrating the models to data, and demonstration of the effects of noise and incompatible representations on physics-constrained formulations. To this end, we utilize well-known hyperelastic models as data generators.
In Sec. 2 we develop a multitude of equivariant _tensor basis_ neural network (TBNN) [1] formulations from classical representation theory. Then in Sec. 3 we give details of the data generation and training methodology. Sec. 4 presents the results of testing the models in and out of distribution, with and without additive noise. Finally, in Sec. 5 we summarize the findings and conclude with avenues for future work.
## 2 Stress representations
In this work, we develop and compare a variety of tensor basis neural network (TBNN) formulations for stress representations. In this section, we introduce the fundamental differences between the representations and the neural network formulations that follow directly from the representations.
### Tensor basis models
Hyperelasticity is the prevailing theory for the description of finite deformation solid mechanics for continua in the absence of inelastic phenomena. The theory posits a potential \(\Psi\) from which the second Piola-Kirchhoff stress \(\mathbf{S}\) can be derived:
\[\mathbf{S}=2\boldsymbol{\partial}_{\mathbf{C}}\Psi(\mathbf{C})\, \tag{1}\]
as a function of the right Cauchy-Green stretch tensor \(\mathbf{C}=\mathbf{F}^{T}\mathbf{F}\). Here \(\mathbf{F}=\partial_{\mathbf{X}}\boldsymbol{\chi}\) is the deformation gradient of the spatial position \(\mathbf{x}=\boldsymbol{\chi}(\mathbf{X},t)\) at time \(t\) with respect to the corresponding reference position \(\mathbf{X}\) of the material. This potential ensures deformations are reversible, and is also utilized in some incremental formulations of large strain plasticity [45, 46].
In this work, we limit the discussion to isotropic hyperelasticity. In this case material frame invariance of the potential leads to the reduction of the inputs of \(\Psi\) to three scalar invariants \(I_{a}\) of \(\mathbf{C}\) and an equivariant stress function:
\[\mathbf{S}=2\,\boldsymbol{\partial}_{\mathbf{C}}\Psi(I_{1}(\mathbf{C}),I_{2}( \mathbf{C}),I_{3}(\mathbf{C})) \tag{2}\]
The chain rule results in the summation of material-specific, scalar derivative functions and an _a priori_ known tensor basis:
\[\mathbf{S}=2\boldsymbol{\partial}_{\mathbf{C}}\Psi(\mathbf{C})=2\,\sum_{a=1}^{ 3}\boldsymbol{\partial}_{I_{a}}\Psi\,\boldsymbol{\partial}_{\mathbf{C}}I_{a} \tag{3}\]
Typically the principal invariants,
\[I_{1}=\mathrm{tr}(\mathbf{C}),\quad I_{2}=\mathrm{tr}\big{(}\mathbf{C}^{-1} \big{)}\det(\mathbf{C}),\quad I_{3}=\det(\mathbf{C})\, \tag{4}\]
from the Cayley-Hamilton theorem
\[\mathbf{C}^{3}-I_{1}\mathbf{C}^{2}+I_{2}\mathbf{C}-I_{3}\mathbf{I}=\mathbf{0} \tag{5}\]
are employed. Note the second invariant is equivalently \(I_{2}=\frac{1}{2}\big{(}\mathrm{tr}(\mathbf{C})^{2}-\mathrm{tr}(\mathbf{C}^{2})\big{)}\). A three-term formula for the stress
\[\mathbf{S}=2\,[\partial_{I_{1}}\Psi+I_{1}\partial_{I_{2}}\Psi]\,\mathbf{I}-2 \partial_{I_{2}}\Psi\,\mathbf{C}+2I_{3}\,\partial_{I_{3}}\Psi\,\mathbf{C}^{-1} \tag{6}\]
comes from collecting terms with like powers of \(\mathbf{C}\). This is a well-known and arguably the most widely used stress representation for isotropic materials. It was first introduced by Finger [47] but was further popularized by Rivlin and Ericksen [48].
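To make the three-term formula concrete, the following NumPy sketch assembles Eq. (6) from user-supplied derivatives of the potential; the neo-Hookean-style potential and moduli in the example are illustrative assumptions, not one of the data models used later in this work.

```python
# A minimal NumPy sketch of Eq. (6): given dPsi/dI_a, assemble
# S = 2(dPsi1 + I1 dPsi2) I - 2 dPsi2 C + 2 I3 dPsi3 C^{-1}.
import numpy as np

def principal_invariants(C):
    I1 = np.trace(C)
    I3 = np.linalg.det(C)
    I2 = np.trace(np.linalg.inv(C)) * I3
    return I1, I2, I3

def stress_finger_rivlin(C, dPsi):
    """dPsi(I1, I2, I3) -> (dPsi/dI1, dPsi/dI2, dPsi/dI3)."""
    I1, I2, I3 = principal_invariants(C)
    d1, d2, d3 = dPsi(I1, I2, I3)
    I = np.eye(3)
    return 2.0 * (d1 + I1 * d2) * I - 2.0 * d2 * C + 2.0 * I3 * d3 * np.linalg.inv(C)

# Illustrative potential: Psi = mu/2 (I1 - 3) + kappa/2 (sqrt(I3) - 1)^2
mu, kappa = 1.0, 10.0  # hypothetical moduli
dPsi = lambda I1, I2, I3: (mu / 2.0, 0.0,
                           kappa * (np.sqrt(I3) - 1.0) / (2.0 * np.sqrt(I3)))
F = np.eye(3) + 0.1 * np.random.default_rng(0).normal(size=(3, 3))
S = stress_finger_rivlin(F.T @ F, dPsi)
```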
A generalization of this representation can be compactly written as a tensor basis expansion
\[\mathbf{S}=\sum_{a=1}^{3}c_{a}(\mathcal{I})\,\mathbf{B}_{a}\, \tag{7}\]
where the 3 coefficients \(c_{a}\) are functions of a set of 3 independent invariants \(\mathcal{I}\) and the basis \(\mathbf{B}_{a}\) must span \(\{\mathbf{C}^{a},a=-1,0,1\}\). For instance, Eq. (6) can be expressed as
\[\mathcal{I} = \{I_{a},\,a=1,2,3\} \tag{8}\] \[\mathcal{B} = \{\mathbf{C}^{a},\,a=0,1,-1\} \tag{9}\]
and
\[\begin{array}{rcl}c_{0}&=&2\,[\partial_{I_{1}}\Psi+I_{1}\partial_{I_{2}} \Psi]\\ c_{1}&=&-2\,\partial_{I_{2}}\Psi\\ c_{-1}&=&2I_{3}\,\,\partial_{I_{3}}\Psi\end{array} \tag{10}\]
Note that the Cayley-Hamilton theorem Eq. (5) allows the power basis to be shifted to higher or lower powers
\[\mathbf{C}^{3+k}=I_{1}\mathbf{C}^{2+k}-I_{2}\mathbf{C}^{1+k}+I_{3}\mathbf{C}^ {k}\text{ for }k\in\{\ldots,-2,-1,0,1,2,\ldots\}\, \tag{11}\]
for example
\[\mathcal{B}=\{\mathbf{C}^{a},\,a=0,1,2\}\, \tag{12}\]
via
\[I_{3}\mathbf{C}^{-1}=\mathbf{C}^{2}-I_{1}\mathbf{C}+I_{2}\mathbf{I}\;. \tag{13}\]
This basis together with the principal invariants (4) is another form of the Rivlin-Ericksen representation [48]. Also, the basis that results from the chain rule:
\[c_{a} = \{\partial_{I_{a}}\Psi,\,a=1,2,3\} \tag{14}\] \[\mathcal{B} = \{\boldsymbol{\partial}_{\mathbf{C}}I_{a},\,a=1,2,3\} \tag{15}\]
is part of an equally valid representation.
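The shift identities above are easy to verify numerically; a quick NumPy check of Eq. (13) for a random symmetric positive-definite stretch:

```python
# Numerical check of I3 C^{-1} = C^2 - I1 C + I2 I, i.e. Eq. (5) rearranged.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
C = A @ A.T + 3 * np.eye(3)        # symmetric positive-definite stretch tensor

I1 = np.trace(C)
I3 = np.linalg.det(C)
I2 = np.trace(np.linalg.inv(C)) * I3

lhs = I3 * np.linalg.inv(C)
rhs = C @ C - I1 * C + I2 * np.eye(3)
assert np.allclose(lhs, rhs)       # Cayley-Hamilton holds to machine precision
```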
To calibrate Eq. (7), the model output can be regressed directly to stress data, or the coefficients for a given basis, e.g. \(\mathcal{B}=\{\mathbf{I},\mathbf{C},\mathbf{C}^{-1}\}\), can be determined at each data point \((\mathbf{C}_{i},\mathbf{S}_{i})\) via:
\[\begin{bmatrix}c_{1}\\ c_{2}\\ c_{3}\end{bmatrix}=\begin{bmatrix}1&\epsilon_{1}&\epsilon_{1}^{-1}\\ 1&\epsilon_{2}&\epsilon_{2}^{-1}\\ 1&\epsilon_{3}&\epsilon_{3}^{-1}\end{bmatrix}^{-1}\begin{bmatrix}\sigma_{1}\\ \sigma_{2}\\ \sigma_{3}\end{bmatrix} \tag{16}\]
using the fact that any power basis, such as Eq. (9), is coaxial with \(\mathbf{S}\). Here \(\sigma_{a}\) and \(\epsilon_{a}\) are the eigenvalues of the stress and stretch tensors, herein \(\mathbf{S}\) and \(\mathbf{C}\), respectively. If the eigenvalues are distinct, Eq. (16) provides a unique solution for the coefficient values; however, multiplicity of strain eigenvalues requires special treatment, see Refs. [49, 50, 30] and App. A, which also outlines alternate solution procedures. Alternatively, we can use the Gram-Schmidt procedure
\[\mathbf{B}_{a}=\tilde{\mathbf{B}}_{a}-\sum_{b=1}^{a-1}\frac{\tilde{\mathbf{B}}_{a}:\mathbf{B}_{b}}{\mathbf{B}_{b}:\mathbf{B}_{b}}\,\mathbf{B}_{b} \tag{17}\]
to orthogonalize the basis \(\tilde{\mathbf{B}}_{a}\in\{\mathbf{I},\mathbf{C},\mathbf{C}^{-1}\}\), which results in
\[\mathcal{B} = \left\{\mathbf{I},\,\mathrm{dev}(\mathbf{C}),\,\mathrm{dev}( \mathbf{C}^{-1})-\left[\mathbf{C}^{-1}:\frac{\mathrm{dev}(\mathbf{C})}{\| \,\mathrm{dev}(\mathbf{C})\|}\right]\frac{\mathrm{dev}(\mathbf{C})}{\|\, \mathrm{dev}(\mathbf{C})\|}\right\} \tag{18}\]
if we keep the same scalar invariants. Herein \(\mathrm{dev}(\mathbf{C})=\mathbf{C}-1/3\,\mathrm{tr}(\mathbf{C})\mathbf{I}\). The fact that the Gram-Schmidt procedure starting with \(\mathbf{B}_{1}=\mathbf{I}\) and \(\mathbf{B}_{2}=\mathbf{C}\) leads to a spherical-deviatoric split is noteworthy. Orthogonality of the basis allows for direct determination of the coefficients:
\[c_{a}=\mathbf{S}:\mathbf{B}_{a} \tag{19}\]
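A minimal NumPy sketch of this orthogonalize-and-project route, assuming the Frobenius inner product \(\mathbf{A}:\mathbf{B}=\mathrm{tr}(\mathbf{A}^{T}\mathbf{B})\), a normalized basis so that Eq. (19) applies directly, and a synthetic stress that lies in the span of \(\{\mathbf{I},\mathbf{C},\mathbf{C}^{-1}\}\):

```python
# Gram-Schmidt (Eq. (17)) on {I, C, C^{-1}} and projection (Eq. (19)).
import numpy as np

def gram_schmidt(tensors):
    basis = []
    for T in tensors:
        B = T.copy()
        for Q in basis:
            B -= np.tensordot(T, Q) / np.tensordot(Q, Q) * Q  # remove overlap
        basis.append(B)
    return [B / np.linalg.norm(B) for B in basis]             # normalize

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
C = A @ A.T + 3 * np.eye(3)
S = np.eye(3) + 0.1 * C - 0.05 * np.linalg.inv(C)             # a stand-in stress

basis = gram_schmidt([np.eye(3), C, np.linalg.inv(C)])
coeffs = [np.tensordot(S, B) for B in basis]                  # c_a = S : B_a
S_rec = sum(c * B for c, B in zip(coeffs, basis))
assert np.allclose(S, S_rec)   # exact because S lies in the span of the basis
```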
Likewise, Gram-Schmidt applied to \(\{\mathbf{C}^{a},a=0,1,2\}\) gives the unnormalized basis
\[\mathcal{B} = \left\{\mathbf{I},\,\mathrm{dev}(\mathbf{C}),\,\|\,\mathrm{dev} (\mathbf{C})\|^{2}\,\mathrm{dev}(\mathbf{C}^{2})-(\mathbf{C}^{2}:\mathrm{dev} (\mathbf{C}))\,\mathrm{dev}(\mathbf{C})\right\} \tag{20}\]
Similarly, we can use a formulation inspired by the work Criscione _et al._[51] which effects an orthogonal spherical-deviatoric split of the basis via invariants:
\[\mathcal{I} = \left\{K_{1}=\mathrm{tr}(\mathbf{C}),\,K_{2}=\|\,\mathrm{dev}( \mathbf{C})\|,\,K_{3}=\det\left(\frac{\mathrm{dev}(\mathbf{C})}{\|\,\mathrm{ dev}(\mathbf{C})\|}\right)\right\} \tag{21}\] \[\mathcal{B} = \left\{\mathbf{I},\mathbf{A},-\frac{1}{3}\mathbf{I}-\mathrm{tr} \big{(}\mathbf{A}^{3}\big{)}\mathbf{A}+\mathbf{A}^{2}\right\} \tag{22}\]
where \({\bf A}={\rm dev}({\bf C})/\|\,{\rm dev}({\bf C})\|\). The resulting stress representation is
\[{\bf S} = \partial_{K_{1}}\Psi\,{\bf I}+\partial_{K_{2}}\Psi\,{\bf A}+ \partial_{K_{3}}\Psi\,\frac{1}{K_{2}}\left[-\frac{1}{3}{\bf I}-{\rm tr}\big{(}{ \bf A}^{3}\big{)}{\bf A}+{\bf A}^{2}\right]\] \[= \underbrace{\left[\partial_{K_{1}}\Psi-\frac{1}{3K_{2}}\, \partial_{K_{3}}\Psi\!\right]}_{c_{0}}\,{\bf I}+\underbrace{\left[\partial_{K _{2}}\Psi-\frac{3K_{3}}{K_{2}}\,\partial_{K_{3}}\Psi\right]}_{c_{1}}\,{\bf A} +\underbrace{\frac{1}{K_{2}}\,\partial_{K_{3}}\Psi}_{c_{2}}\,{\bf A}^{2}\]
Note Criscione _et al._[51] formulate the representation in terms of the spatial Hencky stretch, and here we apply the invariants of the same form to \({\bf C}\). This formulation combines the derivative connection between the potential, the invariants and the basis, in the sense that the basis is a result of the choice of invariants, as in Eq. (14), with orthogonality of the basis. A better behaved set of related invariants
\[{\cal I}=\big{\{}{\rm tr}({\bf C}),\,{\rm tr}\,{\rm dev}({\bf C})^{2},\,{\rm det }\,({\rm dev}({\bf C}))\big{\}} \tag{24}\]
which eliminate the normalization in Eq. (21), leads to the basis
\[{\cal B}=\{{\bf I},2\,{\rm dev}\,{\bf C},({\rm det}({\rm dev}\,{\bf C}))\,{\rm dev }(({\rm dev}({\bf C}))^{-1})\} \tag{25}\]
See App. B for further details on the construction of an orthogonal basis.
With any of these representations, a densely connected neural network (NN) can be employed as a representation of the potential \(\Psi({\cal I})\) itself or the coefficient functions \(c_{a}({\cal I})\) directly. Summation of the coefficients with the known basis \({\cal B}\), as in Eq. (7), completes the formulation of a _tensor basis neural network_ (TBNN) [1]. Sec. 3.3, App. C and App. D provide details of the implementation of the TBNNs.
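As a rough illustration of such a coefficient-based TBNN (a sketch only; the authors' implementation details are in App. C and App. D), the following PyTorch model maps the three invariants to three coefficients, using the layer sizes later stated in Sec. 3.3, and contracts them with a precomputed basis:

```python
# Coefficient-based TBNN sketch: c_a = NN(I1, I2, I3), S = sum_a c_a B_a.
import torch
import torch.nn as nn

class CoeffTBNN(nn.Module):
    def __init__(self, width=30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, width), nn.Softplus(),
            nn.Linear(width, width), nn.Softplus(),
            nn.Linear(width, width), nn.Softplus(),
            nn.Linear(width, 3),
        )

    def forward(self, invariants, basis):
        # invariants: (N, 3); basis: (N, 3, 3, 3) -> stress (N, 3, 3)
        c = self.net(invariants)
        return torch.einsum("na,naij->nij", c, basis)

model = CoeffTBNN()
C = torch.eye(3).expand(8, 3, 3) * 1.1                 # dummy batch of stretches
inv = torch.stack([torch.einsum("nii->n", C),
                   torch.einsum("nii->n", torch.linalg.inv(C)) * torch.linalg.det(C),
                   torch.linalg.det(C)], dim=-1)
basis = torch.stack([torch.eye(3).expand(8, 3, 3), C, torch.linalg.inv(C)], dim=1)
S = model(inv, basis)                                  # (8, 3, 3)
```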
### Additional physical constraints
Other fundamental considerations, in addition to equivariance of the stress \({\bf S}({\bf C})\), constrain the form of the coefficient functions \(c_{a}({\cal I})\). Of the various constraints (rank-1 convexity, strong ellipticity, Hadamard stability, _etc._[40, 52, 53]), polyconvexity was proved by Ball [42] to ensure the existence of solutions in the context of hyperelasticity. For isotropic materials, polyconvexity requires that \(\Psi\) is convex in the triplet \(({\bf F},{\rm cof}\,{\bf F},{\rm det}\,{\bf F})\) which can be fulfilled when
\[\Psi=\Psi(I_{1},I_{2},I_{3})\text{ is convex in each of its arguments [42].} \tag{26}\]
An input convex neural network (ICNN) [54] satisfies these conditions and has been utilized for modeling hyperelastic materials in various recent studies [31, 38, 27]. Alternatively, if we assume that \(\Psi\) is polyconvex, we know that the derivatives of \(\Psi\) have to be non-decreasing, _i.e._\(\partial_{I_{a}}\Psi\geq 0\) with regards to \(I_{a}\). Assuming a representation of the form of Eq. (3)
\[{\bf S}=2\,\sum_{a=1}^{3}\boldsymbol{\partial}_{I_{a}}\Psi\,\boldsymbol{\partial}_{{\bf C}}I_{a}=\underbrace{2\,\boldsymbol{\partial}_{I_{1}}\Psi}_{c_{0}}\,{\bf I}+\underbrace{2\,\boldsymbol{\partial}_{I_{2}}\Psi}_{c_{1}}\,(I_{1}{\bf I}-{\bf C})+\underbrace{2\,I_{3}\boldsymbol{\partial}_{I_{3}}\Psi}_{c_{-1}}\,{\bf C}^{-1}\,\,, \tag{27}\]
this implies that \(c_{1}(I_{1},I_{2}^{0},I_{3}^{0})\) is monotonically increasing in \(I_{1}\) for fixed \(I_{2}^{0}\) and \(I_{3}^{0}\). Note that the basis element \(I_{1}{\bf I}-{\bf C}=-\,{\rm dev}\,{\bf C}\) naturally arises from the Cayley-Hamilton/principal invariants, c.f. Eq. (4) and Eq. (10). We enforce this condition via an input monotone (or in fact monotonically
non-decreasing) neural network [27], which guarantees that the outputs of the network are monotonically non-decreasing in each of its inputs. To the best of our knowledge, no currently proposed neural network architecture enforces that each output individually is monotonically non-decreasing with respect to only a subset of its inputs, and proposing such a network is out of the scope of this work; we therefore remark that this is an overconstrained way of enforcing the convexity condition.
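A toy PyTorch sketch in the spirit of the ICNN of Ref. [54], assuming a softplus reparameterization to keep the required pass-through weights non-negative; the depth, widths, and the evaluation at the reference invariants are illustrative, not the exact architecture of the cited studies.

```python
# Input convex network sketch: non-negative weights on the hidden-state path
# plus convex, non-decreasing activations make psi convex in each input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    def __init__(self, width=30):
        super().__init__()
        self.Wx0 = nn.Linear(3, width)                  # first layer, unconstrained
        self.Wz = nn.Linear(width, width, bias=False)   # constrained non-negative
        self.Wx1 = nn.Linear(3, width)                  # skip connection from inputs
        self.out = nn.Linear(width, 1, bias=False)      # constrained non-negative

    def forward(self, x):
        z = F.softplus(self.Wx0(x))
        # softplus reparameterization keeps the z-path weights non-negative
        z = F.softplus(F.linear(z, F.softplus(self.Wz.weight)) + self.Wx1(x))
        return F.linear(z, F.softplus(self.out.weight))

psi = ICNN()
I = torch.tensor([[3.0, 3.0, 1.0]], requires_grad=True)  # reference invariants
coeffs = torch.autograd.grad(psi(I).sum(), I)[0]          # dPsi/dI_a, cf. Eq. (14)
```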
Additional constitutive constraints resulting from mechanistic assumptions include that the stress in the reference configuration is zero,
\[\mathbf{S}(\mathbf{C}=\mathbf{I})=\mathbf{0}\ \text{implies}\ \sum_{a}c_{a}(I_{1}^{0},I_{2}^{0},I_{3}^{0})=0. \tag{28}\]
with \(I_{1}^{0}=3,I_{2}^{0}=3,I_{3}^{0}=1\). One possible solution to enforcing this is to refactor the basis to form a Saint-Venant-like expansion:
\[\mathbf{S}=\sum_{a=1}^{3}c_{a}\mathbf{E}^{a} \tag{29}\]
where \(\mathbf{E}=1/2(\mathbf{C}-\mathbf{I})\). A \(\mathbf{C}\) based version is likewise:
\[\mathbf{S}=\sum_{a=1}^{3}c_{a}\mathbf{C}^{a}. \tag{30}\]
Note the coefficient functions \(c_{a}\) for these two representations are distinct but related, as are all the other representations introduced in this section. The requirements at the reference state \(\mathbf{C}=\mathbf{I}\) can be seen as a special case of the more general condition of symmetric loading where 2 or 3 of the eigenvalues are equal, examples include equibiaxial and hydrostatic/volumetric loadings.
The set of points where the eigenvalues are unique is dense in the invariant input space [55, 56], whereas highly symmetric cases are often used in testing and experiments since they are more easily understood and yet are sparse in the invariant input space. Since the unique case is dense there are continuous extensions for the coefficient functions to the case of eigenvalue multiplicity; however, the formula for the solution of the coefficients Eq. (16) does not provide them since the determinant of the system goes to zero. Although not widely cited, the important body of theoretical work starting with Serrin [55, 56, 57, 58] relates the smoothness of \(\mathbf{S}(\mathbf{C})\) or \(\mathbf{S}(\mathbf{E})\) to the smoothness of the coefficient functions with respect to the scalar invariants. Since most classical work treated only polynomial functions of the invariants, these developments have not been fully utilized; however, in the present context, we are forming general coefficient functions with neural networks. Man [56] proved that \(\mathbf{S}\) needs to be two degrees more continuous than the desired degree of smoothness of the coefficient functions, in particular \(\mathbf{S}(\mathbf{C})\) needs to be twice differentiable for \(c_{a}(\mathcal{I})\) to be continuous. Note that smooth solutions to the balance of momentum already require \(\mathbf{S}\) to be \(C^{1}\) and \(\Psi\) to be \(C^{2}\). Also, Scheidler [57] provided coefficient values from derivatives of the stress with respect to particular deformations, unlike Eq. (14).
Smoothness and growth considerations affect the choice of NN activations. For example, the St. Venant-like basis (29) incurs certain growth and asymptotic behavior. Refactoring the coefficients as
\[\tilde{c}_{a}=\|\mathbf{E}\|^{n}c_{a} \tag{31}\]
can enforce asymptotic behavior near \(\mathbf{C}\rightarrow\mathbf{I}\) as in Ref. [29]. The orthonormal basis formulations also need special consideration due to the normalization which creates non-smoothness in the coefficients, as in Eq. (18). An unnormalized basis, such as Eq. (20), avoids these issues.
### Summary of selected stress representations
In Sec. 4 we compare a number of distinct formulations of TBNNs for hyperelastic response listed in Table 1. Three are based on representing the strain energy potential \(\Psi(\mathcal{I})\) directly: (a) using the principal invariants \(\mathcal{I}\) as inputs to a standard feed-forward dense neural network (_Rivlin-Pot_), (b) using the principal invariants with an input convex neural network (_Convex-Pot_), and (c) using spherical-deviatoric split invariants in a standard dense neural network (_Crisc-Pot_). For these models the derivative of the potential with respect to these invariants through automatic differentiation provides the stress response. Six other models are based on coefficient-basis product formulations: (a) the customary power basis and coefficient functions in terms of the principal invariants (_Rivlin-Coeff_), (b) an input monotone neural network formulation of the coefficient functions with the power basis (_Mono-Coeff_) (c) the orthogonal basis with the Criscione invariants (_Crisc-Coeff_), (d) the orthogonal basis with the principal invariants (_Orthnorm-Coeff_), (e) an unnormalized orthogonal basis with more regular invariants (_Orth-Coeff_), and (f) a St.Venant-like basis with the principal invariants (_StV-Coeff_). For these both the coefficient functions \(c_{a}(\mathcal{I})\) and the basis \(\mathbf{B}_{a}\) are chosen. Table 1 summarizes the differences in the TBNN variants. In addition to these variations, we also explored how the method for calibration to stress data, e.g. via the coefficients found through regression or projection, or implicitly through direct calibration to stress, affects the model accuracy.
## 3 Data and training
For this study, we train the various NN models enumerated in Table 1 to stress data generated with classical hyperelastic models. In this section, we briefly discuss the classical data-generating models and give a detailed description of the data-generation process.
### Representation complexity
We remark that the complexity of the coefficient and potential functions is intrinsically connected to the stress measure, the basis function, and the invariants. To emphasize this consider the second Piola-Kirchhoff stress given by
\[\mathbf{S}=c_{0}^{C}\mathbf{I}+c_{1}^{C}\mathbf{C}+c_{-1}^{C}\mathbf{C}^{-1}. \tag{32}\]
\begin{table}
\begin{tabular}{|c|c c c|c c c|} \hline & invariants & basis & coefficients & potential & convex & orthogonal basis \\ \hline Rivlin-Pot & Eq. (4) & Eq. (15) & Eq. (14) & \(\times\) & & \\ Convex-Pot & Eq. (4) & Eq. (15) & Eq. (14) & \(\times\) & \(\times\) & \\ Crisc-Pot & Eq. (21) & Eq. (22) & Eq. (23) & \(\times\) & & \(\times\) \\ Rivlin-Coeff & Eq. (4) & Eq. (12) & Eq. (7) & & & \\ Mono-Coeff & Eq. (4) & Eq. (27) & Eq. (27) & & \(\times\) & \\ Crisc-Coeff & Eq. (21) & Eq. (22) & Eq. (7) & & & \(\times\) \\ Orthnorm-Coeff & Eq. (21) & Eq. (18) & Eq. (7) & & & \(\times\) \\ Orth-Coeff & Eq. (24) & Eq. (20) & Eq. (7) & & & \(\times\) \\ StV-Coeff & Eq. (4) & Eq. (29) & Eq. (7) & & & \\ \hline \end{tabular}
\end{table}
Table 1: Tensor basis neural network variants.
Naively, one could presume that using the Kirchhoff stress tensor \(\mathbf{\tau}\) and the left Cauchy-Green tensor \(\mathbf{B}\) with an equivalent basis representation (\(\mathbf{I}\), \(\mathbf{B}\), \(\mathbf{B}^{-1}\)), i.e.
\[\mathbf{\tau}=c_{0}^{B}\mathbf{I}+c_{1}^{B}\mathbf{B}+c_{-1}^{B}\mathbf{B}^{-1} \tag{33}\]
the respective coefficients might be the same, e.g. \(c_{a}^{C}=c_{a}^{B}\) for \(a=0,1,-1\). However, recalling that the Kirchhoff stress can be expressed as \(\mathbf{\tau}=\mathbf{FSF}^{T}\), Eq. (32) can be rewritten as
\[\mathbf{\tau}=c_{0}^{C}\mathbf{B}+c_{1}^{C}\mathbf{B}^{2}+c_{-1}^{C}\mathbf{I}. \tag{34}\]
Hence, under the assumption that the eigenvalues are unique, we find that
\[c_{0}^{B}\mathbf{I}+c_{1}^{B}\mathbf{B}+c_{-1}^{B}\mathbf{B}^{-1}=c_{0}^{C} \mathbf{B}+c_{1}^{C}\mathbf{B}^{2}+c_{-1}^{C}\mathbf{I} \tag{35}\]
which yields
\[c_{0}^{B}=c_{-1}^{C}-c_{1}^{C}I_{2},\qquad c_{1}^{B}=c_{0}^{C}+c_{1}^{C}I_{1}, \qquad c_{-1}^{B}=c_{1}^{C}I_{3}. \tag{36}\]
via the Cayley-Hamilton theorem (5). The complexity of the two sets of coefficient functions \(\{c_{0}^{C},c_{1}^{C},c_{-1}^{C}\}\) and \(\{c_{0}^{B},c_{1}^{B},c_{-1}^{B}\}\) is therefore clearly different. Using the Cayley-Hamilton theorem (5) to transform the model representation would also alter the complexity of the coefficient functions. In order to make the following comparisons as fair as possible we have restricted ourselves to second Piola-Kirchhoff stress representations and data. Note that the Piola transform would also affect the orthogonality of the basis.
We furthermore remark that additively separable energies that are based on the Valanis-Landel hypothesis [59, 60, 61] lead to more trivial calibrations, i.e. if
\[\Psi(I_{1},I_{2},I_{3})=\Psi_{1}(I_{1})+\Psi_{2}(I_{2})+\Psi_{3}(I_{3}) \tag{37}\]
we can see from Eq. (10) that this would result in
\[\begin{split} c_{0}(I_{1},I_{2})&=2\left[\partial_{ I_{1}}\Psi_{1}(I_{1})+I_{1}\partial_{I_{2}}\Psi_{2}(I_{2})\right]\\ c_{1}(I_{2})&=-2\,\partial_{I_{2}}\Psi_{2}(I_{2}) \\ c_{-1}(I_{3})&=2I_{3}\partial_{I_{3}}\Psi_{3}(I_{3}).\end{split} \tag{38}\]
Hence, this leads to \(c_{1}\) and \(c_{-1}\) being functions of only one invariant and \(c_{0}\) reduced to a function of two invariants. In order to avoid these simplifications we use only hyperelastic models that are not additively decomposable with regards to their inputs.
Note the definition of the invariants can be engineered to reduce the complexity of the coefficient functions for a particular material dataset. In a limiting case, the coefficient functions are themselves invariants and hence present the simplest representation in some sense, albeit one that is hard to discover _a priori_ from the measured data. Representation complexity is particularly important in the low data regime which we explore.
### Data models
Three well-known compressible hyperelastic models were selected to generate training data: (a) Mooney-Rivlin [62, 63], (b) a modified version of Carroll's hyperelastic law [64, 65], and (c) a Gent-type model [66, 67]. Each is expressed in terms of the invariants \(I_{1}=\mathrm{tr}(\mathbf{C})\), \(I_{2}=\mathrm{tr}\big{(}\mathbf{C}^{-1}\big{)}\det(\mathbf{C})\), and \(J=\sqrt{\det\mathbf{C}}\)
The specific compressible Mooney-Rivlin model considered here has the strain energy function
\[\Psi=\theta_{1}\left(\frac{I_{1}}{J^{2/3}}-3\right)+\theta_{2}\left(\frac{I_{2}}{ J^{4/3}}-3\right)+\theta_{3}(J-1)^{2}\, \tag{39}\]
which yields a second Piola-Kirchhoff stress of the form:
\[\mathbf{S}=\underbrace{2\left(\frac{\theta_{1}}{J^{2/3}}+I_{1}\frac{\theta_{2 }}{J^{4/3}}\right)}_{c_{0}^{*}}\mathbf{I}\underbrace{-2\frac{\theta_{2}}{J^{4 /3}}}_{c_{1}^{*}}\mathbf{C}+\underbrace{J\left[-\frac{2}{3}\theta_{1}\frac{I_{ 1}}{J^{5/3}}-\frac{4}{3}\theta_{2}\frac{I_{2}}{J^{7/3}}+2\theta_{3}(J-1)\right] }_{c_{-1}^{*}}\mathbf{C}^{-1} \tag{40}\]
We use (scaled) material parameters (\(\theta_{1}=0.92\) Pa, \(\theta_{2}=2.37\) Pa and \(\theta_{3}=10.001\) MPa) from fits to vulcanized rubber data, c.f. Ref. [68], for data generation.
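A short NumPy sketch of this data generator, evaluating the coefficient functions \(c_{0}^{*},c_{1}^{*},c_{-1}^{*}\) of Eq. (40) at a deformation state; expressing \(\theta_{3}\) in Pa alongside the other parameters is our assumption about the unit handling.

```python
# Mooney-Rivlin stress sample via the coefficient functions of Eq. (40).
import numpy as np

t1, t2, t3 = 0.92, 2.37, 10.001e6     # scaled parameters from Ref. [68]

def mooney_rivlin_S(C):
    I1 = np.trace(C)
    I3 = np.linalg.det(C)
    I2 = np.trace(np.linalg.inv(C)) * I3
    J = np.sqrt(I3)
    c0 = 2.0 * (t1 / J**(2/3) + I1 * t2 / J**(4/3))
    c1 = -2.0 * t2 / J**(4/3)
    cm1 = J * (-2/3 * t1 * I1 / J**(5/3) - 4/3 * t2 * I2 / J**(7/3)
               + 2.0 * t3 * (J - 1.0))
    return c0 * np.eye(3) + c1 * C + cm1 * np.linalg.inv(C)

F = np.diag([1.1, 1.0, 0.95])         # an illustrative deformation gradient
S = mooney_rivlin_S(F.T @ F)
```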
Following Ref. [65], a modified Carroll model is defined by the strain energy function
\[\Psi=\theta_{1}\left(\frac{I_{1}}{J^{2/3}}-3\right)+\theta_{2}\left[\left( \frac{I_{1}}{J^{2/3}}\right)^{4}-81\right]+\theta_{3}\left(\sqrt{\frac{I_{2}} {J^{4/3}}}-\sqrt{3}\right)+\theta_{4}(J-1)^{2}. \tag{41}\]
This energy results in
\[\mathbf{S} =\underbrace{2\left[\frac{\theta_{1}}{J^{2/3}}+4\theta_{2}\frac{I_{1}^{3}}{J^{8/3}}+I_{1}\,\frac{\theta_{3}}{2J^{2/3}\sqrt{I_{2}}}\right]}_{c_{0}^{*}}\mathbf{I}\underbrace{-\frac{\theta_{3}}{J^{2/3}\sqrt{I_{2}}}}_{c_{1}^{*}}\mathbf{C} \tag{42}\] \[+\underbrace{J\left(-\frac{2}{3}\theta_{1}\frac{I_{1}}{J^{5/3}}-\frac{8}{3}\theta_{2}\frac{I_{1}^{4}}{J^{11/3}}-\frac{2}{3}\theta_{3}\frac{\sqrt{I_{2}}}{J^{5/3}}+2\theta_{4}(J-1)\right)}_{c_{-1}^{*}}\mathbf{C}^{-1}\]
We use a scaled version of the material parameters reported in Ref. [65], in particular \(\theta_{1}=151.09387\) GPa, \(\theta_{2}=0.3028\) MPa, \(\theta_{3}=68.33070\) GPa, and \(\theta_{4}=500\) TPa.
Lastly, we also utilize the response of a compressible version of the Gent+Gent model, as named in Ref. [69], that is defined by the strain energy function
\[\Psi=-\frac{\theta_{1}}{2}J_{m}\log\left(1-\frac{I_{1}-3}{J_{m}}\right)-\theta _{2}\log\left(\frac{I_{2}}{J}\right)+\theta_{3}\left(\frac{1}{2}(J^{2}-1)-\log J \right)\, \tag{43}\]
where we choose \(\theta_{1}=2.4195\) MPa, \(\theta_{2}=1.8146\) MPa, \(\theta_{3}=1.2097\) MPa and \(J_{m}=77.931\). This strain energy yields a second Piola-Kirchhoff stress of the form:
\[\mathbf{S}=\underbrace{2\left[-\frac{\theta_{1}}{2}J_{m}\frac{1}{I_{1}-3-J_{m }}+\theta_{2}\frac{I_{1}}{I_{2}}\right]}_{c_{0}^{*}}\mathbf{I}\underbrace{-2 \theta_{2}\frac{1}{I_{2}}}_{c_{1}^{*}}\mathbf{C}+\underbrace{J\,\theta_{3}(J- \frac{1}{J})}_{c_{-1}^{*}}\mathbf{C}^{-1} \tag{44}\]
The Gent+Gent model is not polyconvex; however, it is convex over a limited range where \(I_{1}<J_{m}+3\). For simplicity, we refer to this model simply as Gent.
Note that hereafter \(c_{a}^{*}\) denote the true coefficients, which differ from the extracted coefficients \(c_{a}\) near ill-conditioned solves, and the fitted NN coefficients \(\hat{c}_{a}\).
### Training and validation
For sampling, we define a nine-dimensional space around the undeformed configuration of the deformation gradient as
\[\overline{F}_{ij}\in[F_{ij}^{L},F_{ij}^{U}]=\begin{cases}1-\delta\leq 1\leq 1+ \delta,&\text{when i=j},\\ -\delta\leq 0\leq\delta,&\text{otherwise}.\end{cases} \tag{45}\]
We then define a training region with \(\delta=0.2\) and a test region with \(\delta=0.3\) and use the space-filling sampling technique proposed in Ref. [37] to generate \(100\) training points and \(10,000\) test points that fill their respective spaces. Figure 1 shows the spread of these samples in invariant space \((I_{1},I_{2},J)\). Then given the triple \((I_{1},I_{2},I_{3}=J^{2})\) we can reconstruct the right Cauchy-Green tensor as
\[\mathbf{C}=\begin{bmatrix}\frac{1}{3}I_{1}-2\sqrt{H}\cos\left(\frac{\pi-\beta}{3}\right)&0&0\\ 0&\frac{1}{3}I_{1}-2\sqrt{H}\cos\left(\frac{\pi+\beta}{3}\right)&0\\ 0&0&\frac{1}{3}I_{1}+2\sqrt{H}\cos\left(\frac{\beta}{3}\right)\end{bmatrix} \tag{46}\]
where
\[H=\frac{1}{9}(I_{1}^{2}-3I_{2}),\qquad G=\frac{1}{3}I_{1}I_{2}-I_{3}-\frac{2}{ 27}I_{1}^{3},\qquad\beta=\arccos\left(-\frac{G}{2H^{3/2}}\right) \tag{47}\]
from which we can obtain the values of the invariants and the basis of all the investigated stress representations of Sec. 2. When training the model we use an \(80/20\) split to obtain \(20\) validation data points.
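A NumPy sketch of the reconstruction in Eqs. (46)-(47), round-tripped against a chosen set of principal stretches; the `clip` guard against floating-point overshoot in the `arccos` argument is our addition.

```python
# Recover a diagonal right Cauchy-Green tensor from an invariant triple.
import numpy as np

def C_from_invariants(I1, I2, I3):
    H = (I1**2 - 3.0 * I2) / 9.0
    G = I1 * I2 / 3.0 - I3 - 2.0 * I1**3 / 27.0
    beta = np.arccos(np.clip(-G / (2.0 * H**1.5), -1.0, 1.0))
    lam = [I1 / 3.0 - 2.0 * np.sqrt(H) * np.cos((np.pi - beta) / 3.0),
           I1 / 3.0 - 2.0 * np.sqrt(H) * np.cos((np.pi + beta) / 3.0),
           I1 / 3.0 + 2.0 * np.sqrt(H) * np.cos(beta / 3.0)]
    return np.diag(lam)

eig = np.array([1.21, 1.0, 0.81])          # chosen (distinct) eigenvalues of C
I1, I3 = eig.sum(), eig.prod()
I2 = eig[0]*eig[1] + eig[0]*eig[2] + eig[1]*eig[2]
C = C_from_invariants(I1, I2, I3)
assert np.allclose(np.sort(np.diag(C)), np.sort(eig))   # round trip succeeds
```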
After obtaining a set of training and testing data we added noise to the resulting coefficient values to disrupt symmetries and analytic functional forms. In particular, we take the coefficient values \(c_{0},c_{1},c_{-1}\) corresponding to the representation
\[\mathbf{S}=c_{0}\mathbf{I}+c_{1}\mathbf{C}+c_{-1}\mathbf{C}^{-1} \tag{48}\]
for every data point, and define a noisy version \(\tilde{c}_{a}=c_{a}+\mathcal{N}(0,0.02\left|c_{a}\right|)\) for \(a=0,1,-1\) which then gives a noisy stress
\[\tilde{\mathbf{S}}=\tilde{c}_{0}\mathbf{I}+\tilde{c}_{1}\mathbf{C}+\tilde{c} _{-1}\mathbf{C}^{-1}. \tag{49}\]
This \(\tilde{\mathbf{S}}\) was then used as the target stress to obtain the coefficients for all other models, e.g. the Criscione model. Hence, we generate \(100\) noisy training samples that have the same invariants as the noiseless counterparts and use the same noiseless data for the test set. An example of a generated noisy test set is shown in Figure 2 for the Mooney-Rivlin model.
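A compact sketch of this extraction-and-perturbation step, solving Eq. (16) in the shared eigenbasis and then applying the coefficient-wise noise of Eq. (49); the synthetic stress below is illustrative.

```python
# Extract power-basis coefficients at a data point, then perturb them.
import numpy as np

def extract_coeffs(C, S):
    eps, Q = np.linalg.eigh(C)             # C = Q diag(eps) Q^T
    sig = np.diag(Q.T @ S @ Q)             # S is coaxial with C
    V = np.stack([np.ones(3), eps, 1.0 / eps], axis=1)  # rows [1, eps_a, eps_a^{-1}]
    return np.linalg.solve(V, sig)         # singular if eigenvalues coincide

rng = np.random.default_rng(3)
C = np.diag([1.3, 1.0, 0.8])
S = 0.5 * np.eye(3) + 0.2 * C - 0.1 * np.linalg.inv(C)   # synthetic stress
c = extract_coeffs(C, S)                                 # ~ [0.5, 0.2, -0.1]

c_noisy = c + rng.normal(0.0, 0.02 * np.abs(c))          # noise model of Eq. (49)
S_noisy = c_noisy[0]*np.eye(3) + c_noisy[1]*C + c_noisy[2]*np.linalg.inv(C)
```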
All the tensor basis neural network models [1] were implemented in _PyTorch_[70]. Potential models were formed from a multilayer, densely connected feedforward neural network (NN) with a single output
\[\Phi=N\!N(\mathcal{I}) \tag{50}\]
where the coefficients \(c_{a}=\partial_{I_{a}}\Phi\) are obtained through automatic differentiation. Summation with the known basis provides the stress
\[\mathbf{S}=\sum_{a}c_{a}\mathbf{B}_{a} \tag{51}\]
The coefficient-based models utilized a monolithic NN with 3 outputs
\[c_{a}=N\!\!N_{a}(\mathcal{I}) \tag{52}\]
and the same summation to form the stress. To be consistent all TBNN models consisted of 3 layers with 30 neurons per layer and a _Softplus_ activation function [71]. App. C and App. D provide additional details of the implementation of potential and coefficient-based TBNNs, respectively.
The training loss was formulated on the mean squared error of either the stress components or the coefficients
\[\text{MSE}=\lambda_{\mathbf{S}}\sum_{i=1}^{N_{\text{train}}}\|\mathbf{S}_{i}- \hat{\mathbf{S}}_{i}\|^{2}+\lambda_{c}\sum_{i=1}^{N_{\text{train}}}\sum_{a=1}^ {3}\|[c_{a}]_{i}-[\hat{c}_{a}]_{i}\|^{2}\, \tag{53}\]
since these are available from representative volume element (RVE)/experimental data, whereas the strain energy is less accessible. Although mixing both losses proved useful in preliminary studies, all reported data is from models trained with either \(\lambda_{\mathbf{S}}=1,\lambda_{c}=0\) or \(\lambda_{\mathbf{S}}=0,\lambda_{c}=1\). Note the coefficients scale as \(|c_{a}|\sim\|\mathbf{S}\|\,\|\mathbf{B}_{a}\|^{-1}\) so the coefficient-based loss can suffer from numerical conditioning issues. All models were trained for \(50,000\) epochs using the _Adam_ optimizer [72] with a constant learning rate of \(0.001\).
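A condensed, self-contained sketch of this training setup with random stand-in data, a shortened epoch budget, and the stress-only loss (\(\lambda_{\mathbf{S}}=1,\lambda_{c}=0\)); it mirrors, but does not reproduce, the authors' implementation.

```python
import torch
import torch.nn as nn

# toy stand-ins for the invariants, tensor basis, and target stresses
N = 100
inv = torch.rand(N, 3) + 2.5
basis = torch.randn(N, 3, 3, 3)
S_true = torch.randn(N, 3, 3)

net = nn.Sequential(nn.Linear(3, 30), nn.Softplus(),
                    nn.Linear(30, 30), nn.Softplus(),
                    nn.Linear(30, 30), nn.Softplus(),
                    nn.Linear(30, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

perm = torch.randperm(N)
tr, va = perm[:80], perm[80:]                    # 80/20 train/validation split

for epoch in range(1000):                        # the paper trains 50,000 epochs
    opt.zero_grad()
    S_hat = torch.einsum("na,naij->nij", net(inv[tr]), basis[tr])
    loss = ((S_hat - S_true[tr]) ** 2).sum()     # stress-only loss of Eq. (53)
    loss.backward()
    opt.step()

with torch.no_grad():
    S_val = torch.einsum("na,naij->nij", net(inv[va]), basis[va])
    print("validation MSE:", ((S_val - S_true[va]) ** 2).sum().item())
```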
We compared the performance of the models using the normalized root mean squared error (RMSE) of the stress over the \(10,000\) testing data points
\[\text{RMSE}=\sqrt{\frac{\sum_{i=1}^{N_{\text{test}}}\|\mathbf{S}_{i}-\hat{ \mathbf{S}}(\mathbf{C}_{i})\|^{2}}{\sum_{i}\|\mathbf{S}_{i}\|^{2}}} \tag{54}\]
where \(\hat{\mathbf{S}}_{i}\) is the predicted stress and \(\mathbf{S}_{i}\) represents the ground truth at data point \(i\).
## 4 Results
First, we survey the test losses of the models enumerated in Table 1. Then we undertake detailed investigations of why the theoretically equivalent representations do or do not perform well by examining where the largest errors occur and how the predictions compare to held-out data.
Figure 1: Space-filling training/validation (colored by maximum stress component normalized by the mean for the Mooney-Rivlin data) and test samples in invariant space.
### Comparison of test losses
For each TBNN model described in Table 1 we assembled an ensemble of 30 parameterizations using random initialization of the NN parameters and shuffling the training/validation subsets of the 100 training points. Figure 3 shows the range of RMSE test errors for the six datasets from the 3 traditional models described in Sec. 3, each with and without noise. Clearly, the various theoretically equivalent TBNN formulations perform differently and each of the datasets evokes different errors. Overall the polyconvex (Convex-Pot) and monotonic (Mono-Coeff) models appear to perform the best, although the standard potential-based (Rivlin-Pot) model has comparable performance to Mono-Coeff. The coefficient-based Rivlin-Coeff has considerably higher test errors than the potential-based Rivlin-Pot, despite Rivlin-Pot relying on automatic differentiation. The Criscione and other orthogonal models perform worse than the convex, monotonic and Rivlin models but they do better on the Gent data than on the other two datasets. We observe that the Gent model has a qualitatively different functional form than the selected Mooney-Rivlin and Carroll data-generating models, e.g. the presence of log terms in the energy Eq. (43). The Orth model with smoother invariants performs the best of the orthogonal basis models and is an anomaly in that it trains better indirectly to stress than to the extracted coefficients. This may be due to the conditioning issues with solving for coefficients of a power basis, mentioned in Sec. 3.3. The St. Venant model (both \(\mathbf{C}\) and \(\mathbf{E}\) based) is an outlier with large errors likely due to the mismatch in the growth of the tensor basis and the data, which necessitates more complex coefficient functions. Although small, the differences in calibration techniques can have an effect. Training to stress, instead of extracted coefficients, can regularize the trained coefficient functions, since stress is smoother than the coefficient functions as per the Man-Serrin theorem discussed in Sec. 2.2. Also training to stress can discourage a potential reinforcement of bias from individually trained coefficient functions that need to coordinate to form an accurate stress. Training to coefficients, on the other hand, removes the potentially ill-conditioned linear algebra implicit in training to stress. Projection of data onto expected bases may also remove discrepancies as with the noisy datasets.
The test errors seem to be largely dominated by testing the models in extrapolation, more insight will be given in the following sections.
Figure 2: Effective stress noise in percent of Frobenius norm error over the test data of a Mooney-Rivlin dataset.
### Locations of worst errors
The worst 1% errors of the 10,000 sample test set for each model are shown in Figure 4 for the Mooney-Rivlin data, in Figure 5 for the modified Carroll data, and in Figure 6 for the Gent data. For reference, the undeformed state is at \((I_{1}=3,I_{2}=3,I_{3}=1)\) which is inside the hull of sample points shown in these figures. Generally speaking, for most models, the worst errors are at the boundary of the test locus where they are forced into extrapolation. Note that \(I_{3}\) is associated with volumetric deformation, \(I_{1}\) can be interpreted as the linearization of \(I_{3}\), while \(I_{2}\) is sensitive to shear and deviatoric deformations.
Figure 3: Comparison of TBNN variants summarized in Table 1 over test sets for the three different data models with and without noise. Note that coefficient-based variants with \({}^{*}\) were trained to stress data, and had to discover the coefficient values. Also the **E**-based variant of the St.Venant basis Eq. (29) had errors well above the displayed range so only the **C**-based variant Eq. (30) is shown.

Examining Figure 4, the Convex-Pot, Rivlin-Pot, and Orth-Coeff have the largest errors where the invariants (and eigenvalues of **C**) are large, while the Crisc-Coeff and the similar Orthnorm-Coeff have largest errors in the low region. The Crisc-Pot has relatively large errors at both extremes. Of the better-performing models, the Mono-Coeff formulation is an outlier since it has its worst errors in the midrange of the invariants, albeit still at the boundary. Likewise, the Orthnorm-Coeff has high error in the midrange, as well as the low range, while the St. Venant model performs particularly poorly in the low to midrange. The patterns are relatively unchanged with discrepancy added by noise, with the exception of the worst errors transitioning to the lower range for the most accurate model, Convex-Pot. The errors for the Carroll data shown in Figure 5 largely resemble those for the Mooney-Rivlin data, although for this data the Mono-Coeff and Orth-Coeff models seem more sensitive to noise. They, together with the Convex-Pot, shift their worst error locations with noise. The errors for the Gent data, shown in Figure 6, however, present different patterns. For this case, all models perform worst where the invariants are small, which can be ascribed to the Gent model being ill-defined where \(I_{1}\) becomes less than a value determined by the parameter \(J_{m}\). There are also scattered worst error locations in the midrange for the Rivlin-Coeff model.
Figure 7, Figure 8, and Figure 9 provide another view of the error patterns and corroborate the observations from the previous plots. These figures illustrate the correlation of the maximum errors with the difference in the largest and smallest (Rivlin-Ericksen) coefficient values for a particular data point in the test set. Large differences in the coefficient values are associated with less symmetric deformations. For each model, the figures show how the worst errors shift as a function of the difference in the coefficient values. For the Mooney-Rivlin data, only the worst-performing model, StV-C, has a single locus of maximal errors. Of the best-performing models, Convex-Pot, Rivlin-Pot, and Rivlin-Coeff have similar patterns that remain stable after the injection of noise. The Mono-Coeff model, on the other hand, changes the locations of where the worst errors occur relative to the difference in the coefficient values and also has the worst errors on par with Convex-Pot. Again the patterns for the Carroll data are similar to the Mooney-Rivlin data, while the Gent data present qualitatively different patterns. For the Gent data, all models, except the worst performing StV-C, have overlapping loci of worst errors that do not change appreciably with noise. This is perhaps due to discrepancies between the relatively simple TBNN models and the Gent data.
Next, we aim to test the generalization performance of the TBNN variants. We conjecture that the differences in generalization performance can be attributed to how much of the complexity of the stress-strain mapping is intrinsically provided by the nonlinearity of the stress representation bases. That is, if the nonlinearity of the bases in the chosen representation approaches the nonlinearity of the stress-strain mapping, then the coefficients can be described by simpler, lower-order functions. If the coefficients are lower-order functions, i.e. constant, linear, or quadratic, then accurately extrapolating this behavior with a TBNN requires less training data and a less complex NN.
To check this hypothesis we offer the following approach. Consider the mapping \(\mathcal{M}_{IC}\) from the respective invariants to the respective coefficient values, e.g.
\[\mathcal{M}_{IC}:\mathcal{I}\rightarrow[c_{1},c_{2},c_{3}]. \tag{55}\]
Let this mapping be approximated by a polynomial regressor of \(n\)-th order with interaction terms denoted by \(\hat{\mathcal{M}}^{n}_{IC}(\mathcal{I})\), e.g. for \(n=2\)
\[\hat{\mathcal{M}}^{2}_{IC}(\mathcal{I})=\hat{\mathbf{c}}^{2}_{IC}=\mathbf{\alpha}_{0}+\mathbf{\alpha}_{1}I_{1}+\mathbf{\alpha}_{2}I_{2}+\mathbf{\alpha}_{3}I_{3}+\mathbf{\alpha}_{4}I_{1}^{2}+\mathbf{\alpha}_{5}I_{1}I_{2}+\mathbf{\alpha}_{6}I_{1}I_{3}+\mathbf{\alpha}_{7}I_{2}^{2}+\mathbf{\alpha}_{8}I_{2}I_{3}+\mathbf{\alpha}_{9}I_{3}^{2} \tag{56}\]
where \(\mathbf{\alpha}_{i}\in\mathbb{R}^{3}\). To check the potential complexity of this mapping we look at two scenarios. First, how polynomial regression fitted on the training data predicts the test data, and second, how well \(\hat{\mathcal{M}}^{n}_{IC}(\mathcal{I})\) predicts the test data coefficients if it was trained on the test data. The root-mean-squared error (RMSE) between the reference and predicted test data coefficients of the former is shown in Figure 10 for the three noiseless data cases. Surprisingly, polynomials of second order generalize the best for Rivlin-Coeff and Mono-Coeff. This leads us to the conjecture that the
complexity of the coefficient functions of Rivlin-Coeff and Mono-Coeff is generally lower than that of the other representations. This seems to correlate with the results of the TBNN generalization errors, c.f. Figure 3. The coefficient error for regressors of increasing polynomial order, trained on the test data and evaluated on the test data, is displayed in Figure 11. It can be seen that, compared to the previous case (Figure 10), Mono-Coeff and Rivlin-Coeff still have the lowest errors but, more significantly, the complexity of the coefficient functions seems to have changed, i.e. while second order polynomials were best for models trained on the training data, an increasing polynomial order now helps to accurately fit the coefficient functions. Note that the upward trends after initially low-discrepancy fits in some of the data in Figure 10 could possibly be attributed to overfitting of the polynomials to the 100-point training dataset.
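A sketch of fitting the polynomial surrogate \(\hat{\mathcal{M}}^{n}_{IC}\) of Eq. (56) with scikit-learn; the invariant samples and coefficient targets below are stand-ins for the paper's datasets.

```python
# Polynomial surrogate from invariants to coefficients, swept over the order n.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
I_train = rng.uniform(2.5, 3.5, size=(100, 3))    # stand-in invariant samples
c_train = np.stack([2 + 0.1 * I_train[:, 0],      # stand-in coefficient data
                    -0.5 * np.ones(100),
                    0.3 * I_train[:, 2]], axis=1)

for order in range(1, 5):
    reg = make_pipeline(PolynomialFeatures(degree=order), LinearRegression())
    reg.fit(I_train, c_train)                     # includes interaction terms
    rmse = np.sqrt(np.mean((reg.predict(I_train) - c_train) ** 2))
    print(f"order {order}: coefficient RMSE {rmse:.3e}")
```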
Next, we aim to gauge what the contribution of the basis representation is on the accuracy. From the output of the polynomial regression of (56) an \(n\)-th order polynomial prediction of the stress can be obtained
\[\hat{\mathbf{S}}^{n}_{IC}=\sum_{a=1}^{3}\,\hat{c}^{n}_{a,IC}\mathbf{B}_{a}. \tag{57}\]
Figure 4: Mooney-Rivlin data, worst 1% errors for each model.

We furthermore assume an alternative mapping \(\mathcal{M}_{IS}\) from the invariants of the representation to the symmetric components of the stress
\[\mathcal{M}_{IS}:\mathcal{I}\rightarrow[S_{11},S_{12},S_{13},S_{22},S_{23},S_{33}] \tag{58}\]
for which we build a similar \(n\)-th order polynomial referred to as \(\hat{\mathcal{M}}_{IS}^{n}\). Then, we can find the difference between the RMSEs of \(\hat{\mathbf{S}}_{IC}^{n}\) and \(\hat{\mathcal{M}}_{IS}^{n}\) evaluated on the test data, i.e.
\[\Delta\text{RMSE}^{n}(\mathcal{I}^{test})=\text{RMSE}(\hat{\mathcal{M}}_{IS}^ {n}(\mathcal{I}^{test}))-\text{RMSE}(\hat{\mathbf{S}}_{IC}^{n}(\mathcal{I}^{ test})).\]
Simplistically, the difference between the stress errors of these two regressors helps us judge the role and contribution of the basis generators, i.e. if \(\Delta\text{RMSE}^{n}>0\) then the bases have a positive contribution to the prediction, which means that the RMSE of obtaining the stress from a linear combination of coefficients and bases \(\hat{\mathbf{S}}_{IC}^{n}=\sum_{a=1}^{3}\,\hat{c}_{a,IC}^{n}\mathbf{B}_{a}\) is lower than that of the mapping from invariants to stress directly. This would tell us that the basis components take complexity out of the system. On the other hand, if \(\Delta\text{RMSE}^{n}<0\) the basis components make the mapping between invariants and stress more complex.
In Figure 12 we compare the RMSE-difference of models trained on the training data and evaluated on the test data, while Figure 13 highlights the RMSE difference when the models were trained and tested on the test data. It can be seen that the basis components of the Mono-Coeff and Rivlin-Coeff representations generally help in reducing the complexity of the invariants-stress mapping, especially for lower polynomial order, while the opposite is true for the remaining representations that were investigated; this is the reason why the two mappings do not perform the same.

Figure 5: Modified Carroll data, worst 1% errors for each model.

Figure 6: Gent data, worst 1% errors for each model.

Figure 7: Maximal errors for each model for the Mooney-Rivlin data. The horizontal lines indicate where the data has been clipped.
Overall we believe that this investigation of the data through the lens of a polynomial regressor suggests that our hypothesis is valid. The TBNN has better generalization capabilities for some of the stress representations because the complexity of the mapping is reduced owing to nonlinearities introduced by the basis components.
Figure 8: Maximal errors for each model for the modified Carroll data. The horizontal lines indicate where the data has been clipped.
Figure 9: Maximal errors for each model for the Gent data. The horizontal lines indicate where the data has been clipped.
### Interpolation
The TBNN models have the ability to form smooth extensions to coefficient functions for high symmetry loading. Consider the parameterized invariants [73]
\[\begin{split} I_{1}(\gamma)&=3-2\,\gamma+\gamma^{2}\\ I_{2}(\gamma)&=3-4\gamma+2\,\gamma^{2}\\ I_{3}(\gamma)&=(1-\,\gamma)^{2}\end{split} \tag{59}\]
with \(\gamma\in[-0.2,0.2]\). A uniaxial extension is obtained for \(\gamma<0\), while \(\gamma>0\) yields an (equi)biaxial extension. This path is highlighted in Figure 14 in the projected invariant space and is inside the training region, even though it is not explicitly part of the training data set. We specifically focus on this path because it is characterized by two coalescent principal stretches, \(\lambda_{2}=\lambda_{1}\) for \(\gamma<0\) and \(\lambda_{3}=\lambda_{2}\) for \(\gamma>0\), where \(\lambda_{1}\leq\lambda_{2}\leq\lambda_{3}\). Here \(\lambda_{a}=\sqrt{\epsilon_{a}}\) are the principal stretches. As described earlier and as seen in App. A, this means that the solution matrix for the coefficients of some of the presented stress representations is ill-conditioned and special schemes are needed to be able to solve for the coefficients.

Figure 10: RMSE between predicted coefficient values and reference coefficient values using a polynomial of \(n\)-th order as the regressor which was trained on the training data and tested on the test data set.

Figure 11: RMSE between predicted coefficient values and reference coefficient values using a polynomial of \(n\)-th order as the regressor which was trained on all available data (test data) and tested on the same data.

We remark that:
1. Depending on the loading path, these schemes lead to discontinuities near the reference state \(\mathbf{C}=\mathbf{I}\). In particular, this is the case for the Rivlin-Coeff (\(\mathbf{S}=c_{1}\mathbf{I}+c_{2}\mathbf{C}+c_{3}\mathbf{C}^{-1}\)) and St.V-Coeff (\(\mathbf{S}=c_{1}\mathbf{C}+c_{2}\mathbf{C}^{2}+c_{3}\mathbf{C}^{3}\)) representations for the path described in Eq. (59).
2. Due to the space-filling way in which the training data was generated, the principal strains of all training data points are unique, apart from the undeformed configuration. This means that the trained models have not been trained on coefficients that came as a result of the special schemes described in App. A. Hence, examining the predicted coefficients in this high-symmetry case provides an interesting and interpretable test case.
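Because the discontinuity argument hinges on this coalescence, a small numerical check is useful. The sketch below (NumPy assumed; function and variable names are ours) recovers the principal stretches along the path of Eq. (59) from the invariants alone, via the characteristic polynomial \(\epsilon^{3}-I_{1}\epsilon^{2}+I_{2}\epsilon-I_{3}=0\) of \(\mathbf{C}\):

```python
import numpy as np

def principal_stretches(gamma):
    """Eigenvalues eps_a of C are the roots of eps^3 - I1 eps^2 + I2 eps - I3;
    the principal stretches are lambda_a = sqrt(eps_a)."""
    I1 = 3.0 - 2.0 * gamma + gamma**2
    I2 = 3.0 - 4.0 * gamma + 2.0 * gamma**2
    I3 = (1.0 - gamma) ** 2
    eps = np.roots([1.0, -I1, I2, -I3]).real    # real for this loading path
    return np.sort(np.sqrt(np.clip(eps, 0.0, None)))

for g in (-0.2, -0.1, 0.1, 0.2):
    print(g, principal_stretches(g).round(6))
# gamma < 0: lambda1 = lambda2 (uniaxial); gamma > 0: lambda2 = lambda3 (biaxial)
```

Running it confirms the stated coalescence pattern on both sides of the undeformed configuration.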
Figure 15 shows the trained coefficients and stress predictions for the Rivlin-Coeff and St.V-Coeff models over \(\gamma\). Clearly, the extracted coefficient functions near the reference state \(\mathbf{C}=\mathbf{I}\) become discontinuous; however, the built-in continuity of the NN enables an approximately continuous extension. This approximation \(\hat{c}_{a}\) differs from the coefficients \(c_{a}\) extracted from the equation system, altered to accommodate multiplicity, but still yields an accurate stress representation for Rivlin-Coeff and a sufficient one for St.V-Coeff. It seems that the smooth extension avoids large errors that would be incurred if the extracted coefficients were approximated.

Figure 12: The RMSE difference between the stress prediction of an \(n\)-th order polynomial trained from the invariants to coefficients \(\hat{\mathbf{S}}^{n}_{IC}\) and an \(n\)-th order polynomial trained from the invariants directly to the stress \(\hat{\mathcal{M}}^{n}_{IS}\), i.e. \(\Delta\text{RMSE}^{n}(\mathcal{I}^{test})=\text{RMSE}(\hat{\mathcal{M}}^{n}_{IS}(\mathcal{I}^{test}))-\text{RMSE}(\hat{\mathbf{S}}^{n}_{IC}(\mathcal{I}^{test}))\). Both models were trained on the **training** data and tested on the **test** data.

Figure 13: The RMSE difference between the stress prediction of an \(n\)-th order polynomial trained from the invariants to coefficients \(\hat{\mathbf{S}}^{n}_{IC}\) and an \(n\)-th order polynomial trained from the invariants directly to the stress \(\hat{\mathcal{M}}^{n}_{IS}\), i.e. \(\Delta\text{RMSE}^{n}(\mathcal{I}^{test})=\text{RMSE}(\hat{\mathcal{M}}^{n}_{IS}(\mathcal{I}^{test}))-\text{RMSE}(\hat{\mathbf{S}}^{n}_{IC}(\mathcal{I}^{test}))\). Both models were trained on all available **test** data and tested on the same data.
Remarkably, the resulting predicted coefficients for Rivlin-Coeff are practically equivalent to the coefficients derived from the potential prediction of Convex-Pot, Figure 16. This is interesting since Convex-Pot has not been trained on the coefficients. We furthermore remark that (for this loading case) not all of the representations show a discontinuous coefficient behavior as a result of the coalescent principal strains. For example, for Mono-Coeff, whose coefficient matrix was also ill-conditioned and had to be adapted, the coefficient behavior is smooth over \(\gamma\), see Figure 17. In this case, the NN coefficients are basically identical to the extracted coefficients obtained from the altered equation system.
### Extrapolation
To highlight the predictive quality of selected representations, consider a loading path given by the homogeneous deformation [73]
\[\begin{split} I_{1}&=3-1.6\,\gamma+\gamma^{2}\\ I_{2}&=3-3.2\gamma+1.64\,\gamma^{2}\\ I_{3}&=(1-0.8\,\gamma)^{2}\end{split} \tag{60}\]
where \(\gamma\in[0,1]\). The loading path projected into invariant space is shown in Figure 18. We can see that it starts at the undeformed configuration and goes beyond even the range of the test data. The deformation leaves the hull of the training data points at \(\gamma\approx 0.25\). We highlight that this path is not explicitly part of the training data set. For five of the representations, Figure 19 shows the expected and trained coefficients, respectively. The range of the training regime is highlighted. Surprisingly, all the selected models fit the reference coefficients sufficiently well, even St.V-Coeff, which was by far the worst-performing approach in terms of generalization error. Comparing the fits of the models to the conclusions drawn from Figure 3, it becomes evident that the magnitude-wise largest coefficient predictions of the poorly performing representations tend to stagnate earlier. Similar reasoning can be followed when examining Figure 20, which plots three components of the expected stress and their predicted counterparts from the neural network models, where we observe that the representations which have only one significant coefficient on the path are also the best performing.

Figure 14: Uniaxial and biaxial extension parameterization in invariant space given by \(\gamma\).
## 5 Conclusion
In this work, we investigated a wide variety of tensor basis neural network models, including previously unexplored alternatives to the classical Finger-Rivlin-Ericksen stress representation, such as TBNNs with an orthogonal basis and others with a St. Venant basis. We compared coefficient-based TBNNs against potential-based models and discussed and summarized techniques to obtain reference coefficient values from stress-strain data pairs.
In our case studies involving six test datasets for three materials and two noise levels, the representations derived from the classical Finger-Rivlin-Ericksen formulation yield the best generalization performance. This was surprising to us, because we had initially believed that, in particular, the potential advantages of orthogonal bases, e.g. continuity of the coefficients and linear independence, would also translate into better extrapolations. We found that the generalization capabilities of the stress representations depend largely on the simplicity and low complexity of the coefficient functions, in the sense that the coefficient functions are smooth and monotonic, like low-order polynomials. This is the case when the stress generators (bases) already describe a majority of the stress-strain data complexities and hence the invariant-coefficient mappings are simpler. The introduction of physics-constrained extensions to the TBNN, in particular convexity (potential) or monotonicity (coefficients), appears to be the most beneficial for accurate extrapolation and generalization. We also observed that the assumption of the existence of a potential helps the performance, i.e. potential-based models generalize better than coefficient-based models for the same stress representation.

Figure 15: Reference and predicted coefficients around the undeformed configuration for uniaxial (\(\gamma<0\)) and biaxial (\(\gamma>0\)) extension for the Rivlin-Coeff and St.V-Coeff representations obtained from the Mooney-Rivlin model.

Figure 16: Reference and predicted coefficients around the undeformed configuration for uniaxial (\(\gamma<0\)) and biaxial (\(\gamma>0\)) extension for the Rivlin-Coeff and Convex-Pot representations obtained from the Mooney-Rivlin model.

Figure 17: Reference and predicted coefficients around the undeformed configuration for uniaxial (\(\gamma<0\)) and biaxial (\(\gamma>0\)) extension for the Mono-Coeff representation obtained from the Mooney-Rivlin model.
In future work the study will be extended to include anisotropic representations. We also aim to introduce monotonically increasing neural network formulations that eliminate the restrictiveness of current implementations, where a subset of the outputs is monotonically increasing with only a subset of the inputs, to potentially improve the performance of the monotonic Rivlin representation.
## Acknowledgments
REJ would like to thank Professor J.B. Estrada (University of Michigan) and NB would like to thank Professor K.T. Ramesh (Johns-Hopkins University) for independently pointing out Criscione's work on stress representations with an orthogonal basis.
JF and NB gratefully acknowledge support by the Air Force Office of Scientific Research under award number FA9550-22-1-0075.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
Figure 18: Parameterization in invariant space given by \(\gamma\)
Figure 19: Selected Mooney-Rivlin coefficient curves over parameter \(\gamma\). Dashed lines: extracted coefficients, solid lines: trained NN coefficients.
Figure 20: Selected Mooney-Rivlin stress-strain curves over parameter \(\gamma\). Dashed lines: data, solid lines: trained NN. |
2305.00171 | Novel method for identifying the heaviest QED atom | The bound state of a $\tau^+\tau^-$ pair by the electromagnetic force is the
heaviest and smallest QED atom. Since the discovery of the two lightest QED
atoms more than 60 years ago, no evidence for the third one has been found. We
demonstrate that the $J_\tau$ ($\tau^+\tau^-$ atom with $J^{PC}=1^{--}$)
resonance signal can be observed with a significance larger than $5\sigma$
including both statistical and systematic uncertainties, via the process
$e^+e^-\to X^+ Y^- \slashed{E}$ ($X,\,Y=e$, $\mu$, $\pi$, $K$, or $\rho$, and
$\slashed{E}$ is the missing energy) with $1.5~{\rm ab^{-1}}$ data taken around
the $\tau$ pair production threshold. The $\tau$ lepton mass can be measured in
a precision of 2 keV with the same data sample. This is within one year's
running time of the proposed super tau-charm factory or super charm-tau
factory. | Jing-Hang Fu, Sen Jia, Xing-Yu Zhou, Yu-Jie Zhang, Cheng-Ping Shen, Chang-Zheng Yuan | 2023-04-29T04:51:08Z | http://arxiv.org/abs/2305.00171v2 | # Novel method for identifying the heaviest QED atom
###### Abstract
The bound state of a \(\tau^{+}\tau^{-}\) pair by the electromagnetic force is the heaviest and smallest QED atom. Since the discovery of the two lightest QED atoms more than 60 years ago, no evidence for the third one has been found. We demonstrate that the \(J_{\tau}\) (\(\tau^{+}\tau^{-}\) atom with \(J^{\rm PC}=1^{--}\)) resonance signal can be observed with a significance larger than \(5\sigma\), including both statistical and systematic uncertainties, via the process \(e^{+}e^{-}\to X^{+}Y^{-}\slashed{E}\) (\(X\), \(Y=e\), \(\mu\), \(\pi\), \(K\), or \(\rho\), and \(\slashed{E}\) is the missing energy) with 1.5 ab\({}^{-1}\) data taken around the \(\tau\) pair production threshold. The \(\tau\) lepton mass can be measured with a precision of 2 keV with the same data sample. This is within one year's running time of the proposed super tau-charm factory or super charm-tau factory.
Quantum electrodynamics (QED) atoms are formed of lepton pairs (\(e^{+}e^{-}\), \(\mu^{+}e^{-}\), \(\tau^{+}e^{-}\), \(\mu^{+}\mu^{-}\), \(\tau^{+}\mu^{-}\), and \(\tau^{+}\tau^{-}\)) bound by electromagnetic interactions, similar to hydrogen formed of a proton and an electron. Their properties have been studied to test QED theory [1; 2], fundamental symmetries [3; 4], and gravity [5], and to search for physics beyond the Standard Model [6; 7; 4], and so on. The first QED atom was discovered in 1951; it is the \(e^{+}e^{-}\) bound state, named positronium [8]. The second one was discovered in 1960; it is the \(\mu^{+}e^{-}\) bound state, named muonium [9]. No other QED atom has been found in the past 63 years. The heaviest and smallest QED atom, a bound state of \(\tau^{+}\tau^{-}\) [10], is named tauonium [11; 12], ditauonium [13; 14], or true tauonium [15]. We classify these states, following charmonium spectroscopy [16], simply as \(J_{\tau}(nS)\), \(\eta_{\tau}(nS)\), and \(\chi_{\tau J}(nP)\) for the states with quantum numbers \(n^{3}S_{1}\), \(n^{1}S_{0}\), and \((n+1)^{3}P_{J}\), respectively. There have been many theoretical studies in the literature since the discovery of the \(\tau\) lepton. The spectroscopy of \(\tau^{+}\tau^{-}\) atoms was studied in Ref. [13]. The production of \(\eta_{\tau}\) was considered in Refs. [14; 15], and that of the process \(e^{+}e^{-}\to J_{\tau}(1S)\to\mu^{+}\mu^{-}\) in Refs. [11; 17; 18] and, at the future Super Tau Charm Facility (STCF), in Ref. [19]. In addition, studies of the \(J_{\tau}(nS)\) via the processes \(e^{+}e^{-}\to J_{\tau}(1S)\to\) light hadrons [11; 12] and \(e^{+}e^{-}\to J_{\tau}(nS)\to\gamma\eta_{\tau}\) were proposed [10].
In this Letter, we introduce a novel method for identifying \(J_{\tau}(nS)\) by measuring the cross-section ratio \(\sigma(e^{+}e^{-}\to X^{+}Y^{-}\slashed{E})/\sigma(e^{+}e^{-}\to\mu^{+}\mu^{-})\). Here \(X\), \(Y=e\), \(\mu\), \(\pi\), \(K\), or \(\rho\), and \(\slashed{E}\) is the missing energy. By fitting the \(\sigma(e^{+}e^{-}\to X^{+}Y^{-}\slashed{E})/\sigma(e^{+}e^{-}\to\mu^{+}\mu^{-})\) distribution in the vicinity of the \(\tau\) pair production threshold with and without a \(J_{\tau}(nS)\) resonance component, we can establish its existence. We show below that the \(\tau\) lepton mass measured with the process \(e^{+}e^{-}\to\tau^{+}\tau^{-}\to X^{+}Y^{-}\slashed{E}\) is also affected once the \(J_{\tau}(nS)\) contribution is taken into account.
The \(\tau\) lepton mass is one of the fundamental parameters of the Standard Model. A precise \(\tau\) mass measurement is essential to check lepton flavor universality and to constrain the \(\nu_{\tau}\) mass [20]. In the previous \(e^{+}e^{-}\to\tau^{+}\tau^{-}\) cross-section scan measurements at the BESIII [21; 22; 23] and KEDR [24] experiments, the contribution of \(e^{+}e^{-}\to J_{\tau}(nS)\) was not included in the theoretical calculation. This is not a problem for those experiments, as their uncertainties are at the 100 keV level (the current world-average \(\tau\) mass is \(m_{\tau}^{\rm PDG}=(1776.86\pm 0.12)\) MeV [16]), but the effect should not be ignored in next-generation experiments, which aim at a precision one or two orders of magnitude better than previous ones.
The cross section \(\sigma(e^{+}e^{-}\to X^{+}Y^{-}\slashed{E})\) around the \(\tau^{+}\tau^{-}\) production threshold is [21; 22; 23; 24]
\[\sigma(W,m_{\tau},\Gamma_{\tau},\delta_{w})=\int_{m_{\ell}}^{\infty}dW^{\prime}\,\frac{1}{\sqrt{2\pi}\delta_{w}}\,e^{-\frac{(W-W^{\prime})^{2}}{2\delta_{w}^{2}}}\times\int_{0}^{1-\frac{m_{\ell}^{2}}{W^{\prime 2}}}dx\;F(x,W^{\prime})\,\frac{\bar{\sigma}(W^{\prime}\sqrt{1-x},m_{\tau},\Gamma_{\tau})}{|1-\Pi(W^{\prime}\sqrt{1-x})|^{2}}. \tag{1}\]
Here, \(W\) is the center-of-mass energy, \(\delta_{w}\) is the energy spread, \(F(x,W)\) is the initial-state radiation factor [25], \(\Pi(W)\) is the vacuum polarization factor [26], and \(\bar{\sigma}(W,m_{\tau},\Gamma_{\tau})\) is the Born cross section. With the \(J_{\tau}(nS)\) atoms included, Eq. (1) differs from those given in Refs. [21; 22; 23; 24] as follows: in the range of integration, \(2m_{\tau}\) is replaced by the ground-state mass \(m_{\ell}\); the \(\tau\) decay width \(\Gamma_{\tau}\) is added as a variable; and the contribution of the \(J_{\tau}(nS)\) atoms (\(\bar{\sigma}^{J_{\tau}}(W)\)) is included in \(\bar{\sigma}(W,m_{\tau},\Gamma_{\tau})\) as
\[\bar{\sigma}(W,m_{\tau},\Gamma_{\tau})=\bar{\sigma}^{J_{\tau}}(W)+\bar{\sigma}^{ \rm con}(W), \tag{2}\]
where \(\bar{\sigma}^{\rm con}(W)\) is the cross section from the \(e^{+}e^{-}\to\tau^{+}\tau^{-}\) continuum process and is calculated to the next-to-leading order (NLO) of the fine structure constant \(\alpha\), as has been done for \(\bar{\sigma}^{J_{\tau}}(W)\).
At the NLO [13; 27], the cross section \(\bar{\sigma}^{J_{\tau}}\) from narrow resonances is given by the Breit-Wigner function [16]
\[\bar{\sigma}^{J_{\tau}}(W)=\sum_{n}\frac{6\pi^{2}|1-\Pi(2m_{\tau})|^{2}\left(1-\frac{3\alpha}{4\pi}\right)}{W^{2}\,\Gamma^{J_{\tau}(nS)}_{\rm total}}\,\delta(W-m_{J_{\tau}(nS)})\,\Gamma^{J_{\tau}(nS)}_{X^{+}Y^{-}\slashed{E}}\,\Gamma^{J_{\tau}(nS)}_{e^{+}e^{-}}, \tag{3}\]
where \(m_{J_{\tau}(nS)}=2m_{\tau}+E_{n}\) and \(E_{n}=-\alpha^{2}m_{\tau}/(4n^{2})\) are the mass and binding energy of \(J_{\tau}(nS)\), respectively. The factor \(|1-\Pi(2m_{\tau})|^{2}(1-3\alpha/4\pi)\) is included here because the initial-state radiation factor and the vacuum polarization factor have already been taken into account in Eq. (1). \(\Gamma^{J_{\tau}(nS)}_{X^{+}Y^{-}\slashed{E}}\) is the partial decay width of \(J_{\tau}(nS)\to X^{+}Y^{-}\slashed{E}\), and \(\Gamma^{J_{\tau}(nS)}_{e^{+}e^{-}}\) is that of \(J_{\tau}(nS)\to e^{+}e^{-}\). We have
\[\Gamma^{J_{\tau}(nS)}_{e^{+}e^{-}} = \frac{\alpha^{5}m_{\tau}}{6n^{3}|1-\Pi(2m_{\tau})|^{2}}\left(1-\frac{13\alpha}{4\pi}+C^{nS}_{\rm Coulomb}\frac{\alpha}{\pi}\right),\] \[\Gamma^{J_{\tau}(nS)}_{X^{+}Y^{-}\slashed{E}} = 2\Gamma_{\tau}+\Gamma(J_{\tau}(nS)\to\gamma\chi_{\tau J}),\ \text{and}\] \[\Gamma^{J_{\tau}(nS)}_{\rm total} = \Gamma^{J_{\tau}(nS)}_{X^{+}Y^{-}\slashed{E}}+(2+R)\,\Gamma^{J_{\tau}(nS)}_{e^{+}e^{-}}. \tag{4}\]
The factor \((2+R)\) comes from the \(e^{+}e^{-}\), \(\mu^{+}\mu^{-}\), and hadronic final states with \(R=2.342\pm 0.0645\) [28], the total \(\tau\) decay width is \(\Gamma_{\tau}=2.2674\pm 0.0039\) meV [16], and \(\Gamma(J_{\tau}(nS)\to\gamma\chi_{\tau J})\) is the \(E1\) transition width. (The annihilation decays of the \(\chi_{\tau J}\) are ignored since their contributions are smaller than \(\bar{\sigma}^{J_{\tau}}(W)\) by a factor of \(10^{-6}\).) With Green functions, the Coulomb corrections \(C^{nS}_{\rm Coulomb}\) are calculated [13; 29] to be 5.804, 4.428, 3.810, 3.518, 3.358, 3.256, 3.186, 3.134, 3.093, and 3.061 for \(n=1\)-\(10\), respectively. Most of the NLO corrections to \(\Gamma(J_{\tau}(nS)\to e^{+}e^{-})\) come from the vacuum polarization factor \(\Pi\). Then we get
\[\bar{\sigma}^{J_{\tau}}(W)=(3.11\pm 0.02)\;\delta\left(\frac{W-2m_{\tau}+13.8 \text{ keV}}{1\;\text{MeV}}\right)\,\text{pb}, \tag{5}\]
where \(-13.8\text{ keV}=\sum_{n}E_{n}B^{J_{\tau}(nS)}_{X^{+}Y^{-}\slashed{E}}\Gamma^{J_{\tau}(nS)}_{e^{+}e^{-}}/\sum_{n}B^{J_{\tau}(nS)}_{X^{+}Y^{-}\slashed{E}}\Gamma^{J_{\tau}(nS)}_{e^{+}e^{-}}\), with \(B^{J_{\tau}(nS)}_{X^{+}Y^{-}\slashed{E}}\) being the branching fraction of \(J_{\tau}(nS)\to X^{+}Y^{-}\slashed{E}\). The uncertainty from \(R\) is one order of magnitude greater than that from \(m_{\tau}\) and \(\Gamma_{\tau}\).
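As a rough illustration of the size of this resonance term (our own estimate), smearing the \(\delta\)-function of Eq. (5) with only the Gaussian energy spread of Eq. (1), i.e. ignoring initial-state radiation, which further reduces and skews the peak, gives at the peak position

\[\sigma^{J_{\tau}}_{\rm peak}\approx\frac{3.11\ \text{pb}\,\text{MeV}}{\sqrt{2\pi}\,\delta_{w}}\approx 1.2\ \text{pb}\qquad\text{for }\delta_{w}=1\ \text{MeV}.\]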
We use the NLO cross section \(\bar{\sigma}^{\rm con}(W)\) and take the NNLO corrections as uncertainties here [30]. To reduce the uncertainties from the initial-state radiation factor and the vacuum polarization factor in Eq. (1), and that from the integrated luminosity [28], we introduce \(R_{X^{+}Y^{-}\slashed{E}}\), the ratio of the cross sections,
\[R_{X^{+}Y^{-}\slashed{E}}(W,\delta_{w},m_{\tau})=\frac{\sigma(W,m_{\tau},\Gamma_{\tau},\delta_{w})}{\sigma^{\mu^{+}\mu^{-}}(W,\delta_{w})}. \tag{6}\]
Here, \(\sigma^{\mu^{+}\mu^{-}}(W,\delta_{w})\) is calculated with \(\bar{\sigma}^{\mu^{+}\mu^{-}}(W)=\frac{4\pi\alpha^{2}}{3W^{2}}\left(1+\frac{3\alpha}{4\pi}\right)\) in Eq. (1). The higher-order correction terms, such as \(9\alpha m_{\mu}^{2}/\pi W^{2}\) and \(m_{\mu}^{4}/W^{4}\), are ignored because they are merely global factors of about \(2\times 10^{-5}\) here. With \(m_{\tau}=m_{\tau}^{\rm PDG}\) and \(\delta_{w}=1\) MeV [19], the cross sections \(\sigma^{\rm con}(m_{\tau}^{\rm PDG})\), \(\sigma^{\rm res}(m_{\tau}^{\rm PDG})\), and \(\sigma^{\rm total}(m_{\tau}^{\rm PDG})\) are shown in Fig. 1.
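As a quick sanity check of the size of the normalization channel (our own estimate; the constants used are the standard fine-structure constant and the \((\hbar c)^{2}\) unit conversion), the Born cross section above evaluates near threshold to a few nanobarns:

```python
import math

ALPHA = 7.2973525693e-3      # fine-structure constant
HBARC2 = 0.3893793721e9      # (hbar c)^2 in pb * GeV^2

def sigma_mumu_born(W):
    """Born cross section (4 pi alpha^2 / 3W^2)(1 + 3 alpha / 4 pi) for
    e+e- -> mu+mu-, with W in GeV, returned in picobarns."""
    return (4.0 * math.pi * ALPHA**2 / (3.0 * W**2)
            * (1.0 + 3.0 * ALPHA / (4.0 * math.pi)) * HBARC2)

print(f"{sigma_mumu_born(3.55256):.0f} pb")  # roughly 6.9e3 pb near threshold
```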
Next, we estimate the sensitivity of observing the \(J_{\tau}(nS)\) at a future high luminosity facility such as the STCF [19] and the super charm-tau factory (SCT) [31]. To determine which energy points are optimal for the study, we use the \(\chi^{2}\) values per integrated luminosity as
\[\frac{\chi_{i}^{2}}{\mathcal{L}_{i}}=\frac{(\sigma_{i}^{\rm total}(m_{\tau}^{\rm PDG})-\sigma_{i}^{\rm con}(m_{\tau}))^{2}\cdot\varepsilon_{X^{+}Y^{-}\slashed{E}}}{\sigma_{i}^{\rm total}(m_{\tau}^{\rm PDG})}, \tag{7}\]
where \(\sigma_{i}^{\rm total}(m_{\tau}^{\rm PDG})=\sigma(W,m_{\tau},\Gamma_{\tau},\delta_{w})\) in Eq. (1) is the total cross section for energy point \(i\) assuming \(m_{\tau}=m_{\tau}^{\rm PDG}\), \(\sigma_{i}^{\rm con}(m_{\tau})\) is the cross section calculated with only the continuum term \(\bar{\sigma}^{\rm con}\) of Eq. (2) included, \(\varepsilon_{X^{+}Y^{-}\slashed{E}}=8\%\) is the reconstruction efficiency of \(e^{+}e^{-}\to X^{+}Y^{-}\slashed{E}\) events [19; 21], and \(\mathcal{L}_{i}\) is the integrated luminosity. Note that in the calculation of \(\sigma_{i}^{\rm con}(m_{\tau})\), \(m_{\tau}\) is allowed to vary, so that \(\frac{\chi_{i}^{2}}{\mathcal{L}_{i}}\) takes different values at each energy point. Here, we choose the best solution by minimizing the value of \(\Sigma_{i}\frac{\chi_{i}^{2}}{\mathcal{L}_{i}}\) within the region \(3.54<W<3.56\) GeV. In the end, we find that the values of \(\frac{\chi_{i}^{2}}{\mathcal{L}_{i}}\) are relatively large at \(W=3552.56\) and \(3555.83\) MeV, and that at \(3552.56\) MeV is about half of that at \(3555.83\) MeV. Besides the above two energy points, an additional energy point at \(3549.00\) MeV is needed to obtain the whole lineshape of the \(e^{+}e^{-}\to X^{+}Y^{-}\slashed{E}\) cross section.
We determine how large the data samples must be in order to observe the \(J_{\tau}(nS)\) at \(W=3549.00\), \(3552.56\), and \(3555.83\) MeV by performing \(10^{5}\) sets of simulated pseudoexperiments. The numbers of expected events for \(e^{+}e^{-}\to X^{+}Y^{-}\slashed{E}\) and \(e^{+}e^{-}\to\mu^{+}\mu^{-}\) in the simulated data samples are determined by \(N^{\rm data}_{X^{+}Y^{-}\slashed{E}}=\sigma^{\rm total}(m_{\tau}^{\rm PDG})\cdot\mathcal{L}\cdot\varepsilon_{X^{+}Y^{-}\slashed{E}}\) and \(N^{\rm data}_{\mu^{+}\mu^{-}}=\sigma^{\mu^{+}\mu^{-}}\cdot\mathcal{L}\cdot\varepsilon_{\mu^{+}\mu^{-}}\), where \(\varepsilon_{\mu^{+}\mu^{-}}=45\%\) is the reconstruction efficiency of \(e^{+}e^{-}\to\mu^{+}\mu^{-}\) events [19; 32]. The statistical uncertainties of \(N^{\rm data}_{\mu^{+}\mu^{-}}\) and \(N^{\rm data}_{X^{+}Y^{-}\slashed{E}}\) are evaluated assuming Poisson statistics,
where the number of expected background events is zero. The numbers of expected events and the statistical uncertainties for \(e^{+}e^{-}\to X^{+}Y^{-}\slashed{E}\) and \(e^{+}e^{-}\to\mu^{+}\mu^{-}\) in the simulated data samples are summarized in Table 1, where the integrated luminosities are optimized and determined based on the \(\chi^{2}\) values so that the estimated \(J_{\tau}(nS)\) signal significance reaches the \(5\sigma\) level (discussed below). For each pseudoexperiment, we randomly generate the numbers of events (\(N^{\rm data}_{X^{+}Y^{-}\slashed{E},i}\) and \(N^{\rm data}_{\mu^{+}\mu^{-},i}\), \(i=1\), \(2\), and \(3\)) according to Poisson distributions.
A least-squares fit is applied to each pseudoexperiment with
\[\chi^{2}=\sum_{i=1}^{3}\left(\frac{\mathcal{R}^{\rm data}_{i}-\hat{\mathcal{R }}_{i}(m_{\tau})}{\Delta\mathcal{R}^{\rm data}_{i}}\right)^{2}, \tag{8}\]
where \(\mathcal{R}^{\rm data}_{i}=\frac{N^{\rm data}_{X^{+}Y^{-}\slashed{E},i}}{N^{\rm data}_{\mu^{+}\mu^{-},i}}\) and \(\Delta\mathcal{R}^{\rm data}_{i}\) is its statistical uncertainty calculated from those of \(N^{\rm data}_{X^{+}Y^{-}\slashed{E},i}\) and \(N^{\rm data}_{\mu^{+}\mu^{-},i}\); \(\hat{\mathcal{R}}_{i}(m_{\tau})\) is the expected ratio at the \(\tau\) mass \(m_{\tau}\) to be determined from the fit. The fit to one pseudoexperiment is shown in Fig. 2(a), and the corresponding contribution from the \(J_{\tau}(nS)\) resonance cross section (\(\sigma^{\rm res.}\)) is shown in Fig. 2(b). For the \(10^{5}\) sets of simulated pseudoexperiments, the average value of \(\chi^{2}/{\rm ndf}\) is 0.7/2, where ndf is the number of degrees of freedom. This indicates a very good fit to the simulated data samples.
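A minimal sketch of one such pseudoexperiment, assuming a hypothetical callable `Rhat(m_tau)` that returns the expected ratios \(\hat{\mathcal{R}}_{i}\) of Eq. (6) at the three scan energies (NumPy and SciPy assumed; all names are ours):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

def pseudo_fit(Rhat, N_sig_exp, N_mu_exp):
    """One pseudoexperiment: Poisson-fluctuate the expected event counts at
    the three scan points, then minimize the chi^2 of Eq. (8) over m_tau.
    Assumes all fluctuated counts are nonzero (true for the planned samples)."""
    n_sig = rng.poisson(N_sig_exp)               # e+e- -> X+Y- + missing energy
    n_mu = rng.poisson(N_mu_exp)                 # e+e- -> mu+mu- (normalization)
    r = n_sig / n_mu                             # R_i^data
    dr = r * np.sqrt(1.0 / n_sig + 1.0 / n_mu)   # propagated Poisson errors
    chi2 = lambda m: np.sum(((r - Rhat(m)) / dr) ** 2)
    res = minimize_scalar(chi2, bounds=(1776.0, 1778.0), method='bounded')
    return res.x, res.fun                        # fitted m_tau [MeV], chi^2_min
```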
By removing the \(J_{\tau}(nS)\) resonance contribution in calculating \(\hat{\mathcal{R}}_{i}(m_{\tau})\) and refitting the data, we find a much poorer fit quality (the average value of \(\chi^{2}/{\rm ndf}\) is 51/2 for the \(10^{5}\) sets of simulated pseudoexperiments), and the difference between the \(\chi^{2}\) values measures the statistical significance of the \(J_{\tau}(nS)\) signals. Figure 3 shows the normalized distribution of the statistical significances in all the pseudoexperiments. We conclude that in the scenario of taking 5 fb\({}^{-1}\) of data at 3549.00 MeV, 500 fb\({}^{-1}\) at 3552.56 MeV, and 1000 fb\({}^{-1}\) at 3555.83 MeV, as indicated in Table 1, we have a 96% chance of discovering the \(J_{\tau}(nS)\) with a statistical significance larger than \(5\sigma\) and an almost 100% chance of observing it with a significance larger than \(3\sigma\). These data samples correspond to about 350 and 175 days of data-taking time at the STCF [19] and SCT [31], with designed instantaneous luminosities of \(0.5\times 10^{35}\) and \(1.0\times 10^{35}\) cm\({}^{-2}\)s\({}^{-1}\), respectively. Here, we assume the efficiency and \(\delta_{w}\) at the SCT are the same as those at the STCF.
With these data samples, we obtain a high-precision \(\tau\) mass measurement; the above fit yields
\[m_{\tau}=(1776.8600\pm 0.0002({\rm stat.})\pm 0.0018({\rm syst.}))~{}{\rm MeV},\]
where the first and second uncertainties are statistical and systematic, respectively. The fit with the \(J_{\tau}(nS)\) contribution removed results in a \(-4\) keV shift relative to the nominal fit with both resonance and continuum contributions, which is twice as large as the total systematic uncertainty discussed below and should not be ignored in future high-precision measurements.
The systematic uncertainties in the \(\tau\) mass measurement are listed in Table 2. The uncertainty associated with the energy scale is assessed by shifting \(W\) by a factor of \(10^{-6}\) [19], which results in an uncertainty of 1.8 keV in the \(\tau\) mass. By changing the energy spread [19], we find the \(\tau\) mass is changed by 0.3 keV. By replacing the NLO correction with the NNLO correction in the calculation of the \(e^{+}e^{-}\to X^{+}Y^{-}\slashed{E}\) cross sections, we find the \(\tau\) mass changes by 0.1 keV, which is included as the uncertainty due to the theoretical accuracy. Since we perform the fit to the ratio of observed \(e^{+}e^{-}\to X^{+}Y^{-}\slashed{E}\) and \(\mu^{+}\mu^{-}\) events, the uncertainty from the integrated luminosity cancels. The uncertainty in the \(e\), \(\mu\), \(\pi\), \(K\), and \(\pi^{0}\) reconstruction and identification efficiencies is about 5% [19], and this changes the fitted \(\tau\) mass by 0.1 keV. For the first energy point, with a small expected number of events and a large statistical uncertainty, we enlarge the uncertainty on \(N^{\rm data}_{X^{+}Y^{-}\slashed{E}}\) by a factor of three, and the resulting \(\tau\) mass does not change. Assuming all these sources are independent, we add them in quadrature to obtain the total systematic uncertainty, which is listed in Table 2. After considering the above systematic uncertainties, the \(J_{\tau}(nS)\) signal significance is almost unchanged.

Figure 2: (a) The fit to \(\mathcal{R}^{\rm data}\) from one set of pseudoexperiment data, and (b) the \(\sigma^{\rm res.}\) contribution from the fit. The dots with error bars are the pseudoexperiment data, and the red curves display the best fit.

Figure 3: Normalized distribution of the statistical significance of the \(J_{\tau}(nS)\) signals in all the pseudoexperiments.
In summary, we show that the \(\tau^{+}\tau^{-}\) atom with \(J^{PC}=1^{--}\), \(J_{\tau}\), can be observed with a significance larger than \(5\sigma\) with a 1.5 ab\({}^{-1}\) data sample at the proposed high-luminosity experiments STCF and SCT, by measuring the ratio of the processes \(e^{+}e^{-}\to X^{+}Y^{-}\slashed{E}\) and \(e^{+}e^{-}\to\mu^{+}\mu^{-}\). With the same data sample, the \(\tau\) lepton mass can be measured with a precision better than 2 keV, a factor of about 50 improvement over the existing world-best measurements.
_Acknowledgments. --_ We thank Prof. Kuang-Ta Chao for valuable and helpful discussions. This work is supported in part by National Key Research and Development Program of China under Contract No. 2020YFA0406300, National Natural Science Foundation of China (NSFC) under contract No. 11975076, No. 12161141008, No. 12135005, and No. 12075018; and the Fundamental Research Funds for the Central Universities Grant No. 4007022302.
|
2301.12082 | Pushing the Limits of Fewshot Anomaly Detection in Industry Vision:
Graphcore | In the area of fewshot anomaly detection (FSAD), efficient visual feature
plays an essential role in memory bank M-based methods. However, these methods
do not account for the relationship between the visual feature and its rotated
visual feature, drastically limiting the anomaly detection performance. To push
the limits, we reveal that rotation-invariant feature property has a
significant impact in industrial-based FSAD. Specifically, we utilize graph
representation in FSAD and provide a novel visual isometric invariant feature
(VIIF) as anomaly measurement feature. As a result, VIIF can robustly improve
the anomaly discriminating ability and can further reduce the size of redundant
features stored in M by a large amount. Besides, we provide a novel model
GraphCore via VIIFs that can fast implement unsupervised FSAD training and can
improve the performance of anomaly detection. A comprehensive evaluation is
provided for comparing GraphCore and other SOTA anomaly detection models under
our proposed fewshot anomaly detection setting, which shows GraphCore can
increase average AUC by 5.8%, 4.1%, 3.4%, and 1.6% on MVTec AD and by 25.5%,
22.0%, 16.9%, and 14.1% on MPDD for 1, 2, 4, and 8-shot cases, respectively. | Guoyang Xie, Jinbao Wang, Jiaqi Liu, Feng Zheng, Yaochu Jin | 2023-01-28T03:58:32Z | http://arxiv.org/abs/2301.12082v3 | # Pushing the Limits of Fewshot Anomaly Detection in Industry Vision: Graphcore
###### Abstract
In the area of fewshot anomaly detection (FSAD), efficient visual feature plays an essential role in memory bank \(\mathcal{M}\)-based methods. However, these methods do not account for the relationship between the visual feature and its rotated visual feature, drastically limiting the anomaly detection performance. To push the limits, we reveal that the rotation-invariant feature property has a significant impact in industrial-based FSAD. Specifically, we utilize graph representation in FSAD and provide a novel visual isometric invariant feature (VIIF) as the anomaly measurement feature. As a result, VIIF can robustly improve the anomaly discriminating ability and can further reduce the size of redundant features stored in \(\mathcal{M}\) by a large amount. Besides, we provide a novel model GraphCore via VIIFs that can fast implement unsupervised FSAD training and can improve the performance of anomaly detection. A comprehensive evaluation is provided for comparing GraphCore and other SOTA anomaly detection models under our proposed fewshot anomaly detection setting, which shows GraphCore can increase average AUC by 5.8%, 4.1%, 3.4%, and 1.6% on MVTec AD and by 25.5%, 22.0%, 16.9%, and 14.1% on MPDD for 1, 2, 4, and 8-shot cases, respectively.
## 1 Introduction
With the rapid development of deep vision detection technology in artificial intelligence, the detection of anomalies/defects on the surface of industrial products (including scratches, stains, missing, etc.) has received unprecedented attention. Changeover in manufacturing refers to the process of converting a line or machine from processing one product to another. Since the equipment has not been completely fine-tuned after the starting of the production line, changeover frequently results in unsatisfactory anomaly detection (AD) performance.
How to achieve rapid training of industrial product models in the changeover scenario while assuring accurate anomaly detection is a critical issue in the actual production process. The current state
of AD in the industry is as follows: (1) In terms of detection accuracy, the performance of state-of-the-art (SOTA) AD models degrades dramatically during changeover. Current mainstream work utilizes a considerable amount of training data as input to train the model, as shown in Fig. 1(a). However, this makes data collection challenging, even for unsupervised learning. As a result, many approaches based on fewshot learning have been proposed at the price of accuracy. For instance, Huang et al. (2022) employ meta learning, as shown in Fig. 1(b). However, due to its complicated setting, it cannot flexibly migrate to the new product during changeover, and the detection accuracy cannot be guaranteed. (2) In terms of training speed, when a large amount of data is utilized for training, the training progress for new goods is slowed on an actual production line. As is well known, vanilla unsupervised AD requires collecting a large amount of data. Even though meta learning works in fewshot learning, as shown in Fig. 1(b), it is still necessary to train on a massive amount of previously collected data.
We argue that AD for industrial products requires only a small quantity of data to achieve performance comparable to that obtained with a large amount of data, i.e. a small quantity of image data can contain sufficient information to represent a large dataset. Because industrial products are manufactured with high stability (no evident distortion of shape or color cast), the captured images lack the diversity of natural images, and the main variation comes from the shooting angle or rotation. Therefore, it is essential to extract rotation-invariant structural features. As graph neural networks (GNNs) are capable of robustly extracting non-serialized structural features (Han et al. (2022), Bruna et al. (2013), Hamilton et al. (2017), Xu et al. (2018)), they are better suited than convolutional neural networks (CNNs) to the problem of extracting rotation-invariant features. For this reason, the _core idea_ of the proposed _GraphCore_ method in this paper is to use visual isometric invariant features (VIIFs) as the anomaly measurement features. In the method using a memory bank (\(\mathcal{M}\)) as the AD paradigm, PatchCore (Roth et al. (2022)) uses ResNet (He et al. (2016)) as the feature extractor. However, since the features obtained by CNNs are not rotation invariant (Dieleman et al. (2016)), a large number of redundant features are stored in \(\mathcal{M}\). Note that these redundant features may come from multiple rotated versions of the same patch structure. It hence requires a huge quantity of training data to ensure high accuracy on the test set. To avoid these redundant features, we propose VIIFs, which not only produce more robust visual features, but also dramatically lower the size of \(\mathcal{M}\) and accelerate detection.
Based on the previous considerations, the goal of our work is to handle the cold start of the production line during the changeover. As shown in Fig. 1(c), a new FSAD method, called _GraphCore_, is developed that employs a small number of normal samples to accomplish fast training and competitive AD accuracy performance of the new product. On the one hand, by utilizing a small amount of data, we would rapidly train and accelerate the speed of anomaly inference. On the other hand, because we directly train new product samples, adaptation and migration of anomalies from the old product to the new product do not occur.
**Contributions.** In summary, the main contributions of this work are as follows:
* We present a feature-augmented method for FSAD in order to investigate the property of visual features generated by CNNs.
* We propose a novel anomaly detection model, GraphCore, which adds a new VIIF into the memory bank-based AD paradigm and can drastically reduce the quantity of redundant visual features.
* The experimental results show that the proposed VIIFs are effective and can greatly enhance the FSAD performance on MVTec AD and MPDD.
**Related Work.** Fewshot anomaly detection (FSAD) is an attractive research topic. However, only a few papers are devoted to industrial-image FSAD. Some works (Liznerski et al. (2020); Pang et al. (2021); Ding et al. (2022)) experiment with fewshot abnormal images in the test set, which contradicts our assumption that no abnormal images exist. Others (Wu et al. (2021); Huang et al. (2022)) conduct experiments in a meta-learning setting. This configuration has the disadvantage of requiring a large number of base-class images and being incapable of addressing the shortage of data under cold-start conditions in industrial applications. PatchCore (Roth et al. (2022)), SPADE (Cohen and Hoshen (2020)), and PaDiM (Defard et al. (2021)) investigated AD performance on MVTec AD in a fewshot setting. However, these approaches are not intended for changeover-based fewshot settings, thus their performance cannot satisfy the requirements of manufacturing
changeover. In this research, we propose a feature augmentation method for FSAD that can rapidly finish the training of anomaly detection models with a small quantity of data and meet manufacturing changeover requirements.
## 2 Approach
**Problem Setting.** Fig. 1(c) outlines the formal definition of the problem setting for the proposed FSAD. Given a training set of only \(n\) normal samples, where \(n\leq 8\), from a certain category. At test time, given a normal or abnormal sample from a target category, the anomaly detection model should predict whether or not the image is anomalous and localize the anomaly region if the prediction result is anomalous.
**Challenges.** For the FSAD proposed in Fig. 1(c), we attempt to detect anomalies in the test sample using only a small number of normal images as the training dataset. The key challenges are: (1) Each category's training dataset contains only normal samples, i.e. there are no annotations at the image or pixel level. (2) Only a few normal training samples are available. In our proposed setting, there are at most 8 training samples.
**Motivation.** In realistic industrial image datasets (Bergmann et al. (2019); Jezek et al. (2021)), the images within a given category are extremely similar. Most of them can be converted to one another with simple data augmentation, such as the metal nut (Fig. 2) and the screw (Fig. 6). For instance, rotation augmentation can effectively produce a new screw dataset. Consequently, when faced with the challenges stated in Section 2, our natural inclination is to acquire additional data through data augmentation. Then, the feature memory bank (Fig. 4) can store more useful features.
### Augmentation+PatchCore
As a means of validating our insight, we adapt PatchCore (Roth et al. (2022)) to our setting. We denote PatchCore with rotation augmentation as Aug.(R). The architecture is depicted in detail in Fig. 2. Before extracting features from the ImageNet-pretrained model, we augment the data (e.g., by rotating it).
Figure 1: Different from (a) vanilla unsupervised AD and (b) fewshot unsupervised AD in meta learning. As input training samples, our setting (c) only utilizes a small number of normal samples. For our setting (c), there is no requirement to aggregate training categories in advance. The proposed model, vision isometric invariant GNN, can fast obtain the invariant feature within a few normal samples, and its accuracy outperforms models trained in a meta-learning context.
In the training phase, the aim is to build up a memory bank that stores the neighbourhood-aware features of all normal samples. At test time, a test image is predicted as anomalous if at least one patch is anomalous, and pixel-level anomaly segmentation is computed via the score of each patch feature. The feature memory bank construction method is shown in Algorithm 1. By default, we use ResNet18 (He et al. (2016)) as the feature extraction model. Conceptually, coreset sampling (Sener and Savarese (2018)) for the memory bank aims to balance the size of the memory bank against the performance of anomaly detection, and the size of the memory bank has a huge impact on the inference speed. In Section 3.3, we discuss the effect of the sampling rate in detail.
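A minimal sketch of the memory bank construction in the spirit of Algorithm 1, using greedy farthest-point coreset subsampling (PyTorch assumed; the function and variable names are ours, not the paper's):

```python
import torch

def build_memory_bank(patch_features: torch.Tensor, ratio: float) -> torch.Tensor:
    """Greedy (farthest-point) coreset subsampling of the normal patch
    features: keep a fraction `ratio` of the bank while covering it well."""
    n = patch_features.shape[0]
    m = max(1, int(n * ratio))
    selected = [torch.randint(n, (1,)).item()]        # random seed patch
    d = torch.cdist(patch_features, patch_features[selected]).squeeze(1)
    for _ in range(m - 1):
        idx = int(torch.argmax(d))                    # farthest patch so far
        selected.append(idx)
        d = torch.minimum(
            d, torch.cdist(patch_features, patch_features[[idx]]).squeeze(1))
    return patch_features[selected]                   # the memory bank M
```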
In the testing phase, with the normal patch feature bank \(\mathcal{M}\), the image-level anomaly score \(s\) for the test image \(x^{test}\) is computed as the maximum score \(s^{*}\) between the test image's patch features \(\mathcal{P}(x^{test})\) and their respective nearest neighbours \(m^{*}\) in \(\mathcal{M}\):

\[s^{*}=\max_{p\in\mathcal{P}(x^{test})}\,\min_{m\in\mathcal{M}}\|p-m\|_{2}.\]
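The corresponding test-time scoring can be sketched as follows (again PyTorch, with illustrative names):

```python
import torch

def anomaly_scores(test_patches: torch.Tensor, memory_bank: torch.Tensor):
    """Nearest-neighbour distances of test patch features to M: the per-patch
    scores drive the segmentation map, their maximum is the image score s*."""
    d = torch.cdist(test_patches, memory_bank)    # (num_patches, |M|)
    patch_scores = d.min(dim=1).values            # min_m ||p - m||_2 per patch
    return patch_scores, patch_scores.max()       # (pixel-level, image-level)
```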
From Table 2 and Table 3, it can easily be observed that the performance of Aug.(R) greatly outperforms the SOTA models under the proposed fewshot setting.
As shown in Fig. 3, we propose a new model for feature extraction: the vision isometric invariant graph neural network (VIIG). The proposed model is motivated by Section 2 and aims to extract a visual isometric invariant feature (VIIF) from each patch of the normal sample. As previously stated, the majority of images in industrial visual anomaly detection datasets are transformable into one another via rotation, translation, and flipping. Thus, the isometry invariance of GNNs suits industrial visual anomaly detection excellently.
### Graph Representation of Image
Fig. 4 shows the feature extraction process of GraphCore. Specifically, a normal sample image of size \(H\times W\times 3\) is evenly divided into \(N\) patches. Each patch is transformed into a feature vector \(f_{i}\in\mathbb{R}^{D}\), so we have the features \(F=[f_{1},f_{2},\cdots,f_{N}]\), where \(D\) is the feature dimension and \(i=1,2,\cdots,N\). We view these features as unordered nodes \(\mathcal{V}=\{v_{1},v_{2},\cdots,v_{N}\}\). For each node \(v_{i}\), we find its \(K\) nearest neighbours \(\mathcal{N}(v_{i})\) and add an edge \(e_{ij}\) directed from \(v_{j}\) to \(v_{i}\) for all \(v_{j}\in\mathcal{N}(v_{i})\). Hence, each normal sample can be represented as a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{E}\) refers to all the edges of the graph \(\mathcal{G}\).
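A sketch of this graph construction (PyTorch assumed; names are illustrative):

```python
import torch

def knn_graph(F: torch.Tensor, k: int) -> torch.Tensor:
    """Directed K-nearest-neighbour graph over the N patch features F (N, D):
    row i holds the indices of N(v_i), i.e. edges e_ij from v_j to v_i."""
    d = torch.cdist(F, F)                            # pairwise distances (N, N)
    d.fill_diagonal_(float('inf'))                   # exclude self-loops
    return d.topk(k, dim=1, largest=False).indices   # (N, k) neighbour indices
```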
### Graph Feature Processing
Fig. 4 shows the architecture of the proposed vision isometric invariant GNN. To be specific, we adopt a GCN (Kipf & Welling (2017)) as the feature extractor. We aggregate features for each node by exchanging information with its neighbour nodes. Specifically, the feature extraction operates as follows:
\[\mathcal{G}^{{}^{\prime}}=F(\mathcal{G},\mathcal{W})=Update(Aggregate( \mathcal{G},W_{aggregate}),W_{update}), \tag{1}\]
where \(W_{aggregate}\) and \(W_{update}\) denote the weights of the aggregation and update operations. Both of them can be optimized in an end-to-end manner. Specifically, the aggregation operation for each node is calculated by aggregating neighboring nodes' features:
\[f_{i}^{{}^{\prime}}=h(f_{i},g(f_{i},\mathcal{N}(f_{i}),W_{aggregate}),W_{ update}), \tag{2}\]
where \(h\) is the node feature update function and \(g\) is the node feature aggregation function. \(\mathcal{N}(f_{i}^{l})\) denotes the set of neighbor nodes of \(f_{i}^{l}\) at the \(l\)-th layer. Specifically, we employ max-relative graph convolution (Li et al. (2019)) as our operator, so \(g\) and \(h\) are defined as:
\[g(\cdot)=f_{i}^{{}^{\prime\prime}}=\max(\{f_{i}-f_{j}\,|\,j\in\mathcal{N}(v_{i})\}), \tag{3}\]
\[h(\cdot)=f_{i}^{{}^{\prime}}=f_{i}^{{}^{\prime\prime}}W_{update}. \tag{4}\]
In Equations 3 and 4, \(g(\cdot)\) is a max-pooling vertex feature aggregator that aggregates the differences between the features of node \(v_{i}\) and those of all its neighbors, and \(h(\cdot)\) is an MLP layer with batch normalization and ReLU activation.
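A literal sketch of Eqs. (3) and (4) follows (PyTorch assumed; it consumes the \((N,k)\) neighbour-index matrix from the earlier sketch, and implementations may differ in details such as whether \(f_{i}\) is concatenated before the update):

```python
import torch
import torch.nn as nn

class MaxRelativeGraphConv(nn.Module):
    """Eq. (3): aggregate f''_i = max_j (f_i - f_j) over j in N(v_i);
    Eq. (4): update f'_i = h(f''_i) with Linear + BatchNorm + ReLU."""
    def __init__(self, dim: int):
        super().__init__()
        self.update = nn.Sequential(
            nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU())

    def forward(self, f: torch.Tensor, nbrs: torch.Tensor) -> torch.Tensor:
        # f: (N, D) node features; nbrs: (N, k) neighbour indices
        rel = f.unsqueeze(1) - f[nbrs]     # f_i - f_j, shape (N, k, D)
        agg = rel.max(dim=1).values        # Eq. (3): max over the neighbours
        return self.update(agg)            # Eq. (4)
```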
### GraphCore Architecture
Fig. 5 shows the overall architecture of GraphCore. In the training phase, the most significant difference between GraphCore and Augmentation+PatchCore is the feature extractor feeding the memory bank; the feature memory bank construction algorithm itself is the same as that of Aug.(R) in Algorithm 1. Note that we use the vision isometric invariant GNN as the feature extractor \(\mathcal{P}\), without data augmentation. In the testing phase, the computation of the anomaly score \(s^{*}\) for GraphCore is highly similar to that of Augmentation+PatchCore; the only difference is the feature extraction method for each patch.
Figure 3: Convolution feature vs. vision isometric invariant feature.
### A Unified View of Augmentation+PatchCore and GraphCore
Fig. 6 depicts a unified view of Augmentation+PatchCore and GraphCore. GraphCore is inspired by Augmentation+PatchCore to obtain isometric invariant features directly. Therefore, GraphCore improves the probability of locating a feature subset that allows the anomaly score of a test image to be calculated precisely and rapidly. Table 1 shows the differences among PatchCore, Augmentation+PatchCore, and GraphCore in terms of architecture details.
## 3 Experiment
### Experiment setting
**Datasets.** To demonstrate the generalization of our proposed method, we conduct experiments on three datasets, namely MVTec AD (Bergmann et al. (2019)), MPDD (Jezek et al. (2021)) and MVTec LOCO AD (Bergmann et al. (2022)).
**Competing Methods.** RegAD (Huang et al. (2022)) is the SOTA FSAD method. It works under a meta-learning setting: aggregated training on multiple categories and adaptation to unseen categories, using fewshot unseen images as a support set. However, our proposed fewshot setting utilizes only a few images of the target category as the training set, not several categories. For fairness of the experiments, we reimplement the classical and SOTA approaches in the field of unsupervised anomaly detection, such as SPADE (Cohen and Hoshen (2020)), STPM (Wang et al. (2021)), RD4AD (Deng and Li (2022)), CFA (Lee et al. (2022)), and PatchCore (Roth et al. (2022)), using the official source code for comparison under our fewshot setting. PatchCore-1 is the result of our reimplementation with a 1% sampling rate, PatchCore-10 and PatchCore-25 are the results at 10% and 25% sampling rates, respectively, and RegAD-L is RegAD evaluated under our fewshot setting.
### Comparison with the SOTA Methods
The comparative results on MVTec AD and MPDD are shown in Table 2. In particular, the performance of RegAD under the meta-learning setting is also listed in the table. In comparison to the SOTA models, GraphCore improves the average AUC by 5.8%, 4.1%, 3.4%, and 1.6% on MVTec AD and by 25.5%, 22.0%, 16.9%, and 14.1% on MPDD for the 1, 2, 4, and 8-shot cases, respectively. From Fig. 7, it can easily be observed that GraphCore significantly outperforms the SOTA approaches at both the image and pixel level from 1-shot to 8-shot. As can be seen, the performance of GraphCore and Augmentation+PatchCore surpasses the other methods when using only a few samples (no more than 8) for training.
Considering that RegAD only reports detailed per-category results for 2-shot and above, we only show the detailed 2-shot results in the main text; the results for 1-shot, 4-shot, and 8-shot are in the appendix.
\begin{table}
\end{table}
Table 2: Average FSAD results (image AUROC \(|\) pixel AUROC) on MVTec AD and MPDD for K = 1, 2, 4, and 8 shots, comparing Aug.(R) and GraphCore with the baselines listed in Section 3.1, including RegAD under the meta-learning setting.
As shown in Table 3, GraphCore outperforms all other baseline methods in 12 out of the 15 categories at the image level, and outperforms all other baselines in 11 out of the 15 categories at the pixel level on MVTec AD. Moreover, the results in Table 4 show that GraphCore outperforms all other baselines in 5 out of the 6 categories at the image level, and outperforms all other baselines in all categories at the pixel level on MPDD.
### Ablation Studies
**Sampling Rate.** As demonstrated in Fig. 8, our technique improves significantly as the sampling rate increases from 0.0001 to 0.001, after which a further increase in the sampling rate has a flattening effect on the performance gain. In other words, as the sampling rate gradually increases, the performance of GraphCore becomes insensitive to it.
\begin{table}
\end{table}
Table 4: FSAD results on MPDD. The number of shots K is 2, the sampling ratio is 0.01, and x\(|\)y represents image AUROC and pixel AUROC. The results for PaDiM, PatchCore-10, and PatchCore-25 are reported from Roth et al. (2022). The results for RegAD are reported from Huang et al. (2022). The best-performing method is in bold.
Figure 8: Ablation results on sampling rates and the number of \(N\) nearest neighbors.
\begin{table}
\end{table}
Table 3: FSAD results on MVTec AD. The number of shots K is 2, the sampling ratio is 0.01, and x\(|\)y represents image AUROC and pixel AUROC. The results for RegAD are reported from Huang et al. (2022). The best-performing method is in bold.
**Nearest Neighbour.** In Fig. 8, green represents the performance of GraphCore's 9-nearest-neighbour search and blue represents the performance of GraphCore's 3-nearest-neighbour search. As can be seen, increasing the number of neighbours from 3 to 9 greatly improves pixel-level performance when the sampling rate is low, but does not enhance image-level performance. As the sampling rate increases, the gain from additional pixel-level neighbours approaches zero; a minimal scoring sketch follows below.
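To make the role of this knob concrete, the following minimal sketch (our own illustration, assuming a PatchCore-style coreset memory bank and Euclidean patch distances, not GraphCore's exact implementation) scores test patches by their mean distance to the \(n\) nearest memory-bank features:

```python
import numpy as np

def knn_anomaly_scores(test_feats, memory_bank, n_neighbors=3):
    """test_feats: (P, D) patch features; memory_bank: (M, D) coreset features.
    Returns one score per patch: the mean distance to its n_neighbors nearest
    memory-bank features (larger = more anomalous)."""
    # Pairwise Euclidean distances, shape (P, M).
    d = np.linalg.norm(test_feats[:, None, :] - memory_bank[None, :, :], axis=-1)
    # Average over the n_neighbors closest coreset entries per patch.
    nearest = np.sort(d, axis=1)[:, :n_neighbors]
    return nearest.mean(axis=1)

# Image-level score: max over patch scores; the per-patch scores can be
# reshaped into a pixel-level anomaly map.
patch_scores = knn_anomaly_scores(np.random.rand(196, 128),
                                  np.random.rand(500, 128), n_neighbors=9)
print(patch_scores.max())
```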
**Augmentation Methods.** Fig. 9 shows that the performance of PatchCore on MVTec AD and MPDD is relatively weak, while Aug.(R) performs notably better, which heuristically confirms that our rotation-augmentation enhancement is effective. Moreover, GraphCore outperforms Aug.(R) by a large margin, supporting our assumption that GraphCore can extract isometric-invariant features from industrial anomaly detection images.
### Visualization
Fig. 10 shows the visualization results obtained by our method on MVTec AD and MPDD with a sampling rate of 0.01 in the 1-shot setting. Each column represents a different item type, and the four rows, from top to bottom, are the detection image, anomaly score map, anomaly map overlaid on the detection image, and ground truth. According to the results, our method produces satisfactory anomaly localization on a variety of objects, indicating strong generalization ability even in the 1-shot case.
Figure 10: Visualization results of the proposed method on MVTec AD and MPDD. The first row denotes the training example in 1-shot setting. The second row is test samples (abnormal). The third row is the heatmap on test samples. The fourth row is anomaly mask (ground truth).
Figure 9: GraphCore vs Augmentation+PatchCore vs PatchCore on various number of shot (K).
## 4 Conclusion
In this study, we introduce a new approach, GraphCore, for industrial few-shot visual anomaly detection. Initially, by investigating the CNN-generated feature space, we present a simple pipeline - Augmentation+PatchCore - for obtaining rotation-invariant features. It turns out that this simple baseline can significantly improve anomaly detection performance. We further propose GraphCore to capture the isometric-invariant features of normal industrial samples. It outperforms the SOTA models by a large margin using only a few normal samples (\(\leq 8\)) for training. The majority of industrial anomaly detection datasets possess isomorphism, a property perfectly suited to GraphCore. We will continue to push the limits of industrial few-shot anomaly detection in the future.
## 5 Acknowledgments
This work is supported by the National Natural Science Foundation of China under Grant No. 61972188, 62122035, and 62206122.
## 6 Appendix
### Dataset
**MVTec AD** is the most popular dataset for industrial image anomaly detection (Bergmann et al. (2019)), which consists of 15 categories of items, including a total of 3629 normal images as a training set, and a collection of 1725 normal images and abnormal images as a test set. All images have a resolution between 700\(\times\)700 and 1024\(\times\)1024 pixels.
**MPDD** is a more challenging AD dataset containing 6 classes of metal parts (Jezek et al. (2021)). The images are taken in different spatial directions, and distances, and under the condition of non-uniform background, so it is more challenging. The training set contains 888 normal images, and the test set contains 176 normal images and 282 abnormal images. The resolution of all images is 1024\(\times\)1024 pixels.
**MVTec LOCO AD** adds logical abnormal images in addition to the structural abnormal images (Bergmann et al. (2022)). The dataset contains 1,772 normal images as a training set, and 304 normal images are used as a validation set. The test set contains 575 normal images, 432 structural abnormal images, and 561 logical abnormal images. Because the logical-anomaly detection metric is calculated differently, we discard the logical abnormal images in the test set, retaining the remaining 575 normal images and 432 structural abnormal images as the test set for our experiments. Each image is 850 to 1600 pixels high and 800 to 1700 pixels wide.
### Experiment results
### Ablation Studies
|
2303.12823 | Data-Driven Leader-following Consensus for Nonlinear Multi-Agent Systems
against Composite Attacks: A Twins Layer Approach | This paper studies the leader-following consensuses of uncertain and
nonlinear multi-agent systems against composite attacks (CAs), including Denial
of Service (DoS) attacks and actuation attacks (AAs). A double-layer control
framework is formulated, where a digital twin layer (TL) is added beside the
traditional cyber-physical layer (CPL), inspired by the recent Digital Twin
technology. Consequently, the resilient control task against CAs can be divided
into two parts: One is distributed estimation against DoS attacks on the TL and
the other is resilient decentralized tracking control against actuation attacks
on the CPL. %The data-driven scheme is used to deal with both model
non-linearity and model uncertainty, in which only the input and output data of
the system are employed throughout the whole control process. First, a
distributed observer based on switching estimation law against DoS is designed
on TL. Second, a distributed model free adaptive control (DMFAC) protocol based
on attack compensation against AAs is designed on CPL. Moreover, the uniformly
ultimately bounded convergence of consensus error of the proposed double-layer
DMFAC algorithm is strictly proved. Finally, the simulation verifies the
effectiveness of the resilient double-layer control scheme. | Xin Gong, Jintao Peng, Dong Yang, Zhan Shu, Tingwen Huang, Yukang Cui | 2023-03-22T17:20:35Z | http://arxiv.org/abs/2303.12823v1 | Data-Driven Leader-following Consensus for Nonlinear Multi-Agent Systems against Composite Attacks: A Twins Layer Approach
###### Abstract
This paper studies the leader-following consensuses of uncertain and nonlinear multi-agent systems against composite attacks (CAs), including Denial of Service (DoS) attacks and actuation attacks (AAs). A double-layer control framework is formulated, where a digital twin layer (TL) is added beside the traditional cyber-physical layer (CPL), inspired by the recent Digital Twin technology. Consequently, the resilient control task against CAs can be divided into two parts: One is distributed estimation against DoS attacks on the TL and the other is resilient decentralized tracking control against actuation attacks on the CPL. First, a distributed observer based on switching estimation law against DoS is designed on TL. Second, a distributed model free adaptive control (DMFAC) protocol based on attack compensation against AAS is designed on CPL.
Moreover, the uniformly ultimately bounded convergence of consensus error of the proposed double-layer DMFAC algorithm is strictly proved. Finally, the simulation verifies the effectiveness of the resilient double-layer control scheme.
Cyber attacks, data-driven, leader-following consensus, model-free adaptive control, nonlinear multi-agent systems, twin layer.
## I Introduction
In recent years, multi-agent systems (MASs) have gained popularity and are used in many different contexts, including satellite formation[1, 2], mobile robots [3, 4], autonomous surface vehicles [5, 6] and UAV cluster control [7, 8]. Cooperative control of MASs has become a hot research field[6, 8, 9, 10], and the consensus problem is one of its most important research focuses. In this paper, we work toward a resilient control scheme for leader-following consensus of nonlinear MASs against various cyber attacks in the framework of hierarchical distributed control.

Multi-agent systems rely on mutual communication to achieve cooperative control. However, in a complex communication environment, they cannot avoid communication uncertainties such as cyber attacks. Common types of cyber attacks include Denial of Service (DoS) attacks [11, 12] and actuation attacks (AAs) [13, 14, 15, 16]. DoS attacks cut off the communication channels between agents through various means, while AAs directly inject attack signals into the actuators of agents to offset the system control input. Both damage the security, robustness, and information integrity of MASs, making it difficult to design controllers that resist the damage and preserve autonomous control. In the existing literature, defense strategies against cyber attacks are mainly based on attack detection or attack-adaptive methods. The former needs to constantly detect and identify attacks, which imposes a great computational burden on the system. The latter can achieve acceptable system performance without attack detection, but its defense strategy is effective against only one type of attack according to existing papers [15, 16, 17]. In this paper, the idea of hierarchical control [18] is introduced to cope with multiple cyber attacks at the same time by building a digital twin layer. We focus on leader-following consensus control for nonlinear discrete-time systems with unknown models. In practical applications, model uncertainty and nonlinearity cannot be avoided; hence, the control of unknown nonlinear systems cannot be ignored.

Most research on the consensus control of MASs assumes that the system dynamics are known and that accurate dynamic models are available. However, an accurate system model implies a high measurement cost, so the system models used in practical applications are imprecise and uncertain for cost reasons. Meanwhile, nonlinearity is a typical feature of complexity in nature, and even nominally linear agent dynamics cannot avoid nonlinear parts. Therefore, research on consensus control of nonlinear uncertain MASs is of great significance. In dealing with system nonlinearity, neural networks (NNs) are a feasible approach, as shown in [19, 20, 21], because of their excellent approximation ability, but NN-based adaptive controller design needs a training process to provide appropriate parameters. In addition, consensus control approaches based on iterative learning control (ILC) have also been studied for nonlinear multi-agent systems [22, 23, 24]. However, these methods assume a priori knowledge of the leader's state trajectory over the whole control process, and therefore do not achieve real-time leader-following tracking. Fortunately, the model free adaptive control (MFAC) method can achieve real-time leader-following consensus control for discrete-time nonlinear systems with unknown dynamics. In the MFAC [25, 26, 27], nonlinear systems are represented by
a linear data model linked to input/output (I/O) data, and the control protocol is designed based on the linear data model without the need for an accurate understanding of the system structure.

Many papers also apply NNs and ILC to MFAC [26, 28, 29, 30, 31]. In [30, 31], MFAC is used to handle imprecise models, and NNs are used to estimate sensor errors. The network computation in these papers is placed in the cloud because of its high computational burden, but this also introduces a hidden danger of cyber attacks. In [26], ILC is applied to iteratively learn the optimal control input sequence over the whole control process from data-driven models, but it also unavoidably assumes that the expected trajectory or leader state is known throughout the control period, so real-time tracking cannot be realized. In contrast to applying NNs or ILC to MFAC, this paper introduces the idea of hierarchical control into MFAC, in which we build a digital twin layer (TL) corresponding to the cyber-physical layer (CPL). The TL has the same numbers of leaders and followers and the same topology as the CPL. Since the twin layer has no direct physical significance, it has high confidentiality and security and can be immune to AAs. Therefore, we can divide the leader-following control against DoS attacks and AAs into two parts: resisting DoS attacks to achieve consensus control on the TL and resisting AAs to achieve consensus control on the CPL. A switching control law is designed against DoS attacks on the TL, and an adaptive controller with attack compensation is designed on the CPL for unbounded AAs. Both achieve uniformly ultimately bounded (UUB) convergence of the tracking error.
Inspired by the foregoing discussions, a new hierarchical control scheme based on MFAC is proposed to realize the leader-following consensus control for nonlinear MASs against DoS attacks and AAs. The main contributions of the article are as follows:
1. A double-layer resilient control framework, including the TL and the CPL, is designed to achieve resilient leader-following consensus against cyber attacks. The added TL can be deployed in the cloud; it does not exist physically and carries little physical meaning. Thus, the TL has higher security and confidentiality than the CPL and is immune to AAs. Consequently, the control task can be divided into two parts: distributed estimation against DoS attacks on the TL and decentralized control against AAs on the CPL. Attack defense strategies based on the DMFAC approach are designed for the TL and the CPL, respectively.
2. On the TL, a distributed switching estimation scheme is proposed, which switches according to whether DoS attacks occur or not. The estimation scheme achieves UUB convergence with an explicit upper bound. The tolerable DoS attack magnitude is also discussed.
3. The AAs considered on the CPL can be unbounded, which outperforms most previous works that address only bounded AAs []. Based on the rationale of compact form dynamic linearization (CFDL), an MFAC-based decentralized control scheme against unbounded AAs is designed, which possesses UUB convergence.
_Notations:_ The symbols \(\mathbb{R}^{n}\) and \(\mathbb{R}^{n\times n}\) refer to the sets of real vectors of dimension \(n\) and real matrices of dimension \(n\times n\), respectively. The symbol \(\mathbb{N}\) refers to the set of nonnegative integers. Denote \(I_{n}\in\mathbb{R}^{n\times n}\) as the identity matrix of dimension \(n\times n\) and \(1_{n}\in\mathbb{R}^{n}\) as a column vector filled with \(1\). \(\otimes\) represents the Kronecker product. \(\mathrm{diag}(x_{1},x_{2},\ldots,x_{i})\) denotes a diagonal matrix with \(x_{1},x_{2},\ldots,x_{i}\) as the diagonal elements.
## II Preliminaries And System Setup
### _Graph Theory_
In graph theory, a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbb{A})\) can be used to describe information communication between agents, where \(\mathcal{V}=\{1,2,\ldots,N\}\) is a set of agents and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is a set of edges indicating the flow of information between agents. An edge \(e_{ij}\) in \(\mathcal{G}\) indicates that the information of node \(j\) is available to node \(i\), and agent \(j\) is called a neighbor of agent \(i\). The index set of all neighbors of agent \(i\) is denoted by \(N_{i}=\{j:(i,j)\in\mathcal{E}\}\). In an undirected graph, \((i,j)\in\mathcal{E}\Leftrightarrow(j,i)\in\mathcal{E}\). The adjacency matrix \(\mathbb{A}\triangleq[a_{ij}]\in\mathbb{R}^{N\times N}\), where \(a_{ij}=1\) if \((i,j)\in\mathcal{E}\), and \(a_{ij}=0\) otherwise. The Laplacian matrix \(L\triangleq[l_{ij}]\in\mathbb{R}^{N\times N}\), where \(l_{ii}=\sum_{j=1,j\neq i}^{N}a_{ij}\), and \(l_{ij}=-a_{ij}\) for \(i\neq j\). It is easy to verify that the elements in every row of \(L\) sum to zero. An information path between agent \(i\) and agent \(j\) is a sequence of edges \((i,j_{1}),(j_{1},j_{2}),\ldots,(j_{l},j)\) in \(\mathcal{G}\) with distinct agents \(j_{k}\), \(k=1,2,\ldots,l\). If there exists a path between every two nodes, the graph is said to be strongly connected.

In this paper, \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbb{A})\) and the Laplacian matrix \(L\) describe the topological relationship between followers. A matrix \(C=\mathrm{diag}(c_{1},c_{2},\cdots,c_{N})\) indicates whether the leader communicates with each follower, where \(c_{i}=1\) if agent \(i\) can receive information from the leader, and \(c_{i}=0\) otherwise.
### _CFDL Data Models_
In the framework of formation-tracking control, we consider an unknown nonlinear MAS in which the agents are classified into two groups:
1. One leader is denoted as agent \(0\);
2. \(N\) followers are denoted as agent \(1,2,\ldots,N\).
In existing research on consensus control, a group of agents with the same dynamics is frequently considered. In contrast, the MAS considered in this paper is heterogeneous and even unknown and nonlinear; the dynamics of leader \(0\) and the \(N\) followers are given below:
\[\left\{\begin{aligned} y_{0}(k+1)&=f_{0}(y_{0}(k))\\ y_{i}(k+1)&=f_{i}(y_{i}(k),u_{i}(k)),\quad i=1,2, \ldots,N,\end{aligned}\right. \tag{1}\]
where \(y_{i}(k)\in\mathbb{R}\) is the state output, \(u_{i}(k)\in\mathbb{R}\) is the control input, and \(f_{i}(\cdot)\) is an unknown nonlinear function of agent \(i\).
As described in II-A, \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbb{A})\) only represents the topological relationship between followers. Considering the communication between leaders and followers, \(\mathcal{\tilde{G}}=(\mathcal{V}\cup\{0\},\mathcal{\tilde{E}},\mathbb{A})\) is introduced to represent the topological relationship between all agents in MASs. The following is the necessary assumption for \(\mathcal{\tilde{G}}\).
**Assumption 1**: _The communication graph \(\tilde{\mathcal{G}}\), which describes information communication among agents, is directed, fixed, and strongly connected, and at least one of the follower agents has access to the leader._

**Remark 1**: _\(\tilde{\mathcal{G}}\) being fixed and strongly connected in Assumption 1 ensures that information can be transmitted between any two followers. As long as one follower receives the leader's state information, all followers can obtain the leader's state information through the topological network. \(\square\)_

Because the system is nonlinear and even unknown, we cannot obtain an exact linear system model. Therefore, a data-driven model free adaptive control method is introduced, in which only the control inputs and outputs of the system are used in the control process. MFAC has been widely used for nonlinear or unknown systems, and the following assumptions, necessary for MFAC, are put forward to support the subsequent analysis.
**Assumption 2**: _([32]) The partial derivative of \(f_{i}(\cdot)\) with respect to \(u_{i}(k)\) is continuous._
**Assumption 3**: _([32]) The follower systems in (1) satisfies the generalized Lipschitz condition, that is, if \(|\Delta u_{i}(k)|\neq 0\), \(|\Delta y_{i}(k+1)|\leq b_{c}|\Delta u_{i}(k)|\) holds for any \(k\), where \(\Delta y_{i}(k+1)=y_{i}(k+1)-y_{i}(k)\), \(\Delta u_{i}(k)=u_{i}(k)-u_{i}(k-1)\) and \(b_{c}\) is positive constant._
**Remark 2**: _Assumption 2 is a conventional constraint condition for nonlinear systems. Assumption 3 states that the bounded input increment leads to the bounded output increment. Given the energy of the system, if changes in the control input are limited, changes in the output are also limited and cannot increase indefinitely. \(\square\)_
**Lemma 1**: _([33]) If the follower systems in (1) satisfies Assumptions 2, 3 and \(|\Delta u_{i}(k)|\neq 0\) for \(\forall k\), then the models of followers can be transformed into_
\[\Delta y_{i}(k+1)=\phi_{i}(k)\Delta u_{i}(k), \tag{2}\]
_where \(\phi_{i}(k)\) is a pseudo-partial-derivative (PPD) parameter that fulfills \(|\phi_{i}(k)|\leq b_{c}\)._
**Assumption 4**: _The sign of the PPD parameter remains unchanged for all \(k\) and satisfies \(\phi_{i}(k)>\varepsilon>0\) or \(\phi_{i}(k)<-\varepsilon\). Without loss of generality, \(\phi_{i}(k)>\varepsilon\) is assumed in the following discussion._

**Remark 3**: _Most model-based control methods have an assumption similar to Assumption 4, which means that an increase in the system input does not lead to a decrease in the output. This assumption is crucial to ensure that the system moves in the desired direction during the tracking process, even when the system is nonlinear or even unknown. \(\square\)_

Next, the follower systems in (1) can be represented by the following model-free data models according to Lemma 1:
\[y_{i}(k+1)=y_{i}(k)+\phi_{i}(k)\Delta u_{i}(k),\quad i=1,2,\ldots,N. \tag{3}\]
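As a concrete illustration, the sketch below (our own toy example; the dynamics \(f\) and input sequence are hypothetical) computes the PPD parameter \(\phi_{i}(k)\) realizing the data model (2)-(3) for a simple nonlinear follower:

```python
import numpy as np

def f(y, u):  # hypothetical nonlinear follower dynamics (our own toy example)
    return 0.9 * y + 0.5 * np.sin(y) + u / (1.0 + u**2)

u_prev, y = 0.0, f(0.0, 0.0)
for k in range(1, 6):
    u = 0.1 * k                          # any input with |Δu(k)| != 0
    y_next = f(y, u)
    phi = (y_next - y) / (u - u_prev)    # PPD realizing Δy(k+1) = φ(k)Δu(k)
    print(f"k={k}  phi_i(k)={phi:+.4f}")
    u_prev, y = u, y_next
```

For this toy system the computed \(\phi_{i}(k)\) stays bounded, consistent with Assumption 3 and Lemma 1.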
## III Problem Formulation
In this section, a leader-following consensus of unknown nonlinear MASs against multiple cyber attacks is considered. The cyber attacks that the systems may suffer during the control process will also be illustrated.
### _Attack Descriptions_
Potential attackers who can disrupt the MASs by launching DoS attacks and AAs are considered in this paper. The specific definitions of these two attacks are presented below.
#### Iii-A1 DoS Attacks
DoS attacks degrade the control performance of an MAS by cutting off the communication channels between agents. Due to energy limitations, DoS attacks occur intermittently. The \(i\)th DoS attack interval is denoted as \([T_{i}^{on},T_{i}^{off})\), where \(T_{i}^{on}\in\mathbb{N}\) and \(T_{i}^{off}\in\mathbb{N}\) denote the start and end time instants of the attack within the entire control period. The union of DoS attack intervals in \([0,k]\) with \(k\in\mathbb{N}\) can be obtained:
\[\Xi_{d}(0,k)=\{\cup_{i\in\mathbb{N}}[T_{i}^{on},T_{i}^{off})\}\cap[0,k]. \tag{4}\]
Next, the union of intervals without DoS attacks is obtained:
\[\Xi_{s}(0,k)=[0,k]\backslash\Xi_{d}(0,k). \tag{5}\]
**Assumption 5**: \(|\Xi_{d}(0,k)|\) _and \(|\Xi_{s}(0,k)|\), defined as the total time under DoS attacks and the total time without DoS attacks, respectively, satisfy the following conditions:_

\[|\Xi_{d}(0,k)|\leq M+\beta k, \tag{6a}\] \[|\Xi_{d}(0,k)|+|\Xi_{s}(0,k)|=k, \tag{6b}\]
_where \(M>0\) and \(0<\beta<1\) are constants to be confirmed._
**Remark 4**: _Limited by energy, DoS attacks cannot last forever, and there are certain constraints, as in Assumption 5. From the perspective of energy, \(M\) represents the maximum duration of DoS attacks, which depends on the attacker's own energy storage, and \(\beta\) represents the charging rate, which cannot be greater than \(1\). It is assumed that energy consumption and energy replenishment during a DoS attack occur simultaneously. If \(\beta>1\), the energy replenishment would exceed the energy consumption, meaning that a DoS attack could last forever, which is inconsistent with reality. \(\square\)_

A flag signal is introduced to indicate whether a DoS attack occurs:
\[\psi(k)=\left\{\begin{aligned} & 0,&\text{if }k\in\Xi_{d}(0,k),\\ & 1,&\text{if }k\in\Xi_{s}(0,k).\end{aligned}\right.\]
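A minimal helper realizing this flag signal might look as follows (our own sketch; the interval list is a hypothetical attack schedule):

```python
def make_dos_flag(intervals):
    """intervals: list of (T_on, T_off) DoS windows, with T_off exclusive."""
    def psi(k):
        return 0 if any(t_on <= k < t_off for (t_on, t_off) in intervals) else 1
    return psi

psi = make_dos_flag([(10, 15), (40, 48)])   # a hypothetical attack schedule
assert psi(12) == 0 and psi(15) == 1 and psi(5) == 1
```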
#### Iii-A2 Unbounded Actuation Attacks
Actuation attacks from potential attackers act on the control input of the system by injecting false attack signals into the actuator input of each agent to deteriorate system performance. When the MAS is under AAs, the control input of each agent is given by:
\[\bar{u}_{i}(k)=u_{i}(k)+\chi_{i}(k), \tag{7}\]
in which \(\chi_{i}(k)\) denotes the unknown and possibly unbounded actuation attack signal, and \(\bar{u}_{i}(k)\) is the actual actuator input polluted by AAs. Although the attack signal can be unbounded, the control input of the system is limited by physical constraints. Meanwhile, the unbounded AAs in this paper must satisfy the following assumption.

**Assumption 6**: _AA signals grow from zero, and the variation of the AA signals at each sampling time is bounded by \(\bar{d}\), that is, \(\chi_{i}(0)=0\) and \(|\Delta\chi_{i}(k)|<\bar{d}\)._

**Remark 5**: _Unbounded actuation attacks may seem unrealistic from the perspective of the physical structure and energy limitations of the actuator. However, the unbounded AAs here are only attack signals passed to the actuator and do not represent the actual offset of the actuator input. As the attack signal grows without bound, the actuator cannot reach infinity due to its structural and energy limitations, but it will reach its own input threshold. When designing attack compensation for AAs, we must therefore still treat the attack signals as possibly unbounded. \(\square\)_
### _Problem Formulation_
A global tracking error is defined to measure the tracking performance, which is as follows:
\[e_{i}(k)=y_{0}(k)-y_{i}(k),i=1,2,\ldots,N. \tag{8}\]
Without being attacked, the local tracking error of the \(i\)th followers is denoted as
\[\xi_{i}(k)=\sum_{j\in N_{i}}a_{ij}(y_{j}(k)-y_{i}(k))+c_{i}(y_{0}(k)-y_{i}(k)), \tag{9}\]
where \(a_{ij}\) are the entries of the adjacency matrix and \(c_{i}\in\{0,1\}\) denotes whether there is a communication channel from leader \(0\) to follower \(i\).
Based on the settings in Subsection III-A, the nonlinear dynamics of followers in (1) affected by AAs are expressed as
\[\bar{y}_{i}(k+1)=f_{i}(\bar{y}_{i}(k),\bar{u}_{i}(k)),\quad i=1,2,\ldots,N+1, \tag{10}\]
where \(\bar{y}_{i}(k)\) is denoted as state output under cyber attacks.
Considering the damage of both DoS attacks and AAs, the local tracking error in (9) deteriorates into the form
\[\bar{\xi}_{i}(k)=\sum_{j\in N_{i}}a_{ij}^{\psi(k)}(\bar{y}_{j}(k)-\bar{y}_{i}( k))+c_{i}^{\psi(k)}(y_{0}(k)-\bar{y}_{i}(k)). \tag{11}\]
Comparing the two forms of the local tracking error, (9) and (11), it is easy to see that the cyber attacks on the system are highly destructive. Leader-following consensus control against these attacks is challenging.
Based on the above discussion about both several attack descriptions and CFDL data models of followers, we will study the leader-following consensus control of MASs against cyber attacks, including DoS attacks and unbounded AAs. The details are as follows.
**Problem LFCCA** (Leader-following consensus control against composite attacks): In the presence of the two malicious cyber attacks described in Subsection III-A, design novel distributed protocols for the systems in (1) based on MFAC so that the global tracking error \(e_{i}(k)\) in (8) is UUB under Assumptions 1-6, that is, \(\lim_{k\rightarrow\infty}\|e_{i}(k)\|\leq B,i=1,2,\ldots,N\).
## IV Main Results
Inspired by the recently emerging digital twin technology [34], a double-layer distributed model free adaptive control (DMFAC) framework is investigated in this section. The hierarchical control scheme solves Problem LFCCA by employing a nonlinear TL against DoS attacks and a distributed adaptive controller with attack compensation against AAs on the CPL, both using only the I/O data of the MASs.
### _Design of TL based on Data-driven against Frequency-constrained DoS Attacks_
In this paper, the MASs are subject to DoS attacks and AAs from covert attackers. As shown in Fig. 1, a double-layer framework based on the TL is built to divide the resilient control scheme against cyber attacks into leader-following control against DoS attacks on the TL and point-to-point following against AAs on the CPL.

The TL has superior information privacy and transmits less significant physical signals, making it impervious to many assaults, including AAs. DoS attacks, however, can still act on the TL and cut off the information channels between agents, since they are easy to launch, even for attackers without knowledge of the MASs, and can paralyze the consensusability of the MASs with a limited budget, as in [35].

In the double-layer framework, the leader transmits its state information \(y_{0}\) to the TL in real time and passes it to the corresponding virtual followers on the TL according to the topological network. This is equivalent to having a virtual leader on the TL whose dynamics and topology are consistent with those of the actual leader described in (1).
The system models of virtual followers are unknown and nonlinear and reconstructed on the TL as follows:
\[\left\{\begin{aligned} &\tilde{y}_{0}(k+1)=\tilde{f}_{0}(\tilde{y}_{i}(k)), \\ &\tilde{y}_{i}(k+1)=\tilde{f}_{i}(\tilde{y}_{i}(k),\tilde{u}_{i}( k)),\quad i=1,2,\ldots,N,\end{aligned}\right. \tag{12}\]
where \(\tilde{y}_{i}(k)\in\mathbb{R}\) is the state output, \(\tilde{u}_{i}(k)\in\mathbb{R}\) is the control input, and \(\tilde{f}_{i}(\cdot)\) is a potentially unknown nonlinear function on the TL.
Fig. 1: MASs against cyber attacks: A double-layer framework.
**Remark 6**: _The state information \(y_{0}\) transmitted by the actual leader on the CPL to the TL is regarded as the state information \(\tilde{y}_{0}\) of the virtual leader on the TL; thus, \(y_{0}\) and \(\tilde{y}_{0}\) have the same dynamics, control input, and state output. For the convenience of later analysis, \(y_{0}\) is used to denote both \(y_{0}\) and \(\tilde{y}_{0}\). \(\square\)_

To improve security and privacy on the TL, the follower models on the TL can differ from those on the CPL; that is, the \(\tilde{f}_{i}(\cdot)\) above can be designed to be nonlinear and uncertain.

**Remark 7**: _The virtual followers on the TL are just data, and their dynamics can be designed in any ideal form: homogeneous or heterogeneous, linear or nonlinear, known or uncertain. To achieve a good control effect, the dynamics of the virtual followers could be designed in a simple homogeneous linear form, but this would increase the risk of being attacked. Therefore, the system dynamics are also designed in an unknown nonlinear form, like those of the followers on the CPL, so as to improve the invisibility of the TL under attacks. \(\square\)_
The following necessary assumptions are put forward for completing the leader-following consensus analysis on the TL.
**Assumption 7**: _The partial derivative of \(\tilde{f}_{i}(\cdot)\) with respect to \(\tilde{u}_{i}(k)\) is continuous. The systems of the virtual followers in (12) satisfy the generalized Lipschitz condition, that is, if \(|\Delta\tilde{u}_{i}(k)|\neq 0\), then \(|\Delta\tilde{y}_{i}(k+1)|\leq b_{t}|\Delta\tilde{u}_{i}(k)|\). If \(|\Delta\tilde{u}_{i}(k)|\neq 0\) for all \(k\), then the systems can be transformed into \(\Delta\tilde{y}_{i}(k+1)=\Phi_{i}(k)\Delta\tilde{u}_{i}(k)\), where \(\Phi_{i}(k)\) is bounded and satisfies \(0<\varepsilon<\Phi_{i}(k)\leq b_{t}\)._

**Remark 8**: _Assumption 7 for the virtual followers on the TL is similar to Assumptions 2 and 3 for the followers on the CPL; all of these are necessary assumptions to convert the unknown nonlinear system models into data-driven models. \(\square\)_
Next, a data-driven model of virtual followers similar to (3) is obtained as
\[\tilde{y}_{i}(k+1)=\tilde{y}_{i}(k)+\Phi_{i}(k)\Delta\tilde{u}_{i}(k),\quad i =1,2,\ldots,N. \tag{13}\]
Due to the difficulty of obtaining the value of the PPD parameter, an estimator is required to estimate \(\Phi_{i}(k)\) as follows:
\[\hat{\Phi}_{i}(k)= \hat{\Phi}_{i}(k-1)+\frac{\eta_{t}\Delta\tilde{u}_{i}(k-1)}{\mu_ {t}+\Delta\tilde{u}_{i}(k-1)^{2}}\] \[\times\left[\Delta\tilde{y}_{i}(k)-\hat{\Phi}_{i}(k-1)\Delta \tilde{u}_{i}(k-1)\right], \tag{14}\]
where \(\eta_{t}\leq 1\) is a step coefficient and \(\mu_{t}\) is a positive penalty factor.
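A minimal sketch of one estimator update (14) is given below; the default gains are placeholders, not values from the paper:

```python
def update_ppd(Phi_prev, du_prev, dy, eta_t=0.5, mu_t=1.0):
    """One PPD-estimator update per (14): Phi_prev is the previous estimate,
    du_prev = Δũ_i(k-1), dy = Δỹ_i(k); returns the new estimate."""
    gain = eta_t * du_prev / (mu_t + du_prev**2)
    return Phi_prev + gain * (dy - Phi_prev * du_prev)
```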
**Remark 9**: _The estimation algorithm (14) of PPD parameter is obtained by minimizing the performance function as follows:_
\[J_{1}[\hat{\Phi}_{i}(k)]= [\Delta\tilde{y}_{i}(k)-\hat{\Phi}_{i}(k)\Delta\tilde{u}_{i}(k-1 )]^{2}\] \[+\mu_{t}[\hat{\Phi}_{i}(k)-\hat{\Phi}_{i}(k-1)]^{2}.\]
\(\square\)__
Define the following local tracking errors of virtual follower \(i\) on the TL:
\[\tilde{\xi}_{i}(k) =\sum_{j\in N_{i}}a_{ij}(\tilde{y}_{j}(k)-\tilde{y}_{i}(k))+c_{i }(y_{0}(k)-\tilde{y}_{i}(k)) \tag{15}\] \[=\sum_{j\in N_{i}}a_{ij}(\tilde{e}_{i}(k)-\tilde{e}_{j}(k))+c_{i }\tilde{e}_{i}(k),\]
where \(\tilde{e}_{i}(k)=y_{0}(k)-\tilde{y}_{i}(k)\) is the global tracking error on the TL, \(a_{ij}\) are the entries of the adjacency matrix, and \(c_{i}\in\{0,1\}\) denotes whether there is a communication channel from the leader \(0\) to agent \(i\).
Then we obtain the control algorithm:
\[\tilde{u}_{i}(k)=\tilde{u}_{i}(k-1)+\frac{\gamma_{t}\hat{\Phi}_{i}(k)}{\lambda _{t}+\hat{\Phi}_{i}(k)^{2}}\tilde{\xi}_{i}(k), \tag{16}\]
where \(\gamma_{t}<1\) is a step coefficient and \(\lambda_{t}\) is a positive penalty factor.

**Remark 10**: _Similar to the estimation algorithm above, the control law is obtained according to the following performance function:_
\[J_{2}[\tilde{u}_{i}(k)]=\tilde{\xi}_{i}(k+1)^{2}+\lambda_{t}[\tilde{u}_{i}(k )-\tilde{u}_{i}(k-1)]^{2}.\]
\(\square\)__
Considering the DoS attacks, flag signal \(\psi(k)\) is introduced into the control law, that is,
\[\tilde{u}_{i}(k)=\tilde{u}_{i}(k-1)+\psi(k)\frac{\gamma_{t}\hat{\Phi}_{i}(k)}{ \lambda_{t}+\hat{\Phi}_{i}(k)^{2}}\tilde{\xi}_{i}(k). \tag{17}\]
**Remark 11**: _When a DoS attack occurs, the communication between virtual agents is interrupted, so the followers cannot obtain the state information of the virtual leader or of their neighboring agents to update the control input according to the control algorithm (16). As a result, the defense strategy against DoS described by (17) is designed, which only uses state information from before the attack. We take the conservative defensive strategy of keeping the control input unchanged, enabling followers to maintain their original motion under DoS attacks and to resume following the virtual leader after the attacks end. \(\square\)_

Thus, a novel data-driven DMFAC algorithm against DoS attacks on the TL is proposed as
\[\hat{\Phi}_{i}(k)= \hat{\Phi}_{i}(k-1)+\frac{\eta_{t}\Delta\tilde{u}_{i}(k-1)}{\mu_{t}+\Delta\tilde{u}_{i}(k-1)^{2}}\] \[\times\left[\Delta\tilde{y}_{i}(k)-\hat{\Phi}_{i}(k-1)\Delta\tilde{u}_{i}(k-1)\right],\] \[\hat{\Phi}_{i}(k)= \hat{\Phi}_{i}(0),\text{ if }|\hat{\Phi}_{i}(k)|<\varepsilon\text{ or }\mathrm{sign}(\hat{\Phi}_{i}(k))\neq\mathrm{sign}(\hat{\Phi}_{i}(1)),\] \[\tilde{u}_{i}(k)= \tilde{u}_{i}(k-1)+\psi(k)\frac{\gamma_{t}\hat{\Phi}_{i}(k)}{\lambda_{t}+\hat{\Phi}_{i}(k)^{2}}\tilde{\xi}_{i}(k). \tag{18}\]
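Assembling the pieces, one TL iteration of (18) for a single agent could be sketched as follows (reusing `update_ppd` from above; `state` bundles \(\hat{\Phi}_{i}\), the previous input and increment, and the initial estimate used by the reset safeguard; all gains are placeholder assumptions):

```python
def tl_dmfac_step(state, dy, xi, psi_k, eps=1e-4,
                  eta_t=0.5, mu_t=1.0, gamma_t=0.6, lam_t=1.0):
    """One TL iteration of (18) for one agent.
    state = (Phi, u_prev, du_prev, Phi0); xi = local error (15); psi_k = ψ(k)."""
    Phi, u_prev, du_prev, Phi0 = state
    Phi = update_ppd(Phi, du_prev, dy, eta_t, mu_t)     # estimator (14)
    if abs(Phi) < eps or (Phi > 0) != (Phi0 > 0):       # reset safeguard
        Phi = Phi0
    du = psi_k * gamma_t * Phi / (lam_t + Phi**2) * xi  # switching law (17)
    return (Phi, u_prev + du, du, Phi0), u_prev + du
```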
**Theorem 1**: _Consider the MAS in (12) satisfying Assumption 7. Leader-following consensus on the TL is achieved by (18) if the following conditions are satisfied, so that the global tracking errors \(\tilde{e}_{i}\) are UUB._
\[\frac{b_{t}\gamma_{t}\max_{i\in N}(\sum_{j\in N_{i}}a_{ij}+c_{i})} {2\sqrt{\lambda_{t}}} <1, \tag{19a}\] \[\beta<\frac{-\ln\alpha_{1}}{\ln\alpha_{2}-\ln\alpha_{1}} <1. \tag{19b}\]
**Proof.** Let
\[\tilde{\xi}(k)=\left[\begin{array}{c}\tilde{\xi}_{1}(k)\\ \tilde{\xi}_{2}(k)\\ \vdots\\ \tilde{\xi}_{N}(k)\end{array}\right],\ \tilde{E}(k)=\left[\begin{array}{c}\tilde{e}_{1}(k)\\ \tilde{e}_{2}(k)\\ \vdots\\ \tilde{e}_{N}(k)\end{array}\right],\] \[\tilde{U}(k)=\left[\begin{array}{c}\tilde{u}_{1}(k)\\ \tilde{u}_{2}(k)\\ \vdots\\ \tilde{u}_{N}(k)\end{array}\right].\]
The compact representation of local tracking errors in (15) is defined as
\[\tilde{\xi}(k)=(L+C)\tilde{E}(k). \tag{20}\]
Next, the compact form of control law (17) could be obtained as follow:
\[\tilde{U}(k)=\tilde{U}(k-1)+\psi(k)P(k)(L+C)\tilde{E}(k), \tag{21}\]
where \(P(k)=\mathrm{diag}(\rho_{1}(k),\rho_{2}(k),\ldots,\rho_{N}(k))\) with \(\rho_{i}(k)=\frac{\gamma_{t}\hat{\Phi}_{i}(k)}{\lambda_{t}+\hat{\Phi}_{i}(k)^{2}}\), and the matrices \(L\) and \(C\) are defined in Subsection II-A.

In the absence of DoS attacks, that is, \(k\in\Xi_{s}(0,k)\), the leader-following tracking error on the TL can be obtained:
\[\begin{split}\tilde{E}(k+1)&=Y_{0}(k+1)-\tilde{Y}(k +1)\\ &=Y_{0}(k)-\tilde{Y}(k)-\Delta\tilde{Y}(k+1)+\Delta Y_{0}(k+1)\\ &=\tilde{E}(k)-\Phi(k)\Delta\tilde{U}(k)+\Delta Y_{0}(k+1)\\ &=(I_{N}-\Phi(k)P(k)(L+C))\tilde{E}(k)+\Delta Y_{0}(k+1)\\ &=(I_{N}-G(k))\tilde{E}(k)+\Delta Y_{0}(k+1),\end{split} \tag{22}\]
where \(\Phi(k)=\mathrm{diag}(\Phi_{1}(k),\Phi_{2}(k),\cdots,\Phi_{N}(k))\),
\(\tilde{Y}(k)=[\tilde{y}_{1}(k),\tilde{y}_{2}(k),\cdots,\tilde{y}_{N}(k)]^{\mathrm{T}}\),
\(\Delta\tilde{Y}(k)=[\Delta\tilde{y}_{1}(k),\Delta\tilde{y}_{2}(k),\cdots,\Delta\tilde{y}_{N}(k)]^{\mathrm{T}}\),
\(Y_{0}(k)=[y_{0}(k),y_{0}(k),\cdots,y_{0}(k)]^{\mathrm{T}}\),
and \(\Delta Y_{0}(k)=[\Delta y_{0}(k),\Delta y_{0}(k),\cdots,\Delta y_{0}(k)]^{\mathrm{T}}\)
are \(N\)-dimensional column vectors.
Since \(0<\Phi(k)\leq\bar{\Phi}=b_{t},0<\rho_{i}(k)\leq\bar{\rho}=\frac{\gamma_{t}}{2 \sqrt{\lambda_{t}}}\) and \(0<\frac{\gamma_{t}\tilde{\Phi}_{i}(k)}{\lambda_{t}+\tilde{\Phi}_{i}(k)^{2}} \leq\frac{\gamma_{t}\tilde{\Phi}_{i}(k)}{2\sqrt{\lambda_{t}}\tilde{\Phi}_{i} (k)}=\frac{\gamma_{t}}{2\sqrt{\lambda_{t}}}\), we have
\[0<\|G(k)\|\leq\overline{G}=\frac{b_{t}\gamma_{t}\max_{i\in N}(\sum_{j\in N_{ i}}a_{ij}+c_{i})}{2\sqrt{\lambda_{t}}}. \tag{23}\]
Let
\[\frac{b_{t}\gamma_{t}\max_{i\in N}(\sum_{j\in N_{i}}a_{ij}+c_{i})}{2\sqrt{ \lambda_{t}}}<1, \tag{24}\]
meaning that \(0<\|G(k)\|<1\), so the matrix \([I_{N}-G(k)]\) is an irreducible sub-stochastic matrix. Next, a maximum error is defined as \(\tilde{e}_{\max}(k)=\max(\tilde{e}_{1}(k),\tilde{e}_{2}(k),\ldots,\tilde{e}_{N}(k))\), and \(\|\Delta Y_{0}(k+1)\|\leq\Omega\) is assumed; then we have
\[|\tilde{e}_{\max}(k+1)|\leq\alpha_{1}|\tilde{e}_{\max}(k)|+\Omega, \tag{25}\]
where \(\alpha_{1}=1-\underline{G}<1\) and \(\underline{G}\) is the minimum value of \(G(k)\).
**Remark 12**: _Because of communication lag, we can only compute the current control input from the current information, so even the most ideal tracking exhibits errors due to the change of the leader's state \(\|\Delta Y_{0}(k+1)\|\). The upper bound of \(\|\Delta Y_{0}(k+1)\|\) is therefore introduced here to constrain the upper bound of the global tracking error \(\tilde{e}_{i}\)._

When the system encounters DoS attacks, that is, \(k\in\Xi_{d}(0,k)\), the flag signal \(\psi(k)\) defined in Subsection III-A is zero, that is, \(\psi(k)=0\). In this situation, \(\tilde{u}_{i}(k)=\tilde{u}_{i}(k-1)\) in line with (17). According to Lemma 1, one gets
\[\tilde{Y}(k+1)=\tilde{Y}(k). \tag{26}\]
Then the tracking error can be rewritten as follows:
\[\begin{split}\tilde{E}(k+1)&=Y_{0}(k+1)-\tilde{Y}(k+1)\\ &=Y_{0}(k)-\tilde{Y}(k)+\Delta Y_{0}(k+1)\\ &=\tilde{E}(k)+\Delta Y_{0}(k+1).\end{split} \tag{27}\]
Next, we have
\[|\tilde{e}_{\max}(k+1)|\leq\alpha_{2}|\tilde{e}_{\max}(k)|+\Omega, \tag{28}\]
where \(\alpha_{2}>1\).
In summary, the max global tracking error on the TL is represented as
\[|\tilde{e}_{\max}(k+1)|=\left\{\begin{aligned} &\leq\alpha_{1}|\tilde{e}_{\max}(k)|+\Omega,&\text{if }k\in\Xi_{s}(0,k),\\ &\leq\alpha_{2}|\tilde{e}_{\max}(k)|+\Omega,&\text{if }k\in\Xi_{d}(0,k).\end{aligned}\right. \tag{29}\]

It should be noted that the MAS on the TL is in one of two situations depending on \(k\), namely \(k\in\Xi_{s}(0,k)\) or \(k\in\Xi_{d}(0,k)\). The case \(k\in\Xi_{d}(0,k)\) is discussed below; the discussion of the other case is similar.

\[\begin{split}&|\tilde{e}_{\max}(k+1)|\\ &<\alpha_{2}|\tilde{e}_{\max}(k)|+\Omega\\ &\leq\alpha_{2}^{2}|\tilde{e}_{\max}(k-1)|+\alpha_{2}\Omega+\Omega\\ &\leq\alpha_{2}^{k-T_{i}^{on}+1}|\tilde{e}_{\max}(T_{i}^{on})|+\sum_{n=0}^{k-T_{i}^{on}}\alpha_{2}^{n}\Omega\\ &\leq\alpha_{2}^{k-T_{i}^{on}+1}(\alpha_{1}|\tilde{e}_{\max}(T_{i}^{on}-1)|+\Omega)+\sum_{n=0}^{k-T_{i}^{on}}\alpha_{2}^{n}\Omega\\ &\leq\alpha_{2}^{k-T_{i}^{on}+1}(\alpha_{1}^{2}|\tilde{e}_{\max}(T_{i}^{on}-2)|+\alpha_{1}\Omega+\Omega)+\sum_{n=0}^{k-T_{i}^{on}}\alpha_{2}^{n}\Omega\\ &\leq\alpha_{2}^{k-T_{i}^{on}+1}\alpha_{1}^{T_{i}^{on}-T_{i-1}^{off}}|\tilde{e}_{\max}(T_{i-1}^{off})|\\ &\quad+\alpha_{2}^{k-T_{i}^{on}+1}\sum_{n=0}^{T_{i}^{on}-T_{i-1}^{off}-1}\alpha_{1}^{n}\Omega+\sum_{n=0}^{k-T_{i}^{on}}\alpha_{2}^{n}\Omega.\end{split} \tag{30}\]

When \(|\Xi_{s}(0,k)|=k-|\Xi_{d}(0,k)|\) and \(|\Xi_{d}(0,k)|\leq M+\beta k\) from Assumption 5 are substituted into (30), the tracking error bound becomes

\[\begin{split}&|\tilde{e}_{\max}(k+1)|\\ &\leq\alpha_{1}^{k-|\Xi_{d}(0,k)|}\alpha_{2}^{|\Xi_{d}(0,k)|}|\tilde{e}_{\max}(0)|+\sum_{j=0}^{k}\alpha_{1}^{k-j-|\Xi_{d}(j,k)|}\alpha_{2}^{|\Xi_{d}(j,k)|}\Omega\\ &\leq e^{(k-|\Xi_{d}(0,k)|)\ln\alpha_{1}+|\Xi_{d}(0,k)|\ln\alpha_{2}}|\tilde{e}_{\max}(0)|+\sum_{j=0}^{k}e^{(k-j-|\Xi_{d}(j,k)|)\ln\alpha_{1}+|\Xi_{d}(j,k)|\ln\alpha_{2}}\Omega\\ &\leq e^{(k-M-\beta k)\ln\alpha_{1}+(M+\beta k)\ln\alpha_{2}}|\tilde{e}_{\max}(0)|+\sum_{j=0}^{k}e^{(k-j-M-\beta(k-j))\ln\alpha_{1}+(M+\beta(k-j))\ln\alpha_{2}}\Omega\\ &\leq e^{(\ln\alpha_{2}-\ln\alpha_{1})M+[\ln\alpha_{1}+\beta(\ln\alpha_{2}-\ln\alpha_{1})]k}|\tilde{e}_{\max}(0)|+\sum_{j=0}^{k}e^{(\ln\alpha_{2}-\ln\alpha_{1})M+[\ln\alpha_{1}+\beta(\ln\alpha_{2}-\ln\alpha_{1})](k-j)}\Omega\\ &\leq e^{(\ln\alpha_{2}-\ln\alpha_{1})M}e^{[\ln\alpha_{1}+\beta(\ln\alpha_{2}-\ln\alpha_{1})]k}|\tilde{e}_{\max}(0)|+e^{(\ln\alpha_{2}-\ln\alpha_{1})M}\Omega\sum_{j=0}^{k}e^{[\ln\alpha_{1}+\beta(\ln\alpha_{2}-\ln\alpha_{1})](k-j)}.\end{split} \tag{31}\]

Let \(\ln\alpha_{1}+\beta(\ln\alpha_{2}-\ln\alpha_{1})<0\); then the maximum tracking error \(\tilde{e}_{\max}\) is uniformly bounded, that is, the global tracking error \(\tilde{e}_{i}\) on the TL is UUB with upper bound \(B_{t}\) as follows:
\[|\tilde{e}_{\max}(k+1)|\leq\frac{e^{(\ln\alpha_{2}-\ln\alpha_{1})M}}{1-e^{[\ln \alpha_{1}+\beta(\ln\alpha_{2}-\ln\alpha_{1})]}}\Omega=B_{t}. \tag{32}\]
The proof is completed.
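The conditions (19a)-(19b) and the bound (32) are easy to check numerically; the sketch below (with made-up design values, not the paper's simulation settings) does exactly that:

```python
import numpy as np

def theorem1_check(b_t, gamma_t, lam_t, degs, M, beta, alpha1, alpha2, Omega):
    """degs[i] = sum_j a_ij + c_i for follower i; alpha1 < 1 < alpha2."""
    cond_a = b_t * gamma_t * max(degs) / (2 * np.sqrt(lam_t)) < 1        # (19a)
    cond_b = beta < -np.log(alpha1) / (np.log(alpha2) - np.log(alpha1))  # (19b)
    rho = np.log(alpha1) + beta * (np.log(alpha2) - np.log(alpha1))
    B_t = np.exp((np.log(alpha2) - np.log(alpha1)) * M) * Omega / (1 - np.exp(rho))
    return cond_a, cond_b, B_t       # B_t is the UUB bound of (32)

print(theorem1_check(b_t=1.5, gamma_t=0.6, lam_t=4.0, degs=[2, 2, 2, 3],
                     M=5, beta=0.2, alpha1=0.9, alpha2=1.05, Omega=0.05))
```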
### _One-to-One Tracking between CPL and TL against Unbounded AAs_
The tracking error between the TL and the CPL is defined as
\[\sigma_{i}(k)=\tilde{y}_{i}(k)-y_{i}(k) \tag{33}\]
In Theorem 1, we proved that the TL can resist DoS attacks with limited attack frequency and that the global tracking error on the TL is UUB. In the following, we prove that the global tracking error on the CPL is also UUB under unbounded AAs.

Following the general data-driven method, a DMFAC scheme is first proposed to drive \(y_{i}(k)\) to \(\tilde{y}_{i}(k)\) without considering AAs, described as
\[\hat{\phi}_{i}(k)= \hat{\phi}_{i}(k-1)+\frac{\eta_{c}\Delta u_{i}(k-1)}{\mu_{c}+\Delta u_{i}(k-1)^{2}}\] \[\times[\Delta y_{i}(k)-\hat{\phi}_{i}(k-1)\Delta u_{i}(k-1)],\] \[\hat{\phi}_{i}(k)= \hat{\phi}_{i}(0),\text{ if }|\hat{\phi}_{i}(k)|<\varepsilon\text{ or }\mathrm{sign}(\hat{\phi}_{i}(k))\neq\mathrm{sign}(\hat{\phi}_{i}(0)),\] \[u_{i}(k)= u_{i}(k-1)+\frac{\gamma_{c}\hat{\phi}_{i}(k)}{\lambda_{c}+\hat{\phi}_{i}(k)^{2}}[\tilde{y}_{i}(k+1)-y_{i}(k)], \tag{34}\]

where \(\lambda_{c}\) and \(\mu_{c}\) are positive penalty factors, and \(0<\eta_{c}<1\) and \(0<\gamma_{c}<1\) are step coefficients.

**Remark 13**: _Generally speaking, the value of \(\tilde{y}\) at \(k+1\) cannot be obtained ahead of the current time \(k\). However, \(\tilde{y}\) is only the result of data operations on the digital twin layer, so its value at \(k+1\) can be calculated immediately from the state and input at time \(k\) and transmitted to the agents on the CPL through the channel between the CPL and the TL. This is why \(\tilde{y}(k+1)\) can be used in (34) at the current time \(k\). \(\square\)_

An actuation attack signal is a bias injected into the control input, deteriorating the tracking performance of the MASs. Under unbounded AAs, the control input of each follower is actually described as:
\[\bar{u}_{i}(k) =u_{i}(k-1)+\frac{\gamma_{c}\hat{\phi}_{i}(k)}{\lambda_{c}+\hat{ \phi}_{i}(k)^{2}}[\tilde{y}_{i}(k+1)-y_{i}(k)]+\chi_{i}(k)\] \[=\bar{u}_{i}(k-1)+\frac{\gamma_{c}\hat{\phi}_{i}(k)}{\lambda_{c}+ \hat{\phi}_{i}(k)^{2}}[\tilde{y}_{i}(k+1)-y_{i}(k)]+\Delta\chi_{i}(k), \tag{35}\]
where \(\chi_{i}(k)\) is actuation attack signals and \(\Delta\chi_{i}(k)=\chi_{i}(k)-\chi_{i}(k-1)\).
In the data-driven model presented in (3), \(\Delta u_{i}(k)\) is used as the system input instead of \(u_{i}(k)\). Therefore, when analyzing the impact of the actuation attack signal \(\chi_{i}(k)\) on system stability, analyzing \(\Delta\chi_{i}(k)\) is necessary. The value of \(\Delta\chi_{i}(k)\) is unknown and cannot be obtained directly. Therefore, an estimator is designed to estimate the value of \(\Delta\chi_{i}(k)\) as follows:
\[\Delta\hat{\chi}_{i}(k)=\left\{\begin{array}{ll}\frac{\bar{d}| \Delta\hat{\chi}_{i}(k-1)-r_{i}(k)\sigma_{i}(k)|}{d+|\Delta\hat{\chi}_{i}(k-1) -r_{i}(k)\sigma_{i}(k)|}&\text{if }k>0\\ 0&\text{if }k=0,\end{array}\right. \tag{36}\]
where \(\bar{d}\) is the upper bound on the variation of the actuation attack signals defined in Assumption 6, \(\Delta\hat{\chi}_{i}(k)\) is the estimated value of \(\Delta\chi_{i}(k)\), and \(r_{i}(k)=\frac{\gamma\hat{\phi}_{i}(k)}{\lambda_{c}+|\hat{\phi}_{i}(k)|^{2}}\) is an adaptive scale factor with \(0<\gamma<\gamma_{c}\).

**Remark 14**: _The attack signal \(\chi_{i}\) acts on the controller input and ultimately affects the point-to-point tracking error \(\sigma_{i}\) between the CPL and the TL. Because there is a causal relationship between \(\Delta\chi_{i}\) and \(\sigma_{i}\), reconstructing \(\Delta\chi_{i}\) from the tracking error \(\sigma_{i}\) is an effective and feasible method. \(\square\)_
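A direct transcription of the estimator (36) might look as follows (our own sketch; the default \(\gamma\) and \(\lambda_{c}\) are placeholders satisfying \(0<\gamma<\gamma_{c}\)):

```python
def estimate_attack_increment(dchi_prev, sigma, phi_hat, d_bar,
                              gamma=0.4, lam_c=1.0):
    """Saturated estimator (36) of the attack increment from σ_i(k) = ỹ_i(k) - y_i(k)."""
    r = gamma * phi_hat / (lam_c + phi_hat**2)   # adaptive scale factor r_i(k)
    z = abs(dchi_prev - r * sigma)
    return d_bar * z / (d_bar + z)               # stays within [0, d_bar)
```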
Considering the compensation of \(\Delta\chi_{i}(k)\), a novel DMFAC is proposed as
\[\hat{\phi}_{i}(k)= \hat{\phi}_{i}(k-1)+\frac{\eta_{c}[\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)]}{\mu_{c}+|\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)|^{2}}\] \[\times[\Delta y_{i}(k)-\hat{\phi}_{i}(k-1)[\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)]],\] \[\hat{\phi}_{i}(k)= \hat{\phi}_{i}(0),\text{ if }|\hat{\phi}_{i}(k)|<\varepsilon\text{ or }\mathrm{sign}(\hat{\phi}_{i}(k))\neq\mathrm{sign}(\hat{\phi}_{i}(0)),\] \[u_{i}(k)= u_{i}(k-1)+\frac{\gamma_{c}\hat{\phi}_{i}(k)}{\lambda_{c}+\hat{\phi}_{i}(k)^{2}}[\tilde{y}_{i}(k+1)-y_{i}(k)]-\Delta\hat{\chi}_{i}(k). \tag{37}\]
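One CPL iteration of (37) can then be sketched as below (again with placeholder gains; `y_tl_next` is the TL reference \(\tilde{y}_{i}(k+1)\) passed down from the TL):

```python
def cpl_dmfac_step(state, dy, y, y_tl_next, dchi_hat, eps=1e-4,
                   eta_c=0.5, mu_c=1.0, gamma_c=0.8, lam_c=1.0):
    """One CPL iteration of (37).
    state = (phi, u_prev, du_prev, dchi_prev, phi0); dy = Δy_i(k)."""
    phi, u_prev, du_prev, dchi_prev, phi0 = state
    v = du_prev + dchi_prev                        # compensated input increment
    phi = phi + eta_c * v / (mu_c + v**2) * (dy - phi * v)
    if abs(phi) < eps or (phi > 0) != (phi0 > 0):  # reset safeguard
        phi = phi0
    du = gamma_c * phi / (lam_c + phi**2) * (y_tl_next - y) - dchi_hat
    return (phi, u_prev + du, du, dchi_hat, phi0), u_prev + du
```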
**Theorem 2**: _Problem LFCCA is solved by the double-layer DMFAC algorithm in (18), (36), and (37) if the following conditions hold simultaneously:_
\[\frac{b_{t}\gamma_{t}\max_{i\in N}(\sum_{j\in N_{i}}a_{ij}+c_{i})}{2 \sqrt{\lambda_{t}}}<1, \tag{38a}\] \[\beta<\frac{-\ln\alpha_{1}}{\ln\alpha_{2}-\ln\alpha_{1}}<1,\] (38b) \[0<\frac{\gamma_{c}b_{c}}{2\sqrt{\lambda_{c}}}<1. \tag{38c}\]
**Proof.** Obviously, \(\hat{\phi}_{i}\) is bounded if the reset mechanism is activated. The situation in which the reset mechanism is not activated is discussed below. Letting \(\tilde{\phi}_{i}=\hat{\phi}_{i}-\phi_{i}\), one gets
\[\begin{split}&\tilde{\phi}_{i}(k)\\ &=\hat{\phi}_{i}(k)-\phi_{i}(k)\\ &=\hat{\phi}_{i}(k-1)-\phi_{i}(k)+\frac{\eta_{c}[\Delta u_{i}(k- 1)+\Delta\hat{\chi}_{i}(k-1)]}{\mu_{c}+\left|\Delta u_{i}(k-1)+\Delta\hat{\chi }_{i}(k-1)\right|^{2}}\\ &\quad\times[\Delta y(k)-\hat{\phi}_{i}(k-1)[\Delta u_{i}(k-1)+ \Delta\hat{\chi}_{i}(k-1)]]\\ &=(1-\frac{\eta_{c}[\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)] ^{2}}{\mu_{c}+\left|\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)\right|^{2}} )\tilde{\phi}_{i}(k-1)\\ &\quad+\phi_{i}(k-1)-\phi_{i}(k)+\frac{\eta_{c}[\Delta u_{i}(k- 1)+\Delta\hat{\chi}_{i}(k-1)]}{\mu_{c}+\left|\Delta u_{i}(k-1)+\Delta\hat{ \chi}_{i}(k-1)\right|^{2}}\\ &\quad\times(\Delta\chi_{i}(k-1)-\Delta\hat{\chi}_{i}(k-1))\end{split} \tag{39}\]
Since \(0<\eta_{c}<1\), we have \(\frac{\eta_{c}[\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)]^{2}}{\mu_{c}+\left|\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)\right|^{2}}<1\). It is easy to prove that \(|\frac{\eta_{c}[\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)]}{\mu_{c}+\left|\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)\right|^{2}}|<1\) and \(|\Delta\hat{\chi}_{i}(k)|<\bar{d}\). Combined with \(\phi_{i}<b_{c}\) and \(|\Delta\chi_{i}(k)|<\bar{d}\), one gets
\[\begin{split}|\tilde{\phi}_{i}(k)|\leq&\frac{\eta_{c }[\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)]^{2}}{\mu_{c}+\left|\Delta u_{i}( k-1)+\Delta\hat{\chi}_{i}(k-1)\right|^{2}}|\tilde{\phi}_{i}(k-1)|+2b_{c}\\ &+2|\frac{\eta_{c}[\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)] }{\mu_{c}+\left|\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)\right|^{2}}|\phi_ {i}(k-1)\bar{d}\\ \leq& g^{k-1}\|\tilde{\phi}_{i}(1)\|+\frac{2b_{c}(1+ \bar{d})}{1-g},\end{split} \tag{40}\]
where \(0<\frac{\eta_{c}[\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)]^{2}}{\mu_{c}+ \left|\Delta u_{i}(k-1)+\Delta\hat{\chi}_{i}(k-1)\right|^{2}}<g<1\).
Thus, the boundedness of \(\tilde{\phi}\) is demonstrated. From the definition of \(\tilde{\phi}\) and \(\phi<b_{c}\), it follows that \(\hat{\phi}\) is bounded.

Then the global tracking error on the CPL satisfies

\[\begin{split} e_{i}(k+1)=& y_{0}(k+1)-y_{i}(k+1)\\ =& y_{0}(k)-y_{i}(k)+\Delta y_{0}(k+1)-\phi_{i}(k)\Delta u_{i}(k)\\ =&(1-\frac{\gamma_{c}\phi_{i}(k)\hat{\phi}_{i}(k)}{\lambda_{c}+\hat{\phi}_{i}(k)^{2}})e_{i}(k)+\Delta y_{0}(k+1)\\ &+\frac{\gamma_{c}\phi_{i}(k)\hat{\phi}_{i}(k)}{\lambda_{c}+\hat{\phi}_{i}(k)^{2}}[y_{0}(k)-\tilde{y}_{i}(k+1)]\\ &-\phi_{i}(k)\Delta\chi_{i}(k)+\phi_{i}(k)\Delta\hat{\chi}_{i}(k)\\ =&(1-\frac{\gamma_{c}\phi_{i}(k)\hat{\phi}_{i}(k)}{\lambda_{c}+\hat{\phi}_{i}(k)^{2}})[e_{i}(k)+\Delta y_{0}(k+1)]\\ &+\frac{\gamma_{c}\phi_{i}(k)\hat{\phi}_{i}(k)}{\lambda_{c}+\hat{\phi}_{i}(k)^{2}}\tilde{e}_{i}(k+1)-\phi_{i}(k)\Delta\chi_{i}(k)\\ &+\phi_{i}(k)\Delta\hat{\chi}_{i}(k)\end{split} \tag{41}\]

According to Theorem 1, the global tracking error on the TL is UUB, that is, \(\tilde{e}_{i}(k+1)\leq B_{t}\). Next, the global tracking error on the CPL is further bounded as

\[\begin{split} e_{i}(k+1)=&(1-\frac{\gamma_{c}\phi_{i}(k)\hat{\phi}_{i}(k)}{\lambda_{c}+\hat{\phi}_{i}(k)^{2}})[e_{i}(k)+\Delta y_{0}(k+1)]\\ &+\frac{\gamma_{c}\phi_{i}(k)\hat{\phi}_{i}(k)}{\lambda_{c}+\hat{\phi}_{i}(k)^{2}}\tilde{e}_{i}(k+1)-\phi_{i}(k)\Delta\chi_{i}(k)\\ &+\phi_{i}(k)\Delta\hat{\chi}_{i}(k)\\ \leq&\alpha e_{i}(k)+\alpha\Omega+(1-\alpha)B_{t}+2b_{c}\overline{d}\\ \leq&\alpha^{k}e_{i}(0)+B_{t}+\frac{2b_{c}\overline{d}+\alpha\Omega}{1-\alpha}\\ \leq&\alpha^{k}e_{i}(0)+B,\end{split} \tag{42}\]
in which \(|\Delta y_{0}(k)|<\Omega\), \(|\Delta\chi_{i}(k)|<\overline{d}\),\(0<\frac{\gamma_{c}\phi_{i}(k)\hat{\phi}_{i}(k)}{\lambda_{c}+\left|\hat{\phi}_{i}(k) \right|^{2}}\leq\frac{\gamma_{c}b_{c}}{2\sqrt{\lambda_{c}}}<1\) as \(\alpha=\max(1-\frac{\gamma_{c}\phi_{i}(k)\hat{\phi}_{i}(k)}{\lambda_{c}+\left| \hat{\phi}_{i}(k)\right|^{2}})<1\) and \(B=B_{t}+\frac{2b_{c}\overline{d}+\alpha\Omega}{1-\alpha}\).
Thus, the global tracking errors \(e_{i}(k)\) on the CPL are UUB. The proof is completed. \(\blacksquare\)

According to the conclusion of Theorem 2, the double-layer DMFAC framework based on the TL solves **Problem LFCCA**. As shown in Fig. 2, the leader on the CPL transmits its state information to the virtual leader on the TL. The virtual agents on the TL communicate with each other through the topological network and finally realize MFAC-based leader-following control on the TL. Because the digital TL involves only data calculations, the virtual followers can compute their state information at the next moment from the control input of the DMFAC algorithm according to their own system models. The followers on the CPL receive the state information of the virtual followers at the next moment as tracking reference points. The detailed steps of the double-layer DMFAC algorithm are described in Algorithm 1.
Fig. 2: Double-layer DMFAC control framework.
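For concreteness, the following end-to-end sketch wires the helpers above (`make_dos_flag`, `tl_dmfac_step`, `estimate_attack_increment`, `cpl_dmfac_step`) into the double-layer loop of Fig. 2 / Algorithm 1. The leader trajectory, follower dynamics, attack signals, and all gains are toy placeholders of our own choosing, and the TL followers are simplified to unit-PPD integrators; this is not the paper's simulation setup:

```python
import numpy as np

N, T = 4, 300
A = np.array([[0,0,0,1],[1,0,1,0],[0,1,0,0],[1,0,1,0]], float)  # Fig. 3 graph
c = np.array([1.0, 0.0, 1.0, 0.0])                              # leader pinning
deg = A.sum(axis=1)
psi = make_dos_flag([(80, 100)])                  # one hypothetical DoS window
leader = lambda k: np.sin(0.03 * k)               # toy leader trajectory
chi = lambda i, k: 0.01 * (-1)**i * k             # toy unbounded AA ramps
g = lambda y, u: 0.8 * y + 0.3 * np.tanh(y) + u   # toy CPL follower dynamics

y_tl, y_cp = np.zeros(N), np.zeros(N)
dy_tl, dy_cp = np.zeros(N), np.zeros(N)
tl = [(1.0, 0.0, 0.0, 1.0) for _ in range(N)]
cp = [(1.0, 0.0, 0.0, 0.0, 1.0) for _ in range(N)]
for k in range(T):
    # local TL tracking errors (15), then one TL step per virtual follower
    xi = A @ y_tl - deg * y_tl + c * (leader(k) - y_tl)
    for i in range(N):
        tl[i], _ = tl_dmfac_step(tl[i], dy_tl[i], xi[i], psi(k))
        dy_tl[i] = tl[i][2]           # TL follower simplified to a Φ=1 integrator
    y_tl_next = y_tl + dy_tl          # ỹ_i(k+1), passed down as the reference
    # CPL: estimate the AA increment from σ_i = ỹ_i - y_i, then apply (37)
    for i in range(N):
        dchi = estimate_attack_increment(cp[i][3], y_tl[i] - y_cp[i],
                                         cp[i][0], d_bar=0.03)
        cp[i], u = cpl_dmfac_step(cp[i], dy_cp[i], y_cp[i], y_tl_next[i], dchi)
        y_new = g(y_cp[i], u + chi(i, k))          # actuator polluted per (7)
        dy_cp[i], y_cp[i] = y_new - y_cp[i], y_new
    y_tl = y_tl_next
print("final CPL tracking errors:", leader(T) - y_cp)
```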
## V Numerical Simulation
In this section, we show the effectiveness of the aforementioned theoretical result using a simulated example.
We consider a nonlinear MAS with one leader and four followers; the communication topology satisfying Assumption 1 is shown in Fig. 3, in which the leader is denoted as node \(0\) and the four followers are denoted as nodes \(1,\cdots,4\). As illustrated in Fig. 3, leader \(0\) can communicate only with followers \(1\) and \(3\), transmitting its own state information to them. Followers \(2\) and \(4\) cannot receive the leader's information directly, but they can receive information from agents \(1\) and \(3\) to complete the tracking task because the communication graph is strongly connected. According to the communication topology in Fig. 3, the Laplacian matrix of the graph is obtained as
\[L=\left[\begin{array}{cccc}1&0&0&-1\\ -1&2&-1&0\\ 0&-1&1&0\\ -1&0&-1&2\end{array}\right],\]
and \(C=\mathrm{diag}(1,0,1,0)\).
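As a quick sanity check, \(L\) and \(C\) can be built directly from the Fig. 3 adjacency relations; each row of \(L\) sums to zero, as noted in Section II-A:

```python
import numpy as np

A = np.array([[0, 0, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 1, 0]])
L = np.diag(A.sum(axis=1)) - A          # reproduces the Laplacian above
C = np.diag([1, 0, 1, 0])
assert (L.sum(axis=1) == 0).all()       # each row of L sums to zero
print(L)
```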
occurrence of DoS attacks). According to the convergence rate of the tracking error at each time in the simulation results, we can get the convergence rates \(\alpha_{1}=0.9\) and \(\alpha_{2}=1.05\) in two cases, which proves that the choice of DoS attack parameters satisfies the condition of Theorem 1.
As can be seen from the enlarged inset in Fig. 11, the beginning of the run corresponds to the \(\hat{\Phi}_{i}\) estimation phase of the DMFAC, which is accompanied by large tracking fluctuations. Although the virtual followers on the TL maintain their original control input and deviate from the leader when DoS attacks occur, they speed up to follow the leader once the attacks are over. The tracking error on the TL, which is clearly UUB with bound \(B_{t}=0.32\), is illustrated in Fig. 11 (the yellow area marks the boundary range).
### _Performance of CPL with Double-layer DMFAC Algorithm_
Then we focus on the performance of CPL against both DoS attacks and unbounded AAs under double-layer DMFAC framework.
The AAs are chosen as \(\chi_{1}(k)=0.01k\), \(\chi_{2}(k)=0.02k\), \(\chi_{3}(k)=-0.01k\), and \(\chi_{4}(k)=-0.02k\), the variations of which are bounded by \(\bar{d}=0.03\). According to the nonlinear dynamics of the actual followers, we can take \(b_{c}=1.5\). The initial states and initial inputs of the followers are \(Y(0)=\left[0\,0\,0\,0\right]^{\mathrm{T}}\) and \(U(0)=\left[0\,0\,0\,0\right]^{\mathrm{T}}\), respectively. To satisfy the conditions of Theorem 2, the DMFAC parameters on the CPL are selected as \(\eta_{c}=1\) and \(\gamma_{c}=0.8\), with positive penalty factors \(\mu_{c}\) and \(\lambda_{c}\).
Combined with the DMFAC on the TL, the effect of the double-layer DMFAC algorithm is shown in the two figures below. The trajectories of all agents on the CPL are depicted in the first, while the tracking errors of the followers are shown in the second. As with the tracking on the TL above, there is a fluctuation adjustment at the beginning of the control process, during which the DMFAC algorithm iteratively computes the DMFAC parameters \(\hat{\phi}_{i}\) and the AA compensation values \(\Delta\hat{\chi}_{i}\). In addition, as can be seen from the partial enlarged view, the tracking is disturbed when DoS attacks occur, but after the attacks it quickly recovers the desired performance. The tracking error also fluctuates slightly in the middle of the control process, which is related to the leader's state variation: at that moment, the leader's state variation is at its maximum, and the PPD parameter estimate \(\hat{\phi}_{i}\) in the DMFAC is also changing rapidly, so a small fluctuation appears. Besides, the range of the tracking error is also shown as the yellow area in the figure, with upper bound \(B=0.38\).
## VI Conclusion
The leader-following consensus control of unknown nonlinear MASs against frequency-constrained DoS attacks and unbounded AAs has been solved in this paper. A double-layer DMFAC framework based on the TL is proposed, which not only handles the unknown nonlinearity via CFDL-based MFAC but also resists both DoS attacks and AAs with different defense strategies. A strict proof guarantees that the tracking error is UUB, which is verified by the simulations above. Future works will consider the privacy of the TL against other attacks, or the ability to handle attacks other than DoS attacks and AAs, by building a multi-layer control framework.
|
2304.08604 | Effects of Clutter on Egocentric Distance Perception in Virtual Reality | To assess the impact of clutter on egocentric distance perception, we
performed a mixed-design study with 60 participants in four different virtual
environments (VEs) with three levels of clutter. Additionally, we compared the
indoor/outdoor VE characteristics and the HMD's FOV. The participants wore a
backpack computer and a wide FOV head-mounted display (HMD) as they
blind-walked towards three distinct targets at distances of 3m, 4.5m, and 6m.
The HMD's field of view (FOV) was programmatically limited to
165{\deg}$\times$110{\deg}, 110{\deg}$\times$110{\deg}, or
45{\deg}$\times$35{\deg}. The results showed that increased clutter in the
environment led to more precise distance judgment and less underestimation,
independent of the FOV. In comparison to outdoor VEs, indoor VEs showed more
accurate distance judgment. Additionally, participants made more accurate
judgements while looking at the VEs through wider FOVs. | Sina Masnadi, Yahya Hmaiti, Eugene Taranta, Joseph J. LaViola Jr | 2023-04-17T20:44:46Z | http://arxiv.org/abs/2304.08604v1 | # Effects of Clutter on Egocentric Distance Perception in Virtual Reality
###### Abstract.
To assess the impact of clutter on egocentric distance perception, we performed a mixed-design study with 60 participants in four different virtual environments (VEs) with three levels of clutter. Additionally, we compared the indoor/outdoor VE characteristics and the HMD's FOV. The participants wore a backpack computer and a wide FOV head-mounted display (HMD) as they blind-walked towards three distinct targets at distances of 3m, 4.5m, and 6m. The HMD's field of view (FOV) was programmatically limited to \(165^{\circ}\times 110^{\circ}\), \(110^{\circ}\times 110^{\circ}\), or \(45^{\circ}\times 35^{\circ}\). The results showed that increased clutter in the environment led to more precise distance judgment and less underestimation, independent of the FOV. In comparison to outdoor VEs, indoor VEs showed more accurate distance judgment. Additionally, participants made more accurate judgements while looking at the VEs through wider FOVs.
## 2. Background and Related Work
The distance underestimation phenomenon is due to different factors that have been investigated by prior research spanning the past two decades. These factors can be categorized into three main groups: virtual environment characteristics, apparatus characteristics, and evaluation techniques.
### Virtual Environment Characteristics
Several research investigations on distance perception in VEs have discovered numerous factors that influence distance judgment in those environments. Kunz et al. showed that distance perception is influenced by the quality and level of detail of the VE's graphics (Kurz et al., 2017). Furthermore, camera placement was shown by Leyrer et al. to contribute to distance underestimation, especially when the camera is positioned higher than the user's real eye-height (Kurz et al., 2018). In our study, the camera was placed at the user's eye level and the environments were designed to be detailed and realistic.
Aseeri et al. conducted experiments showing that a virtual human entourage (the existence of life-size human avatars in the scene) does not have a significant influence on the sense of scale in an indoor environment (Bahdan et al., 2015). Self-avatars, when enabled in a VE, contribute to better distance perception by the embodied users, as shown by Mohler et al. (Mohler et al., 2017). In our study, we did not include a self-avatar in order to focus only on the effect of clutter on distance perception.
The level of detail in the environment and the visual cues available to the user can also affect distance and dimension judgment. Loyola et al. suggest that more available visual cues in a VE bring about better accuracy in distance judgment and in dimension estimation, especially regarding egocentric dimensions (Kurz et al., 2018). Lessels et al. showed that having high-fidelity scenes in VEs is important and that a simple control system helps preserve spatial orientation, especially when fetching a target (Larson et al., 2019).
It has been shown by multiple studies that being in an indoor environment provides the user with a better understanding of distances compared to an outdoor environment. Creem-Regehr et al. performed a study which showed less underestimation in indoor VEs in contrast with outdoor VEs (Casaneda et al., 2019). In a related study, Houck et al. found a robust influence of the width of an indoor environment on egocentric distance perception, which provides a better understanding of which elements in an indoor environment are important (Kurz et al., 2018). In addition, Andre and Rogers showed through a blind-walking evaluation that people underestimate distances more in outdoor real-world environments than in indoor environments (Casaneda et al., 2019). Masnadi et al. also found that people tend to underestimate distances more in outdoor VEs than in indoor VEs (Masnadi et al., 2019). Our study differs from previous work in that it evaluates multiple levels of clutter in various indoor and outdoor VEs. We focused on the effects of clutter as the main factor in the misjudgment of egocentric distance in VR. With an abundance of visible objects, more visual cues are present, which might increase the number of objects perceivable by users. Thus, users have to look at more objects, ignore some, or shift their focus to others depending on their needs, which has several effects on the user's visual perception, considering that visual clutter increases the amount of visual stimuli.
### Apparatus Characteristics
It is also important to state the effects that the chosen apparatus might have on egocentric distance judgment. Part of a study led by Combe et al. has shown that the weight of the HMD does not have a major effect on short-distance perception (Kurz et al., 2017). However, several studies have shown that the weight of an HMD and its inertia both influence the perception of distance in VR, with a tendency towards underestimation (Kurz et al., 2017; Loyola et al., 2017; Loyola et al., 2017). Multiple investigations have shown that distance judgment accuracy is positively related to the HMD's FOV and resolution, and negatively related to the HMD's weight (Kurz et al., 2018; Masnadi et al., 2019; Masnadi et al., 2019). Moreover, the parallax effect was shown not to impact the blind-walking distance perception task (Kurz et al., 2018). In our study, participants looked at the environment while standing in the same place; nevertheless, subtle movements could occur before the start of the evaluation task.
Pfeil et al. showed that distance underestimation remains a problem in video see-through HMDs, and their results indicated the influence of weight (negative correlation) and FOV of the HMD (positive correlation) on distance perception accuracy (Pfeil et al., 2019). Vaziri et al. reported similar results (Vaziri et al., 2019; Vaziri et al., 2019).
### Evaluation Technique
There are various evaluation techniques for analyzing the egocentric distance judgment in VEs.
Some of the most popular techniques are blind-throwing, timed imagined walking, verbal estimation, and blind-walking (Kurz et al., 2018; Masnadi et al., 2019). In a _blind-throwing_ task, users are blindfolded after they have seen the target in the environment and then throw an object at the marked target (Vaziri et al., 2019; Masnadi et al., 2019). It has been shown that, in the case of _verbal estimation_, the margin of error grows with the distance between the user and the target (Kurz et al., 2017; Masnadi et al., 2019; Masnadi et al., 2019). _Timed imagined walking_ consists of showing the target to the user; when they are ready, they are blindfolded and asked to imagine themselves walking to the target, informing the investigator once they arrive there in their mind (Vaziri et al., 2019; Masnadi et al., 2019).
In our study, we used the _blind-walking_ technique, which is the most popular method. It is similar to timed imagined walking, yet the difference is that the user physically walks towards the target while keeping their eyes closed or blind-folded, then they stop and let the researcher know they reached the target (Vaziri et al., 2019; Masnadi et al., 2019; Masnadi et al., 2019; Masnadi et al., 2019; Masnadi et al., 2019).
## 3. Methods
We conducted a user study to evaluate distance judgment with different clutter levels in various VEs along with multiple HMD FOVs. The methods used for the study and their details are described in the sections that follow.
### Study Design
The user study revolved around a blind-walking task. The task, designed to estimate the user's perception of distance, consisted of asking the user to look at the VE through the HMD to determine the distance of a predefined target from themselves, then closing their eyes and walking to the target without opening their eyes during the walking process. We performed a \(4\times 3\times 3\times 3\) mixed-design study, in which the between-subject factor was the environment (two indoor and two outdoor, i.e., 4 levels), while the within-subject factors were the clutter level (3 levels), FOV (3 levels), and target distance (3 levels). In addition, the clutter levels, target distances, and FOV dimensions used were consistent across all the environments used in the study, whether indoors or outdoors.
### Study Variables
#### 3.2.1. Between-Subjects Variables
Four different environments were used in the study. However, each user participating in the study saw only one of them. These environments were sourced from the Unity3D Asset Store and were modified to meet our needs. A round-robin order was used to assign an environment to each user. The environments consisted of Indoor1, Indoor2, Outdoor1, and Outdoor2, where every environment was designed to be realistic and was displayed to the participants using Unity3D. The _Indoor1_ VE (see Figure 1) was a library with a 10m\(\times\)7m size, a ceiling with a 4m height, along with desks, sofas, and bookshelves along three sides of the room. The _Indoor2_ VE (see Figure 2) was a 10m\(\times\)5m living room of an apartment and had a 3m-high ceiling along with windows on one side of the room, a carpet, and furniture. The _Outdoor1_ VE (see Figure 3) was a sidewalk in a suburban neighborhood. This environment was in daylight and contained a few cars parked on the street and a fence on one side. The _Outdoor2_ VE (see Figure 4) was an island in daylight that contained trees, ropes, and barrels along with cottages and boxes made out of wood. The items described for each environment were positioned so that they did not interfere with the walking area.
Footnote 1: [https://assetstore.unity.com/packages/3d/environments/urban/library-interior-archiv-160154](https://assetstore.unity.com/packages/3d/environments/urban/library-interior-archiv-160154); retrieved 2022-06-12
Footnote 2: [https://assetstore.unity.com/packages/3d/environments/urban/suburb-neighborhood-house-pack-modular-72/12](https://assetstore.unity.com/packages/3d/environments/urban/suburb-neighborhood-house-pack-modular-72/12); retrieved 2022-06-12
Footnote 3: [https://assetstore.unity.com/packages/3d/props/apartment-kit-124055](https://assetstore.unity.com/packages/3d/props/apartment-kit-124055); retrieved 2022-06-12
#### 3.2.2. Within-Subjects Variables
We designed three clutter levels for each environment: _1: uncluttered, 2: semi-cluttered, and 3: cluttered_. We defined _clutter_ as the number of objects visible to the user in each scene. Three different FOVs were simulated inside the same HMD using software. The chosen FOV levels were picked based on the physical FOVs of popular VR headsets: Pimax 5K (\(165^{\circ}\times 110^{\circ}\)), HTC Vive/Oculus Quest (\(110^{\circ}\times 110^{\circ}\)), and nVisor ST60 (\(45^{\circ}\times 35^{\circ}\)). The target object chosen for the study was a red cylinder with a 10cm diameter and a 5cm height. The target was on the ground, and its distance from the user's starting point was either 3m, 4.5m, or 6m. The target design was similar to a previous study by Masnadi et al. (Masnadi et al., 2019). Figure 5 shows all of the possible target distances in an environment. The choice of these distance ranges was made based on prior blind-walking user studies (Masnadi et al., 2019; Masnadi et al., 2019). To prevent tiredness, reduce the learning effect, and reduce the number of trials, we chose three target distances instead of four (3m, 4m, 5m, and 6m) while keeping the range unchanged. To add more realism to the study, we set the target object to cast and receive shadows with the aim of offering the participants realistic depth cues and making the target blend in with the environment. In addition, the target was chosen so that participants could even walk over it or get near it without being afraid of encountering an obstacle.
We used a rectangular boundary in each scene as the _safe area_, i.e., the area in the VE where the user and the target can be placed without the user colliding with the other virtual objects present in the scene while walking to the target. We randomized the user's location in each environment at each trial. In every trial, the camera, which represents the user's starting point, had its transform parameters randomized such that both the target set and the start point would lie within the described safe area. Furthermore, the FOV and the target placement were different for every two consecutive trials. Randomizing the starting point was done to reduce the possibility of the user memorizing the number of steps. This randomization also ensures environmental variations that reduce environment-based effects.
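To make the placement constraint concrete, the following minimal Python sketch shows one way such a start-point randomization could be implemented. It is our own illustration, not the study code: the coordinate convention, constant names, and the assumption that targets lie straight ahead along one axis are all ours.

```python
import random

# Hypothetical safe walking area (metres), matching the 4m x 9m
# empty space described in the Apparatus section.
SAFE_X = (0.0, 4.0)
SAFE_Z = (0.0, 9.0)
TARGET_DISTANCES = (3.0, 4.5, 6.0)

def random_start():
    """Pick a start point such that even the farthest target, placed
    straight ahead along +z, still lies inside the safe area."""
    max_target = max(TARGET_DISTANCES)
    x = random.uniform(*SAFE_X)
    z = random.uniform(SAFE_Z[0], SAFE_Z[1] - max_target)
    return x, z
```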
Figure 1. Indoor1 Environment Levels Of Clutter
The combination of the factors (clutter level, FOV, and target distance) resulted in 27 different conditions. Every condition was presented three times to every participant, which resulted in 81 blind-walk trials, where the order was randomized for every participant such that no two consecutive trials had a common FOV or target distance, in order to minimize the memorization effect. During each measurement, the error, which is the distance of the user's final position from the actual target, was recorded. A positive error value was recorded if the user walked past the target, and a negative error value was recorded if the user walked short of the target.
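As an illustration of the ordering constraint and error convention just described, the Python sketch below generates an 81-trial sequence in which no two consecutive trials share an FOV or a target distance, and computes the signed error. This is a hypothetical reconstruction, not the published study code.

```python
import itertools
import random

CLUTTER = ["uncluttered", "semi-cluttered", "cluttered"]
FOVS = ["165x110", "110x110", "45x35"]
DISTANCES = [3.0, 4.5, 6.0]

def trial_sequence(reps=3, seed=None, max_tries=1000):
    """Randomised order of the 27 conditions x `reps` trials such that no
    two consecutive trials share an FOV or a target distance."""
    rng = random.Random(seed)
    pool = list(itertools.product(CLUTTER, FOVS, DISTANCES)) * reps  # 81 trials
    for _ in range(max_tries):
        rest = pool[:]
        rng.shuffle(rest)
        seq = []
        while rest:
            # Any remaining trial differing from the previous one in both
            # FOV and target distance is admissible.
            cands = [t for t in rest
                     if not seq or (t[1] != seq[-1][1] and t[2] != seq[-1][2])]
            if not cands:
                break  # dead end; restart with a fresh shuffle
            seq.append(cands[0])
            rest.remove(cands[0])
        if len(seq) == len(pool):
            return seq
    raise RuntimeError("no valid ordering found")

def signed_error(stop_position, target_position):
    """Positive if the participant walked past the target, negative if short."""
    return stop_position - target_position
```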
### Participants
The participants were recruited from the university population and totaled 60 users (46 male, 13 female, and one non-binary) with ages ranging from 18 to 34 (M=21.45, SD=3.44) and heights ranging from 152cm to 208cm (M=174.12, SD=11.55). Among the users, 35 wore glasses or contact lenses and kept them on during the study. Furthermore, participants were asked to rate their VR experience on a scale from 1 (least experienced) to 5 (most experienced), which resulted in M=2.57 and SD=1.23.
### Apparatus
The study was performed using a Pimax 5K Plus VR headset, which has a \(165^{\circ}\) (horizontal) \(\times\)\(110^{\circ}\) (vertical) FOV and a resolution of 2560\(\times\)1440 pixels per eye. In addition, the stated headset has a 120Hz refresh rate and a weight of 470g. We used an HP Z VR Backpack equipped with an Intel 7820HQ CPU, an NVIDIA Quadro P5200 GPU, and 32GB of memory. Moreover, we added a small speaker on the backpack near the back of the user's head with the aim of providing verbal instructions. Using the backpack required adding external batteries to it, resulting in a total weight of 4.35kg, including the backpack itself, the batteries, and the harness. We used a canvas through the Unity3D editor in order to limit the user's FOV, which was set at a distance of 1cm from the cameras. We opted for using cutouts in a black plane to recreate the desired FOVs.
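As a side note on the geometry involved, the size of a rectangular cutout in an occluding plane at a given distance in front of the camera follows directly from the tangent of the half-FOV. The sketch below is our own illustration of that calculation, not part of the study software:

```python
import math

def cutout_size(h_fov_deg, v_fov_deg, plane_distance_m=0.01):
    """Width and height of a rectangular cutout in an occluding plane
    placed `plane_distance_m` in front of the camera, for a desired FOV."""
    w = 2 * plane_distance_m * math.tan(math.radians(h_fov_deg) / 2)
    h = 2 * plane_distance_m * math.tan(math.radians(v_fov_deg) / 2)
    return w, h

# cutout_size(110, 110) at 1cm gives roughly a 2.9cm x 2.9cm opening.
```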
We designated an empty area in our closed laboratory to perform this study. The room dimensions were \(6m(w)\times 9m(l)\times 3.3m(h)\)
Figure 3. Outdoor1 Environment Levels Of Clutter
Figure 2. Indoor2 Environment Levels Of Clutter
and the empty area in the middle of the room was \(4m(w)\times 9m(l)\). The furthest target was 6m away from the user, and there was a 2m distance between this target and the closest non-study object. In addition, the chosen apparatus relies on the use of SteamVR, which shows a visual safeguard when the user gets near the borders of the study area. This safeguard was kept enabled as a precaution; however, given the empty area in which we chose to perform the study, the safeguard never appeared and consequently did not influence the perception of distance.
Footnote 5: [https://partner.steangames.com/doc/features/steamvr/info](https://partner.steangames.com/doc/features/steamvr/info)
To avoid interfering with the visual display perceived by the user and prevent a negative impact on performance, we designed a remote user-monitoring tool for the study. The system monitors the behaviour of the participant and visualizes what they are viewing in real time on another computer. It transfers compressed images from the headset to the monitoring computer over the network without a negative impact on performance or the ongoing study in general. Along with transmitting images, the tool transmitted trial information, which helped the researcher keep a better grasp of the ongoing user activity. The study code was made public on GitHub. The published project provides a Unity prefab, which allows the study to be performed by adding it to any Unity3D scene.
Footnote 6: Anonymous
### Procedure
After receiving the user's consent to participate in the study, we evaluated their visual acuity using a Snellen chart (see (Shi et al., 2017)) to ensure that they were eligible to participate. A score better than 20/32 for each eye was required, which was necessary to be able to distinguish visual details in the scene. Moreover, users wearing glasses or lenses were asked to keep them on during the study. All participants who failed the vision evaluation were dismissed. On the other hand, participants with an adequate vision score were asked to complete a survey asking for the following information: age, height, gender, handedness, and experience with VR.
After these steps were completed, the investigator proceeded to explain the study-related tasks in detail and the purpose of each step and requirement in the study. In addition, the participants were shown the gear to be used and how to adjust it. Before wearing the backpack and headset, users were asked if they had any questions so the researcher could clarify any present inquiry.
Users were prompted to wear the backpack and adjust it as needed. Then, we gave them the headset and helped them wear it and adjust it so it was comfortable and well-placed on their head. Figure 6 shows a user with the HMD used in the study along with
Figure 4. Outdoor2 Environment Levels Of Clutter
Figure 5. All Three Targets Positions Possible In The Study Seen From a User’s Point Of View
the HP backpack. After the user was equipped with the gear and was ready, we informed them that we would turn off the lights so they would not have any indication from the outside that could help them in the target distance evaluation, then we started the evaluation process and initiated the data collection process.
At first, the user was asked to begin from a specific start position, which was logged as the starting position and to which they had to return to begin each subsequent trial. At this point, the user looked at a black screen. Then, whenever the user was ready, they let the researcher know by saying the word "ready", and we showed them the VE, with the FOV and target position automatically changed following a pre-processed trial sequence. For five to six seconds the user viewed the environment and tried their best to judge the distance between themselves and the target. When they were ready to walk, they informed the researcher by saying "ready". Afterward, the researcher initiated the walking process by hiding the environment through a button click on a wireless keyboard, making the environment invisible. Simultaneously, a clear and faint audio cue was played from the backpack speaker saying "go". Then, the participant started walking with closed eyes towards the target and stopped as soon as they believed they had reached the perceived target location. While the participant was walking, all content of the VE was hidden and a black screen was displayed, so that even if the participant opened their eyes, no information about the environment would be revealed while walking to the target. When they reached their desired spot for the current trial, the participant said "here" to verbally confirm with the researcher that they had reached the target. The researcher logged the position through a button click on the wireless keyboard. Simultaneously, a faint audio cue was played saying "done" to inform the participant that they could open their eyes and follow a red arrow that showed up near their feet to guide them back to the starting position. This guidance arrow was always attached to the participant's feet and always pointed to the start position. When the arrow showed up, the participant followed its direction until they saw a green-colored arrow marking the starting position. This green arrow is an alignment arrow that helps the user correctly align with the real-world room. The user moved so that the red (direction) and green (alignment) arrows overlapped and pointed in the same direction. When the alignment was correct, only the green arrow remained, and the user informed the researcher that they were ready to view the next scene by saying "ready". After getting this confirmation, the researcher pressed a key on the wireless keyboard to show the VE, starting a new trial.
Since this user study was performed during COVID-19, we prioritized the safety of each of the users and researchers. Therefore, we relied on the previously stated arrows to keep a safe distance between the researcher and users and to reduce their physical interaction. Through the arrows used, the researcher did not need to give any indication to the users about how to navigate back to their starting position, nor to manually intervene to bring them back to it. Moreover, the study design, as explained, reduced the chance of verbal interactions between the researcher and participants, which prevented the user from receiving audio cues from the outside. In addition, the equipment used in the study was sanitized and cleaned before and after each user study, and both the researchers and users were required to wear masks.
Every user evaluation was allocated a one-hour time slot and the user received $10 USD in cash at the completion of the study.
### Hypotheses
Over the last couple of decades, researchers had not concluded that the FOV is a crucial factor behind the underestimation of distance in VR (Krause et al., 2017; Wang et al., 2018). Nevertheless, new research has shown that modern technology can eradicate distance underestimation in VR, which may correlate with improved FOV (Krause et al., 2017; Wang et al., 2018). Furthermore, other works have found that distance judgment is negatively impacted by reduced FOVs (Krause et al., 2017; Wang et al., 2018). Moreover, it is important to mention that the perception of distance is also influenced by other factors such as the environment itself. Creem-Regehr et al. and Masnadi et al. found that distance perception is more precise in indoor settings in contrast to outdoor environments (Masnadi et al., 2018; Wang et al., 2018). Adding more objects into an environment is a practice shared by several designers who intend to improve distance judgment in their environments. However, there is no evidence that adding more clutter to environments provides visual cues that improve distance perception (Krause et al., 2017; Wang et al., 2018). As a consequence, we designed and conducted our study based on the described parameters, and our hypotheses are:
Figure 6. A user wearing the backpack and the HMD
* Participants will more accurately estimate distances in more cluttered environments.
* Participants will more accurately estimate distances when viewing indoor environments.
* Participants will more accurately estimate distances with wider FOVs.
* Clutter and FOV have independent effects on distance judgment.
## 4. Results
In this section, we will show the results of our study along with the data analysis using ANOVA. The errors are reported in cm.
### Repeated Measures ANOVA Results:
We performed a Shapiro-Wilks normality test, which showed that the data were not normally distributed (p<.01). Therefore, we used the ARTool to transform the gathered data (Sanchez et al., 2017). Afterward, we performed a mixed ANOVA test with the environment as the between-subject variable and clutter, FOV, and distance as the within-subject variables. We found main effects of clutter, FOV, distance, and environment, and interaction effects for Clutter\(\times\)Environment, FOV\(\times\)Distance, and Distance\(\times\)Environment. However, we did not find an interaction effect between Clutter\(\times\)FOV. Table 1 shows the results of the omnibus test in detail.
### Implication Of Findings:
Our study resulted in acquiring insight into distance perception across multiple levels of clutter and different FOVs. Below, we describe in detail the implications of our results.
#### 4.2.1. Main Effect of Clutter
We found a main effect of clutter on the perception of distance (p =.009). Higher clutter in the environment results in more precise estimation and judgment of distances and reduces underestimation (see Figure 7). A significant difference was found between uncluttered (M=-105.1, SD=95.3) and cluttered (M=-101.0, SD=95.9) after the pairwise comparison using Bonferroni adjustments (p =.014). There was no significant difference between uncluttered and semi-cluttered (M=-102.0, SD=96.7) (p =.095), or between cluttered and semi-cluttered (p =.095).
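For reference, the Bonferroni adjustment used in these pairwise comparisons simply multiplies each raw p-value by the number of comparisons and caps the result at one. A minimal sketch; the raw p-values in the comment are back-computed from the reported adjusted values purely for illustration:

```python
def bonferroni(p_values):
    """Bonferroni-adjusted p-values: each raw p-value is multiplied by the
    number of comparisons and capped at 1."""
    k = len(p_values)
    return [min(1.0, p * k) for p in p_values]

# Three pairwise clutter comparisons (illustrative raw values only):
# bonferroni([0.0047, 0.032, 0.032]) -> [0.0141, 0.096, 0.096]
```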
#### 4.2.2. Distances are Underestimated with narrower FOVs
A significant effect of FOV was found (p<.001). Distance judgment was most accurate, with the least underestimation, through the \(165^{\circ}\times 110^{\circ}\) view, which is the widest FOV. Moreover, the smallest FOV resulted in the highest underestimation of distance and the highest recorded error. In addition, distance judgment deteriorated as the FOV decreased. Figure 8 shows the error by FOV. We administered a pairwise comparison with Bonferroni adjustments, which showed a significant difference between the \(165^{\circ}\times 110^{\circ}\) (M=-94.0, SD=95.5) and \(110^{\circ}\times 110^{\circ}\) (M=-105.3, SD=96.6) views (p <.001) and also between \(165^{\circ}\times 110^{\circ}\) and \(45^{\circ}\times 35^{\circ}\) (M=-107.7, SD=95.2) (p <.001). However, the difference was not significant between \(110^{\circ}\times 110^{\circ}\) and \(45^{\circ}\times 35^{\circ}\) (p =.085). Masnadi et al. performed the same analysis, and our results confirm theirs (Mang et al., 2017).
#### 4.2.3. Indoor vs Outdoor
We found a significant difference between indoor (M=-71.4, SD=80.3) and outdoor (M=-134.0, SD=100.0) environments (p<.001). This is in line with previous studies in the literature. Figure 10 shows the mean error for each target distance by environment.
#### 4.2.4. FOV and Clutter have an independent effect on distance perception
No significant interaction between clutter and FOV was found, which reflects the fact that FOV and clutter impact distance judgment independently. The improvement of distance perception as clutter and FOV increase can be observed in Figure 9.
## 5. Discussion
Based on the performed blind-walk user study, we have found that the presence of clutter in VEs results in the improvement of distance judgment and perception. Along with that, the study results show that wider FOVs cause more accurate distance perception and less underestimation.
### H1: Clutter in Virtual Environments
Based on the study results, it is perceivable that the increase of clutter in VEs leads to a better perception of distance along with decreasing the underestimation of distance. In the conducted study,
Figure 8. The figure shows the mean error of different FOVs. (95% CI)
Figure 7. The figure shows the mean error of different clutter levels. (95% CI)
we experimented with three different levels of clutter in each of the four environments, which allows us to generalize the finding that increased clutter in an environment improves the perception of distance. This finding reveals that distance perception improves after adding objects to the scene and further clutter to it, independent of any influence of being in an indoor or outdoor setting.
### H2: Indoor vs Outdoor Environments
Through the performed study, we observed that participants performed better in indoor virtual settings compared to outdoor ones. We assume that this might be related to the participants being more used to indoor environments, mostly because they are accustomed to home-like settings and perform more daily tasks indoors. This difference still needs additional investigation in order to better understand the underlying factors that differentiate indoor and outdoor environments.
### H3: FOV of the HMD
Three FOVs were used throughout the investigation (\(165^{\circ}\times 110^{\circ}\), \(110^{\circ}\times 110^{\circ}\), and \(45^{\circ}\times 35^{\circ}\)). Based on the findings of the study, a significant difference between the widest FOV and the two other FOVs was found, which can be justified by the fact that stimulation of the far-periphery area of the eye, which lies above \(120^{\circ}\), is only possible through the \(165^{\circ}\times 110^{\circ}\) FOV.
### H4: Clutter and FOV
We found that clutter and FOV had independent effects on the perception of distance, since no interaction effect was found. This shows that, regardless of FOV, clutter can improve the perception of distance, and the same applies to FOV regardless of clutter level.
## 6. Limitations and Future Work
The results of our study apply mainly to a VR context, and the same results cannot be claimed to be applicable in AR or the real world, as further investigation is required to generalize them. Therefore, it is crucial to perform studies that evaluate the effects of clutter on egocentric distance perception through AR devices and in the real world.
In addition, generalizing the results to the entire potential VR user population is still questionable, as the targeted population of the study was drawn mainly from the university's student body, which limited the age range of the participants and, as a consequence, the generality of our findings.
There was not a direct comparison with real-world environments. Our conducted study was focused on the VEs and their characteristics along with comparing them to each other besides evaluating the impact of clutter in each one of those environments. Our effort was to diversify the possible VEs in order to be able to generalize the findings to various VEs.
In this paper, we defined clutter as the number of objects present in the scene of a chosen environment. Clutter can have different definitions in different contexts, for example: the overall contrast of the scene, the total length of the edges, or definitions based on information theory, each of which represents a different approach to defining clutter for future investigations.

\begin{table}
\begin{tabular}{l|c c c}
**Effect on Error** & \multicolumn{3}{c}{**ANOVA Result**} \\ \hline \hline
\multicolumn{4}{c}{Main Effects} \\ \hline
Clutter & \(F(2,59)=4.673\) & \(p=.009\) & \(\eta_{p}^{2}=.026\) \\
FOV & \(F(2,59)=29.748\) & \(p<.001\) & \(\eta_{p}^{2}=.145\) \\
Distance & \(F(2,59)=254.180\) & \(p<.001\) & \(\eta_{p}^{2}=.591\) \\
Environment & \(F(3,56)=17.598\) & \(p<.001\) & \(\eta_{p}^{2}=.231\) \\ \hline \hline
\multicolumn{4}{c}{Interaction Effects} \\ \hline
Clutter\(\times\)FOV & \(F(4,173)=2.519\) & \(p=.051\) & \(\eta_{p}^{2}=.013\) \\
Clutter\(\times\)Distance & \(F(4,173)=1.152\) & \(p=.331\) & \(\eta_{p}^{2}=.007\) \\
Clutter\(\times\)Environment & \(F(6,352)=2.557\) & \(p=.019\) & \(\eta_{p}^{2}=.042\) \\
FOV\(\times\)Distance & \(F(4,173)=3.434\) & \(p=.009\) & \(\eta_{p}^{2}=.019\) \\
FOV\(\times\)Environment & \(F(6,352)=1.399\) & \(p=.214\) & \(\eta_{p}^{2}=.023\) \\
Distance\(\times\)Environment & \(F(6,352)=14.935\) & \(p<.001\) & \(\eta_{p}^{2}=.203\) \\ \hline
Clutter\(\times\)FOV\(\times\)Distance & \(F(8,169)=.238\) & \(p=.984\) & \(\eta_{p}^{2}=.001\) \\
Clutter\(\times\)FOV\(\times\)Environment & \(F(12,525)=1.381\) & \(p=.170\) & \(\eta_{p}^{2}=.023\) \\
Clutter\(\times\)Distance\(\times\)Environment & \(F(12,525)=1.298\) & \(p=.215\) & \(\eta_{p}^{2}=.000\) \\
FOV\(\times\)Distance\(\times\)Environment & \(F(12,525)=1.350\) & \(p=.186\) & \(\eta_{p}^{2}=.022\) \\ \hline
Clutter\(\times\)FOV\(\times\)Distance\(\times\)Environment & \(F(24,513)=1.445\) & \(p=.076\) & \(\eta_{p}^{2}=.024\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1. Repeated Measures ANOVA results.
As stated beforehand, users tend to perform better in indoor environments, which will require additional investigation to explain. This probably correlates with familiarity with indoor environments and users being accustomed to the size of indoor objects. Moreover, the walls and ceilings of an indoor environment can provide perspective cues to the user.
In our study, users could move their eyes around while looking through a limited FOV, which stimulated the peripheral vision with depth cues even in narrower FOVs. In future work, adopting headsets with eye-tracking systems would make it possible to better understand the effect of periphery and far-periphery stimulation on the perception of distance. In such a system, the limited FOV can follow the user's gaze, making it possible to isolate periphery stimulation.
## 7. Conclusion
We performed a study using a blind-walking task, which represents an action-based evaluation. We found an improvement in distance judgment accuracy and a reduction of distance underestimation when adding more clutter and objects to the environment. Based on our results, using cluttered environments improves the perception of distance in contrast to environments without clutter. In addition, we have shown that this improvement of distance judgment through added clutter is independent of the FOV. We also consider the environmental characteristics to be significant, as the results showed that participants perform better indoors in the blind-walking task, whereas more underestimation was recorded in the outdoor cases. Based on these findings, such environmental factors should be highlighted and emphasized when developing VR-based environments, tasks, and systems.
## Acknowledgments
We would like to thank all the members that contributed to this project and the anonymous reviewers for their valuable feedback.
|
2306.06050 | Branching via Cutting Plane Selection: Improving Hybrid Branching | Cutting planes and branching are two of the most important algorithms for
solving mixed-integer linear programs. For both algorithms, disjunctions play
an important role, being used both as branching candidates and as the
foundation for some cutting planes. We relate branching decisions and cutting
planes to each other through the underlying disjunctions that they are based
on, with a focus on Gomory mixed-integer cuts and their corresponding split
disjunctions. We show that selecting branching decisions based on quality
measures of Gomory mixed-integer cuts leads to relatively small
branch-and-bound trees, and that the result improves when using cuts that more
accurately represent the branching decisions. Finally, we show how the history
of previously computed Gomory mixed-integer cuts can be used to improve the
performance of the state-of-the-art hybrid branching rule of SCIP. Our results
show a 4% decrease in solve time, and an 8% decrease in number of nodes over
affected instances of MIPLIB 2017. | Mark Turner, Timo Berthold, Mathieu Besançon, Thorsten Koch | 2023-06-09T17:20:11Z | http://arxiv.org/abs/2306.06050v2 | # Branching via Cutting Plane Selection: Improving Hybrid Branching
###### Abstract
Cutting planes and branching are two of the most important algorithms for solving mixed-integer linear programs. For both algorithms, disjunctions play an important role, being used both as branching candidates and as the foundation for some cutting planes. We relate branching decisions and cutting planes to each other through the underlying disjunctions that they are based on, with a focus on Gomory mixed-integer cuts and their corresponding split disjunctions. We show that selecting branching decisions based on quality measures of Gomory mixed-integer cuts leads to relatively small branch-and-bound trees, and that the result improves when using cuts that more accurately represent the branching decisions. Finally, we show how the history of previously computed Gomory mixed-integer cuts can be used to improve the performance of the state-of-the-art hybrid branching rule of SCIP. Our results show a \(4\%\) decrease in solve time, and an \(8\%\) decrease in number of nodes over affected instances of MIPLIB 2017.
## 1 Introduction
This paper proposes a new criterion for branching in branch-and-cut algorithms for solving Mixed-Integer Linear Programs (MILP), based on the disjunctions that underpin both branching candidates and several families of cutting planes. A MILP is an optimisation problem that is classically defined as:
\[\min_{\mathbf{x}}\{\mathbf{c}^{\intercal}\mathbf{x}\,\,\mid\,\mathbf{Ax}\leq\mathbf{b},\,\,\,\mathbf{l}\leq\mathbf{x}\leq\mathbf{u},\,\,\,\mathbf{x}\in\mathbb{Z}^{|\mathcal{J}|}\times\mathbb{R}^{n-|\mathcal{J}|}\} \tag{1}\]
Here, \(\mathbf{c}\in\mathbb{R}^{n}\) is the objective coefficient vector, \(\mathbf{A}\in\mathbb{R}^{m\times n}\) is the constraint matrix, \(\mathbf{b}\in\mathbb{R}^{m}\) is the right-hand side constraint vector, \(\mathbf{l},\mathbf{u}\in\mathbb{R}^{n}\cup\{-\infty,\infty\}^{n}\) are the lower and upper variable bound vectors, and \(\mathcal{J}\subseteq\{1,\ldots,n\}\) is the set of indices of integer variables. We denote the feasible region of the linear programming (LP) relaxation of (1) as \(\mathcal{P}\), where the LP is derived by relaxing the integrality requirements of (1). An optimal solution to (1) is denoted \(\mathbf{x}^{*}\), a feasible solution is denoted \(\hat{\mathbf{x}}\), and an LP optimal solution is denoted \(\mathbf{x}^{LP}\).
The core algorithm for solving MILPs is _branch-and-cut_; see [1] for a thorough introduction to MILP solving methods. Branch-and-cut combines two main components, _branch-and-bound_ and _cutting planes_. The branch-and-bound algorithm recursively partitions the MILP search space by a process called _branching_, i.e. splitting a problem into smaller subproblems. This recursion creates a _tree_, where each node is a subproblem. Traditionally, branching is performed on an integer variable \(x_{i}\) with fractional LP value \(x_{i}^{LP}\), creating the two LP subproblems with feasible regions \(\mathcal{P}\,\cap\,\{x_{i}\leq\lfloor x_{i}^{LP}\rfloor\}\) and \(\mathcal{P}\,\cap\,\{x_{i}\geq\lceil x_{i}^{LP}\rceil\}\), thus making the current LP solution infeasible in both children problems. An example branching procedure is visualised in Figure 1. The algorithm bounds the optimal objective through upper bounds from feasible solutions of Problem (1) obtained at leaf nodes of the tree, and lower bounds from LP relaxations, which are valid for all subtrees of a node. The component of branch-and-bound that we focus on is _variable selection_, which is concerned with determining which of the given candidate variables to branch on at a given node.
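To make the disjunction concrete, the following minimal sketch shows how the two children of a node are created from a fractional LP value; the variable names and the bounds representation are our own illustration, not solver code:

```python
import math

def branch(bounds, i, x_lp_i):
    """Split on variable i with fractional LP value x_lp_i.
    `bounds` maps a variable index to its (lower, upper) bound pair."""
    lb, ub = bounds[i]
    down = dict(bounds)
    down[i] = (lb, math.floor(x_lp_i))  # child with x_i <= floor(x_i^LP)
    up = dict(bounds)
    up[i] = (math.ceil(x_lp_i), ub)     # child with x_i >= ceil(x_i^LP)
    return down, up

# Branching on x_3 with LP value 2.4 yields x_3 <= 2 in one child
# and x_3 >= 3 in the other.
```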
A cutting plane, or _cut_, parameterised by \((\mathbf{\alpha},\beta)\in\mathbb{R}^{n+1}\), is an inequality \(\mathbf{\alpha}^{\intercal}\mathbf{x}\leq\beta\) that is violated by at least one solution of the LP relaxation but does not increase the optimal value of the problem when added to it, i.e., it is valid for (1). This definition of a cut is more general than the classical one, which requires that a cut does not remove any integer-feasible solution to (1) and separates some LP-feasible fractional solution. Our definition however captures additional families of cuts such as symmetry-breaking cuts [2, 3]. The cutting plane algorithm iteratively generates cuts, applies them, and re-solves the tightened LP relaxation. In the classical case, this algorithm is repeated until an integral LP relaxation solution is achieved. In branch-and-cut, this algorithm is repeated at the root node until some termination criterion is met, with additional cuts applied throughout the branch-and-bound tree. The procedure of cutting planes that we focus on is _cut selection_, which is concerned with deciding which subset of the computed cuts to actually add to the LP relaxation.
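As a small illustration of how a cut can be measured against the LP solution, the standard efficacy score, i.e. the Euclidean distance by which the cut separates \(\mathbf{x}^{LP}\), can be computed as follows (a sketch, not solver code):

```python
import math

def efficacy(alpha, beta, x_lp):
    """Euclidean distance by which the cut alpha^T x <= beta cuts off x_lp;
    the value is positive if and only if x_lp violates the cut."""
    violation = sum(a * x for a, x in zip(alpha, x_lp)) - beta
    return violation / math.sqrt(sum(a * a for a in alpha))
```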
In this paper we propose a technique for the variable selection problem based on cut selection. We note that the opposite direction - using variable selection techniques for cut selection - also presents a valid avenue of research, however this lies outside the scope of this paper. For each branching candidate, we generate a cut, greedily select the best cut according to standard cut scoring measures, and then branch on the corresponding candidate. Specifically, we will generate Gomory Mixed-Integer (GMI) cuts, for which we provide a detailed introduction in Section 3. This approach to variable selection is objective-free, i.e. not based on the objective vector, and is thus complementary to standard pseudo-cost based approaches [4]. We show the effectiveness of multiple variants of this branching rule using different levels of strengthened cuts, and compare them to standard branching rules from the literature. Finally, we show how this information can be incorporated into the hybrid branching rule of SCIP [5, 6], resulting in an improvement of general solver performance.
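Schematically, and assuming a helper that reads the GMI cut off the simplex tableau row of each fractional candidate (both `make_gmi_cut` and `score` below are placeholders for solver internals, e.g. the efficacy measure sketched above), the proposed rule can be summarised as:

```python
def branch_via_cut_selection(candidates, x_lp, make_gmi_cut, score):
    """Greedy form of the rule: derive the GMI cut associated with each
    fractional branching candidate, score it with a standard cut-selection
    measure, and branch on the candidate whose cut scores best."""
    best_var, best_score = None, float("-inf")
    for i in candidates:
        alpha, beta = make_gmi_cut(i)  # GMI cut from the tableau row of x_i
        s = score(alpha, beta, x_lp)
        if s > best_score:
            best_var, best_score = i, s
    return best_var
```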
## 2 Related Work
Branching in MILP has been thoroughly studied, both theoretically and computationally, see [7, 4, 8]. The current state-of-the-art variable selection method is hybrid branching [5], which is reliability pseudo-cost branching [4] with integrated constraint satisfaction and satisfiability problem techniques. An array of other selection rules exist, such as nonchimerical fractionality branching [9], cloud branching [10], and general disjunction branching [11]. The above-stated methods, unlike our cut selection approach to branching, often depend on the objective function for variable selection. We note that variable selection has also served as the playground for introducing machine learning to MILP solvers, see [12, 13] for early examples, and [14] for an overview.
Cutting plane selection has been less studied than variable selection. It is however currently experiencing a recent refocus through machine learning-driven research, see [15] for an overview. Early computational studies, see [1, 16], show that a diverse set of measures is necessary for good performance when scoring cuts. Both studies, as well as the computational study [17], use parallelism-based cut filtering algorithms, and show that the inclusion of filtering methods is critical to performance. These studies suggest that it is preferable for performance to select from a large set of weaker cuts than from a small set of stronger cuts. More recent work on cut selection, for which [18] provides ample motivation, is machine learning
Figure 1: (Left) An example branching decision. The red point is the LP optimal solution, the larger polytope the original LP feasible region, and the two blue polytopes the feasible LP regions of the subproblems. (Right) An example cutting plane.
based. Specifically, research has focused on theoretical guarantees [19, 20, 21], new scoring measures [22], the amount of cuts to select [23], and learning to score cuts with supervised [24, 25], imitation [26], and reinforcement learning [27, 23].
Our work is not the first to use cutting plane selection to dictate branching decisions. Moreover, it is not the first to use GMI cuts specifically, see [28, 29]. In both papers, the split disjunctions, which define the GMI cuts of tableau rows, are used as branching candidates. The efficacy of the GMI cuts is used to filter the set of branching candidates, where ultimately strong branching is used as the final selection criterion. In [29], additional experiments are presented that compare disjunctions derived from reduce-and-split cuts, see [30]. Our research differs from [28, 29] in that we branch on elementary splits, i.e., single variable disjunctions, we perform extensive computational experiments using non-strengthened GMI cuts, and we integrate our approach within existing state-of-the-art history-based methods in a modern MILP solver.
## 3 Gomory Mixed-Integer Cuts
This section provides a thorough introduction to Gomory Mixed-Integer (GMI) cuts. Following the history and general introduction of Subsection 3.1, we introduce disjunctive, split, and intersection cuts in Subsection 3.2. We then step through the derivation of GMI inequalities in Subsection 3.3, ending with how GMI cuts are used and derived in practice in Subsection 3.4. The geometric interpretation of GMI cuts will be provided, and linked to the relations between the families of cuts introduced in Subsection 3.2. For alternative overviews of GMI cuts, see [31, 32, 30, 29].
### GMI Introduction and History
First introduced in 1960 [33], GMI inequalities are general-purpose inequalities valid for arbitrary bounded MILPs. They can be used to iteratively tighten an LP relaxation of an MILP, and when they are generated to separate a specific solution, are referred to as cutting planes, or \(cuts\). In practice, they are generated to separate the current LP solution using the simplex tableau, see Subsection 3.4.
Following the landmark paper [34], GMI cuts were empirically shown to be a computational success. This success was in spite of a commonly held belief that only cuts derived from the specific structure of an MILP instance were computationally useful. Some examples of structured inequalities or cuts are knapsack cover and flow cover inequalities [35]. The summarised reasons for the success of [34] were their intelligent lifting procedure to globally valid cuts, their selection algorithm, their use of branch-and-cut as opposed to pure cutting plane approaches, and the increased robustness of LP solvers. For a more complete history behind the resurgence of GMI inequalities, see [36]. Advances on GMI cuts have continued, where we name reduce-and-split cuts [30] and LaGromory cuts [37] as examples. To stress the importance of these cuts in the current MILP landscape, we note that GMI cuts are continually noted as computationally necessary [38, 39], and are used in every state-of-the-art MILP solver, see Xpress [40], Gurobi [41], CPLEX [42], HiGHS [43], and SCIP [6].
### Disjunctive, Split, and Intersection Cuts
It is common in the literature to find compact introductions of GMI cuts that mention they are either disjunctive cuts, intersection cuts, or split cuts. All these statements are true, and moreover, the families of cuts have a clear hierarchy [44, 45]:
GMI cuts from basic feasible solutions \(\;\subset\;\)Split cuts \(\;\subset\;\) Intersection cuts \(\;\subset\;\)Disjunctive cuts
We highlight that while our work on branching leverages GMI cuts, it can also leverage any family of cuts that fit into this hierarchy and can be derived from disjunctions.
#### 3.2.1 Disjunctions and Disjunctive Cuts
A linear disjunction is a set of linear inequalities joined by _and_, _or_, and _negation_ operators (see [44] for a thorough introduction). The solution set of a disjunction is a _disjunctive set_. For MILPs, every integer-feasible solution to (1) is an element of a disjunctive set. A _disjunctive cut_ is any cut derived from such a disjunctive set, i.e., all elements in the disjunctive set remain feasible and some fractional solution outside the disjunctive set is separated.
A linear disjunctive set represents a union of polyhedra. It is defined as:
\[\mathcal{D}:=\bigcup_{i=1}^{|\mathcal{D}|}\mathcal{D}_{i},\;\;\text{where }\mathcal{D}_{i}\subseteq\mathcal{P}\;\;\forall i\in\{1,\cdots,|\mathcal{D}|\}\]
An example disjunction is visualised in Figure 2, along with a valid disjunctive cut.
#### 3.2.2 Splits and Split Cuts
A _split disjunction_, or _split_, is defined by an integer \(\pi_{0}\in\mathbb{Z}\) and an integral vector \(\mathbf{\pi}\in\mathbb{Z}^{|\mathcal{J}|}\times\mathbf{0}^{n-|\mathcal{J}|}\), which has zero entries for coefficients of continuous variables. We denote the split disjunction as \(\mathcal{D}(\mathbf{\pi},\pi_{0})\), where \((\mathbf{\pi},\pi_{0})\) define the two hyperplanes:
\[\mathbf{\pi}^{\intercal}\mathbf{x}\leq\pi_{0} \tag{2}\] \[\mathbf{\pi}^{\intercal}\mathbf{x}\geq\pi_{0}+1\]
The disjunctive set \(\mathcal{D}=\bigcup_{i\in\{1,2\}}\mathcal{D}_{i}\) formed by the hyperplanes is:
\[\mathcal{D}_{1} :=\mathcal{P}\cap\{\mathbf{x}\in\mathbb{R}^{n}|\mathbf{\pi}^{ \intercal}\mathbf{x}\leq\pi_{0}\}\] \[\mathcal{D}_{2} :=\mathcal{P}\cap\{\mathbf{x}\in\mathbb{R}^{n}|\mathbf{\pi}^{ \intercal}\mathbf{x}\geq\pi_{0}+1\}\]
The disjunction is valid as \(\mathbf{\pi}^{\intercal}\mathbf{x}\) must always take an integer value in a feasible solution to (1) due to the design of \(\mathbf{\pi}\). We observe that the disjunctive set \(\mathcal{D}\) can be written as the complement of a set \(\mathcal{S}\) intersected with \(\mathcal{P}\), where \(\mathcal{S}\) is defined as:
\[\mathcal{S}:=\{\mathbf{x}\in\mathbb{R}^{n}\,|\,\pi_{0}<\mathbf{\pi}^{\intercal} \mathbf{x}<\pi_{0}+1\} \tag{3}\]
Note that notation is often abused, and the split can reference either the set \(\mathcal{S}\) from (3) or the boundary of the set \(\mathcal{S}\), i.e. the two hyperplanes from (2). From a split disjunction, we can derive a _split cut_. A split cut, \((\mathbf{\alpha},\beta)\), is a valid inequality for both \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\), and separates some points from \(\mathcal{S}\cap\mathcal{P}\). In the mixed-integer case, unlike the pure integer case [46], a finite number of split cuts is not always sufficient for defining the integer hull and proving optimality, see [47]. A split is called _simple_ or _elementary_ if it only acts on a single variable, i.e. \(\mathbf{\pi}=\mathbf{e}_{i}\) for some \(i\in\mathcal{J}\). An example (simple) split disjunction alongside a valid split cut is visualised in Figure 3.
#### 3.2.3 Intersection Cuts
Some cuts are derived from the standard form of a MILP, which is defined using equality constraints instead of inequalities. In particular, intersection cuts and GMI cuts are derived using this standard form. Given
Figure 3: (Left) An example (simple) split. (Right) An example (simple) split cut.
our definition of a MILP in (1), we can transform it to a standard form MILP in higher dimension by adding non-negative slack variables to each constraint. We can additionally substitute and introduce variables to shift variable bounds while keeping an equivalent formulation. We do this procedure to obtain the following MILP, where for ease of notation we will continue to use \(\mathbf{c}\) and \(\mathbf{A}\).
\[\underset{\mathbf{x}}{\text{argmin}}\{\mathbf{c}^{\intercal}\mathbf{x}\ \mid\ \mathbf{A}\mathbf{x}=\mathbf{b},\ \ \mathbf{x}\geq\mathbf{0},\ \ \mathbf{x}\in\mathbb{Z}^{|\mathcal{J}|}\times\mathbb{R}^{n+m-|\mathcal{J}|}\} \tag{4}\]
The simplex method typically used to solve LP relaxations of (4) returns a _basis_, \(\mathcal{B}\subseteq\{1,...,n+m\}\), where \(|\mathcal{B}|=m\). The basis is an index set of variables and relates to an extreme point of the LP relaxation, \(\bar{\mathbf{x}}\in\mathbb{R}^{n+m}\), which is a basic solution. In practice the simplex method returns the optimal basic solution \(\mathbf{x}^{LP}\). Associated with every basic solution \(\bar{\mathbf{x}}\) is the LP cone, or corner polyhedron, \(\mathcal{C}(\bar{\mathbf{x}})\subseteq\mathbb{R}^{n+m}\), whose apex is \(\bar{\mathbf{x}}\) and whose rays are defined by the \(n\) hyperplanes that form the basis. These rays are the columns of the simplex tableau relating to the non-basic variables. Note that in the case of primal degeneracy, multiple bases may result in the same extreme point but in different LP cones, and as such \(\mathcal{C}(\mathcal{B})\) is the more appropriate notation. Two example LP cones are visualised in Figure 4.
An intersection cut, similar to a split cut, is defined w.r.t. a set \(\mathcal{S}\subseteq\mathbb{R}^{n+m}\), which lies in the same dimension as \(\mathbf{x}\) in the new space. Unlike split cuts however, \(\mathcal{S}\) is not necessarily defined by two hyperplanes. Rather, it needs to be convex, to contain in its interior a current LP-feasible fractional solution we want to separate, and to not contain any integer-feasible solution in its interior. In the context of MILP, the set \(\mathcal{S}\) is a lattice-free set [48]. In addition to the set \(\mathcal{S}\), an intersection cut is also defined w.r.t. a simplicial conic relaxation of the feasible region of (1), see Figure 4 for example simplicial conic relaxations derived from bases. We note that while any simplicial conic relaxation can be exploited, an LP cone derived from a basis is used in practice. The idea to generate intersection cuts is to collect the intersection points of each ray with the boundary of the closure of \(\mathcal{S}\), and form a valid inequality as the hyperplane that contains all the intersection points. When using the LP cone \(\mathcal{C}(\mathbf{x}^{LP})\) and a set \(\mathcal{S}\) containing \(\mathbf{x}^{LP}\), the generated inequality will be a cut. Two examples of intersection cuts are visualised in Figure 5. For a deeper look into intersection cuts, we refer readers to [48, 44].
Figure 4: The shaded area is the feasible region of \(\mathcal{C}(\bar{\mathbf{x}})\). The red dot is the apex of the simplicial conic relaxation of the feasible region of (1), and \(\mathbf{r}_{1},\mathbf{r}_{2}\) are the rays of the cone. (Left) The red dot is both \(\mathbf{x}^{LP}\) and \(\bar{\mathbf{x}}\). (Right) The red dot is a primal infeasible \(\bar{\mathbf{x}}\).
Figure 5: (Left) Example intersection cut. (Right) Example intersection cut that is also a split cut.
### GMI Inequality Derivation
We will now derive the GMI inequality, which we note again is general-purpose and requires no additional problem structure.
**Definition 1** (GMI inequality).: _Given a valid equality for the LP relaxation of (4), \(\mathbf{a^{\intercal}x}=b\), we distinguish the variables into those with integer requirements and those that are continuous, i.e., \(\sum_{i\in\mathcal{J}}a_{i}x_{i}+\sum_{i\in[n]\setminus\mathcal{J}}a_{i}x_{i}=b\). Let \([n]=\{1,...,n\}\), \(b=\lfloor b\rfloor+f_{0}\), where \(0<f_{0}<1\), and \(a_{i}=\lfloor a_{i}\rfloor+f_{i}\), where \(0\leq f_{i}<1\) and \(i\in[n]\). The GMI inequality is:_
\[\sum_{i\in\mathcal{J},f_{i}\leq f_{0}}\frac{f_{i}}{f_{0}}x_{i}+\sum_{i\in \mathcal{J},f_{i}>f_{0}}\frac{1-f_{i}}{1-f_{0}}x_{i}+\sum_{i\in[n]\setminus \mathcal{J},a_{i}\geq 0}\frac{a_{i}}{f_{0}}x_{i}-\sum_{i\in[n]\setminus\mathcal{J},a_{i }<0}\frac{a_{i}}{1-f_{0}}x_{i}\geq 1 \tag{5}\]
_Derivation._ The logic of the GMI inequality is that if \(f_{0}>0\), then fractional multiples of integer variables and multiples of continuous variables must account for \(f_{0}\). Specifically, they must sum to \(f_{0}\) and a potential integer. That is:
\[\sum_{i\in\mathcal{J},f_{i}\leq f_{0}}f_{i}x_{i}+\sum_{i\in\mathcal{J},f_{i}>f _{0}}(f_{i}-1)x_{i}+\sum_{i\in[n]\setminus\mathcal{J}}a_{i}x_{i}=k+f_{0},\quad k \in\mathbb{Z} \tag{6}\]
This partition of \(f_{i}\) values around \(f_{0}\) is possible due to the observation that for \(i\) such that \(f_{i}>0\), \(a_{i}\) can be written as \(a_{i}=\lfloor a_{i}\rfloor+f_{i}\) or as \(a_{i}=\lceil a_{i}\rceil+(f_{i}-1)\). For example, \(3.6=3+0.6\) or equivalently, \(3.6=4-0.4\). This partition is done as it results in a strictly stronger cut than if the case \(a_{i}=\lfloor a_{i}\rfloor+f_{i}\) is always used [32, 30, 29]. Specifically, it results in smaller coefficients for like terms as \(\frac{1-f_{i}}{1-f_{0}}<\frac{f_{i}}{f_{0}}\) when \(f_{i}>f_{0}\).
Let us create a disjunction over the two cases of equation (6), namely \(k\leq-1\) and \(k\geq 0\). In the case \(k\leq-1\) we have that:
\[\sum_{i\in\mathcal{J},f_{i}\leq f_{0}}f_{i}x_{i}+\sum_{i\in \mathcal{J},f_{i}>f_{0}}(f_{i}-1)x_{i}+\sum_{i\in[n]\setminus\mathcal{J}}a_{i }x_{i} \leq-(1-f_{0})\] \[\Rightarrow-\sum_{i\in\mathcal{J},f_{i}\leq f_{0}}\frac{f_{i}}{ 1-f_{0}}x_{i}+\sum_{i\in\mathcal{J},f_{i}>f_{0}}\frac{1-f_{i}}{1-f_{0}}x_{i}- \sum_{i\in[n]\setminus\mathcal{J}}\frac{a_{i}}{1-f_{0}}x_{i} \geq 1 \tag{7}\]
In the second case, \(k\geq 0\) we have that:
\[\sum_{i\in\mathcal{J},f_{i}\leq f_{0}}f_{i}x_{i}+\sum_{i\in \mathcal{J},f_{i}>f_{0}}(f_{i}-1)x_{i}+\sum_{i\in[n]\setminus\mathcal{J}}a_{i }x_{i} \geq f_{0}\] \[\Rightarrow\sum_{i\in\mathcal{J},f_{i}\leq f_{0}}\frac{f_{i}}{f_ {0}}x_{i}-\sum_{i\in\mathcal{J},f_{i}>f_{0}}\frac{1-f_{i}}{f_{0}}x_{i}+\sum_{i \in[n]\setminus\mathcal{J}}\frac{a_{i}}{f_{0}}x_{i} \geq 1 \tag{8}\]
As \(\mathbf{x}\geq 0\), we can derive a globally valid inequality for the disjunctive set from the inequalities (7) - (8) by taking the maximum coefficient of each term over the two inequalities. That is, the inequalities \(\mathbf{a}^{\intercal}\mathbf{x}\geq 1\) and \(\mathbf{a}^{\prime\intercal}\mathbf{x}\geq 1\) imply \(\sum_{i=1}^{n}\max(a_{i},a_{i}^{\prime})x_{i}\geq 1\). We have grouped the terms in the derivation above such that at most one of the two coefficients of each term is positive. The result of this derivation is exactly the GMI inequality (5).
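To make the derivation concrete, the following minimal Python sketch (the function name and interface are ours, not from any solver API) computes the coefficients of the GMI inequality (5) for a given valid equality \(\mathbf{a}^{\intercal}\mathbf{x}=b\):

```python
import math

def gmi_coefficients(a, b, is_int):
    """Coefficients alpha_i of the GMI inequality sum_i alpha_i * x_i >= 1
    derived from the valid equality a^T x = b (Definition 1).
    is_int[i] marks variables with integrality requirements."""
    f0 = b - math.floor(b)
    assert 0.0 < f0 < 1.0, "b must be fractional for the inequality to cut"
    alpha = []
    for a_i, integral in zip(a, is_int):
        if integral:
            f_i = a_i - math.floor(a_i)
            # partition of f_i around f0: use a_i = floor(a_i) + f_i when
            # f_i <= f0, and a_i = ceil(a_i) + (f_i - 1) otherwise
            alpha.append(f_i / f0 if f_i <= f0 else (1.0 - f_i) / (1.0 - f0))
        else:
            alpha.append(a_i / f0 if a_i >= 0.0 else -a_i / (1.0 - f0))
    return alpha

print(gmi_coefficients([3.6, -1.2], 2.5, [True, False]))  # [0.8, 2.4]
```

For instance, a coefficient \(3.6\) of an integer variable contributes \(0.6/f_{0}\) when \(0.6\leq f_{0}\), and \(0.4/(1-f_{0})\) otherwise, mirroring the partition around \(f_{0}\) described above.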
### GMI Cuts in Practice
In general, it is \(\mathcal{NP}\)-hard to find a GMI cut that separates a given LP-_feasible_ solution, or to determine if such a cut exists, see [49, 32]. It is not \(\mathcal{NP}\)-hard, however, to separate a given basic solution of \(\mathcal{P}\), e.g., an LP-_optimal_ solution found by a simplex algorithm.
Consider a row of the simplex tableau for variable \(x_{j}\) of basis \(\mathcal{B}\). The row is an aggregated equality constraint, created from a linear combination of original constraints, where basic variable \(x_{j}\) is described purely in terms of the non-basic variables. That is:
\[x_{j}=\bar{x}_{j}-\sum_{i\notin\mathcal{B}}\bar{a}_{ji}x_{i} \tag{9}\]
Here \(\bar{x}_{j}\) is the right-hand side value of the tableau row and \(\bar{a}_{ji}\) is the tableau entry for the row of basic variable \(x_{j}\) and column of variable \(x_{i}\). In the considered case of \(\mathbf{x}\geq\mathbf{0}\), the \(\bar{x}_{j}\) is the value of variable \(x_{j}\) at the basic solution, with the non-basic variables all taking values \(\mathbf{0}\).
A GMI cut is derived from applying the GMI inequality procedure from Subsection 3.3 to the aggregated equality constraint (9). This procedure is only applied to rows of the simplex tableau that correspond to integer variables with fractional LP solutions. This is because these rows have a fractional right hand side, and the resulting GMI inequality guarantees separation of the current LP solution. An inequality produced by this method is called a GMI cut.
Geometrically, a GMI cut is a split cut, and therefore also an intersection cut and a disjunctive cut. Specifically, it is an intersection cut for the split \(\mathcal{D}(\mathbf{\pi}^{G},\lfloor\bar{x}_{j}\rfloor)\), where \(\mathbf{\pi}^{G}\) is defined as follows:
\[\mathbf{\pi}^{G}\in\mathbb{Z}^{n},\quad\text{where }\mathbf{\pi}^{G}_{i}:=\begin{cases} \lfloor\bar{a}_{ji}\rfloor,&\text{if }(f_{i}\leq f_{0})\ \wedge\ i\notin\mathcal{B}\\ \lceil\bar{a}_{ji}\rceil,&\text{if }(f_{i}>f_{0})\ \wedge\ i\notin\mathcal{B}\\ 1,&\text{if }i=j\\ 0,&\text{if }(i\neq j)\wedge i\in\mathcal{B}\end{cases}\quad\forall i\in\{1,...,n\} \tag{10}\]
The GMI cut of the tableau row (9) is the strengthened version of the intersection cut obtained from the elementary split \(\mathcal{D}(\mathbf{\mathrm{e}}_{j},\lfloor\bar{x}_{j}\rfloor)\), see [30, 44]. Deriving a cut using the elementary split for the simplex tableau row (9) of variable \(x_{j}\) without the strengthening procedure results in the intersection cut (11). This cut is obtained by treating integer variables similarly to continuous ones in the GMI derivation.
\[\sum_{i\notin\mathcal{B},\bar{a}_{ji}\geq 0}\frac{\bar{a}_{ji}}{f_{0}}x_{i}-\sum_{i\notin\mathcal{B},\bar{a}_{ji}<0}\frac{\bar{a}_{ji}}{1-f_{0}}x_{i}\geq 1 \tag{11}\]
We denote this inequality as _weak-GMI_, and note that the GMI cut will always dominate the associated weak-GMI cut. We also note that the strengthening procedure consists of using the fractional coefficient values \(f_{i}\) instead of \(\bar{a}_{ji}\) for integer variables, and of partitioning those fractional coefficients \(f_{i}\) around \(f_{0}\).
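For comparison, a sketch of the weak-GMI cut (11), assuming the tableau row data \((\bar{a}_{ji},\bar{x}_{j})\) is available from the LP solver; since every variable is treated as continuous, no strengthening is applied:

```python
import math

def weak_gmi_coefficients(abar, xbar_j):
    """Coefficients of the weak-GMI (intersection) cut (11) for the tableau
    row x_j + sum_i abar[i] * x_i = xbar_j: no integrality information is
    used, so no coefficient is strengthened."""
    f0 = xbar_j - math.floor(xbar_j)
    return [a / f0 if a >= 0.0 else -a / (1.0 - f0) for a in abar]

print(weak_gmi_coefficients([3.6, -1.2], 2.5))  # [7.2, 2.4]; GMI gave [0.8, 2.4]
```

Because the strengthened coefficient of an integer variable never exceeds its weak counterpart, the GMI cut dominates the weak-GMI cut, as noted above.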
## 4 Cutting Plane Selection for Variable Selection
The core idea of our work is to use measures of cuts to evaluate and decide on corresponding branching candidates. Specifically, we will generate the GMI cut from the corresponding tableau row of each branching candidate, and use cut selection techniques to dictate branching decisions. We will additionally augment the default SCIP hybrid branching rule with history-based scores of already-computed GMI cuts from previous separation rounds.
Currently, history-based approaches, see [4, 5], are the backbone behind branching rules used in MILP solvers [6, 43]. _Pseudo-costs_[50], the most prolific case of history-based approaches, estimate scores for a branching candidate based on the historical objective value improvement of child nodes spawned from branching on the candidate. One can consider pseudo-costs as an approximation of _strong-branching_ scores, see e.g. [4], which are derived from directly solving the upper and lower LP relaxations of all branching candidates. In our approach, branching scores are derived from cut quality measures of cuts generated from each branching candidate. It is complementary to pseudo-costs in that it provides an objective-free measure. These cut-based scores can be integrated into SCIP's default scoring rule, using a history of cut quality measures from previously generated cuts. This is similar to other history-based scores, such as those based on bound inferences, conflict information, and subproblem infeasibility [5].
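For context, the following is a minimal sketch of the standard product-rule pseudo-cost score; the names and the epsilon guard are illustrative, see [4, 5] for the exact rules used in SCIP:

```python
import math

def pseudocost_score(psi_up, psi_down, x_lp, eps=1e-6):
    """psi_up / psi_down: historical average objective gain per unit of
    change when branching up / down on this variable; x_lp: its fractional
    LP value. The product rule rewards balanced gains in both children."""
    frac = x_lp - math.floor(x_lp)
    delta_up = psi_up * (1.0 - frac)   # estimated gain in the up child
    delta_down = psi_down * frac       # estimated gain in the down child
    return max(delta_up, eps) * max(delta_down, eps)

print(pseudocost_score(psi_up=2.0, psi_down=1.5, x_lp=3.4))  # 1.2 * 0.6 = 0.72
```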
The classical cut scoring measure is _efficacy4_, which denotes the Euclidean distance between the LP optimal solution and the cut hyperplane. Given a cut \((\mathbf{\alpha},\beta)\in\mathbb{R}^{n+1}\) and the LP optimal solution \(\mathbf{x}^{LP}\), efficacy is defined as:
Footnote 4: Main selection criterion for most MILP solvers, e.g., FICO Xpress 9.0 and SCIP 8.0
\[\mathtt{eff}(\mathbf{\alpha},\beta,\mathbf{x}^{LP}):=\frac{\mathbf{\alpha}^{\intercal }\mathbf{x}^{LP}-\beta}{\|\mathbf{\alpha}\|} \tag{12}\]
When scoring cuts for the purpose of branching, we will rely on efficacy as our cut measure. We note that there exists many more potential cut scoring measures [16, 22], however preliminary results of their inclusion led to negligible improvements.
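The resulting selection rule can be sketched as follows (names are ours). Since the GMI cut is expressed in the non-basic variables, which all take value zero at the LP vertex, the \(\geq 1\)-form cut is violated there by exactly 1, and its efficacy reduces to \(1/\|\mathbf{\alpha}\|\):

```python
import math

def efficacy_at_lp_vertex(alpha):
    """Efficacy of a >= 1 cut in non-basic space at the LP vertex,
    where all non-basic variables take value 0."""
    return 1.0 / math.sqrt(sum(a * a for a in alpha))

def select_branching_candidate(gmi_cuts):
    """gmi_cuts maps a branching candidate j to the coefficient list of the
    GMI cut generated from its tableau row (e.g. via gmi_coefficients from
    the Section 3 sketch); branch on the candidate with the best cut."""
    return max(gmi_cuts, key=lambda j: efficacy_at_lp_vertex(gmi_cuts[j]))

# toy usage: candidate 2 has the shortest cut vector, hence highest efficacy
print(select_branching_candidate({1: [0.5, 2.0], 2: [0.4, 0.3], 3: [1.0, 1.0]}))
```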
GMI cuts are not the only cuts associated with split disjunctions, or even the elementary split, i.e. branching decisions. For example, lift-and-project cuts [51, 52] are intersection cuts of elementary splits. The elementary splits from which these cuts are derived, however, are not necessarily related to the current LP basis, nor even necessarily related to a primal-feasible LP basis. Nevertheless, scoring measures for this family of cuts are also a potentially potent indicator of good branching decisions. We however restricted our study to GMI cuts which are readily computed and available for all variables in all MILP solvers.
Experiments
We conduct three experiments: first, we analyse the effectiveness of our initial approach compared to standard branching rules (Subsection 5.1). Then, we refine our approach to a history-based one, and determine the best parameter value for including our approach in the state-of-the-art branching rule _hybrid branching_ (Subsection 5.2). Finally, we compare our integrated branching rule to default SCIP with experiments run in exclusive mode, i.e. one job per machine (Subsection 5.3). We perform experiments on the MIPLIB 2017 benchmark set5[53], which we will now simply refer to as MIPLIB. For all these experiments we present two variants: Firstly, we use default SCIP on the original instances to analyse the impact on the out-of-the-box behaviour of a MILP solver. Secondly, we use SCIP with heuristics disabled and the optimal solution provided, which reduces random noise and emphasises the effect of branching rules.
Footnote 5: MIPLIB 2017 – The Mixed Integer Programming Library [https://miplib.zib.de/](https://miplib.zib.de/).
We define a run as an instance random-seed pair for which we use a given branching rule. All results are obtained by averaging results over the SCIP random seeds \(\{1,2,3,4,5\}\). For all experiments, SCIP 8.0.3 [6] is used, with PySCIPOpt[54] as the API, and Xpress 9.0.2[40] as the LP solver. For Subsections 5.1 and 5.2, experiments are run in non-exclusive mode on a cluster equipped with Intel Xeon Gold 6342 CPUs running at 2.80GHz, where each run is restricted to 2GB memory, 2h time limit, and the LP solver is restricted to a single thread. For Subsection 5.3 experiments are run in exclusive mode on a cluster equipped with Intel Xeon Gold 5122 CPUs running at 3.60GHz, where each run is restricted to 48GB memory, 2h time limit, and the LP solver is restricted to a single thread. The code used for all experiments is available and open-source6, and will be integrated in the next release of SCIP.
Footnote 6: [https://github.com/Opt-Mucca/branching-via-cut-selection](https://github.com/Opt-Mucca/branching-via-cut-selection)
For the entirety of our experiments, we filter out any instance that for any random seed was solved to optimality without branching, hit a memory limit, or encountered LP errors. Note that the instance is only filtered in a comparison of branching rules when one of the criteria is met for a run on one of the compared branching rules. When comparing results from branching rules, we use individual instance-seed pairs as data points as opposed to the aggregate performance over the random seeds. Additionally, when shifted geometric means are referenced, we use a shift of 100, 10s, and 1s for number of nodes, solving time, and branching time respectively. We finally note that certain instances were excluded when no optimal solution was available on the MIPLIB website.
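For reference, the shifted geometric mean used throughout can be computed as in the following sketch:

```python
import math

def shifted_geometric_mean(values, shift):
    """exp(mean(log(v + shift))) - shift; the shift damps the influence
    of very small values on the aggregate."""
    return math.exp(sum(math.log(v + shift) for v in values) / len(values)) - shift

print(shifted_geometric_mean([5.0, 50.0, 500.0], shift=10.0))  # ~67.1
```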
### Gomory Cut-Based Branching Rules
To rank the effectiveness of our GMI-based branching rules, we compare them against standard branching rules from the literature, with Table 1 containing a complete list.
The shifted geometric means over three performance metrics on our data sets are presented in Table 2. We observe the expected performance from the standard branching rules. _Fullstrong_ requires the fewest nodes to prove optimality over all data sets, while _hybrid_ is the branching rule that proves optimality most quickly. Our newly introduced branching rule, _GMI_, is regrettably inferior to default SCIP over all metrics and data sets; however, it clearly carries a positive signal, since it requires substantially fewer nodes than _random_ to prove optimality over all data sets. Most interesting is the relative performance of _weak-GMI_ against _GMI_, where _weak-GMI_ wins over all metrics and data sets. This suggests that the strengthened cut, while strictly better than the weaker version in a cutting plane context, has lost some level of the representation of the disjunction that the weaker cut is derived from.
For running time, we must also address the overhead of our branching rule. While ultimately faster per node than strong branching, we still need to generate a GMI cut for every branching candidate at every node. This overhead is significant, and is the reason why _random_ is on average faster to solve over MIPLIB both with
| Branching Rule | Description |
| --- | --- |
| _GMI_ | Generate GMI cuts from the tableau. Select the candidate whose cut has the largest efficacy. |
| _weak-GMI_ | Generate weak-GMI cuts from the tableau. Select the candidate whose cut has the largest efficacy. |
| _fullstrong_ | Solve the LP relaxations of the children nodes for all candidates, see [4]. |
| _hybrid_ | Reliability pseudo-cost / hybrid (default SCIP scoring rule, see [4, 5]). |
| _random_ | Select a random candidate. |

Table 1: Branching rules used in Experiment 5.1.
and without a provided solution. This is despite _random_ requiring over twice as many nodes. This can be verified by observing that _GMI_ and _weak-GMI_ are both much faster than _random_ when branching time is removed from consideration.
### History-Based GMI Branching
Our approaches _GMI_ and _weak-GMI_ were shown to make substantially better branching decisions than _random_, but were ultimately too slow, and were not as good as LP relaxation-based branching rules. The default SCIP branching rule, while dominated by pseudo-costs, is a hybrid method, with scores from a weighted sum of metrics. Most of these metrics are history-based, meaning that they use information from different parts of the solving process, and are quick to evaluate. Given that GMI cuts are already generated by SCIP throughout the solve process, we store, for each variable, the normalised efficacy of the most recent GMI cut generated from a tableau row when the variable is basic and fractional. This normalised efficacy can then be used to augment the branching candidate's score in default SCIP. We normalise efficacy by the maximum GMI cut efficacy of the given separation round, and note that we only store the normalised efficacy of the most recent cut if the non-normalised efficacy is above some epsilon tolerance. We stress here that this approach requires no additional overhead, as the cuts themselves as well as their efficacies are already computed in the separation process.
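A minimal sketch of the augmented score, with hypothetical names; the weight corresponds to the coefficient \(10^{-x}\) introduced below:

```python
def augmented_score(hybrid_score, last_gmi_efficacy, round_max_efficacy,
                    weight=1e-5, tol=1e-4):
    """Hybrid-branching score plus a small GMI-history term: the efficacy of
    the candidate's most recent GMI cut, normalised by the maximum efficacy
    of its separation round, and ignored below a small tolerance."""
    if last_gmi_efficacy is None or last_gmi_efficacy <= tol:
        return hybrid_score
    return hybrid_score + weight * (last_gmi_efficacy / round_max_efficacy)
```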
We denote our new branching rule _gmi_-\(10^{-x}\), where \(10^{-x}\) denotes the coefficient used in the weighted sum scoring rule for the normalised efficacy of the last generated GMI. The shifted geometric mean of performance metrics for various coefficient values are presented in Table 3. We observe that too high of a coefficient, as in _gmi_-\(10^{-2}\), results in worse performance than default SCIP over all metrics and all data sets. By decreasing the coefficient value, we see an improvement in performance, with _gmi_-\(10^{-5}\) being the best performing rule w.r.t. both nodes and solve time over all data sets. We also observe that the branching rules on either side of _gmi_-\(10^{-5}\), i.e. _gmi_-\(10^{-4}\) and _gmi_-\(10^{-6}\), always outperform default SCIP, indicating that \(10^{-5}\) is a sweet spot. We therefore conclude that \(10^{-5}\) is a good and robust coefficient choice for improving hybrid
Table 2: Shifted geometric mean results. Best branching rule per metric in **bold**.
branching, i.e. default SCIP, once again noting that it requires no additional overhead since branching time is functionally identical.
Instead of simply using the efficacy of the most recently generated GMI cut, we performed many preliminary experiments using the historical normalised average efficacy of all generated GMI cuts. While this approach also had parameter values that outperformed default SCIP, it was thoroughly outperformed by using the efficacy of the most recently generated GMI cut. For the implementation of both approaches, ignoring GMI cuts that only separated the LP solution by a marginal efficacy improved performance. We also performed preliminary experiments using a new homogeneous MILP instance set, SNDlib-MIPs [55], which was inspired by SNDlib [56]; the results, while better than default SCIP, were only a marginal improvement.
Up to this point, our runs were affected by our experimental setup, where memory limits reduced the size of instances that we considered, and the non-exclusive mode of the computation cluster introduced additional noise w.r.t. solve time. We therefore perform a more in-depth comparison of _hybrid_ and _gmi_-\(10^{-5}\) over MIPLIB in the following subsection.
### Improving Default SCIP
Our in-depth comparison presented in Table 4 shows that our branching rule clearly outperforms default SCIP's hybrid branching. Over MIPLIB both with and without a solution provided, our augmented branching method results in faster solve times, and fewer nodes. We stress here, however, that due to the size of the coefficient value for the normalised efficacy of the last generated GMI cut, our approach will often act more as a tie breaker rather than as the predominant branching decision.
The performance improvement of _gmi_-\(10^{-5}\) becomes even more apparent when we look only at affected instances, see Table 5. Over MIPLIB, we achieve a \(4\%\) speedup on affected instances, and require \(8\%\) fewer nodes. This improvement becomes even more apparent when an optimal solution is provided. Our approach however is outperformed by default on unsolved instances. We do stress that upon further investigation, we found no evidence that _gmi_-\(10^{-5}\) is outperformed on larger or more complex instances that solved within
Table 3: Shifted geometric mean results. Branching rules better than default in _italics_, best in **bold**.
the time limit. The large amount of affected instances indicates that the MILP solver ends up in situations where the scores of branching candidates, especially the pseudo-costs, are very similar.
Given the frequency of affected instance-seed pairs when comparing our branching rule to SCIP default, we do an analysis of the initialisation of the different measures in hybrid branching at the root node. The results are presented in Table 6, where we reference [5] for definitions of conflict, inference, and cutoff. We observe that \(10.7\%\) of instance-seed pairs make a different root node branching decision than _hybrid_ when using _gmi_-\(10^{-5}\) as a tiebreaker, and that \(54.4\%\) of variables have at least one GMI cut generated from tableau rows when they were fractional and basic. This initialisation ratio is important, as it is substantially larger than that of
| | pseudo-cost | conflict | inference | cutoff | gmi |
| --- | --- | --- | --- | --- | --- |
| All instance-seed pairs | 0.208 (0.428) | 0.0 (0.084) | 1 (0.920) | 0 (0) | 0.544 (0.537) |
| Affected instance-seed pairs (10.7%) | 0.833 (0.562) | 0.0 (0.011) | 1 (0.873) | 0 (0) | 0.580 (0.556) |

Table 6: Median (mean) proportion of fractional variables with branching scores initialised at the root node over MIPLIB with optimal solution provided and heuristics disabled. A variable is marked as initialised if there is at least one record of the appropriate score. Affected instance-seed pairs are those with different root branching decisions for _hybrid_ and _gmi_-\(10^{-5}\).
| Metric | Pairs | _hybrid_ | _gmi_-\(10^{-5}\) |
| --- | --- | --- | --- |
| **MIPLIB** | | | |
| Time (s) | 578 | 411 | **398** |
| **MIPLIB with optimal solution provided and heuristics disabled** | | | |
| Time (s) | 528 | 381 | **361** |

Table 4: Shifted geometric mean results. Best branching rule per metric in **bold**.
| Time (s) | Nodes | \(\Delta\)-Solved | \(\Delta\)-Dual | \(\Delta\)-Primal |
| --- | --- | --- | --- | --- |
| **MIPLIB** (67.1% instance-seed pairs affected) | | | | |
| **0.96** | **0.92** | -1 (578) | -11 (472) | -4 (472) |
| **MIPLIB with optimal solution provided and heuristics disabled** (69.5% instance-seed pairs affected) | | | | |
| **0.91** | **0.89** | -1 (528) | -6 (307) | - |

Table 5: Ratio of shifted geometric means for _gmi_-\(10^{-5}\) vs _hybrid_ over affected instances (solved to optimality for both branching rules). \(\Delta\) is the difference in wins over the instance-seed pairs for amount solved, and the respective bounds for unsolved instances. Entries are in bold when our approach is better than SCIP default.
pseudo-costs over the whole data set, and provides a new signal for branching that is well initialised before branching begins. We also note that affected instances have a much higher initialisation rate of pseudo-costs than average, indicating that a frequent issue is not necessarily the lack of strong branching initialisation, but rather that the best initialisations all take the same value.
Figure 6 shows the distribution of performance improvement per affected instance-seed pair. We observe a diverse distribution, with the majority of instance-seed pairs either performing at least \(10\%\) better or at least \(10\%\) worse in both number of nodes and solve time. This is surprising as our method only adds a relatively small value to the weighted sum branching score, and therefore acts as a tiebreaker. While many instances exhibit worse performance on _gmi_-\(10^{-5}\), there are consistently more instance-seed pairs that perform correspondingly better than default SCIP over all levels of improvement. This is evident both for number of nodes and for solve time, and is confirmed by individual Wilcoxon signed-rank tests over all data sets, where a p-value of at most 0.028 was observed.
## 6 Conclusion
In this paper, we developed a new branching rule based on the correspondence between Gomory mixed-integer cuts and split disjunctions, leveraging the efficacy of cutting planes as a measure for the relevance
Figure 6: Bar plots of relative improvement of _gmi_-\(10^{-5}\) compared to default SCIP over affected instance-seed pairs. (Left) Number of nodes. (Right) Solve time.
of a variable for branching. We used the branching rule with both unstrengthened and strengthened cuts, showing that the unstrengthened versions, which are directly derived from potential branching decisions, provide a better measure, reducing both the number of nodes and the solve time. Our branching rule results in low numbers of nodes while being less expensive than strong branching. When integrated into the state-of-the-art hybrid branching algorithm of SCIP, the score provided by our branching rule significantly reduces both solve time and number of nodes over MIPLIB 2017. Future work includes extending our idea beyond Gomory mixed-integer cuts to any cut that is linked to a split disjunction, e.g., lift-and-project cuts.
## Acknowledgements
We thank Tobias Achterberg, Antonia Chmiela, and Leona Gottwald for insightful discussions that helped this paper. The work for this article has been conducted in the Research Campus MODAL funded by the German Federal Ministry of Education and Research (BMBF) (fund numbers 05M14ZAM, 05M20ZBM). The described research activities are funded by the Federal Ministry for Economic Affairs and Energy within the project UNSEEN (ID: 03E11004-C).
|
2304.07537 | Gradient-less Federated Gradient Boosting Trees with Learnable Learning
Rates | The privacy-sensitive nature of decentralized datasets and the robustness of
eXtreme Gradient Boosting (XGBoost) on tabular data raise the needs to train
XGBoost in the context of federated learning (FL). Existing works on federated
XGBoost in the horizontal setting rely on the sharing of gradients, which
induce per-node level communication frequency and serious privacy concerns. To
alleviate these problems, we develop an innovative framework for horizontal
federated XGBoost which does not depend on the sharing of gradients and
simultaneously boosts privacy and communication efficiency by making the
learning rates of the aggregated tree ensembles learnable. We conduct extensive
evaluations on various classification and regression datasets, showing our
approach achieves performance comparable to the state-of-the-art method and
effectively improves communication efficiency by lowering both communication
rounds and communication overhead by factors ranging from 25x to 700x. Project
Page: https://flower.ai/blog/2023-04-19-xgboost-with-flower/ | Chenyang Ma, Xinchi Qiu, Daniel J. Beutel, Nicholas D. Lane | 2023-04-15T11:48:18Z | http://arxiv.org/abs/2304.07537v3 | # Gradient-less Federated Gradient Boosting Trees with Learnable Learning Rates
###### Abstract.
The privacy-sensitive nature of decentralized datasets and the robustness of eXtreme Gradient Boosting (XGBoost) on tabular data raise the needs to train XGBoost in the context of federated learning (FL). Existing works on federated XGBoost in the horizontal setting rely on the sharing of gradients, which induce per-node level communication frequency and serious privacy concerns. To alleviate these problems, we develop an innovative framework for horizontal federated XGBoost which does not depend on the sharing of gradients and simultaneously boosts privacy and communication efficiency by making the learning rates of the aggregated tree ensembles learnable. We conduct extensive evaluations on various classification and regression datasets, showing our approach achieves performance comparable to the state-of-the-art method and effectively improves communication efficiency by lowering both communication rounds and communication overhead by factors ranging from 25x to 700x.
1. _High communication overhead_. Protecting the per-node sharing of gradients and hessians typically requires extra cryptographic calculations. Thus, the high communication overhead makes it difficult to deploy horizontal federated XGBoost for practical uses.
2. _Serious privacy concerns_. The sharing of gradients and even confidential information was proved to be insecure in the distributed training of ML models (Kang et al., 2017; Li et al., 2018). As the training data can be reconstructed from gradients, such sharing needs to be protected.
Existing research on horizontal federated XGBoost tackles the aforementioned two problems by seeking a trade-off between privacy and communication costs. A few works take stronger defenses against privacy leaks. FedXGB (Li et al., 2018) developed a new secure aggregation protocol by applying homomorphic encryption and secret sharing on shared parameters directly. However, this induces high communication and computation overhead at per-node level communication frequency. Some works decrease the resolution of the raw data distribution by generating a surrogate representation using gradient histogram (Kang et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2018). Histogram-based methods accelerate the training process by building quantile sketch approximation, but the communication frequency still correlates to the depth of the trees. Besides, they can still leak privacy because the gradients related to the bins and the thresholds can be inferred (Li et al., 2018). Other works obfuscate the raw data distribution with methods including clustering-based k-anonymity (Li et al., 2018) and locality-sensitive hashing (Li et al., 2018). Although the required communication overhead is less than encryption-based methods, these approaches have a trade-off between model performance and the number of clients.
In this work, we ask the fundamental question: _is it possible to construct a federated XGBoost in the horizontal setting without relying on the sharing of gradients and hessians?_ In this way, we can simultaneously boost privacy and disentangle the per-node level communication frequency. We find it to be possible by formulating an important intuition: as the local datasets of clients can be heterogeneous in the horizontal setting, using a fixed learning rate for each tree may be too weak, since each tree can make different amounts of mistakes on unseen data with distribution shifts. To this end, we make the learning rates of the aggregated tree ensembles learnable by training a small one-layer 1D CNN with kernel size and stride equal to the number of trees in each client tree ensemble, using the prediction outcomes directly as inputs. This novel framework preserves privacy: the clients only need to send the constructed tree ensemble to the server, and the sharing of gradients and hessians, which may leak sensitive information, is not required. In addition, the number of communication rounds is independent of any hyperparameter related to the trained XGBoost. In practice, we find 10 communication rounds to be sufficient for the global federated XGBoost model to reach performance comparable to the state-of-the-art method. Moreover, the total communication overhead to train a global federated XGBoost model is independent of the dataset size, and is lower than that of previous works by factors in the order of tens to hundreds.
The main contributions of this work are summarized as follows:

* We propose **FedXGBllr**, a novel privacy-preserving framework for horizontal federated XGBoost with learnable learning rates, which does not rely on the sharing of gradients and hessians.
* Our framework makes the number of communication rounds independent of the hyperparameters of the trained XGBoost model, lowering both communication rounds and communication overhead by factors ranging from 25x to 700x.
* We conduct extensive evaluations on various classification and regression datasets, showing that our approach achieves performance comparable to the state-of-the-art method.
The optimal weight \(w_{j}^{*}\) and objective \(obj^{*}\) are derived from the objective function involving regularization terms:
\[w_{j}^{*}=-\frac{G_{j}}{H_{j}+\lambda},\quad obj^{*}=-\frac{1}{2}\sum_{j=1}^{T} \frac{G_{j}^{2}}{H_{j}+\lambda}+\gamma T \tag{4}\]
where \(T\) is the leaf node number and \(\lambda\) and \(\gamma\) are the regularization for the leaf weights and leaf number, respectively.
From root to leaf nodes, the best split can be found by maximizing \(SplitGain=obj^{*}_{before}-obj^{*}_{after}\), which is:
\[SplitGain=\frac{1}{2}\Big[\,\frac{G_{L}^{2}}{H_{L}+\lambda}+\frac{G_{R}^{2}}{H_{R}+\lambda}-\frac{(G_{L}+G_{R})^{2}}{H_{L}+H_{R}+\lambda}\Big]-\gamma \tag{5}\]
where \(G_{L}\) and \(H_{L},G_{R}\), and \(H_{R}\) are the sums of the gradients and hessians of the data samples partitioned into the left and right branch based on the splitting point's feature constraint.
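To make the split search concrete, the sketch below evaluates Eq. (5) for one candidate split from per-sample gradients and hessians. It is a minimal illustration with our own variable names, not the xgboost library's internal implementation.

```python
import numpy as np

def split_gain(g, h, left_mask, lam=1.0, gamma=0.0):
    """Evaluate Eq. (5) for one candidate split.

    g, h      : per-sample gradients and hessians at the current node
    left_mask : boolean array marking samples routed to the left branch
    lam, gamma: regularization on leaf weights and on the number of leaves
    """
    G_L, H_L = g[left_mask].sum(), h[left_mask].sum()
    G_R, H_R = g[~left_mask].sum(), h[~left_mask].sum()
    return 0.5 * (G_L ** 2 / (H_L + lam)
                  + G_R ** 2 / (H_R + lam)
                  - (G_L + G_R) ** 2 / (H_L + H_R + lam)) - gamma

# toy check with squared loss, where g = prediction - y and h = 1
y = np.array([1.0, 1.2, 5.0, 5.5])
g, h = np.zeros_like(y) - y, np.ones_like(y)
print(split_gain(g, h, np.array([True, True, False, False])))  # large positive gain
```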
## 3. Method
In this section, we provide a detailed description of our approach. We first formulate our intuitions in Section 3.1. We then operationalize these intuitions in Section 3.2 and discuss how to learn the learning rates using the proposed, interpretable one-layer 1D CNN in Section 3.3. Finally, we develop the new framework **FedXGBllr** to train federated XGBoost in Section 3.4.
### Intuitions
#### A fixed learning rate is too weak
Local datasets of clients participating in FL can be heterogeneous (i.e., non-IID). A model trained on a client's local dataset converges to that client's local optimum. When the model is sent to other clients and evaluated on their local datasets, its performance degrades because different clients' local optima are divergent. The adverse effects of data heterogeneity in FL on NN-based approaches are widely researched (Han et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019). More recent works demonstrate that XGBoost also suffers deterioration in model performance under heterogeneous local datasets (Li et al., 2018; Li et al., 2019).
We argue that the core reason causing performance degradation, when the built XGBoost model is evaluated on other unseen datasets with distribution shifts, is that each tree in the tree ensemble makes different amounts of mistakes.
Consider the example illustrated in Fig. 2. We have an XGBoost model consisting of \(M\) trees in total, where \(f_{t}\) denotes the \(t\)-th tree, \(t=1,\ldots,M\). The XGBoost model is trained on the dataset \(\{x_{i}^{*},y_{i}^{*}\}_{i=1}^{N}\) for a regression task. We send this XGBoost model to two other clients and evaluate it on their respective local datasets, \(S_{1}\) and \(S_{2}\).
The prediction outcomes of the first three trees in the XGBoost tree ensemble on two data samples \(\{x_{a}^{1},y_{a}^{1}\}\in S_{1}\) and \(\{x_{b}^{2},y_{b}^{2}\}\in S_{2}\) are also labeled in Fig. 2. Their ground truths are equal: \(y_{a}^{1}=y_{b}^{2}=100\). Since the local datasets \(S_{1}\) and \(S_{2}\) belong to two heterogeneous clients, the trees perform differently across data samples. The first tree \(f_{1}\) gives a good initial prediction for \(x_{b}^{2}\) (110) but not for \(x_{a}^{1}\) (60). The second and third trees \(f_{2}\) and \(f_{3}\), on the contrary, sufficiently correct the residuals made by \(f_{1}\) for \(x_{a}^{1}\) (30, 5) but not for \(x_{b}^{2}\) (-1, 20). In this case, a fixed learning rate (e.g., \(\eta=0.3\)) may be too weak because ideally we want a higher learning rate for \(f_{2}(x_{a}^{1})\) and \(f_{3}(x_{a}^{1})\) but a lower learning rate for \(f_{2}(x_{b}^{2})\) and \(f_{3}(x_{b}^{2})\).
#### Moving towards the global optima
As explained previously, data heterogeneity causes the XGBoost models trained on different clients' local datasets to converge to local optima that are far from each other. Consequently, given an unseen data sample, these XGBoost tree ensembles output different prediction results. However, among all XGBoost tree ensembles, some can give more accurate predictions because the unseen data sample may be closer to the underlying distribution of their training datasets. Thus, applying a weighted sum to the diverse prediction results given by all XGBoost tree ensembles can lead to a more accurate final prediction value, helping us move towards the global optima.
It is important to point out that the approach of using a weighted sum to converge to the global optima has proven effective in prior literature. FedAvg (Zhou et al., 2018) used a weighted sum of the aggregated model parameters according to the number of data samples present in the clients' local datasets, and several works have provided theoretical convergence guarantees for this method (Li et al., 2019; Li et al., 2019). Later FL strategies such as FedProx (Li et al., 2019) also adopted the weighted sum of aggregated model parameters.
### Tree Ensembles Aggregation
Suppose there are \(K\) clients participating in the training of federated XGBoost, denoted \((O_{1},O_{2},...,O_{K})\). All clients' local datasets have different sample IDs but the same feature dimension \(D\). Each client trains an XGBoost tree ensemble consisting of \(M\) trees using its local dataset, where \(f_{O_{t}^{k}}\) denotes the \(t\)-th tree constructed by client \(k\), \(t=1,\ldots,M\) and \(k=1,\ldots,K\). To facilitate our intuitions, the final prediction result for an arbitrary data sample with feature dimension \(D\) is calculated as the weighted sum of all trees from all
Figure 2. An example of the impact of local data heterogeneity on the performance of XGBoost model.
clients, as shown in Fig. 3. Each vertical tree chain is the tree ensemble built by one client, where \(W_{t}^{k}\) is the learning rate assigned to \(f_{O_{t}^{k}}\) and \(Z_{k}\) is the weight applied to the prediction result calculated by client \(O_{k}\)'s tree ensemble. We refer to this system as the aggregated tree ensemble. Both \(W_{t}^{k}\) and \(Z_{k}\) are learnable, as will be detailed in Section 3.3.
For all clients to calculate the final prediction result, each client needs to receive the aggregated tree ensemble with the help of the server. First, each client ensures that within its tree ensemble, all trees are sorted (i.e., if the tree ensemble is stored in an array, the \(t\)-th tree is at the \(t\)-th position). Then, as shown in Fig. 4(a), each client sends their built XGBoost tree ensemble and client ID (\(CID=k\)) to the server. The server sorts and concatenates all tree ensembles using _CID_s such that the \(k\)-th tree ensemble is always adjacent to both \((k-1)\)-th and \((k+1)\)-th tree ensembles, as illustrated by the input layer of Fig. 4(b). Finally, the server broadcasts the sorted, aggregated tree ensembles to every client.
### Learnable Learning Rates by One-layer 1D CNN
We develop a method to learn the learning rate \(W_{t}^{k}\) assigned to each tree \(f_{O_{t}^{k}}\) by transforming the aggregated tree ensembles in Fig. 3 into a one-layer 1D CNN, as shown in Fig. 4(b). In the first 1D convolution layer, the inputs are the prediction outcomes of all trees. \(G\) is the chosen activation function.
**Interpretability** The small-sized model is interpretable. The kernel size and stride of the 1D convolution are equal to the number of trees, \(M\), in each client's tree ensemble. Thus, each channel of the 1D convolution holds the learnable learning rates (\(W_{t}^{k}\)) for all \(f_{O_{t}^{k}}\) in the tree ensemble of a specific client \(k\), and the number of convolution channels can be understood as the number of learning-rate strategies that can be applied. The classification head, a fully connected (FC) layer, contains the weighting factors (\(Z_{k}\)) that balance the prediction outcomes of each client's tree ensemble and calculate the final prediction result; it is also updated during training. The incentive for introducing the activation \(G\) is to avoid overfitting, because a portion of the learned strategies will be deactivated. We set \(G\) to be the most widely used activation function, ReLU.
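A minimal PyTorch sketch of this one-layer 1D CNN is given below. The class and argument names are ours; the architecture follows the description above (kernel size = stride = number of trees per client, ReLU activation, FC head).

```python
import torch
import torch.nn as nn

class TreeEnsembleCNN(nn.Module):
    """One-layer 1D CNN over the K*M tree predictions (cf. Fig. 4(b)).

    Each convolution window covers exactly one client's tree ensemble, so each
    of the `channels` filters is one learning-rate strategy (the W_t^k), and
    the FC head holds the per-client weighting factors (the Z_k).
    """
    def __init__(self, num_clients: int, trees_per_client: int, channels: int = 64):
        super().__init__()
        self.conv = nn.Conv1d(1, channels, kernel_size=trees_per_client,
                              stride=trees_per_client)
        self.act = nn.ReLU()
        self.fc = nn.Linear(channels * num_clients, 1)

    def forward(self, tree_preds):           # (batch, K*M) per-tree outputs
        x = tree_preds.unsqueeze(1)          # (batch, 1, K*M)
        x = self.act(self.conv(x))           # (batch, channels, K)
        return self.fc(x.flatten(1))         # (batch, 1) final prediction

# usage: K = 5 clients with M = 100 trees each
model = TreeEnsembleCNN(num_clients=5, trees_per_client=100)
out = model(torch.randn(8, 500))
```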
### FedXGBllr
We introduce the new framework, **FedXGBllr**, to train a global federated XGBoost model by learning the learning rate for each tree with FL. The global federated XGBoost model consists of all clients' locally trained XGBoost tree ensembles and the globally trained one-layer 1D CNN. The detailed procedure is shown in Algorithm 1. At round 0 (lines 1 to 7), each client first trains its local XGBoost tree ensemble. The server then conducts tree ensemble aggregation and CNN initialization. After receiving the aggregated tree ensemble, all clients calculate the prediction outcomes of the aggregated tree ensemble on their local data samples. These prediction outcomes are the inputs of the CNN. It is worth noting that the clients only build XGBoost models at round 0, and the aggregated tree ensemble is fixed thereafter. For the federated training of the one-layer 1D CNN from round 1 onward (line 8), the protocol follows the standard FL algorithm, and we use FedAvg (Zhu et al., 2017).
In FedXGBllr, the number of communication rounds equals the number of FL training rounds (\(R\)), because we send only the trees (at round 0) and the CNN's model parameters (from round 1 onward).
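The round-0 client step can be sketched as follows. Recovering per-tree prediction outcomes by differencing cumulative margins is our assumption about one convenient way to expose them with the xgboost package; the paper does not specify this detail.

```python
import numpy as np
import xgboost as xgb

def train_local_ensemble(X, y, num_trees, eta=0.1, max_depth=8):
    """Round-0 client step (Algorithm 1): fit the local XGBoost ensemble."""
    model = xgb.XGBRegressor(n_estimators=num_trees, learning_rate=eta,
                             max_depth=max_depth)
    model.fit(X, y)
    return model

def per_tree_predictions(model, X):
    """Per-tree prediction outcomes: the input features of the 1D CNN.

    Differencing cumulative margins over tree prefixes recovers each tree's
    output; the first column also absorbs XGBoost's global base_score offset.
    """
    booster, dmat = model.get_booster(), xgb.DMatrix(X)
    M = model.n_estimators
    cum = np.stack([booster.predict(dmat, iteration_range=(0, t),
                                    output_margin=True)
                    for t in range(1, M + 1)], axis=1)       # (n, M)
    return np.concatenate([cum[:, :1], np.diff(cum, axis=1)], axis=1)
```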
## 4. Experiments
In this section, we conduct extensive experiments to validate the effectiveness of our approach. We start by describing the experiment setup and implementation details in Section 4.1. We then discuss the experimental results with comparisons to the centralized baseline and the state-of-the-art method
Figure 4. The pipeline. (a) tree ensembles aggregation and (b) one-layer 1D CNN to study the learning rates and output the final prediction result.
Figure 3. The aggregated tree ensemble. The final prediction is given by the weighted sum of all trees.
in Section 4.2. Finally, we provide ablation studies and analysis to justify the interpretability and low communication overhead of our approach in Section 4.3.
### Experiment Setup and Implementations
**Comparison methods** We benchmark our method against one of the state-of-the-art and most influential works on horizontal federated XGBoost, SimFL (Zhu et al., 2019), which adopts locality-sensitive hashing in the context of FL. Unlike our method, SimFL trains the global XGBoost model by sharing weighted gradients across rounds. We also use centralized XGBoost trained on the whole dataset as a baseline.
**Dataset** Following SimFL (Zhu et al., 2019), we evaluate our method on the same six tabular datasets for classification. We also conduct experiments on four tabular datasets for regression. All datasets can be downloaded from the LIBSVM data website 1. The information for each dataset is summarized in Table 1. For all datasets, the training-to-test set ratio is \(0.75:0.25\) with random shuffling. The test set is used as the global test set on the server side. We equally divide the training set according to the number of clients and assign the partitioned datasets to the clients as their local datasets. SimFL (Zhu et al., 2019) only conducts experiments with 2 clients. We also provide results of our method with 5 and 10 clients.
Footnote 1: [https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/)
**Evaluation metric** We report the performance on classification and regression datasets using Accuracy and Mean Squared Error (MSE) respectively, which is common practice.
**Implementation details** We use the Python package xgboost to train the local XGBoost models. Following SimFL (Zhu et al., 2019), the maximum depth of all trees is set to 8. For our implementation, we set the number of trees in each tree ensemble to be 500 divided by the number of clients. The initial learning rate (\(\eta\)) is the same for all trees and is set to 0.1. Note that \(\eta\) is a fixed hyperparameter of XGBoost (explained in Section 2), and is not the learnable learning rates (\(W_{t}^{k}\)) that are refined by the one-layer 1D CNN (explained in Section 3.3). For the globally trained one-layer 1D CNN and FL infrastructures, including clients and a server, we implement our method with PyTorch under Flower (Beng et al., 2019), an end-to-end FL framework. For the CNN, we employ Kaiming initialization (Krizhevsky et al., 2014) and set the number of convolution channels to 64. For each client, we train the CNN using Adam (Kingma and Ba, 2014) with learning rate (\(\alpha\)) 0.001, \(\beta_{1}\) momentum 0.5, and \(\beta_{2}\) momentum 0.999. The local
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Dataset** & **Task Type** & **Data No.** & **Dimension** & **Size** \\ \hline
**a9a** & classification & 32,561 & 123 & 16MB \\
**cod-rna** & classification & 59,535 & 8 & 2.1MB \\
**ijcnn1** & classification & 49,990 & 22 & 4.4MB \\
**real-sim** & classification & 72,309 & 20,958 & 6.1GB \\
**HIGGS** & classification & 1,000,000 & 28 & 112MB \\
**SUSY** & classification & 1,000,000 & 18 & 72MB \\ \hline
**abalone** & regression & 4,177 & 8 & 253KB \\
**cpusmall** & regression & 8,192 & 12 & 684KB \\
**space\_ga** & regression & 3,167 & 6 & 553KB \\
**YearPredictionMSD** & regression & 515,345 & 90 & 615MB \\ \hline \hline \end{tabular}
\end{table}
Table 1. Summary of datasets
epoch (\(E\)) is set to 100, and the batch size (\(B\)) is set to 64. We train on one Nvidia Tesla V100 GPU.
### Experimental Results
Table 2 presents the quantitative results of FedXGBllr. The number of communication rounds (\(R\)) is set to 10. For all experiments, we report the average of 5 runs. From the results, our approach outperforms or reaches accuracy comparable to SimFL (Krizhevsky et al., 2017) and the centralized baselines on all six classification datasets with 2 clients. For the regression datasets, our approach achieves comparable or slightly higher MSE than the centralized baseline.
For both classification and regression datasets, our method performs better on larger datasets. We hypothesize this is due to the generalization capability of CNN scaling up with the volume of data. Additionally, as the number of clients increases from 2 to 5 and 10, the performance slightly decreases. We think it is reasonable because FL is harder with more clients (Krizhevsky et al., 2017).
The results suggest 10 rounds are sufficient for FedXGBllr to build a good global federated XGBoost model. However, it is worth mentioning that the number of rounds needed to reach good performance correlates with the number of local epochs (\(E\)) to train the one-layer 1D CNN on the client side. A higher \(E\) may require fewer communication rounds (consider the extreme when \(E=1\)). Our implementation uses \(E=100\), and leaves the optimal trade-off of \(E\) and \(R\) for future studies.
### Ablation Studies and Analysis
#### Communication overhead
We compare the total communication overhead for building a global federated XGBoost model between our approach and the baseline, SimFL (Krizhevsky et al., 2017). For all comparisons, we assume the number of clients \(K\) to be 10 and the total number of built XGBoost trees \(M\) to be 500 with a depth \(L\) of 8, to be consistent with SimFL's efficiency experiments. The communication overhead of our approach is independent of the dataset size \(N\), and can be expressed as:
\[2K(M\times SZ\_t+R\times SZ\_nn) \tag{6}\]
where \(R\) is the number of FL training rounds, \(SZ\_t\) is the size of each tree in bytes, and \(SZ\_nn\) is the size of the one-layer 1D CNN in bytes. Therefore, \(2KM\times SZ\_t\) is the communication overhead during tree ensembles aggregation at round 0, and \(2KR\times SZ\_nn\) is the communication overhead of the federated training of the CNN from round 1 to \(R\). We assume \(R\) to be 10 because this number is sufficient for our approach to reach good performance, as explained in Section 4.2. \(SZ\_nn\) is 0.03MB (Table 5). In practice, \(SZ\_t\) is negligible, as the size of 500 trees built by the xgboost package is only 48 bytes.
The communication overhead of SimFL (Krizhevsky et al., 2017) in bytes is given by \(8KN\times Hash+8M[N+(2^{L}-1)(K-1)]\), where \(Hash\) is the number of hash functions. Table 3 compares the total communication overhead. We include results for the six classification datasets because SimFL (Krizhevsky et al., 2017) provided exact values for them. The communication overhead of our approach is significantly lower, especially as the dataset size scales up. We reduce the communication cost by at least a factor of 25, and by up to a factor of 700.
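For concreteness, a small sketch evaluating both overhead formulas; the per-tree size and the number of hash functions below are illustrative assumptions, not values reported in the paper.

```python
def fedxgbllr_overhead(K, M, R, sz_tree, sz_nn):
    """Eq. (6): tree aggregation at round 0 plus R rounds of CNN exchange (bytes)."""
    return 2 * K * (M * sz_tree + R * sz_nn)

def simfl_overhead(K, N, M, L, n_hash):
    """SimFL's overhead formula as quoted above (bytes)."""
    return 8 * K * N * n_hash + 8 * M * (N + (2 ** L - 1) * (K - 1))

# K = 10, M = 500, L = 8, R = 10 as in the text; sz_tree and n_hash are guesses
print(fedxgbllr_overhead(K=10, M=500, R=10, sz_tree=100, sz_nn=int(0.03 * 2 ** 20)))
print(simfl_overhead(K=10, N=32_561, M=500, L=8, n_hash=4))  # a9a-sized N
```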
Although we did not compare exact numbers, our communication overhead is also significantly lower than that of encryption-based methods such as FedXGB (Krizhevsky et al., 2017), whose training shares encryption keys and whose communication cost scales linearly with both the input size and the number of clients.
#### Model interpretability
We ask whether the interpretability of our one-layer 1D CNN is compatible with its high performance. We replace the first 1D convolution layer, whose kernel size and stride equal the number of trees in each client's tree ensemble, with: 1) a standard convolution with kernel size 3 and stride 1, and 2) an FC layer with dimension 256 (removing the flatten layer). The number of communication rounds is set to 10. We fix the number of clients at 5. The results are shown in Table 4. We also show the number of parameters and total size of each model in Table 5.
From the results, our one-layer 1D CNN reaches the best performance on all datasets despite having the smallest number of parameters and total size. This suggests the effectiveness and interpretability of our CNN model. We find that for all datasets, the performance gap between our interpretable CNN and the 2-layer FCNN is much larger than the gap
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Dataset** & \multicolumn{3}{c}{**FedXGBllr**} & **SimFL (Krizhevsky et al., 2017)** & **Centralized Baseline** \\ \cline{2-5} & 2 clients & 5 clients & 10 clients & 2 clients & \\ \hline
**a9a** & 85.1 & 85.1 & 84.7 & 84.9 & 84.9 \\
**cod-rna** & 97.0 & 96.5 & 95.8 & 94.0 & 93.9 \\
**ijcnn1** & 96.3 & 96.0 & 95.3 & 96.4 & 96.3 \\
**real-sim** & 93.4 & 93.8 & 92.7 & 92.9 & 93.5 \\
**HIGGS** & 71.5 & 70.9 & 70.3 & 70.7 & 70.7 \\
**SUSY** & 82.5 & 81.7 & 81.2 & 80.4 & 80.0 \\ \hline
**abalone** & 3.6 & 4.4 & 4.9 & - & 1.3 \\
**cpusmall** & 8.0 & 8.5 & 9.5 & - & 6.7 \\
**space\_ga** & 0.024 & 0.033 & 0.034 & - & 0.024 \\
**YearPredictionMSD** & 80.3 & 82.7 & 91.6 & - & 80.5 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Quantitative results of FedXGBllr compared to SimFL and centralized baseline - Accuracy \(\uparrow\) (for the first six classification datasets), MSE \(\downarrow\) (for the last four regression datasets).
between our interpretable CNN and the CNN with standard kernel size and stride. Moreover, the gap widens as the dataset size increases. We argue that, in addition to our reasoning in Section 3.3, this is because the CNN can leverage the temporal information across the tree ensembles built by the clients, and our interpretable CNN has the right amount of temporal resolution (i.e., kernel size = stride).
## 5. Conclusion and Future Works
We propose a novel framework, FedXGBllr, for horizontal federated XGBoost which does not rely on the sharing of gradients and hessians. Extensive evaluations show that our approach is robust, interpretable, and communication-efficient. Specifically, we reach performance comparable to the state-of-the-art method and reduce communication cost by factors ranging from 25x to 700x. In this work we use FedAvg (Krizhevsky et al., 2015) to learn the learnable learning rates by training a small one-layer 1D CNN. It is important to point out that more advanced FL training algorithms can also be applied and may achieve better performance; we leave this for future studies as it is not the focus of this research. Future work also includes extending FedXGBllr to the vertical setting.
|
2306.12025 | Averaging symmetric positive-definite matrices on the space of
eigen-decompositions | We study extensions of Fr\'{e}chet means for random objects in the space
${\rm Sym}^+(p)$ of $p \times p$ symmetric positive-definite matrices using the
scaling-rotation geometric framework introduced by Jung et al. [\textit{SIAM J.
Matrix. Anal. Appl.} \textbf{36} (2015) 1180-1201]. The scaling-rotation
framework is designed to enjoy a clearer interpretation of the changes in
random ellipsoids in terms of scaling and rotation. In this work, we formally
define the \emph{scaling-rotation (SR) mean set} to be the set of Fr\'{e}chet
means in ${\rm Sym}^+(p)$ with respect to the scaling-rotation distance. Since
computing such means requires a difficult optimization, we also define the
\emph{partial scaling-rotation (PSR) mean set} lying on the space of
eigen-decompositions as a proxy for the SR mean set. The PSR mean set is easier
to compute and its projection to ${\rm Sym}^+(p)$ often coincides with SR mean
set. Minimal conditions are required to ensure that the mean sets are
non-empty. Because eigen-decompositions are never unique, neither are PSR
means, but we give sufficient conditions for the sample PSR mean to be unique
up to the action of a certain finite group. We also establish strong
consistency of the sample PSR means as estimators of the population PSR mean
set, and a central limit theorem. In an application to multivariate
tensor-based morphometry, we demonstrate that a two-group test using the
proposed PSR means can have greater power than the two-group test using the
usual affine-invariant geometric framework for symmetric positive-definite
matrices. | Sungkyu Jung, Brian Rooks, David Groisser, Armin Schwartzman | 2023-06-21T05:23:36Z | http://arxiv.org/abs/2306.12025v1 | # Averaging symmetric positive-definite matrices on the space of eigen-decompositions
###### Abstract
We study extensions of Frechet means for random objects in the space \(\mathrm{Sym}^{+}(p)\) of \(p\times p\) symmetric positive-definite matrices using the scaling-rotation geometric framework introduced by Jung et al. [_SIAM J. Matrix. Anal. Appl._**36** (2015) 1180-1201]. The scaling-rotation framework is designed to enjoy a clearer interpretation of the changes in random ellipsoids in terms of scaling and rotation. In this work, we formally define the _scaling-rotation (SR) mean set_ to be the set of Frechet means in \(\mathrm{Sym}^{+}(p)\) with respect to the scaling-rotation distance. Since computing such means requires a difficult optimization, we also define the _partial scaling-rotation (PSR) mean set_ lying on the space of eigen-decompositions as a proxy for the SR mean set. The PSR mean set is easier to compute and its projection to \(\mathrm{Sym}^{+}(p)\) often coincides with SR mean set. Minimal conditions are required to ensure that the mean sets are non-empty. Because eigen-decompositions are never unique, neither are PSR means, but we give sufficient conditions for the sample PSR mean to be unique up to the action of a certain finite group. We also establish strong consistency of the sample PSR means as estimators of the population PSR mean set, and a central limit theorem. In an application to multivariate tensor-based morphometry, we demonstrate that a two-group test using the proposed PSR means can have greater power than the two-group test using the usual affine-invariant geometric framework for symmetric positive-definite matrices.
Primary 62R30; secondary 62E20
scaling-rotation distance, statistics on manifolds, strong consistency, central limit theorem
## 1 Introduction
Recently, much work has been done to advance the statistical analysis of random symmetric positive-definite (SPD) matrices. Applications in which data arise as SPD matrices include analysis of diffusion tensor imaging (DTI) data (Alexander, 2005; Batchelor et al., 2005), multivariate tensor-based morphometry (TBM) (Lepore et al., 2008; Paquette et al., 2017), and tensor computing (Pennec, Fillard and Ayache, 2006). In this paper we consider the setting in
which we have a random sample of SPD matrices and wish to estimate a population mean.
Location estimation is an important first step in the development of many statistical techniques. For applications in which data are SPD matrices, these techniques include two sample hypothesis testing (Schwartzman, Dougherty and Taylor, 2010) for comparing average brain scans from two groups of interest, principal geodesic analysis (Fletcher et al., 2004) for visualizing major modes of variation in a sample of SPD matrices, and weighted mean estimation, which has useful applications in diffusion tensor processing, including fiber tracking, smoothing, and interpolation (Batchelor et al., 2005; Carmichael et al., 2013).
One of the challenges of developing methods for analyzing SPD-valued data is that the positive-definiteness constraint precludes \(\mathrm{Sym}^{+}(p)\), the space of \(p\times p\) SPD matrices, from being a vector subspace of \(\mathrm{Sym}(p)\), the space of all symmetric \(p\times p\) matrices. This can be easily visualized for \(p=2\); the free coordinates (two diagonal elements and upper off-diagonal element) of all \(2\times 2\) SPD matrices in \(\mathrm{Sym}(2)\cong\mathbb{R}^{3}\) constitute an open convex cone. Hence, conventional estimation or inferential techniques developed for data that vary freely over Euclidean space may not be appropriate for the statistical analysis of SPD matrices. With this in mind, many location estimation frameworks for \(\mathrm{Sym}^{+}(p)\) have been developed in recent years, including the log-Euclidean framework (Arsigny et al., 2007), affine-invariant framework (Fletcher et al., 2004; Pennec, Fillard and Ayache, 2006), log-Cholesky framework (Lin, 2019), and Procrustes framework (Dryden, Koloydenko and Zhou, 2009; Masarotto, Panaretos and Zemel, 2019); see Feragen and Fuster (2017) for other examples. Given a sample of SPD matrices, most of these estimation methods amount to transforming the SPD-valued observations, averaging in the space of the transformed observations, and then mapping the mean of the transformed data into \(\mathrm{Sym}^{+}(p)\). For example, the log-Euclidean method maps each observation into \(\mathrm{Sym}(p)\) via the matrix logarithm, computes the sample mean of the transformed observations, and then maps that mean into \(\mathrm{Sym}^{+}(p)\) via the matrix exponential function, while the Procrustes size-and-shape method begins with averaging the Cholesky square roots of observations, and then maps the average \(\hat{L}\) to \(\mathrm{Sym}^{+}(p)\) as \(\hat{\Sigma}=\hat{L}\hat{L}^{T}\), where \(A^{T}\) denotes the transpose of a matrix \(A\).
While these geometric frameworks account for the positive-definiteness constraint of \(\mathrm{Sym}^{+}(p)\), it is not clear which, if any, of the log-Euclidean, affine-invariant, or Procrustes size-and-shape frameworks is most "natural" for describing deformations of SPD matrices. Motivated by the analysis of DTI data, a setting in which observations are SPD matrices represented as ellipsoids in \(\mathbb{R}^{3}\), Jung, Schwartzman and Groisser (2015) developed a different framework, called the scaling-rotation (SR) framework for \(\mathrm{Sym}^{+}(p)\). Under this framework, the distance between SPD matrices \(X\) and \(Y\) is defined as the minimal amount of rotation of axes and scaling of axis lengths necessary to deform the ellipsoid associated with \(X\) into the ellipsoid associated with \(Y\). For this, an SPD matrix \(X\) is decomposed into eigenvectors and eigenvalues, which respectively stand for rotations and scalings. The SR framework yields interpolation curves that have desirable properties, including constant rate of rotation and log-linear scaling
of eigenvalues, and is the only geometric framework (compared to the aforementioned frameworks) to produce both pure-scaling interpolation curves and pure-rotation curves when the endpoints differ by pure scaling or pure rotation. While interpolation approaches similar to the SR framework can be found in Wang et al. (2014) and Collard et al. (2014), only the SR framework addresses the non-uniqueness of eigen-decompositions (Groisser, Jung and Schwartzman, 2017, 2021). See Feragen and Fuster (2017) and Feragen and Nye (2020) for a comparison of the SR framework with other geometric frameworks for SPD matrices.
A major complication in developing statistical procedures using the SR framework is that eigen-decompositions are not unique. For example, an SPD matrix \(X=\operatorname{diag}(8,3)=\left(\begin{smallmatrix}8&0\\ 0&3\end{smallmatrix}\right)\) can be eigen-decomposed into either
\[X=U_{1}D_{1}U_{1}^{T},\quad U_{1}=\left(\begin{smallmatrix}1&0\\ 0&1\end{smallmatrix}\right),\ D_{1}=\left(\begin{smallmatrix}8&0\\ 0&3\end{smallmatrix}\right),\]
or
\[X=U_{2}D_{2}U_{2}^{T},\quad U_{2}=\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right),\ D_{2}=\left(\begin{smallmatrix}3&0\\ 0&8\end{smallmatrix}\right).\]
(There are in fact 4 distinct eigen-decompositions for \(\operatorname{diag}(8,3)\), if the eigenvector matrices are required to be orthogonal matrices of positive determinant.) Write \((U_{X},D_{X})\) for an eigen-decomposition (a pair of eigenvector and eigenvalue matrices) of an SPD matrix \(X\), and let \(\mathcal{F}\) be the eigen-composition map, i.e., \(\mathcal{F}(U_{X},D_{X})=U_{X}D_{X}U_{X}^{T}=X\) (see Definition 2.2). The SR framework defines the "distance" between \(X,Y\in\operatorname{Sym}^{+}(p)\) to be \(d_{\mathcal{SR}}(X,Y):=\inf d_{M}((U_{X},D_{X}),(U_{Y},D_{Y}))\), where the infimum is taken over all possible eigen-decompositions of both \(X\) and \(Y\), and \(d_{M}\) is the (geodesic) distance function on the space \(M(p)\) of eigen-decompositions (see Definition 2.1). \(\operatorname{Sym}^{+}(p)\) is a stratified space; the stratum to which \(X\in\operatorname{Sym}^{+}(p)\) belongs is determined by the topological structure of the fiber \(\mathcal{F}^{-1}(X)\) (the set of all eigen-decompositions corresponding to \(X\in\operatorname{Sym}^{+}(p)\)). The scaling-rotation distance \(d_{\mathcal{SR}}\) fails to be a true metric on \(\operatorname{Sym}^{+}(p)\), and is difficult to compute because the set we minimize over in the definition of \(d_{\mathcal{SR}}(X,Y)\) is a pair of these fibers (whose topology varies with the strata of \(X\) and \(Y\)). With these complications in mind, the goal of this paper is to establish location-estimation methods using the SR framework as a foundation for future methods that will inherit the interpretability of the framework.
If one of the well-established geometric frameworks, such as the affine-invariant or log-Cholesky frameworks, is used, then \(\operatorname{Sym}^{+}(p)\) is understood as a Riemannian manifold with a Riemannian metric tensor defined on the tangent bundle. The Riemannian metric gives rise to a distance function, say \(d\), and \((\operatorname{Sym}^{+}(p),d)\) is a metric space. For these metric spaces, the Frechet mean (Frechet, 1948) is a natural candidate for a location parameter, and conditions that guarantee uniqueness of Frechet means, convergence of empirical Frechet means to the population counterpart, and central-limit-theorem type results, are well-known (_cf._ Afsari, 2011; Bhattacharya and Patrangenaru, 2003, 2005; Bhattacharya and Lin, 2017; Huckemann, 2011a,b; Eltzner et al., 2021; Schotz, 2022).
But in the SR framework, since \(d_{\mathcal{SR}}\) is not a true metric on \(\mathrm{Sym}^{+}(p)\), many of the theoretical properties of Frechet means (if they are defined) are no longer guaranteed. Moreover, on a practical side, computing a scaling-rotation (SR) mean, defined as a minimizer over the sum of squared SR distances to observations, requires discrete optimization in general and is thus challenging to implement. As a proxy for the SR mean, we define a _partial scaling-rotation (PSR) mean_ on the space of eigen-decompositions; for a finite sample \(X_{1},\ldots,X_{n}\in\mathrm{Sym}^{+}(p)\), the PSR mean set is the set of minimizers
\[\operatorname*{argmin}_{(U,D)\in M(p)}\frac{1}{n}\sum_{i=1}^{n}\left\{\inf_{(U_{X},D_{X})\in\mathcal{F}^{-1}(X_{i})}d_{M}((U_{X},D_{X}),(U,D))\right\}^{2}. \tag{1.1}\]
See Section 3 for precise definitions and an iterative algorithm for computing a sample PSR mean. The PSR means can be thought of as a special case of generalized Frechet means, proposed in Huckemann (2011b) and studied in Huckemann (2011a); Huckemann and Eltzner (2021); Schotz (2019, 2022). The PSR means can be mapped to \(\mathrm{Sym}^{+}(p)\) (via the eigen-composition map \(\mathcal{F}\)), and we establish some sufficient conditions under which the PSR means are _equivalent_ to the SR mean. These conditions are related to the strata of \(\mathrm{Sym}^{+}(p)\) in which the sample and means are located.
Another artifact caused by the stratification of \(\mathrm{Sym}^{+}(p)\) is that the distance function \(d_{\mathcal{SR}}\) is not continuous on \(\mathrm{Sym}^{+}(p)\), and in principle we do not know whether an SR mean is well-defined. We show that the distance function \(d_{\mathcal{SR}}\), the cost function appeared in (1.1) for the PSR means, and their squares are _lower semicontinuous_, and thus are measurable, which guarantees that both the SR and PSR mean sets are well-defined. We also show that SR and PSR mean sets exist, under mild assumptions.
PSR means are never unique, due to the fact that eigen-decompositions are not unique. In the best case, there are \(2^{p-1}p!\) elements in the PSR mean set for a \(\mathrm{Sym}^{+}(p)\)-valued sample, corresponding to the number of distinct eigen-decompositions of any SPD matrix with no repeated eigenvalues. As a result, if a PSR mean set \(E_{n}^{(\mathcal{PSR})}\) consists of exactly \(2^{p-1}p!\) elements, then the corresponding \(\mathrm{Sym}^{+}(p)\)-valued mean, \(\mathcal{F}(E_{n}^{(\mathcal{PSR})})\) consists of a single element, and we may say that \(\mathcal{F}(E_{n}^{(\mathcal{PSR})})\) is unique. A sufficient condition to ensure such uniqueness will be given in Section 4.3 in terms of data-support diameter.
We also show that, with only a finite-variance condition, the sample PSR mean set is consistent with the population PSR mean set, in the sense of Bhattacharya and Patrangenaru (2003), following the now-standard technique laid out in Huckemann (2011b) (with modifications required by the fact that the cost function in (1.1) is not continuous). With the additional conditions needed to ensure the equivalence between PSR mean sets and the SR mean imposed, we conclude that the sample SR mean set is consistent with the (unique) SR mean. A type of central limit theorem for the PSR mean is also developed, in which the limiting normal distribution is defined on a tangent space of the space of eigen-decompositions. See Section 4 for theoretical properties of (partial) SR means, including existence, uniqueness, and asymptotic results. Although these
properties are developed to cope with the unique challenges (e.g. non-uniqueness of eigen-decompositions and the resulting stratification) coming from using the SR framework, we believe the course of our technical development will be instructive for developing statistics in other stratified Riemannian spaces.
Numerical results demonstrate the subtle difference between the SR mean and the PSR mean, and the advantage of (partial) SR means over means defined via other geometric frameworks. The potential advantage of the SR framework with PSR means is further demonstrated in an application to multivariate TBM for testing the shape difference in lateral ventricular structure in the brains of pre-term and full-term infants, using data from Paquette et al. (2017). In particular, an approximate bootstrap test based on PSR means is found to be more powerful than that based on the affine-invariant means of Pennec, Fillard and Ayache (2006). We conclude with practical advice on the analysis of SPD matrices and a discussion of potential future directions of research. Technical details, proofs, and additional lemmas that may be useful in other contexts are contained in Appendix B.
## 2 The Scaling-Rotation Framework
In this section we provide a brief overview of the scaling-rotation framework (Jung, Schwartzman and Groisser, 2015; Groisser, Jung and Schwartzman, 2017, 2021) for analyzing SPD-valued data. The motivation for the scaling-rotation framework is intuitive: Any \(X\in\text{Sym}^{+}(p)\) can be identified with the ellipsoid with surface coordinates \(\{y\in\mathbb{R}^{p}:y^{T}X^{-1}y=1\}\), so a measure of distance between \(X\) and \(Y\) can be defined as a suitable combination of the minimum amount of rotation of axes and stretching or shrinking of axes needed to deform the ellipsoid corresponding to \(X\) into the ellipsoid associated with \(Y\). Since the semi-axes and squared semi-axis lengths of the ellipsoid associated with an SPD matrix are its eigenvectors and eigenvalues, respectively, this _scaling-rotation distance_ is computed on the space of eigen-decompositions.
### Geometry of the Eigen-Decomposition Space
Recall that any \(X\in\text{Sym}^{+}(p)\) has an eigen-decomposition \(X=UDU^{T}\), where \(U\in SO(p)\), the space of \(p\times p\) rotation matrices, and \(D\in\text{Diag}^{+}(p)\), the space of \(p\times p\) diagonal matrices possessing positive diagonal entries. We denote the space of eigen-decompositions as \(M(p):=SO(p)\times\text{Diag}^{+}(p)\). The Lie groups \(SO(p)\) and \(\text{Diag}^{+}(p)\) carry natural bi-invariant Riemannian metrics \(g_{SO}\) and \(g_{\mathcal{D}^{+}}\), defined as follows. The tangent space at \(U\) of \(SO(p)\) is \(T_{U}(SO(p))=\{AU:A\in\mathfrak{so}(p)\}\), where \(\mathfrak{so}(p)\) is the space of \(p\times p\) antisymmetric matrices. At an arbitrary point \(U\in SO(p)\) we define \(g_{SO}\mid_{U}(A_{1},A_{2})=-\frac{1}{2}\text{tr}(A_{1}U^{T}A_{2}U^{T})\) for \(A_{1},A_{2}\in\mathfrak{so}(p)\), where \(\text{tr}(A)\) is the trace of the matrix \(A\). The tangent space \(T_{D}(\text{Diag}^{+}(p))=\{LD:L\in\text{Diag}(p)\}\) is canonically identified with \(\text{Diag}(p)\), the set of \(p\times p\) diagonal matrices, and we define \(g_{\mathcal{D}^{+}}\mid_{D}(L_{1},L_{2})=\text{tr}(L_{1}D^{-1}L_{2}D^{-1})\), for \(L_{1},L_{2}\in\text{Diag}(p)\). Given eigen
decompositions \((U_{1},D_{1})\) and \((U_{2},D_{2})\) of SPD matrices \(X_{1}\) and \(X_{2}\), we measure the distance between their eigen-decompositions using the following product metric:
**Definition 2.1**.: We define the _geodesic distance function_\(d_{M}\) on \(M(p)\), with a weighting parameter \(k>0\), by
\[d_{M}^{2}((U_{1},D_{1}),(U_{2},D_{2}))=kd_{SO}^{2}(U_{1},U_{2})+d_{\mathcal{D}^ {+}}^{2}(D_{1},D_{2}), \tag{2.1}\]
where \(d_{SO}(U_{1},U_{2})=\frac{1}{\sqrt{2}}\|\mathrm{Log}(U_{2}U_{1}^{T})\|_{F}\), \(d_{\mathcal{D}^{+}}(D_{1},D_{2})=\|\mathrm{Log}(D_{1})-\mathrm{Log}(D_{2})\|_ {F}\), and \(\|.\|_{F}\) denotes the Frobenius norm.
In Definition 2.1 and (2.2) below, \(\mathrm{Exp}(A)\) stands for the matrix exponential of \(A\), and \(\mathrm{Log}(R)\) for the principal matrix logarithm of \(R\).1 The weighting parameter \(k\) is a fixed constant throughout.
Footnote 1: The principal logarithm for rotation matrices is defined on the set \(\{R\in SO(p):R\) is not an involution\(\}\), a dense open subset of \(SO(p)\). When there exists no principal logarithm of \(R\), the notation \(\mathrm{Log}(R)\) denotes any solution \(A\in\mathfrak{so}(p)\) of \(\mathrm{Exp}(A)=R\) satisfying that \(\|A\|_{F}\) is the smallest among all such choices of \(A\). For such rare cases, the geodesic (2.2) is not unique, but \(\|\mathrm{Log}(R)\|_{F}\) is well defined.
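A direct NumPy sketch of Definition 2.1 (a helper of our own, using scipy's matrix logarithm for the principal log and elementwise logs for the diagonal factor):

```python
import numpy as np
from scipy.linalg import logm

def d_M(U1, D1, U2, D2, k=1.0):
    """Geodesic distance (2.1) between eigen-decompositions (U1, D1), (U2, D2)."""
    A = np.real(logm(U2 @ U1.T))           # principal log of U2 U1^T, in so(p)
    d_so = np.linalg.norm(A, 'fro') / np.sqrt(2.0)
    d_dp = np.linalg.norm(np.log(np.diag(D1)) - np.log(np.diag(D2)))
    return np.sqrt(k * d_so ** 2 + d_dp ** 2)
```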
For a geometric interpretation of the geodesic distance, note that the geodesic distance between eigen-decompositions \((U_{1},D_{1})\) and \((U_{2},D_{2})\) equals the length of the \(M(p)\)-valued geodesic
\[\gamma_{(U_{1},D_{1}),(U_{2},D_{2})}(t)=(\mathrm{Exp}(t\mathrm{Log}(U_{2}U_{1} ^{-1}))U_{1},\mathrm{Exp}(t\mathrm{Log}(D_{2}D_{1}^{-1}))D_{1}) \tag{2.2}\]
connecting \((U_{1},D_{1})\) and \((U_{2},D_{2})\), which is a minimal-length smooth curve connecting these two points when the tangent spaces of \(M(p)\) are equipped with the canonical inner product \(g_{M}=kg_{SO}\oplus g_{\mathcal{D}^{+}}\). The functions \(d_{\mathcal{D}^{+}}(D_{1},D_{2})\) and \(d_{SO}(U_{1},U_{2})\) in (2.1) have the following interpretations: \(d_{\mathcal{D}^{+}}(D_{1},D_{2})\) computes the Euclidean distance between \(\mathrm{Log}(D_{1})\) and \(\mathrm{Log}(D_{2})\), while \(d_{SO}(U_{1},U_{2})\) equals the magnitude of the rotation angle of \(U_{2}U_{1}^{-1}\) when \(p=2,3\).
The exponential map at \((U,D)\in M(p)\) is \(\mathrm{Exp}_{(U,D)}:T_{(U,D)}M(p)\to M(p)\), given by
\[\mathrm{Exp}_{(U,D)}((AU,LD))=(\mathrm{Exp}(A)U,\mathrm{Exp}(L)D). \tag{2.3}\]
The inverse of the exponential map at \((U,D)\in M(p)\), defined for \(\mathcal{U}_{(U,D)}=\{(V,\Lambda)\in M(p):\|\mathrm{Log}(VU^{T})\|_{F}<\pi\}\), is \(\mathrm{Log}_{(U,D)}:\mathcal{U}_{(U,D)}\to T_{(U,D)}M(p)\), and is given by
\[\mathrm{Log}_{(U,D)}((V,\Lambda))=(\mathrm{Log}(VU^{T})U,\mathrm{Log}(\Lambda D ^{-1})D). \tag{2.4}\]
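The maps (2.3) and (2.4) admit an equally direct sketch (again our own helpers; the rotation factor needs the matrix exponential and principal logarithm, while the diagonal factor reduces to exponentials and logs of diagonal matrices):

```python
import numpy as np
from scipy.linalg import expm, logm

def exp_map(U, D, A, L):
    """Exp_{(U,D)}(AU, LD) of (2.3); A antisymmetric, L diagonal."""
    return expm(A) @ U, expm(L) @ D

def log_map(U, D, V, Lam):
    """Log_{(U,D)}(V, Lam) of (2.4); valid when ||Log(V U^T)||_F < pi."""
    return np.real(logm(V @ U.T)) @ U, np.real(logm(Lam @ np.linalg.inv(D))) @ D
```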
With the Riemannian metric \(g_{M}=kg_{SO}\oplus g_{\mathcal{D}^{+}}\), the induced norm on the tangent space \(T_{(U,D)}M(p)\) satisfies
\[\|(A_{1}U,L_{1}D)-(A_{2}U,L_{2}D)\|_{(U,D)}^{2}=\frac{k}{2}\mathrm{tr}((A_{1}- A_{2})(A_{1}-A_{2})^{T})+\mathrm{tr}((L_{1}-L_{2})(L_{1}-L_{2})^{T}),\]
for any two tangent vectors \((A_{1}U,L_{1}D),(A_{2}U,L_{2}D)\in T_{(U,D)}M(p)\).
### Minimal Smooth Scaling-Rotation Curves and Scaling-Rotation Distance
Since eigen-decompositions are not unique, any method for computing the distance between SPD matrices using the eigen-decomposition space must take this non-uniqueness into account. To address this, Jung, Schwartzman and Groisser (2015) proposed the following distance for \(\operatorname{Sym}^{+}(p)\):
**Definition 2.2**.: Let \(\mathcal{F}:M(p)\to\operatorname{Sym}^{+}(p)\) denote the eigen-composition map \(\mathcal{F}(U,D)=UDU^{T}\), and for any \(X\in\operatorname{Sym}^{+}(p)\), let \(\mathcal{F}^{-1}(X)\) denote the set of eigen-decompositions of \(X\). The _scaling-rotation distance_ between \(X\in\operatorname{Sym}^{+}(p)\) and \(Y\in\operatorname{Sym}^{+}(p)\) is
\[d_{\mathcal{SR}}(X,Y)=\inf_{\begin{subarray}{c}(U_{X},D_{X})\in\mathcal{F}^{-1 }(X),\\ (U_{Y},D_{Y})\in\mathcal{F}^{-1}(Y)\end{subarray}}d_{M}((U_{X},D_{X}),(U_{Y},D_ {Y})).\]
Eigen-decompositions \((U_{X}^{*},D_{X}^{*})\in\mathcal{F}^{-1}(X)\) and \((U_{Y}^{*},D_{Y}^{*})\in\mathcal{F}^{-1}(Y)\) form a _minimal pair_ if \(d_{M}((U_{X}^{*},D_{X}^{*}),(U_{Y}^{*},D_{Y}^{*}))=d_{\mathcal{SR}}(X,Y)\).
_Remark 2.3_.: Since the sets \(\mathcal{F}^{-1}(X)\) and \(\mathcal{F}^{-1}(Y)\) are compact for any \(X,Y\in\operatorname{Sym}^{+}(p)\), there will always be a pair of eigen-decompositions of \(X\) and \(Y\) that form a minimal pair.
_Remark 2.4_.: The function \(d_{\mathcal{SR}}\) is not a true metric on \(\operatorname{Sym}^{+}(p)\) since there are instances in which the triangle inequality fails. It is a semi-metric and invariant under simultaneous matrix inversion, uniform scaling and conjugation by a rotation matrix (Jung, Schwartzman and Groisser, 2015, Theorem 3.11). When restricted to the subset of SPD matrices which possess no repeated eigenvalues, \(d_{\mathcal{SR}}\) is a true metric (Jung, Schwartzman and Groisser, 2015, Theorem 3.12).
For SPD matrices \(X,Y\) and their eigen-decompositions \((U_{X},D_{X})\in\mathcal{F}^{-1}(X)\), \((U_{Y},D_{Y})\in\mathcal{F}^{-1}(Y)\), one can create a smooth scaling-rotation (SSR) curve on \(\operatorname{Sym}^{+}(p)\) connecting \(X\) and \(Y\) as \(\chi_{X,Y}(t)=\mathcal{F}(\gamma_{(U_{X},D_{X}),(U_{Y},D_{Y})}(t))\), where \(\gamma_{(U_{X},D_{X}),(U_{Y},D_{Y})}(t)\) is a minimal-length geodesic curve defined in (2.2). If one considers the family of all possible geodesics in \((M(p),g_{M})\) from \(\mathcal{F}^{-1}(X)\) to \(\mathcal{F}^{-1}(Y)\), the scaling-rotation distance equals the length of the shortest geodesics in that family. By definition, the shortest geodesic (which may not be uniquely defined) connects a minimal pair \((U_{X}^{*},D_{X}^{*})\in\mathcal{F}^{-1}(X)\) and \((U_{Y}^{*},D_{Y}^{*})\in\mathcal{F}^{-1}(Y)\). Computing \(d_{\mathcal{SR}}(X,Y)\) for any dimension \(p\) is straightforward when \(X\) and \(Y\) both have no repeated eigenvalues, since \(X\) and \(Y\) then both have finitely many eigen-decompositions and therefore finitely many connecting SSR curves, or when one of \(X,Y\) is a scaled identity matrix. Formulas for computing \(d_{\mathcal{SR}}(X,Y)\) for all possible eigenvalue-multiplicity combinations of arguments \(X\) and \(Y\) are provided in Groisser, Jung and Schwartzman (2017) for \(p=2,3\).
### The stratification of \(\mathrm{Sym}^{+}(p)\) and fibers of the eigen-composition map
The space \(\mathrm{Sym}^{+}(p)\) is naturally stratified by the eigenvalue-multiplicity types. We will use the notation \(S^{\mathrm{top}}_{p}\) to denote the subset of SPD matrices which have no repeated eigenvalues (the superscript "top" refers to the "top stratum"). We also use the notation \(S^{\mathrm{lwr}}_{p}:=\mathrm{Sym}^{+}(p)\setminus S^{\mathrm{top}}_{p}\), for the union of all "lower" strata, and \(S^{\mathrm{bot}}_{p}\subset S^{\mathrm{lwr}}_{p}\) denotes the set of SPD matrices with equal eigenvalues. The eigenvalue-multiplicity stratification of \(\mathrm{Sym}^{+}(p)\) is equivalent to the fiber-type stratification of \(\mathrm{Sym}^{+}(p)\); SPD matrices \(X,Y\in\mathrm{Sym}^{+}(p)\) are in the same stratum if \(\mathcal{F}^{-1}(X)\) and \(\mathcal{F}^{-1}(Y)\) are diffeomorphic, as we elaborate below.
Let \(\mathrm{Part}\{1,\ldots,p\}\) be the set of partitions of \(\{1,\ldots,p\}\). Recall that \(\mathrm{Part}\{1,\ldots,p\}\) is partially ordered by the refinement relation, with "\(\mathsf{J}_{1}\leq\mathsf{J}_{2}\)," meaning that \(\mathsf{J}_{2}\in\mathrm{Part}\{1,\ldots,p\}\) is a refinement of \(\mathsf{J}_{1}\in\mathrm{Part}\{1,\ldots,p\}\). As an example, for \(p=2\), there are only two partitions \(\mathsf{J}_{\mathrm{top}}:=\{\{1\},\{2\}\}\) and \(\mathsf{J}_{\mathrm{bot}}:=\{\{1,2\}\}\), and \(\mathsf{J}_{\mathrm{bot}}\leq\mathsf{J}_{\mathrm{top}}\).
Each \(D\in\mathrm{Diag}^{+}(p)\) naturally determines a partition \(\mathsf{J}_{D}\in\mathrm{Part}\{1,\ldots,p\}\), depending on which diagonal elements are equal. The group \(SO(p)\) acts on \(\mathrm{Sym}^{+}(p)\) on the left via \((U,X)\mapsto UXU^{T}\). For \(D\in\mathrm{Diag}^{+}(p)\), the stabilizer subgroup \(G_{D}\) under the \(SO(p)\) action on \(\mathrm{Sym}^{+}(p)\) is \(G_{D}:=\{R\in SO(p):RDR^{T}=D\}\). The stabilizer \(G_{D}\) depends only on \(\mathsf{J}_{D}\), and generally has more than one connected component. Write \(G_{D}^{0}\subset G_{D}\) for the connected component of \(G_{D}\) containing the identity.
Let \(\mathcal{S}_{p}\) be the group of permutations of \(\{1,2,\ldots,p\}\). For a permutation \(\pi\in\mathcal{S}_{p}\) and \(D\in\mathrm{Diag}^{+}(p)\), the natural left action of \(\mathcal{S}_{p}\) on \(\mathrm{Diag}^{+}(p)\) is denoted by \(\pi\cdot D\), and is given by permuting the diagonal entries of \(D\). Write the matrix of the linear map "\(\pi\,\cdot\)" as \(P_{\pi}\in\mathbb{R}^{p\times p}\), where the entries of \(P_{\pi}\) are \((P_{\pi})_{ij}=\delta_{i,\pi(j)}\), so that \(\pi\cdot D=P_{\pi}DP_{\pi}^{T}\). We call a \(p\times p\) matrix \(P\) a _signed-permutation matrix_ if for some \(\pi\in\mathcal{S}_{p}\) the entries of \(P\) satisfy \(P_{ij}=\pm\delta_{i,\pi(j)}\). We call such \(P\)_even_ if \(\det(P)=1\). Each such \(P\) thus represents a permutation of coordinates in \(\mathbb{R}^{p}\), combined with an even number of sign changes. The set \(\mathcal{G}(p)\) of all such _even signed-permutation matrices_ has exactly \(2^{p-1}p!\) elements, and is a matrix subgroup of \(SO(p)\). The natural left-action of \(\mathcal{G}(p)\) on \(M(p)\) is given by
\[h\cdot(U,D):=(Uh^{-1},h\cdot D), \tag{2.5}\]
where \(h\in\mathcal{G}(p)\) and \(h\cdot D:=hDh^{-1}\). The action of \(h\) on \((U,D)\) represents the simultaneous permutation (by the unsigned permutation associated with \(h\)) of columns of \(U\) and diagonal elements of \(D\), and the sign-changes of the columns of \(U\). The identity element of \(\mathcal{G}(p)\) is \(I_{p}\).
It is shown in Jung, Schwartzman and Groisser (2015) that the fiber \(\mathcal{F}^{-1}(X)\)--that is, the set of eigen-decompositions of \(X\)--is characterized with any \((U,D)\in\mathcal{F}^{-1}(X)\) by
\[\mathcal{F}^{-1}(X)=\{h\cdot(UR,D):R\in G_{D}^{0},h\in\mathcal{G}(p)\}. \tag{2.6}\]
Thus, the left-action of \(\mathcal{G}(p)\) on \(M(p)\) is fiber-preserving.
The structure of fiber \(\mathcal{F}^{-1}(X)\) depends on the stratum to which \(X\) belongs. If \(X\in S_{p}^{\text{top}}\), then for any eigen-decomposition \((U,D)\in\mathcal{F}^{-1}(X)\), we have \(G_{D}^{0}=\{I_{p}\}\) and the orbit
\[\mathcal{G}(p)\cdot(U,D)=\{h\cdot(U,D):h\in\mathcal{G}(p)\} \tag{2.7}\]
is _exactly_ the set of eigen-decompositions of \(X\). Intuitively, any eigen-decomposition of \(X\in S_{p}^{\text{top}}\) can be obtained from any other by a sign-change of eigenvectors and a simultaneous permutation of eigenvectors and eigenvalues. In contrast, if \(X\in S_{p}^{\text{bot}}\) (i.e., \(X\) is a scaled identity matrix), then \(G_{X}^{0}=G_{X}=SO(p)\) and \(h\cdot X=X\) for all \(h\in\mathcal{G}(p)\), thus the fiber of \(\mathcal{F}\) at \(X\) is \(\mathcal{F}^{-1}(X)=SO(p)\times\{X\}\). A complete characterization of fibers of \(\mathcal{F}\) for other lower strata can be found in Groisser, Jung and Schwartzman (2017).
## 3 Location estimation under the scaling-rotation framework
### Frechet mean
An approach often used for developing location estimators for non-Euclidean metric spaces is Frechet mean estimation (Frechet, 1948), in which estimators are derived as minimizers of a metric-dependent sample mean-squared error.
**Definition 3.1**.: Let \(M\) be a metric space with metric \(\rho\) and suppose that \(X,X_{1},\ldots,X_{n}\) are i.i.d. \(M\)-valued random variables with induced probability measure \(P\) on \(M\). The _population Frechet mean set_ is
\[\operatorname*{argmin}_{C\in M}\int_{M}\rho^{2}(X,C)P(dX).\]
The _sample Frechet mean set_ is
\[\operatorname*{argmin}_{C\in M}\frac{1}{n}\sum_{i=1}^{n}\rho^{2}(X_{i},C).\]
Examples of location estimators that have been developed for \(\operatorname{Sym}^{+}(p)\) using the sample Frechet mean estimation framework include the log-Euclidean mean (Arsigny et al., 2007), affine-invariant mean (Fletcher et al., 2004; Pennec, Fillard and Ayache, 2006), Procrustes size-and-shape mean (Dryden, Koloydenko and Zhou, 2009), and the log-Cholesky average (Lin, 2019). Below, we allow ourselves to use the "Frechet mean" terminology of Definition 3.1 when the metric space \((M,\rho)\) is replaced by the semi-metric space \((\operatorname{Sym}^{+}(p),d_{\mathcal{S}R})\).
### Scaling-rotation means
We now define the population and sample scaling-rotation mean sets, consisting of the Frechet means of SPD matrices under the scaling-rotation framework. Let \(P\) be a Borel probability measure on \(\operatorname{Sym}^{+}(p)\), and \(X_{1},\ldots,X_{n}\) be deterministic data points in \(\operatorname{Sym}^{+}(p)\). Note that Borel measures on \(\operatorname{Sym}^{+}(p)\) include both discrete and absolutely continuous measures, as well as mixtures of those.
**Definition 3.2**.: The _population scaling-rotation (SR) mean set_ with respect to \(P\) is
\[E^{(\mathcal{SR})}:=\operatorname*{argmin}_{S\in\operatorname{Sym}^{+}(p)}f^{( \mathcal{SR})}(S),\quad f^{(\mathcal{SR})}(S)=\int_{\operatorname{Sym}^{+}(p)}d _{\mathcal{SR}}^{2}(X,S)P(dX). \tag{3.1}\]
Given \(X_{1},\ldots,X_{n}\in\operatorname{Sym}^{+}(p)\), the _sample SR mean set_ is
\[E^{(\mathcal{SR})}_{n}:=\operatorname*{argmin}_{S\in\operatorname{Sym}^{+}(p)}f ^{(\mathcal{SR})}_{n}(S),\quad f^{(\mathcal{SR})}_{n}(S)=\frac{1}{n}\sum_{i=1}^ {n}d_{\mathcal{SR}}^{2}(X_{i},S).\]
Since, for some \(S\in\operatorname{Sym}^{+}(p)\), the function \(d_{\mathcal{SR}}(\cdot,S):\operatorname{Sym}^{+}(p)\to\mathbb{R}\) has discontinuities (see Appendix A), we must address whether the objective function \(f^{(\mathcal{SR})}\) of (3.1) is well-defined. We defer this discussion to Section 4.1.
Locating a sample SR mean can be recast as solving a difficult constrained optimization problem on \(M(p)^{n}\) since
\[\frac{1}{n}\sum_{i=1}^{n}d_{\mathcal{SR}}^{2}(X_{i},S)=\frac{1}{n}\sum_{i=1}^ {n}d_{M}^{2}((U_{i}^{*},D_{i}^{*}),(U_{S}^{*,i},D_{S}^{*,i})), \tag{3.2}\]
where for each \(i=1,\ldots,n\), \((U_{i}^{*},D_{i}^{*})\in\mathcal{F}^{-1}(X_{i})\) and \((U_{S}^{*,i},D_{S}^{*,i})\in\mathcal{F}^{-1}(S)\) are an arbitrary minimal pair. Due to the non-uniqueness of eigen-decompositions, there may be many pairs of eigen-decompositions of \(X_{i}\) and \(S\) which form a minimal pair.
However, when \(S\in S_{p}^{\text{top}}\) the scaling-rotation distance simplifies to
\[d_{\mathcal{SR}}(X,S)=\inf_{(U_{X},D_{X})\in\mathcal{F}^{-1}(X)}d_{M}((U_{X},D _{X}),(U_{S},D_{S})), \tag{3.3}\]
where \((U_{S},D_{S})\) is _any_ eigen-decomposition of \(S\). In this case, \(d_{\mathcal{SR}}(X,S)\) is easier to compute since one can select an arbitrary eigen-decomposition \((U_{S},D_{S})\) of \(S\) and then determine the infimum of the distances between \((U_{S},D_{S})\) and the eigen-decompositions of \(X\). If \(S\) has repeated eigenvalues (or, equivalently, \(S\) is in a lower stratum), this simplification does not hold in general; there may be no eigen-decomposition of \(S\) that is at minimal distance from \(\mathcal{F}^{-1}(X_{i})\) simultaneously for all \(i\).
From the simplification in (3.3), we propose to solve for minimizers of the simplified objective function
\[(U,D)\mapsto\frac{1}{n}\sum_{i=1}^{n}\inf_{(U_{X},D_{X})\in\mathcal{F}^{-1}(X _{i})}d_{M}^{2}((U_{X},D_{X}),(U,D)),\]
where the argument \((U,D)\) is an arbitrarily chosen eigen-decomposition of the argument \(S\) from (3.2).
To formally define this simplified optimization problem, we first define the following measure of distance between an SPD matrix and a given eigen-decomposition of another SPD matrix:
**Definition 3.3**.: The _partial scaling-rotation (PSR) distance_ is the map \(d_{\mathcal{PSR}}:\operatorname{Sym}^{+}(p)\times M(p)\to[0,\infty)\) given by
\[d_{\mathcal{PSR}}(X,(U,D))=\inf_{(U_{X},D_{X})\in\mathcal{F}^{-1}(X)}d_{M}((U_{ X},D_{X}),(U,D)).\]
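When \(X\in S_{p}^{\mathrm{top}}\), the infimum in Definition 3.3 runs over the finite fiber (2.7), so it can be evaluated directly; a short sketch, reusing `fiber_top` and `d_M` from the earlier sketches:

```python
def d_PSR_top(X, U, D, k=1.0):
    """Partial scaling-rotation distance (Definition 3.3), X in the top stratum."""
    return min(d_M(UX, DX, U, D, k) for (UX, DX) in fiber_top(X))
```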
It can be checked from the definitions that for any \(X\in\operatorname{Sym}^{+}(p)\) and any \((U,D)\in M(p)\)
\[d_{\mathcal{SR}}(X,\mathcal{F}(U,D))\leq d_{\mathcal{PSR}}(X,(U,D)), \tag{3.4}\]
and by (3.3), the equality in (3.4) holds if \(\mathcal{F}(U,D)\in S^{\operatorname{top}}_{p}\).
**Definition 3.4**.: The population and sample _partial scaling-rotation (PSR) mean sets_ are subsets of \(M(p)\) and are defined respectively by \(E^{(\mathcal{PSR})}:=\operatorname{argmin}_{(U,D)\in M(p)}f^{(\mathcal{PSR})} (U,D)\) and \(E^{(\mathcal{PSR})}_{n}:=\operatorname{argmin}_{(U,D)\in M(p)}f^{(\mathcal{PSR })}_{n}(U,D)\), where
\[f^{(\mathcal{PSR})}(U,D) =\int_{\operatorname{Sym}^{+}(p)}d^{2}_{\mathcal{PSR}}(X,(U,D))P (dX), \tag{3.5}\] \[f^{(\mathcal{PSR})}_{n}(U,D) =\frac{1}{n}\sum_{i=1}^{n}d^{2}_{\mathcal{PSR}}(X_{i},(U,D)).\]
In Sections 4.1 and 4.2 we show that for any Borel probability measure on \(\operatorname{Sym}^{+}(p)\), the population mean set \(E^{(\mathcal{PSR})}\) is well-defined and non-empty. There, we also show that both \(E^{(\mathcal{PSR})}_{n}\) and \(E^{(\mathcal{SR})}_{n}\) are non-empty for any sample \(X_{1},\ldots,X_{n}\). An iterative algorithm to compute a sample PSR mean is given in Section 3.3.
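With the helpers above, the sample objective \(f_{n}^{(\mathcal{PSR})}\) of (3.5) can already be evaluated for data in the top stratum. The candidate screen below, over the samples' own eigen-decompositions, is only a crude initializer we suggest for illustration, not the iterative algorithm of Section 3.3.

```python
import numpy as np

def f_n_PSR(samples, U, D, k=1.0):
    """Sample PSR objective of Definition 3.4 at a candidate (U, D)."""
    return float(np.mean([d_PSR_top(X, U, D, k) ** 2 for X in samples]))

def psr_mean_initializer(samples, k=1.0):
    """Screen the samples' own eigen-decompositions as starting points."""
    candidates = [m for X in samples for m in fiber_top(X)]
    return min(candidates, key=lambda m: f_n_PSR(samples, m[0], m[1], k))
```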
The PSR means lie in \(M(p)\) and can be mapped to \(\operatorname{Sym}^{+}(p)\) via the eigen-composition map. The sample PSR mean set can be thought of as yielding an approximation of the sample SR mean set, and it is of interest to know when the two sets are "equivalent". The theorem below provides conditions under which \(E^{(\mathcal{PSR})}_{n}\subset M(p)\) is equivalent to \(E^{(\mathcal{SR})}_{n}\subset\operatorname{Sym}^{+}(p)\) in the sense that every member of \(E^{(\mathcal{PSR})}_{n}\) is an eigen-decomposition of a member of \(E^{(\mathcal{SR})}_{n}\) and vice-versa. Define \(M^{\operatorname{top}}(p)=\mathcal{F}^{-1}(S^{\operatorname{top}}_{p})\), the subset of \(M(p)\) consisting of all elements \((U,D)\in M(p)\) in which the diagonal elements of \(D\) are not repeated.
**Theorem 3.5**.: _Let \(E^{(\mathcal{PSR})}_{n}\) and \(E^{(\mathcal{SR})}_{n}\) be defined with deterministic data points \(X_{1},\ldots,X_{n}\in\operatorname{Sym}^{+}(p)\)._
1. \(E^{(\mathcal{PSR})}_{n}\supset\mathcal{F}^{-1}(E^{(\mathcal{SR})}_{n}\cap S^{ \operatorname{top}}_{p})\)_._
2. _If_ \(E^{(\mathcal{SR})}_{n}\cap S^{\operatorname{top}}_{p}\neq\emptyset\)_, then_ \(\mathcal{F}(E^{(\mathcal{PSR})}_{n})\subset E^{(\mathcal{SR})}_{n}\) _and_ \(E^{(\mathcal{PSR})}_{n}\subset\mathcal{F}^{-1}(E^{(\mathcal{SR})}_{n})\)_._
3. _If_ \(E^{(\mathcal{SR})}_{n}\cap S^{\operatorname{top}}_{p}\neq\emptyset\) _and_ \(E^{(\mathcal{PSR})}_{n}\subset M^{\operatorname{top}}(p)\)_, then_ \(E^{(\mathcal{PSR})}_{n}=\mathcal{F}^{-1}(E^{(\mathcal{SR})}_{n}\cap S^{ \operatorname{top}}_{p})\)_._
_In particular, since \(E^{(\mathcal{SR})}_{n}\neq\emptyset\) (see Corollary 4.10), parts (a) and (b) together imply that if \(E^{(\mathcal{SR})}_{n}\subset S^{\operatorname{top}}_{p}\), then_
\[E^{(\mathcal{PSR})}_{n}=\mathcal{F}^{-1}(E^{(\mathcal{SR})}_{n})\ \ \text{and}\ \ \mathcal{F}(E^{(\mathcal{PSR})}_{n})=E^{(\mathcal{SR})}_{n}.\]
_Moreover, the statements above hold when \(E_{n}^{(\mathcal{SR})}\) and \(E_{n}^{(\mathcal{PSR})}\) are replaced by \(E^{(\mathcal{SR})}\) and \(E^{(\mathcal{PSR})}\), respectively, provided that \(f^{(\mathcal{PSR})}(U,D)<\infty\) for some \((U,D)\in M(p)\)._
The previous theorem suggests that in many realistic situations, there may be no cost to using the PSR means in place of the SR means, which are more difficult to compute in practice. If minimizing \(f^{(\mathcal{SR})}\) or \(f_{n}^{(\mathcal{SR})}\) over \(S_{p}^{\mathrm{lwr}}\) (the union of lower strata of \(\mathrm{Sym}^{+}(p)\)) is feasible, then the following result can be used to tell whether a PSR mean is equivalent to an SR mean.
For the rest of the paper, we generally use the notation \(m\) rather than \((U,D)\) for an arbitrary element of \(M(p)\) if there is no explicit need for writing out the eigenvector and eigenvalue matrices separately.
**Theorem 3.6**.: _Let \(m^{\mathcal{PSR}}\in M(p)\) be a PSR mean with respect to a probability measure \(P\) on \(\mathrm{Sym}^{+}(p)\)._
1. _If_ \(f^{(\mathcal{SR})}(\mathcal{F}(m^{\mathcal{PSR}}))\leq\min_{S\in S_{p}^{ \mathrm{lwr}}}f^{(\mathcal{SR})}(S)\)_, then_ \(\mathcal{F}(m^{\mathcal{PSR}})\in E^{(\mathcal{SR})}\)_._
2. _If_ \(f^{(\mathcal{SR})}(\mathcal{F}(m^{\mathcal{PSR}}))>\min_{S\in S_{p}^{ \mathrm{lwr}}}f^{(\mathcal{SR})}(S)\)_, then_ \(\mathcal{F}(m^{\mathcal{PSR}})\notin E^{(\mathcal{SR})}\) _and_ \(E^{(\mathcal{SR})}\subset S_{p}^{\mathrm{lwr}}\)_._
_Let \(\hat{m}^{\mathcal{PSR}}\in M(p)\) be a sample PSR mean with respect to a given sample \(X_{1},\ldots,X_{n}\in\mathrm{Sym}^{+}(p)\). Similarly to the statements above,_
1. _If_ \(f_{n}^{(\mathcal{SR})}(\mathcal{F}(\hat{m}^{\mathcal{PSR}}))\leq\min_{S\in S_{ p}^{\mathrm{lwr}}}f_{n}^{(\mathcal{SR})}(S)\)_, then_ \(\mathcal{F}(\hat{m}^{\mathcal{PSR}})\in E_{n}^{(\mathcal{SR})}\)_._
2. _If_ \(f_{n}^{(\mathcal{SR})}(\mathcal{F}(\hat{m}^{\mathcal{PSR}}))>\min_{S\in S_{p}^{ \mathrm{lwr}}}f_{n}^{(\mathcal{SR})}(S)\)_, then_ \(\mathcal{F}(\hat{m}^{\mathcal{PSR}})\notin E_{n}^{(\mathcal{SR})}\) _and_ \(E_{n}^{(\mathcal{SR})}\subset S_{p}^{\mathrm{lwr}}\)_._
We remark that for \(p=2\), \(S_{p}^{\mathrm{lwr}}=\{cI_{2}:c>0\}\) and the function \(f_{n}^{(\mathcal{SR})}\) can be efficiently minimized over \(S_{p}^{\mathrm{lwr}}\) by a one-dimensional numerical optimization.
A key condition to ensure the equivalence of the SR means to PSR means is that all SR means have no repeated eigenvalues (i.e., \(E^{(\mathcal{SR})}\subset S_{p}^{\mathrm{top}}\)), which in fact depends on the distribution \(P\). Below, we give a sufficient condition for \(E^{(\mathcal{SR})}\subset S_{p}^{\mathrm{top}}\) or \(E_{n}^{(\mathcal{SR})}\subset S_{p}^{\mathrm{top}}\). Let \(\delta:\mathrm{Sym}^{+}(p)\rightarrow[0,\infty)\) be \(\delta(S)=\inf\{d_{\mathcal{SR}}(S,S^{\prime}):S^{\prime}\in S_{p}^{\mathrm{ lwr}}\}\). Thus, \(\delta(S)\) is a "distance" from \(S\) to lower strata of \(\mathrm{Sym}^{+}(p)\). (Because \(S_{p}^{\mathrm{lwr}}\) is closed, \(\delta(S)>0\) for any \(S\in S_{p}^{\mathrm{top}}\).)
**Theorem 3.7**.: _Let \(X\) be a \(\mathrm{Sym}^{+}(p)\)-valued random variable with distribution \(P\). Assume that there exists \(S_{0}\in S_{p}^{\mathrm{top}}\) and \(r\in(0,\delta(S_{0})/3)\) such that_
\[P(d_{\mathcal{SR}}(X,S_{0})\leq r)=1.\]
_Then \(E^{(\mathcal{SR})}\subset S_{p}^{\mathrm{top}}\)._
_Similarly, let \(X_{1},\ldots,X_{n}\in\mathrm{Sym}^{+}(p)\), and assume that there exists \(S_{0}\in S_{p}^{\mathrm{top}}\) and \(r\in(0,\delta(S_{0})/3)\) satisfying \(d_{\mathcal{SR}}(X_{i},S_{0})\leq r\) for \(i=1,\ldots,n\). Then \(E_{n}^{(\mathcal{SR})}\subset S_{p}^{\mathrm{top}}\)._
The condition of Theorem 3.7 requires that the sample lie in a ball that is sufficiently far from lower strata of \(\mathrm{Sym}^{+}(p)\), but this condition is by no means
necessary. The condition, however, cannot be replaced by the weaker condition that all data lie in \(S_{p}^{\text{top}}\); there are examples in which this weaker condition is met, but \(E_{n}^{(\mathcal{SR})}\subset S_{p}^{\text{lwr}}\). In Section 5.1, we provide numerical examples where the PSR means are equivalent (or not equivalent) to the SR means.
### Sample PSR Mean Estimation Algorithm
Given a sample \(X_{1},\ldots,X_{n}\in\text{Sym}^{+}(p)\), we propose an algorithm for approximating a member of \(E_{n}^{(\mathcal{PSR})}\), that is to find a minimizer of \(f_{n}^{(\mathcal{PSR})}\). The algorithm is similar to the generalized Procrustes algorithm (Gower, 1975).
**Procedure 3.8** (Sample PSR Mean).: Set tolerance \(\varepsilon>0\) and pick initial guess \((\hat{U}^{(0)},\hat{D}^{(0)})\in M(p)\). Set \(j=0\).
1. For \(i=1,\ldots,n\), find \((U_{i}^{(j)},D_{i}^{(j)})\in\mathcal{F}^{-1}(X_{i})\) that has the smallest geodesic distance from \((\hat{U}^{(j)},\hat{D}^{(j)})\).
2. Compute \((\hat{U}^{(j+1)},\hat{D}^{(j+1)})\in\text{argmin}_{(U,D)\in M(p)}\frac{1}{n} \sum_{i=1}^{n}d_{M}^{2}((U_{i}^{(j)},D_{i}^{(j)}),(U,D))\).
If \(|f_{n}^{(\mathcal{PSR})}(\hat{U}^{(j+1)},\hat{D}^{(j+1)})-f_{n}^{(\mathcal{PSR })}(\hat{U}^{(j)},\hat{D}^{(j)})|>\varepsilon\), increment \(j\) and repeat Steps 1 and 2. Otherwise, \((\hat{U}_{\mathcal{PSR}},\hat{D}_{\mathcal{PSR}})=(\hat{U}^{(j+1)},\hat{D}^{(j +1)})\) is the approximate sample PSR mean produced by this algorithm, given the tolerance \(\varepsilon\) and initial guess \((\hat{U}^{(0)},\hat{D}^{(0)})\).
_Remark 3.9_.: The above procedure will always terminate since \(f_{n}^{(\mathcal{PSR})}(U,D)\geq 0\) for any \((U,D)\in M(p)\) and \(f_{n}^{(\mathcal{PSR})}(\hat{U}^{(j)},\hat{D}^{(j)})\geq f_{n}^{(\mathcal{PSR })}(\hat{U}^{(j+1)},\hat{D}^{(j+1)})\) for any \(j\geq 0\).
If \(X_{i}\) lies in \(S_{p}^{\text{top}}\), performing Step 1 will simply require searching over the \(2^{(p-1)}p!\) distinct eigen-decompositions of \(X_{i}\) to find one that attains the minimal geodesic distance from \((\hat{U}^{(j)},\hat{D}^{(j)})\). Solving for the minimizing eigen-decomposition of \(X_{i}\) is also easy if \(X_{i}\) is a scaled identity matrix (\(X_{i}\in S_{p}^{\text{bot}}\)), since the fact that \(X_{i}=cI_{p}=U(cI_{p})U^{T}\) for any \(U\in SO(p)\) implies that \((\hat{U}^{(j)},cI_{p})\) will be the eigen-decomposition of \(X_{i}\) with minimal geodesic distance from \((\hat{U}^{(j)},\hat{D}^{(j)})\). Determining the minimizing eigen-decomposition of \(X_{i}\) when \(p=3\) and \(X_{i}\) has two distinct eigenvalues can be done by comparing three closed-form expressions, as described in Groisser, Jung and Schwartzman (2017). For \(p>3\), there are no known corresponding closed-form expressions for determining a minimizing eigen-decomposition of \(X_{i}\in S_{p}^{\text{lwr}}\setminus S_{p}^{\text{bot}}\).
The optimization problem over \(M(p)\) in Step 2 can be divided into separate minimization problems over \(\text{Diag}^{+}(p)\) and \(SO(p)\):
\[\hat{D}^{(j+1)} =\underset{D\in\text{Diag}^{+}(p)}{\operatorname{argmin}}\frac{1} {n}\sum_{i=1}^{n}\|\text{Log}(D_{i}^{(j)})-\text{Log}(D)\|_{F}^{2},\] \[\hat{U}^{(j+1)} \in\underset{U\in SO(p)}{\operatorname{argmin}}\frac{1}{n}\sum_{ i=1}^{n}\|\text{Log}(U_{i}^{(j)}U^{-1})\|_{F}^{2}.\]
The solution \(\hat{D}^{(j+1)}\) is uniquely given by \(\hat{D}^{(j+1)}=\operatorname{Exp}\{\frac{1}{n}\sum_{i=1}^{n}\operatorname{Log}(D _{i}^{(j)})\}\), while \(\hat{U}^{(j+1)}\) usually must be approximated via numerical procedures. It is shown in Manton (2004) that when the rotation matrices \(U_{1}^{(j)},\ldots,U_{n}^{(j)}\) lie within a geodesic ball of radius \(\frac{\pi}{2}\), there is a unique minimizer \(\hat{U}^{(j+1)}\), and this minimizer can be approximated by a globally convergent gradient descent algorithm on \((SO(p),g_{SO})\). It is highly unlikely that one would be able to de-couple estimation of the eigenvalue and eigenvector means in this manner while solving for a sample SR mean.
## 4 Theoretical Properties of Scaling-Rotation Means
### Lower semicontinuity and other properties of \(d_{\mathcal{SR}}\) and \(d_{\mathcal{PSR}}\)
One of the complications in using the SR framework is that the symmetric function \(d_{\mathcal{SR}}\) is not continuous in either variable (see Appendix A for an example). Unfortunately, \(d_{\mathcal{PSR}}\) is also not continuous at every point of \(\operatorname{Sym}^{+}(p)\times M(p)\), as illustrated by the following example. Let \(X(\varepsilon):=\operatorname{diag}(e^{\varepsilon},e^{-\varepsilon})\) and \((U,D)=(R(\theta),I_{2})\), where \(R(\theta)\) is the \(2\times 2\) rotation matrix corresponding to a counter-clockwise rotation by angle \(\theta\). Then for any \(\varepsilon\neq 0\) and \(0<|\theta|<\pi/4\),
\[d_{\mathcal{PSR}}(X(\varepsilon),(U,D))=(k\theta^{2}+2\varepsilon^{2})^{1/2},\]
which implies that \(d_{\mathcal{PSR}}(X(\varepsilon),(U,D))\to\sqrt{k}|\theta|\) as \(\varepsilon\to 0\). Since \(d_{\mathcal{PSR}}(X(0),(U,D))=0\), it follows that \(d_{\mathcal{PSR}}\) is not continuous at \((I_{2},(U,D))\), and therefore \(d_{\mathcal{PSR}}\) is not continuous on \(\operatorname{Sym}^{+}(p)\times M(p)\). Nevertheless, \(d_{\mathcal{PSR}}\) is continuous with respect to the second variable in \(M(p)\), and is jointly continuous on \(S_{p}^{\mathrm{top}}\times M(p)\), as we state below.
**Lemma 4.1**.:
* \(d_{\mathcal{PSR}}\) _is continuous on_ \(S_{p}^{\mathrm{top}}\times M(p)\)_._
* _For each_ \(S\in\operatorname{Sym}^{+}(p)\)_, the function_ \(d_{\mathcal{PSR}}(S,\cdot):M(p)\to[0,\infty)\) _is Lipschitz, with Lipschitz-constant 1. That is, for all_ \(m_{1},m_{2}\in M(p)\)_,_ \[|d_{\mathcal{PSR}}(S,m_{1})-d_{\mathcal{PSR}}(S,m_{2})|<d_{M}(m_{1},m_{2}).\] _In particular,_ \(d_{\mathcal{PSR}}(S,\cdot)\) _is uniformly continuous for each_ \(S\)_._
Since both \(d_{\mathcal{SR}}\) and \(d_{\mathcal{PSR}}\) are not continuous, in principle we do not know yet whether the integrals of \(d_{\mathcal{SR}}^{2}(\cdot,\Sigma)\) and \(d_{\mathcal{PSR}}^{2}(\cdot,(U,D))\), for \(\Sigma\in\operatorname{Sym}^{+}(p)\) and \((U,D)\in M(p)\), in Definitions 3.2 and 3.4, are well defined. A related question is: under which conditions do the population (partial) scaling-rotation means exist? A key observation in answering these questions is that these functions \(d_{\mathcal{SR}}^{2}(\cdot,\Sigma)\) and \(d_{\mathcal{PSR}}^{2}(\cdot;(U,D))\) are _lower semicontinuous_ (LSC). (Recall that a function \(f:\mathcal{X}\to\mathbb{R}\), where \(\mathcal{X}\) is a topological space, is LSC at a point \(x_{0}\in\mathcal{X}\) if for all \(\epsilon>0\), there exists an open neighborhood \(\mathcal{U}\) of \(x_{0}\) such that \(f(x)>f(x_{0})-\epsilon\) for all \(x\in\mathcal{U}\). If \(f\) is LSC at each \(x_{0}\in\mathcal{X}\), we say that \(f\) is LSC.)
**Definition 4.2**.: Let \(\mathcal{X}\) be a topological space and \(\mathcal{Y}\) be a set, and let \(f:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\).
1. We say that \(f\) is _LSC in its first variable, uniformly with respect to its second variable_, if for all \(x_{0}\in\mathcal{X}\) and \(\epsilon>0\), there exists an open neighborhood \(\mathcal{U}\) of \(x_{0}\) such that \[f(x,y)>f(x_{0},y)-\epsilon\quad\text{for all $x\in\mathcal{U}$ and all $y\in\mathcal{Y}$}.\] (4.1)
2. If \(\mathcal{Y}\) is also a topological space, we say that \(f\) _is LSC in its first variable,_ locally _uniformly with respect to its second variable_, if every \(y_{0}\in\mathcal{Y}\) has an open neighborhood \(\mathcal{V}\) such that \(f|_{\mathcal{X}\times\mathcal{V}}\) is LSC in the first variable, uniformly with respect to the second. If \(\mathcal{Y}\) is locally compact, this property is equivalent to: for every compact set \(K\subset\mathcal{Y}\), \(f|_{\mathcal{X}\times K}\) is LSC in the first variable, uniformly with respect to the second.
Any finite-dimensional manifold (in particular, \(M(p)\)) is locally compact.
**Theorem 4.3**.:
1. _Let_ \(S_{0}\in\operatorname{Sym}^{+}(p)\)_, and_ \(m_{0}\in M(p)\)_. Then the functions_ \(d^{2}_{\mathcal{SR}}(\cdot,S_{0})\) _and_ \(d^{2}_{\mathcal{SPR}}(\cdot,m_{0})\) _and their square-roots are LSC._
2. _The functions_ \(d_{\mathcal{SR}}(\cdot,\cdot)\)_,_ \(d^{2}_{\mathcal{SR}}(\cdot,\cdot)\)_,_ \(d_{\mathcal{PSR}}(\cdot,\cdot)\) _and_ \(d^{2}_{\mathcal{SPR}}(\cdot,\cdot)\) _are LSC in the first variable, locally uniformly with respect to the second variable._
In this theorem, part (a) is actually redundant; it is a special case of part (b), with the one-point set \(\{S_{0}\}\) playing the role of the compact set in Definition 4.2(b). Also, for \(d_{\mathcal{SR}}\) and \(d^{2}_{\mathcal{SR}}\), the terms "first variable" and "second variable" in Theorem 4.3 can be interchanged, since \(d_{\mathcal{SR}}\) is symmetric. Verifying Theorem 4.3 requires substantial background work regarding the geometry of the eigen-decomposition space \(M(p)\) and the eigen-composition map \(\mathcal{F}\). The following lemma is the key technical result used in proving Theorem 4.3. The radius-\(r\) open ball centered at \(m_{0}\in M(p)\) is \(B^{d_{M}}_{r}(m_{0}):=\{m\in M(p):d_{M}(m,m_{0})<r\}\).
**Lemma 4.4**.: _Let \(K\subset\operatorname{Sym}^{+}(p)\) be a compact set. Let \(\epsilon>0\) and let \(S\in\operatorname{Sym}^{+}(p)\). There exists \(\delta_{1}=\delta_{1}(S,K,\epsilon)>0\) such that for all \(S_{0}\in K\), all \(m_{0}\in\mathcal{F}^{-1}(S_{0})\), all \(m\in\mathcal{F}^{-1}(S_{0})\), and all \(S^{\prime}\in\mathcal{F}\big{(}B^{d_{M}}_{\delta_{1}}(m)\big{)}\),_
\[d_{\mathcal{SR}}(S^{\prime},S_{0})^{2}>d_{\mathcal{SR}}(S,S_{0})^{2}-\epsilon \tag{4.2}\]
_and_
\[d_{\mathcal{PSR}}(S^{\prime},m_{0})^{2}>d_{\mathcal{PSR}}(S,m_{0})^{2}-\epsilon. \tag{4.3}\]
Lemma 4.4 does not immediately imply that \(d_{\mathcal{SR}}(\cdot,S_{0})\) or \(d_{\mathcal{PSR}}(\cdot,m_{0})\) is LSC at \(S\), because the set \(\mathcal{F}(B^{d_{M}}_{\delta_{1}}(m))\) in the lemma is not always open in \(\operatorname{Sym}^{+}(p)\) (\(\mathcal{F}\) does not map arbitrary open sets to open sets). However, as we show in an appendix, there exists an open ball centered at \(\mathcal{F}(m)\) in \(\operatorname{Sym}^{+}(p)\) with radius smaller than \(\delta_{1}\) that is contained in \(\mathcal{F}(B^{d_{M}}_{\delta_{1}}(m))\) (Corollary B.13). The background and our proofs of these supporting results and Theorem 4.3 are provided in Appendix B.2.
Semicontinuous real-valued functions are (Borel) measurable, so an immediate consequence of Theorem 4.3(a) is that the integrals defining the objective functions \(f^{(\mathcal{SR})}\) and \(f^{(\mathcal{PSR})}\) for the population (partial) scaling-rotation means exist in \(\mathbb{R}\cup\{\infty\}\). This establishes the following.
**Proposition 4.5**.: _Let \(P\) be any Borel probability measure on \(\operatorname{Sym}^{+}(p)\)._
1. _For any_ \(S\in\operatorname{Sym}^{+}(p)\)_, the integral_ \(\int_{\operatorname{Sym}^{+}(p)}d^{2}_{\mathcal{SR}}(\cdot,S)dP\) _is well-defined in_ \([0,\infty]\)_._
2. _For any_ \(m\in M(p)\)_, the integral_ \(\int_{\operatorname{Sym}^{+}(p)}d^{2}_{\mathcal{SR}}(\cdot,m)dP\) _is well-defined in_ \([0,\infty]\)_._
(Proof for Proposition 4.5 is omitted.)
A finite-variance condition for the random variable \(X\in\operatorname{Sym}^{+}(p)\) with respect to the (partial) scaling-rotation distance (already needed to define \(E^{(\mathcal{SR})}\) and \(E^{(\mathcal{PSR})}\)) is required to establish (semi-)continuity of \(f^{(\mathcal{SR})}\) and \(f^{(\mathcal{PSR})}\), non-emptiness of \(E^{(\mathcal{SR})}\) and \(E^{(\mathcal{PSR})}\) (discussed in Section 4.2), and relationships between these sets (in Section 3.2). For a probability measure \(P\) on \(\operatorname{Sym}^{+}(p)\), we say \(P\) has _finite SR-variance_ if \(f^{(\mathcal{SR})}(S)<\infty\) for all \(S\in\operatorname{Sym}^{+}(p)\). Likewise, \(P\) has _finite PSR-variance_ if \(f^{(\mathcal{PSR})}(U,D)<\infty\) for all \((U,D)\in M(p)\). The following result shows that such a condition needs to be assumed only at a single point, rather than at all points.
**Lemma 4.6**.: _Let \(P\) be a Borel probability measure on \(\operatorname{Sym}^{+}(p)\) and let \(f^{(\mathcal{PSR})}\) and \(f^{(\mathcal{SR})}\) be the corresponding objective functions defined in equations (3.5) and (3.1)._
1. _If_ \(f^{(\mathcal{PSR})}(m)<\infty\) _for some_ \(m\in M(p)\)_, then_ \(f^{(\mathcal{PSR})}(m)<\infty\) _for any_ \(m\in M(p)\)_, and_ \(f^{(\mathcal{SR})}(S)<\infty\) _for any_ \(S\in\operatorname{Sym}^{+}(p)\)_._
2. _If_ \(f^{(\mathcal{SR})}(S)<\infty\) _for some_ \(S\in\operatorname{Sym}^{+}(p)\)_, then_ \(f^{(\mathcal{SR})}(S)<\infty\) _for any_ \(S\in\operatorname{Sym}^{+}(p)\)_._
By (3.4), any probability measure with finite PSR-variance always has finite SR variance.
We conclude this background section by answering a natural question: Are the SR and PSR mean functions \(f^{(\mathcal{SR})}\) and \(f^{(\mathcal{PSR})}\) (semi-)continuous?
**Lemma 4.7**.: _Let \(P\) be a Borel probability measure on \(\operatorname{Sym}^{+}(p)\)._
1. _If_ \(P\) _is supported in a compact set_ \(K\subset\operatorname{Sym}^{+}(p)\)_, then_ \(f^{(\mathcal{SR})}:\operatorname{Sym}^{+}(p)\to\mathbb{R}\) _is LSC._
2. _If_ \(P\) _has finite PSR-variance, then_ \(f^{(\mathcal{PSR})}:M(p)\to\mathbb{R}\) _is continuous._
The preceding result also implies that for any finite sample \(X_{1},\ldots,X_{n}\), \(f^{(\mathcal{SR})}_{n}\) (or \(f^{(\mathcal{PSR})}_{n}\)) is LSC (or continuous, respectively). Lemma 4.7 plays an important role in developing theoretical properties of SR and PSR means, which we present in the subsequent sections.
### Existence of scaling-rotation means
The SR mean set \(E^{(\mathcal{SR})}\) consists of the minimizers of the function \(f^{(\mathcal{SR})}\). To prove existence of SR means (or, equivalently, non-emptiness of \(E^{(\mathcal{SR})}\)) we use the fact that any LSC function on a compact set attains a minimum. For this purpose, we first verify _coercivity_ of \(f^{(\mathcal{SR})}\) (and \(f^{(\mathcal{PSR})}\)).
**Proposition 4.8**.: _Let \(P\) be a Borel probability measure on \(\operatorname{Sym}^{+}(p)\)._
1. _There exists a compact set_ \(K\subset\operatorname{Sym}^{+}(p)\) _such that_ \[\inf_{S\in\operatorname{Sym}^{+}(p)}f^{(\mathcal{SR})}(S)=\inf_{S\in K}f^{( \mathcal{SR})}(S).\] (4.4)
2. _There exists a compact set_ \(\widetilde{K}\subset M(p)\) _such that_ \[\inf_{m\in M(p)}f^{(\mathcal{PSR})}(m)=\inf_{m\in\widetilde{K}}f^{(\mathcal{ SPR})}(m).\] (4.5)
Proposition 4.8 says that \(f^{(\mathcal{SR})}\) (and \(f^{(\mathcal{PSR})}\), respectively) is coercive, i.e. uniformly large outside some compact set, and, under the finite-variance condition, has a (non-strictly) smaller value somewhere inside that compact set. Using this fact and the lower semicontinuity of \(f^{(\mathcal{SR})}\) (respectively, \(f^{(\mathcal{PSR})}\)), we show in Theorem 4.9 that the SR and PSR mean sets are non-empty. In this theorem, the bounded support condition for \(P\) is used only to ensure the lower semicontinuity of \(f^{(\mathcal{SR})}\).
**Theorem 4.9**.: _Let \(P\) be a Borel probability measure on \(\operatorname{Sym}^{+}(p)\)._
1. _If_ \(P\) _is supported on a compact set, then_ \(E^{(\mathcal{SR})}\neq\emptyset\)_._
2. _If_ \(P\) _has finite PSR-variance, then_ \(E^{(\mathcal{PSR})}\neq\emptyset\)_._
Since the conditions of Theorem 4.9 are met for any empirical measure defined from a finite set \(\{X_{1},\ldots,X_{n}\}\subset\operatorname{Sym}^{+}(p)\), a corollary of the population SR mean result is the existence of sample SR means:
**Corollary 4.10**.: _For any finite \(n\), and any \(X_{1},\ldots,X_{n}\in\operatorname{Sym}^{+}(p)\), \(E^{(\mathcal{SR})}_{n}\neq\emptyset\) and \(E^{(\mathcal{PSR})}_{n}\neq\emptyset\)._
_Remark 4.11_.: For any Borel-measurable \(\operatorname{Sym}^{+}(p)\)-valued random variable with finite PSR-variance, the PSR mean set \(E^{(\mathcal{PSR})}\) is closed. In particular, every sample PSR mean set is closed. To verify this, recall from Lemma 4.7 that \(f^{(\mathcal{PSR})}\) is continuous. The PSR mean set is a level set of a continuous function, and therefore is closed.
Moreover, as seen in Proposition 4.8, the closed set \(E^{(\mathcal{PSR})}\) is a subset of a compact set, thus is compact as well.
### Uniqueness of PSR means
Much work has been done on the question of uniqueness of the Frechet mean of Riemannian manifold-valued observations. It is known that the Frechet mean is unique as long as the support of the probability distribution lies within a geodesic ball of a certain radius (see, for example, Afsari (2011)). Although \(d_{\mathcal{SR}}\) is not a geodesic distance on \(\operatorname{Sym}^{+}(p)\), we can obtain a similar result for a kind of uniqueness of the PSR mean.
For any \(X\in\mathrm{Sym}^{+}(p)\), recall from (2.6) that \(\mathcal{F}^{-1}(X)=\{h\cdot(UR,D):R\in G_{D}^{0},h\in\mathcal{G}(p)\}\) for an eigen-decomposition \((U,D)\) of \(X\). Since the finite group \(\mathcal{G}(p)\) acts freely and isometrically on \(M(p)\), for any \(h\in\mathcal{G}(p)\) and \(m\in M(p)\),
\[d_{\mathcal{PSR}}(X,m) =\inf_{R\in G_{D}^{0},h^{\prime}\in\mathcal{G}(p)}d_{M}(h^{\prime }\cdot(UR,D),m)\] \[=\inf_{R\in G_{D}^{0},h\cdot h^{\prime}\in\mathcal{G}(p)}d_{M}(h \cdot h^{\prime}\cdot(UR,D),h\cdot m)=d_{\mathcal{PSR}}(X,h\cdot m).\]
For a sample \(X_{1},\ldots,X_{n}\in\mathrm{Sym}^{+}(p)\), we have thus
\[f_{n}^{(\mathcal{PSR})}(m)=\frac{1}{n}\sum_{i=1}^{n}d_{\mathcal{PSR}}^{2}(X_{ i},m)=\frac{1}{n}\sum_{i=1}^{n}d_{\mathcal{PSR}}^{2}(X_{i},h\cdot m))=f_{n}^{( \mathcal{PSR})}(h\cdot m) \tag{4.6}\]
for any \(h\in\mathcal{G}(p)\) and \(m\in M(p)\). It follows from (4.6) that for any \(m\in E_{n}^{(\mathcal{PSR})}\), the remaining members of its orbit \(\mathcal{G}(p)\cdot m\) (see (2.7)) also belong to \(E_{n}^{(\mathcal{PSR})}\). Thus, \(E_{n}^{(\mathcal{PSR})}\) will contain at least \(2^{p-1}p!\) elements. In the case where \(E_{n}^{(\mathcal{PSR})}\) only contains \(2^{p-1}p!\) elements (necessarily belonging to the same orbit), we will say that the sample PSR mean is _unique up to the action of \(\mathcal{G}(p)\)_. The notion of uniqueness (up to the action of \(\mathcal{G}(p)\)) for the population PSR mean in \(E^{(\mathcal{PSR})}\) is defined similarly.
The following lemma yields a useful lower bound on the distance between distinct eigen-decompositions of an SPD matrix in \(S_{p}^{\mathrm{top}}\). (Note that for any \(X\in S_{p}^{\mathrm{lwr}}\), the set of eigen-decompositions of \(X\) is not discrete, so two eigen-decompositions of \(X\) may be arbitrarily close to each other.)
**Lemma 4.12**.: _(a) For any \((U,D)\in M(p)\) and for any \(h\in\mathcal{G}(p)\setminus\{I_{p}\}\),_
\[d_{M}((U,D),h\cdot(U,D))\geq\sqrt{k}\beta_{\mathcal{G}(p)}\]
_where \(\beta_{\mathcal{G}(p)}:=\min_{h\in\mathcal{G}(p)\setminus\{I_{p}\}}d_{SO}(I_ {p},h)=\min_{h\in\mathcal{G}(p)\setminus\{I_{p}\}}\frac{1}{\sqrt{2}}\|\mathrm{ Log}(h)\|_{F}\)._
_(b) The quantity \(\beta_{\mathcal{G}(p)}\) satisfies \(\beta_{\mathcal{G}(p)}\leq\frac{\pi}{2}\) for any \(p\geq 2\)._
_(c) For any \(X\in S_{p}^{\mathrm{top}}\), any two distinct eigen-decompositions \((U_{X},D_{X})\) and \((U_{X}^{\prime},D_{X}^{\prime})\) of \(X\) satisfy \(d_{M}((U_{X},D_{X}),(U_{X}^{\prime},D_{X}^{\prime}))\geq\sqrt{k}\beta_{ \mathcal{G}(p)}\)._
_Remark 4.13_.: It is easily checked that \(\beta_{\mathcal{G}(p)}=\frac{\pi}{2}\) when \(p=2,3\).
In Theorem 4.15 below, we provide a sufficient condition for uniqueness (up to the action of \(\mathcal{G}(p)\)) of the PSR means. In preparation, we first provide a sufficient condition for a distribution on \(M(p)\) to have a unique Frechet mean. Recall that \((M(p),g_{M})\) is a Riemannian manifold, which in turn implies that \((M(p),d_{M})\) is a metric space. The Frechet mean set for a probability distribution \(P\) on \(M(p)\) is thus well-defined.
**Lemma 4.14**.: _Let \(\tilde{P}\) be a Borel probability measure on \(M(p)\). Suppose that \(\mathrm{supp}(\tilde{P})\), the support of \(\tilde{P}\), satisfies_
\[\mathrm{supp}(\tilde{P})\subseteq B_{r}^{d_{M}}(m_{0}) \tag{4.7}\]
_for some \(r\leq\sqrt{k}\beta_{\mathcal{G}(p)}\) and some \(m_{0}\in M(p)\). Then there exists a unique Frechet mean \(\bar{m}(\tilde{P}):=\operatorname*{argmin}_{m\in M(p)}\int_{M(p)}d_{M}^{2}( \tilde{X},m)\tilde{P}(d\tilde{X})\) of \(P\), and \(\bar{m}(\tilde{P})\in B_{r}^{d_{M}}(m_{0})\)._
Similarly to Lemma 4.14, if a deterministic sample \(m_{1},\ldots,m_{n}\in M(p)\) lies in \(B_{r}^{d_{M}}(m_{0})\) (\(i=1,\ldots,n\)) for some \(r\leq\sqrt{k}\beta_{\mathcal{G}(p)}\) and some \(m_{0}\in M(p)\), then the sample Frechet mean \(\bar{m}:=\operatorname*{argmin}_{m\in M(p)}\frac{1}{n}\sum_{i=1}^{n}d_{M}^{2}( m_{i},m)\) is unique and lies in \(B_{r}^{d_{M}}(m_{0})\).
**Theorem 4.15**.: _Suppose the probability measure \(P\) on \(\operatorname{Sym}^{+}(p)\) is absolutely continuous with respect to volume measure and that for two independent \(\operatorname{Sym}^{+}(p)\)-valued random variables \(X_{1},X_{2}\) whose distribution is \(P\),_
\[P(d_{\mathcal{SR}}(X_{1},X_{2})<r^{\prime}_{cx})=1,\quad\text{\rm where}\ \ r^{\prime}_{cx}:=\frac{\sqrt{k}\beta_{\mathcal{G}(p)}}{4}. \tag{4.8}\]
_Then the population PSR mean set \(E^{(\mathcal{PSR})}\) is unique up to the action of \(\mathcal{G}(p)\)._
The number \(r^{\prime}_{cx}\) is a lower bound on the regular convexity radius of the quotient space \(M(p)/\mathcal{G}(p)\) with the induced Riemannian structure, as shown in Groisser, Jung and Schwartzman (2023). This ensures that a ball in \(M(p)/\mathcal{G}(p)\) with radius less than \(r^{\prime}_{cx}\) is convex. The quotient space \(M(p)/\mathcal{G}(p)\) "sits" between \(M(p)\) and \(\operatorname{Sym}^{+}(p)\); any \(X\in S_{p}^{\rm top}\) coincides with an element in \(M(p)/\mathcal{G}(p)\), but there are multiple (in fact, infinitely many) elements in \(M(p)/\mathcal{G}(p)\) corresponding to any \(X\in S_{p}^{\rm lwr}\) (cf. (2.6)). Lemma 4.12 shows that \(r^{\prime}_{cx}\leq\sqrt{k}\pi/8\). In contrast, the regular convexity radius of \((M(p),g_{M})\) is \(\sqrt{k}\pi/2\), which is much larger than \(r^{\prime}_{cx}\). Even though we work with the eigen-decomposition space \(M(p)\), in Theorem 4.15 we require data-support diameter at most \(r^{\prime}_{cx}<\sqrt{k}\pi/2\) since, if \(d_{\mathcal{SR}}(S_{1},S_{2})\geq r^{\prime}_{cx}\) for some \(S_{1},S_{2}\in\operatorname{Sym}^{+}(p)\), then there may be two or more eigen-decompositions of \(S_{1}\) that are both closest to an eigen-decomposition of \(S_{2}\).
The assumption of absolute continuity of \(P\) in Theorem 4.15 enables us to restrict our attention to the probability-1 event for which the random variables lie in the top stratum \(S_{p}^{\rm top}\) of \(\operatorname{Sym}^{+}(p)\), since the complement of \(S_{p}^{\rm top}\) has volume zero in \(\operatorname{Sym}^{+}(p)\). Corollary 4.16 below explicitly states this restriction as a sufficient condition for the uniqueness of sample PSR means of a deterministic sample. We also show that the estimation procedure (Procedure 3.8) will yield the unique (up to the action of \(\mathcal{G}(p)\)) sample PSR mean.
**Corollary 4.16**.: _Assume \(X_{1},\ldots,X_{n}\in S_{p}^{\rm top}\). If_
\[d_{\mathcal{SR}}(X_{i},X_{j})<r^{\prime}_{cx} \tag{4.9}\]
_for all \(i,j=1,\ldots,n\), then_
1. _the sample PSR mean is unique up to the action of_ \(\mathcal{G}(p)\)_;_
2. _choosing an eigen-decomposition of any observation from the sample as the initial guess will lead Procedure_ 3.8 _to converge to the sample PSR mean after one iteration._
_Remark 4.17_.: The data-diameter condition (4.9) in Corollary 4.16 is satisfied under either of the following two conditions (in the presence of the assumption \(X_{i}\in S_{p}^{\mathrm{top}}\)):
1. There exists an \(S_{0}\in S_{p}^{\mathrm{top}}\) such that \(d_{\mathcal{SR}}(S_{0},X_{i})<r^{\prime}_{cx}/2\) for all \(i=1,\ldots,n\).
2. There exists an \(m\in M(p)\) such that \(d_{\mathcal{PSR}}(X_{i},m)<r^{\prime}_{cx}/2\) for all \(i=1,\ldots,n\).
Similarly, the condition that \(d_{\mathcal{SR}}(X_{1},X_{2})<r^{\prime}_{cx}\) almost surely in Theorem 4.15 is guaranteed by either (i) or (ii) above, when the latter two conditions are modified probabilistically; see Appendix B.4.4. In condition (i) above, it is necessary for the center of the open ball (data support) to lie in the top stratum, due to the fact that the functions \(d_{\mathcal{SR}}(\cdot,X_{i})\) are, in general, only LSC (not continuous) at points belonging to \(S_{p}^{\mathrm{lwr}}\). For an \(S_{0}\in S_{p}^{\mathrm{lwr}}\), even if a condition \(d_{\mathcal{SR}}(S_{0},X_{i})<\epsilon\) (\(i=1,\ldots,n\)) is satisfied for arbitrarily small \(\epsilon\), \(d_{\mathcal{SR}}(X_{i},X_{j})\) may be larger than \(r^{\prime}_{cx}\).
Proof of the statements given in this remark can be found in Appendix B.4.4.
If the data-support is small enough to satisfy (4.8) and also is far from the lower stratum (satisfying the conditions in Theorem 3.7), then the SR mean is unique, as the following corollary states.
**Corollary 4.18**.: _Let \(X\) be a \(\mathrm{Sym}^{+}(p)\)-valued random variable following the distribution \(P\). Assume that there exist \(S_{0}\in S_{p}^{\mathrm{top}}\) and \(r<\min\{\delta(S_{0})/3,r^{\prime}_{cx}/2\}\) satisfying \(P(d_{\mathcal{SR}}(S_{0},X_{i})\leq r)=1\). Then, (i) the PSR mean is unique up to the action of \(\mathcal{G}(p)\), (ii) \(E^{(\mathcal{SR})}\subset S_{p}^{\mathrm{top}}\), and (iii) \(E^{(\mathcal{SR})}=\mathcal{F}(E^{(\mathcal{PSR})})\) is a singleton set._
### Asymptotic properties of the sample PSR means
This subsection addresses two aspects of the asymptotic behavior of the sample PSR mean \(E_{n}^{(\mathcal{PSR})}\): (i) strong consistency of \(E_{n}^{(\mathcal{PSR})}\) with the population PSR mean set \(E^{(\mathcal{PSR})}\) and (ii) the large-sample limiting distribution of a sample PSR mean. Much work has been done to establish consistency and central limit theorem-type results for sample Frechet means on Riemannian manifolds and metric spaces (Bhattacharya and Patrangenaru (2003), Bhattacharya and Patrangenaru (2005), Bhattacharya and Lin (2017), Eltzner and Huckemann (2019)). Estimation of the PSR mean does not fit into the context of estimation on Riemannian manifolds or metric spaces since the sample space \(\mathrm{Sym}^{+}(p)\) and _parameter space_\(M(p)\) are different. Moreover, as we have seen, the PSR means are never unique. With this in mind, we apply the framework of generalized Frechet means on general product spaces in Huckemann (2011a) and Huckemann (2011b) to our PSR mean estimation context, enabling us to establish conditions for strong consistency and for a central limit theorem.
We now establish a strong-consistency result for \(E_{n}^{\mathcal{PSR}}\). Throughout this subsection, let \(X,X_{1},\ldots\) be independent random variables mapping from a complete probability space \((\Omega,\mathcal{A},\mathcal{P})\) to \(\mathrm{Sym}^{+}(p)\) equipped with its Borel \(\sigma\)-field, and
let \(P\) be the induced Borel probability measure on \(\operatorname{Sym}^{+}(p)\). The sets \(E^{(\mathcal{PSR})}\) and \(E^{(\mathcal{PSR})}_{n}\) denote the population and sample PSR-mean sets defined by \(P\) and \(X_{1},\ldots,X_{n}\), respectively.
**Theorem 4.19**.: _Assume that \(P\) has finite PSR-variance. Then_
\[\lim_{n\to\infty}\sup_{m\in E^{(\mathcal{PSR})}_{n}}d_{M}(m,E^{(\mathcal{PSR}) })=0 \tag{4.10}\]
_almost surely._
Our proof of Theorem 4.19 is contained in Appendix B.5. There, we closely follow the arguments of Huckemann (2011b) used in verifying the conditions required to establish strong consistency of the generalized Frechet means. However, the theorems of Huckemann (2011b) are not directly applied since the function \(d_{\mathcal{PSR}}\) is not continuous. Nevertheless, the Frechet-type objective function \(f^{(\mathcal{PSR})}:M(p)\to\mathbb{R}\) is continuous, as shown in Lemma 4.7, a fact that plays an important role in the proof of Theorem 4.19. Schotz (2022) extends the results of Huckemann (2011b) by, among other things and in our notation, allowing for \(d_{\mathcal{PSR}}(X,\cdot)\) to be only LSC. However, this is not actually helpful for \(d_{\mathcal{PSR}}\) either, because \(d_{\mathcal{PSR}}\) is actually _continuous_ with respect to the second variable (it is LSC with respect to the _first_ variable); see Lemma 4.1 and Theorem 4.3.
In the proof of Theorem 4.19, we first show that with probability 1
\[\cap_{k=1}^{\infty}\overline{\cup_{n=k}^{\infty}E^{(\mathcal{PSR})}_{n}} \subset E^{(\mathcal{PSR})}. \tag{4.11}\]
In the terminology of Huckemann (2011b), (4.11) is called strong consistency of \(E^{(\mathcal{PSR})}_{n}\) as an estimator of \(E^{(\mathcal{PSR})}\) in the sense of Ziezold (1977). Our result (4.10) is equivalent to strong consistency in the sense of Bhattacharya and Patrangenaru (2003) (again using the terminology of Huckemann (2011b)), as shown in Lemma B.19 in the appendix. Schotz (2022) classified three types of convergence for a sequence of sets, referring to (4.11) as convergence _in the outer limit_, and to (4.10) as convergence _in the one-sided Hausdorff distance_. The last type of convergence is convergence _in Hausdorff distance_. Recall that for a metric space \((M,d)\) the Hausdorff distance between non-empty sets \(A,B\subset M\) is \(d_{H}(A,B):=\max\{\sup_{m\in A}d(m,B),\sup_{m\in B}d(A,m)\}\).
Theorem 4.19 states that, with probability 1, any sequence \(m_{n}\in E^{(\mathcal{PSR})}_{n}\) of sample PSR means will eventually lie in an arbitrarily small neighborhood of the population PSR mean set as the sample size \(n\) increases. But, conceivably there could be a population PSR mean in \(E^{(\mathcal{PSR})}\) with no sample PSR mean nearby even for large \(n\), in which case \(d_{H}(E^{(\mathcal{PSR})}_{n},E^{(\mathcal{PSR})})\) would not approach zero. In other words, \(E^{(\mathcal{PSR})}_{n}\) would be a strongly consistent estimator of \(E^{(\mathcal{PSR})}\) only with respect to _one-sided_ Hausdorff distance, not (two-sided) Hausdorff distance. However, if the population PSR mean is unique up to the action of \(\mathcal{G}(p)\), then \(E^{(\mathcal{PSR})}_{n}\)_is_ a strongly consistent estimator of \(E^{(\mathcal{PSR})}\) with respect to Hausdorff distance on \((M(p),d_{M})\), as shown next.
**Corollary 4.20**.: _Assume that \(P\) has finite PSR-variance, and that \(E^{(\mathcal{PSR})}=\mathcal{G}(p)\cdot\mu\) for some \(\mu\in M(p)\). Then with probability 1,_
\[\lim_{n\to\infty}\sup_{m\in E^{(\mathcal{PSR})}}d_{M}(E^{(\mathcal{PSR})}_{n},m)=0 \tag{4.12}\]
_and_
\[\lim_{n\to\infty}d_{H}(E^{(\mathcal{PSR})}_{n},E^{(\mathcal{PSR})})=0. \tag{4.13}\]
The strong consistency of sample PSR means with the population PSR means can be converted to strong consistency of sample PSR means with the _population SR means_, as follows. For \(S\in\operatorname{Sym}^{+}(p)\) and a set \(\mathcal{E}\subset\operatorname{Sym}^{+}(p)\), we define \(d_{\mathcal{SR}}(S,\mathcal{E}):=\inf_{E\in\mathcal{E}}d_{\mathcal{SR}}(S,E)\).
**Corollary 4.21**.: _Assume that \(P\) has finite PSR-variance. Then,_
_(i) \(\lim_{n\to\infty}\sup_{S\in\mathcal{F}(E^{(\mathcal{PSR})}_{n})}d_{\mathcal{ SR}}(S,\mathcal{F}(E^{(\mathcal{PSR})}))=0\) almost surely._
_(ii) If, in addition, \(E^{(\mathcal{SR})}\subset S^{\operatorname{top}}_{p}\), then \(\lim_{n\to\infty}\sup_{S\in\mathcal{F}(E^{(\mathcal{PSR})}_{n})}d_{\mathcal{ SR}}(S,E^{(\mathcal{SR})})=0\) almost surely._
_(iii) If \(E^{(\mathcal{SR})}\subset S^{\operatorname{top}}_{p}\) and the population SR mean is unique with \(E^{(\mathcal{SR})}=\{\mu^{(\mathcal{SR})}\}\), then \(\lim_{n\to\infty}d_{\mathcal{SR}}(\mathcal{F}(E^{(\mathcal{PSR})}_{n}),\mu^{ (\mathcal{SR})})=0\) almost surely._
Note that in establishing a strong consistency property of \(E^{(\mathcal{PSR})}_{n}\) with respect to population (partial) SR means, we assumed only that the _population_ mean set \(E^{(\mathcal{PSR})}\) is unique up to the action of \(\mathcal{G}(p)\), not that the _sample_ mean sets \(E^{(\mathcal{PSR})}_{n}\) have this uniqueness property. We also did not assume that \(E^{(\mathcal{PSR})}_{n}\subset M^{\operatorname{top}}_{p}\).
We next establish a central limit theorem for our estimator \(E^{(\mathcal{PSR})}_{n}\). Our strategy is to closely follow the arguments in Bhattacharya and Patrangenaru (2005); Huckemann (2011a); Bhattacharya and Lin (2017); Eltzner et al. (2021), for deriving central limit theorems for (generalized) Frechet means on a Riemannian manifold. In particular, our central limit theorem is expressed in terms of charts and the asymptotic distributions of "linearized" estimators.
Our parameter space of interest \(M(p)=SO(p)\times\operatorname{Diag}^{+}(p)\) is a Riemannian manifold of dimension \(d:=\frac{(p-1)p}{2}+p\). As defined in Section 2.1, the tangent space at \((U,D)\in M(p)\) is \(T_{(U,D)}M(p)=\{(AU,LD):A\in\mathfrak{so}(p),L\in\operatorname{Diag}(p)\}\), which can be canonically identified with \(\mathfrak{so}(p)\oplus\operatorname{Diag}(p)\), a vector space of dimension \(d\).
At \((U,D)\in M(p)\), we use the local chart \((\mathcal{U}_{(U,D)},\tilde{\varphi}_{(U,D)})\), where
\[\mathcal{U}_{(U,D)}=\{(V,\Lambda)\in M(p):\|\mathrm{Log}(VU^{T})\|_{F}<\pi\},\]
and where \(\tilde{\varphi}_{(U,D)}:\mathcal{U}_{(U,D)}\to\mathfrak{so}(p)\oplus \operatorname{Diag}(p)\) is defined by
\[\tilde{\varphi}_{(U,D)}(V,\Lambda)=(\mathrm{Log}(VU^{T}),\mathrm{Log}( \Lambda D^{-1})). \tag{4.14}\]
Observe that \(\tilde{\varphi}_{(U,D)}^{-1}(A,L)=(\mathrm{Exp}(A)U,\mathrm{Exp}(L)D)\). The maps \(\tilde{\varphi}_{(U,D)}\) and \(\tilde{\varphi}_{(U,D)}^{-1}\) are the Riemannian logarithm and exponential maps to (and from) the tangent
space \(T_{(U,D)}M(p)\), composed with the right-translation isomorphism between \(T_{(U,D)}M(p)\) and \(T_{(I,I)}M(p)=\mathfrak{so}(p)\oplus\mathrm{Diag}(p)\).
We also write the elements of \(\mathfrak{so}(p)\oplus\mathrm{Diag}(p)\) in a coordinate-wise vector form. For each \((A,L)\in\mathfrak{so}(p)\oplus\mathrm{Diag}(p)\), define a suitable vectorization operator \(\mathrm{vec}\),
\[\mathrm{vec}(A,L):=\begin{pmatrix}\sqrt{k}\ x_{SO}(A)\\ x_{\mathcal{D}}(L)\end{pmatrix}\in\mathbb{R}^{d}, \tag{4.15}\]
where \(x_{SO}(A)\in\mathbb{R}^{(p-1)p/2}\) consists of the upper triangular entries of \(A\) (in the lexicographical ordering) and \(x_{\mathcal{D}}(L)=(L_{11},\ldots,L_{pp})^{T}\in\mathbb{R}^{p}\) consists of the diagonal entries of \(L\). The inverse vectorization operator \(\mathrm{vec}^{-1}\) is well-defined as well. We use the notation \(\phi_{(U,D)}(\cdot,\cdot):=\mathrm{vec}\circ\tilde{\varphi}_{(U,D)}(\cdot,\cdot)\) and \(\phi_{(U,D)}^{-1}(\cdot):=\tilde{\varphi}_{(U,D)}^{-1}\circ\mathrm{vec}^{-1}(\cdot)\).
Assume the following.
(A1) The probability measure \(P\) induced by \(X\) on \(\mathrm{Sym}^{+}(p)\) is absolutely continuous with respect to volume measure, and has finite PSR-variance.
(A2) \(E^{(\mathcal{PSR})}\) is unique up to the action of \(\mathcal{G}(p)\). With probability \(1\), so is \(E_{n}^{(\mathcal{PSR})}\) (for every \(n\)).
(A3) For some \(m_{0}\in E^{(\mathcal{PSR})}\), \(P(d_{\mathcal{PSR}}(X,m_{0})<r^{\prime}_{cx})=1\).
The absolute continuity assumption (A1) ensures that any volume-zero (Lebesgue-measurable) subset of \(\mathrm{Sym}^{+}(p)\) has probability zero. In particular, \(P(X\in S_{p}^{\mathrm{top}})=1-P(X\in S_{p}^{\mathrm{lwr}})=1\). This fact greatly simplifies our theoretical development.
The uniqueness assumption (A2) ensures that \(E_{n}^{(\mathcal{PSR})}\) converges almost surely to \(E^{(\mathcal{PSR})}\) with respect to the Hausdorff distance (by Corollary 4.20). Therefore, for any \(m_{0}\in E^{(\mathcal{PSR})}\), there exists a sequence \(m_{n}\in E_{n}^{(\mathcal{PSR})}\) satisfying \(d_{M}(m_{n},m_{0})\to 0\) (or, equivalently, \(\phi_{m_{0}}(m_{n})\to\phi_{m_{0}}(m_{0})=0\)) as \(n\to\infty\) almost surely. Assumption (A2) also guarantees that if (A3) is true for some PSR mean \(m_{0}\in E^{(\mathcal{PSR})}\) then it is true for any other PSR mean in \(E^{(\mathcal{PSR})}\).
The radius \(r^{\prime}_{cx}=\sqrt{k}\beta_{\mathcal{G}(p)}/4\) in Assumption (A3) previously appeared in Theorem 4.16, where the bounded-support assumption was used to ensure uniqueness of one element of a minimal pair (see Definition 2.2) when the other element is fixed. Similarly, assumptions (A1) and (A3) ensure that, with probability \(1\), for each \(X_{i}\) there exists a unique \(m_{i}\in\mathcal{F}^{-1}(X_{i})\) such that \(m_{i}\in B_{r^{\prime}_{cx}}^{d_{M}}(m_{0})\), a radius-\(r^{\prime}_{cx}\) ball in \(M(p)\) centered at \(m\). A stronger version of this fact will be used (in the proof of Theorem 4.22, to be given shortly) to rewrite the objective function \(f_{n}^{(\mathcal{PSR})}\) involving \(d_{\mathcal{PSR}}\) as a Frechet objective function \(m\mapsto\frac{1}{n}\sum_{i=1}^{n}d_{M}^{2}(m_{i},m)\), with probability \(1\).
In addition, the bounded support condition (A3) ensures that with probability \(1\) the function \(d_{\mathcal{PSR}}^{2}(X,\cdot)\) is smooth (\(C^{\infty}\)) and convex on a convex set. Using this fact and geometric results from given in Afsari (2011) and Afsari, Tron and Vidal (2013), we show in the proof that the gradient
\[\mathrm{grad}_{x}\,d_{\mathcal{PSR}}^{2}(X,\phi_{m_{0}}^{-1}(x)):=\left(\frac{ \partial}{\partial x_{i}}d_{\mathcal{PSR}}^{2}(X,\phi_{m_{0}}^{-1}(x))\right)_ {i=1,\ldots,d}\]
at \(x=0\) has mean zero, and has a finite covariance matrix \(\Sigma_{P}:=\mathrm{Cov}(\mathrm{grad}_{x}\,d_{\mathcal{PSR}}^{2}(X,\phi_{m_{0}}^{ -1}(0)))\). (We conjecture that (A1) guarantees that \(\Sigma_{P}\) is also positive-definite.) Likewise, as we will see in the proof, the differentiability and (strict) convexity of \(d_{\mathcal{PSR}}^{2}(X,\phi_{m_{0}}^{-1}(\cdot))\) guarantee that the expectation of the Hessian \(H_{P}(x):=E\left(\mathbf{H}d_{\mathcal{PSR}}^{2}(X,\phi_{m_{0}}^{-1}(x))\right)\) exists and is positive definite at \(x=0\). Write \(H_{P}:=H_{P}(0)\).
In summary, Assumptions (A1)--(A3) enable us to use a second-order Taylor expansion for \(f_{n}^{(\mathcal{PSR})}\), to which the classical central limit theorem and the law of large numbers are applied. Such an approach was used in Bhattacharya and Patrangenaru (2005) and Huckemann (2011a). In particular, our proof for part (b) of Theorem 4.22 (in Appendix B.5.2) closely follows the proof of Theorem 6 of Huckemann (2011a).
**Theorem 4.22**.: _Suppose that Assumptions (A1)--(A3) are satisfied, and let \(m_{0}\in E^{(\mathcal{PSR})}\) be a PSR mean. Let \(\{m_{n}^{\prime}\in E_{n}^{(\mathcal{PSR})}\}\) be any choice of sample PSR mean sequence. For each \(n\), let \(m_{n}\in\mathrm{argmin}_{m\in\mathcal{G}(p)\cdot m_{n}^{\prime}}\,d_{M}(m,m_{0})\). Then, with probability 1, the sequence \(\{m_{n}\}\) is determined uniquely. Furthermore,_
1. \(m_{n}\to m_{0}\) _almost surely as_ \(n\to\infty\)_, and_
2. \(\sqrt{n}\phi_{m_{0}}(m_{n})\to N_{d}(0,H_{P}^{-1}\Sigma_{P}H_{P}^{-1})\) _in distribution as_ \(n\to\infty\)_._
Estimating the covariance matrix \((H_{P}^{-1}\Sigma_{P}H_{P}^{-1}\) in our case) of the limiting Gaussian distribution (for Riemannian manifold-valued Frechet means) is a difficult task. For general Riemannian manifold-valued Frechet means, Bhattacharya and Patrangenaru (2005) and Bhattacharya and Bhattacharya (2012) suggest using a moment estimator for \(H_{P}\) and \(\Sigma_{P}\). This however requires specifying the second derivatives of \(d_{\mathcal{PSR}}^{2}(X,\phi_{m_{0}}(\cdot))\). We note that in the literature, explicit expressions for \(H_{P}\) and \(\Sigma_{P}\) are only available for geometrically very simple manifolds, with a high degree of symmetry, such as spheres. As an instance, see Hotz and Huckemann (2015) and Section 5.3 of Bhattacharya and Bhattacharya (2012) for the cases where the data and the Frechet mean lie in the unit circle \(S^{1}\) and the more general unit sphere \(S^{d}\), respectively. Others, including Eltzner and Huckemann (2019), simply use the sample covariance matrix of \(\{\phi_{m_{n}}(m_{{}_{X_{i}}}):i=1,\ldots,n\}\) (in our notation) as an estimate of \(H_{P}^{-1}\Sigma_{P}H_{P}^{-1}\). In Section 5, we will use a bootstrap estimator of \(\mathrm{Var}(\phi_{m_{0}}(m_{n}))\), the variance of \(\phi_{m_{0}}(m_{n})\), instead of directly estimating \(H_{P}^{-1}\Sigma_{P}H_{P}^{-1}\). Out bootstrap estimator is defined as follows.
Choose a PSR mean \(m_{n}\) computed from the original sample \(\{X_{1},\ldots,X_{n}\}\). For the \(b\)th bootstrap sample (that is, a simple random sample of size \(n\) from the set \(\{X_{1},\ldots,X_{n}\}\), treated as a fixed set, with replacement), let \(m_{b}^{*}\) be the PSR mean of the bootstrap sample that is closest to \(m_{n}\). (For the purpose of defining the bootstrap estimator, we are assuming that such an \(m_{b}^{*}\) is unique.) The bootstrap estimator of \(\mathrm{Var}(\phi_{m_{0}}(m_{n}))\) is then defined to be
\[\widehat{\mathrm{Var}}_{\mathrm{boot}}(\phi_{m_{0}}(m_{n})):=\frac{1}{B}\sum_{ b=1}^{B}\phi_{m_{n}}(m_{b}^{*})\cdot(\phi_{m_{n}}(m_{b}^{*}))^{T},\]
where \(B\) is the number of bootstrap replicates.
## 5 Numerical examples
### Numerical examples of scaling-rotation means
In this subsection, we provide an example where the SR means are equivalent to the PSR means, and an example where they are not. Consider a random variable \(X\in\mathrm{Sym}^{+}(2)\),
\[X=R(\theta)\mathrm{diag}(\exp(D_{1}),\exp(D_{2}))R(\theta)^{T}, \tag{5.1}\]
where \(\theta\) follows the normal distribution with mean \(0\), standard deviation \(\sigma_{\theta}\), truncated to lie in \((-\pi,\pi)\), and independently \((D_{1},D_{2})\) follow a normal distribution with mean \((\mu_{1},\mu_{2})\) and covariance matrix \(\sigma_{D}^{2}I_{2}\). From this model, we generate two samples of size \(n=200\) with different choices of model parameters.
For each sample, a PSR mean, denoted \(\hat{m}^{\mathcal{PSR}}\), is computed using the algorithm of Section 3.3, and we also numerically compute the minimizer of \(f_{n}^{(\mathcal{SR})}\) over \(S_{p}^{\mathrm{lwr}}\), and denote it by \(\hat{M}_{\mathrm{lwr}}^{\mathcal{SR}}\). Throughout we set \(k=1\). By Theorem 3.6, if \(f_{n}^{(\mathcal{SR})}(\mathcal{F}(\hat{m}^{\mathcal{PSR}}))\leq f_{n}^{( \mathcal{SR})}(\hat{M}_{\mathrm{lwr}}^{\mathcal{SR}})\), \(\mathcal{F}(\hat{m}^{\mathcal{PSR}})\) is an SR mean, and otherwise \(\hat{M}_{\mathrm{lwr}}^{\mathcal{SR}}\) is a SR mean.
* Case I: Set \(\sigma_{\theta}=\pi/12\), \((\mu_{1},\mu_{2})=(2,0)\) and \(\sigma_{D}=0.2\). See the left panels of Fig. 1.
* Case II: Set \(\sigma_{\theta}=\pi/3\), \((\mu_{1},\mu_{2})=(1,0)\) and \(\sigma_{D}=0.2\). See the right panels of Fig. 1.
For Case I, the sample are relatively far from the lower stratum \(S_{2}^{\mathrm{lwr}}=\{cI_{2}:c>0\}\) (shown as the green axis in the top row of Fig. 1). In this particular instance, \(82\approx f_{n}^{(\mathcal{SR})}(\mathcal{F}(\hat{m}^{\mathcal{PSR}}))<f_{n}^{ (\mathcal{SR})}(\hat{M}_{\mathrm{lwr}}^{\mathcal{SR}})\approx 458\), and \(\mathcal{F}(\hat{m}^{\mathcal{PSR}})\) is an SR mean.
For Case II, \(196\approx f_{n}^{(\mathcal{SR})}(\mathcal{F}(\hat{m}^{\mathcal{PSR}}))>f_{n} ^{(\mathcal{SR})}(\hat{M}_{\mathrm{lwr}}^{\mathcal{SR}})\approx 173\), and \(\hat{M}_{\mathrm{lwr}}^{\mathcal{SR}}\) is an SR mean.
### Comparison with other geometric frameworks
In analyzing SPD matrices, the scaling-rotation (SR) framework has an advantage in interpretation as it allows describing the changes of SPD matrices in terms of rotation and scaling of the corresponding ellipsoids. For example, it is shown in Jung, Schwartzman and Groisser (2015) that only the SR framework yields interpolation curves which consist of pure rotation when the endpoints differ only by rotation, when compared to the commonly used log-Euclidean (Arsigny et al., 2007) and affine-invariant (Fletcher et al., 2004; Pennec, Fillard and Ayache, 2006) interpolation curves.
In this subsection, we illustrate situations under which averaging via the scaling-rotation framework has similar interpretive advantages over the affine-invariant mean. The affine-invariant (AI) mean \(\bar{X}^{(\mathrm{AI})}\) for a sample of SPD matrices \(X_{1},\ldots,X_{n}\in\mathrm{Sym}^{+}(p)\) is the sample Frechet mean with respect to the
_Jung et al./Averaging SPD matrices via eigen-decomposition_
affine-invariant metric \(d_{AI}\):
\[\bar{X}^{\rm(AI)}=\operatorname*{argmin}_{M\in\operatorname{Sym}^{+}(p)}\sum_{i=1 }^{n}d_{AI}^{2}(M,X_{i}),\]
where \(d_{AI}(X,Y)=\|\mathrm{Log}(X^{-1/2}YX^{-1/2})\|_{F}\). The AI mean exists and is unique for any finite sample (Pennec, Fillard and Ayache, 2006).
In numerical experiments, we randomly generated SPD matrices from the model (5.1) with parameters set as in Case I but with \(\sigma_{\theta}=\pi/6\). A sample of size \(n=200\) is plotted in Fig. 2. There, we have used two different types of "linearizations" of \(\operatorname{Sym}^{+}(2)\), as explained below.
The _Log-Euclidean coordinates_ on \(\operatorname{Sym}^{+}(2)\) are given by the three free parameters \(y_{11}\), \(y_{22}\) and \(\sqrt{2}y_{12}\) of \(Y=(y_{ij})=\mathrm{Log}(X)\in\operatorname{Sym}(2)\). Write \(\mathrm{vecd}(Y):=(y_{11},y_{22},\sqrt{2}y_{12})^{T}\in\mathbb{R}^{3}\). These coordinates are chosen so that for any two vectors \((\mathrm{vecd}(X),\mathrm{vecd}(Y))=(x,y)\), the usual inner product \(\langle x,y\rangle=x^{T}y\) corresponds to the Riemannian metric when \(X,Y\in\operatorname{Sym}(2)\) are viewed as tangent vectors in the affine-invariant framework. The left panel of Fig. 2 plots the data on the Log-Euclidean coordinates.
The _PSR coordinates_, used in the right panel of the figure for the same data, come from the coordinates defined on a tangent space of the eigen-decomposition space \(M(p)\). More precisely, given a reference point \((U,D)\in M(p)\), we use the local chart \((\mathcal{U}_{(U,D)},\phi_{(U,D)})\) defined in (4.14), followed by the vectorization via \(\mathrm{vec}\) (see (4.15)), to determine a coordinate system. To illustrate this concretely, let \(p=2\). Then \(\tilde{\varphi}_{(U,D)}(V,\Lambda)=(\mathrm{Log}(VU^{T}),\mathrm{Log}(\Lambda D ^{-1}))=:(A,L)\in\mathfrak{so}(p)\oplus\mathrm{Diag}(p)\). The first coordinate of \(x_{V,\Lambda}:=\mathrm{vec}(\phi_{(U,D)}(V,\Lambda))\in\mathbb{R}^{3}\) is the free parameter \(a_{21}\) of \(A\) (multiplied by the scale parameter \(\sqrt{k}\)), and corresponds to the rotation angle of \(VU^{T}\) in radians (scaled by \(\sqrt{k}\)). The second and last coordinates of \(x_{V,\Lambda}\) are the diagonal entries of \(L\). Multiplying by \(\sqrt{k}\) as above affords us the convenience that for any two \(x,y\), the usual inner product
Figure 2: A sample of SPD matrices (sampled from the model (5.1)) shown in the Log-Euclidean (LE) coordinates (left) and the PSR coordinates (right), overlaid with the PSR mean and AI mean. For this data set, PSR mean appears to be a better representative for the data, while the AI mean does not lie in the data-dense region. See Section 5.2 for details.
\(x^{T}y\) corresponds to the Riemannian metric \(g_{M}\) we have assumed on the tangent spaces of \(M(p)\).
When representing SPD-valued data \(X_{1},\ldots,X_{n}\in\mathrm{Sym}^{+}(2)\) in PSR coordinates, we choose the reference point \((U,D)\) to be an arbitrarily chosen PSR mean \(\hat{m}^{\mathcal{P}SR}\) of the data. Care is needed since there are multiple eigen-decompositions corresponding to each observation \(X_{i}\). For each \(X_{i}\), an eigen-decomposition \(m_{i}\in\mathcal{F}^{-1}(X_{i})\subset M(p)\) is chosen so that \(m_{i}\) has the smallest geodesic distance from \(\hat{m}^{\mathcal{P}SR}\) among all elements of \(\mathcal{F}^{-1}(X_{i})\). The right panel of Fig. 2 plots the same data as in the left panel, but in these PSR coordinates.
The AI mean and a PSR mean for this data set are also plotted in Fig. 2. It can be seen that major modes of variation in the data are well described in terms of rotation angles and scaling, while the variation appears to be highly non-linear in Log-Euclidean (LE) coordinates. As one might expect from this non-linearity, we observe that the AI mean is located far from the data, while the PSR mean appears to be a better representative of the data.
In the opposite direction, we also considered a data set sampled from an SPD-matrix log-normal distribution (Schwartzman, 2016). Note that the SPD-matrix log-normal distributions on \(\mathrm{Sym}^{+}(p)\) correspond to a multivariate normal distribution in Log-Euclidean coordinates. The data and their AI and PSR means are plotted in Fig. 3. While the AI mean is well approximated by the average of data in LE coordinates, the PSR mean (in LE coordinates) is also not far from this average. Similarly, the PSR mean is approximately the average in PSR coordinates, and the AI mean is also not far. Therefore, we may conclude that using the SR framework and PSR means is beneficial especially if variability in the sample (or in a population) is pronounced in terms of rotations, while the cost of using the SR framework is small for the log-normal case.
Figure 3: A sample of SPD matrices shown in the Log-Euclidean (LE) coordinates (left) and the PSR coordinates (right), overlaid with the PSR mean and AI mean. For this data set, PSR mean appears to be a better representative for the data, while the AI mean does not lie in the data-dense region. See Section 5.2 for details.
### An application to multivariate tensor-based morphometry
In Paquette et al. (2017), the authors compared the lateral ventricular structure in the brains of 17 pre-term and 19 full-term infant children. After an MRI scan of a subject's brain was obtained and processed through an image processing pipeline, the shape data collected at 102816 vertices on the surfaces of their left and right ventricles were mapped onto the left and right ventricles of a template brain image, after which the \(2\times 2\) Jacobian matrix \(J\) from that surface registration transformation was computed at each vertex for each subject. The deformation tensor \(X=(J^{T}J)^{1/2}\in\mathrm{Sym}^{+}(2)\) was then computed at each vertex for each subject. To summarize the structure of the data, there are 102816 vertices along the surfaces of the template ventricles, and at each vertex there are deformation tensors (\(2\times 2\) SPD matrices) from \(n_{1}=17\) pre-term and \(n_{2}=19\) full-term infants. We will call these group 1 and group 2, respectively.
One way that the authors tested for differences in ventricular shape between the two groups was by performing two-sample location tests at each vertex via use of the log-Euclidean version of Hotelling's \(T^{2}\) test statistic introduced in Lepore et al. (2008). The log-Euclidean (LE) version of the \(T^{2}\) test statistic is the squared Mahalanobis distance between the full-term and pre-term log-Euclidean sample means on the LE coordinates (defined in Section 5.2).
Similarly, one could also measure separation between groups by comparing their respective PSR means in PSR coordinates. For this two-group context, the reference point for the PSR coordinates is given by a PSR mean computed from pooled sample (with sample size \(n_{1}+n_{2}\)).
We have chosen vertex 75412 as an example to illustrate a scenario in which two groups have little separation in the LE coordinates but are well-separated in the PSR coordinates. In the top row of Figure 4, tensors from the two groups as well as the group-wise LE and PSR means are plotted in their respective coordinates. There is little visible separation between the two groups in the LE coordinates, while there is near-total separation in the PSR coordinates.
To visualize the sampling distributions of the group-wise means under the log-Euclidean and scaling-rotation frameworks, we computed 500 bootstrap sample means for each group, under both geometric frameworks. These are plotted in their respective tangent spaces in the bottom row of Figure 4. (See also Figure 5 in Appendix C, in which one can see that the (bootstrap) sampling distributions of the group-wise PSR means are approximately normal.) The nonparametric bootstrap provides an estimate of standard errors of the sample means, from which (bootstrap-approximated) parametric 95% confidence regions are obtained. For this, we assume normality, as suggested by the central limit theorem, Theorem 4.22, and obtain an approximate 95% confidence region given by \(\{x\in\mathbb{R}^{3}:x\hat{\Sigma}^{-1}x^{T}\leq\chi^{2}_{0.05,2}\}\), for each sample mean. Here, \(\hat{\Sigma}\) is the sample covariance matrix of the bootstrap (group-wise LE or PSR) sample means, and \(\chi^{2}_{0.05,2}\) is the 95% quantile of the \(\chi^{2}_{2}\) distribution. The resulting confidence regions are overlaid in the bottom row of Figure 4 as well. As in the top row, there is considerable overlap between the LE confidence regions, while there is complete separation between the two confidence regions for the group-wise
PSR sample means, especially along the direction of rotation angles. This example suggests that the scaling-rotation framework may be better at detecting group differences than other frameworks when most of the variability between the groups is due to rotation.
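A minimal sketch of this bootstrap construction, assuming a group's coordinate vectors (LE or PSR) are already available as an \(n\times 3\) array `coords` (a hypothetical name), is:

```python
# Sketch: bootstrap the sample mean of 3-dimensional coordinate vectors and
# form the chi-square ellipsoid described above (df = 3 for 3-d coordinates).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

def bootstrap_mean_region(coords, B=500, level=0.95):
    n, d = coords.shape
    boot_means = np.array([coords[rng.integers(0, n, size=n)].mean(axis=0)
                           for _ in range(B)])
    center = coords.mean(axis=0)              # the group-wise sample mean
    Sigma_hat = np.cov(boot_means, rowvar=False)
    radius2 = chi2.ppf(level, df=d)           # 95% quantile of chi^2_d
    # region: {x : (x - center) @ inv(Sigma_hat) @ (x - center) <= radius2}
    return center, Sigma_hat, radius2
```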
Figure 4: Real data example. (Top row) Observations corresponding to Group 1 (and Group 2) are shown as blue (and red, respectively) dots. The group-wise LE and PSR means are shown as the asterisks. (Bottom row) Bootstrap replications of the LE and PSR means (left and right panels, respectively) for each group, with the 95% approximate confidence regions shown as transparent ellipsoids. See Section 5.3 for details.

## 6 Discussion

We have presented the first statistical estimation methods for \(\text{Sym}^{+}(p)\) based on the scaling-rotation framework of Jung, Schwartzman and Groisser (2015). These estimation methods are intended to set the foundation for the development of scaling-rotation-framework-based statistical methods, such as testing the equality of two or more PSR means, testing for a variety of eigenvalue and eigenvector patterns of SPD matrices, and an analogue of principal component analysis for SPD-valued data. The scaling-rotation framework should also be particularly useful for diffusion tensor processing, since the eigenvectors and eigenvalues of a diffusion tensor model the principal directions and intensities of water diffusion at a given voxel, and are thus the primary objects of interest.
We recommend using the scaling-rotation estimation procedure presented here for \(p=2,3\), since the number of eigen-decompositions of an SPD matrix from \(S_{p}^{\text{top}}\) grows rapidly with \(p\). One interesting avenue for future work is to develop computational procedures for larger \(p\). Another is to develop a proper two-sample or multi-sample testing framework, along with dimension-reduction and regression methods using the eigen-decomposition spaces, and to establish asymptotic and non-asymptotic properties of these statistical methods, reflecting the structure of \(\text{Sym}^{+}(p)\) as a stratified space under eigen-decomposition.
## Appendix A Discontinuity of \(d_{\mathcal{SR}}\)
While the scaling-rotation "distance" function \(d_{\mathcal{SR}}:\text{Sym}^{+}(p)\times\text{Sym}^{+}(p)\rightarrow[0,\infty)\) is continuous when restricted to \(S_{p}^{\text{top}}\times S_{p}^{\text{top}}\), it is not so in general. Even the one-variable function \(d_{\mathcal{SR}}(\cdot,S)\), with a fixed \(S\in S_{p}^{\text{top}}\), has many discontinuities in lower strata. While it may be of interest to characterize the set of discontinuity, here we provide just an example. For \(0<\theta<\theta^{\prime}<\pi/4\), and \(\lambda>1\), let \(S=R(\theta^{\prime})\text{diag}(e^{\lambda},e^{-\lambda})R(\theta^{\prime})^ {T}\), where \(R(\theta)\) is the \(2\times 2\) rotation matrix corresponding to the counterclockwise rotation by angle \(\theta\). For \(n=1,2,\ldots\), let \(S_{n}=R(\theta)\text{diag}(e^{1/n},e^{-1/n})R(\theta)^{T}\). For every \(n\), it can be checked that \((R(\theta^{\prime}),\text{diag}(e^{\lambda},e^{-\lambda}))\in\mathcal{F}^{-1 }(S)\) and \((R(\theta),\text{diag}(e^{1/n},e^{-1/n}))\in\mathcal{F}^{-1}(S_{n})\) form a minimal pair, which implies that \(d_{\mathcal{SR}}(S_{n},S)^{2}=k(\theta^{\prime}-\theta)^{2}+2(\lambda-\frac{ 1}{n})^{2}\). On the other hand, \(\lim_{n\rightarrow\infty}S_{n}=I\) and \(d_{\mathcal{SR}}(I,S)^{2}=2\lambda^{2}\). Thus,
\[\lim_{n\rightarrow\infty}d_{\mathcal{SR}}(S_{n},S)=\{k(\theta^{\prime}-\theta )^{2}+2\lambda^{2}\}^{1/2}>(2\lambda^{2})^{1/2}=d_{\mathcal{SR}}(\lim_{n \rightarrow\infty}S_{n},S),\]
and the function \(d_{\mathcal{SR}}(\cdot,S)\) is not continuous at \(I\).
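The jump can also be seen numerically by evaluating the closed-form expressions above; the parameter values in the following sketch are arbitrary choices satisfying \(0<\theta<\theta^{\prime}<\pi/4\) and \(\lambda>1\), with \(k=1\):

```python
# Numerical illustration (not part of the proof) of the discontinuity of
# d_SR(., S) at I, using the closed-form distances derived in this example.
import numpy as np

k, theta, theta_p, lam = 1.0, 0.2, 0.5, 1.5   # arbitrary admissible values
for n in (1, 10, 100, 1000):
    d_n = np.sqrt(k * (theta_p - theta) ** 2 + 2 * (lam - 1 / n) ** 2)
    print(f"d_SR(S_{n}, S) = {d_n:.6f}")
print("limit of d_SR(S_n, S):",
      np.sqrt(k * (theta_p - theta) ** 2 + 2 * lam ** 2))
print("d_SR(I, S)           :", np.sqrt(2) * lam)   # strictly smaller
```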
## Appendix B Technical details, additional lemmas and proofs
### Proofs for Section 3
#### b.1.1 Proof of Theorem 3.5
Proof of Theorem 3.5.: Let \(Y\in E_{n}^{(\mathcal{SR})}\cap S_{p}^{\text{top}}\), let \(\tilde{Y}\in\mathcal{F}^{-1}(Y)\) be an arbitrary eigen-decomposition of \(Y\), let \(\tilde{Z}=(U,D)\in M(p)\) be arbitrary, and let \(Z=\mathcal{F}(\tilde{Z})\). Since \(Y\) has no repeated eigenvalues, it follows from (3.3) and (3.4) that
\[\sum_{i=1}^{n}d_{\mathcal{PSR}}^{2}(X_{i},\tilde{Y})=\sum_{i=1}^{n}d_{\mathcal{ SR}}^{2}(X_{i},Y)\leq\sum_{i=1}^{n}d_{\mathcal{SR}}^{2}(X_{i},Z)\leq\sum_{i=1}^{n}d_{ \mathcal{PSR}}^{2}(X_{i},\tilde{Z}),\] (B.1)
implying that \(\tilde{Y}\in E_{n}^{(\mathcal{PSR})}\). Since the case where \(E_{n}^{(\mathcal{SR})}\cap S_{p}^{\text{top}}=\emptyset\) is trivial, we have shown (a).
(b) Suppose now \(\tilde{Z}\in E_{n}^{(\mathcal{PSR})}\). Then the first and fourth sums in (B.1) must be equal, so the two inequalities must be equalities. In particular, the second and third sums are equal, so \(Z\in E_{n}^{(\mathcal{SR})}\) and hence \(\tilde{Z}\in\mathcal{F}^{-1}(E_{n}^{(\mathcal{SR})})\). This shows \(\mathcal{F}^{-1}(E_{n}^{(\mathcal{SR})})\supset E_{n}^{(\mathcal{PSR})}\), which immediately implies \(\mathcal{F}(E_{n}^{(\mathcal{PSR})})\subset E_{n}^{(\mathcal{SR})}\).
(c) Assume that \(E_{n}^{(\mathcal{SR})}\cap S_{p}^{\mathrm{top}}\neq\emptyset\) and \(E_{n}^{(\mathcal{PSR})}\subset M^{\mathrm{top}}(p)\). Then, using (b) and (a),
\[E_{n}^{(\mathcal{PSR})}=E_{n}^{(\mathcal{PSR})}\cap M^{\mathrm{top }}(p) \subset \mathcal{F}^{-1}(E_{n}^{(\mathcal{SR})})\cap M^{\mathrm{top}}(p)\] (B.2) \[= \mathcal{F}^{-1}(E_{n}^{(\mathcal{SR})})\cap\ \mathcal{F}^{-1}(S_{p}^{ \mathrm{top}})\] \[= \mathcal{F}^{-1}(E_{n}^{(\mathcal{SR})}\cap S_{p}^{\mathrm{top}})\] \[\subset E_{n}^{(\mathcal{PSR})}.\] (B.3)
Hence the inclusions in (B.2) and (B.3) are equalities.
Finally, note that (B.1) holds with the finite summation replaced by integration with respect to the probability measure \(P\), provided that \(f^{(\mathcal{PSR})}(U,D)<\infty\) for any \((U,D)\in M(p)\), which also guarantees that \(f^{(\mathcal{SR})}(Z)<\infty\) for any \(Z\in\mathrm{Sym}^{+}(p)\). Since these conditions are satisfied by Lemma 4.6, the statements (a)-(c) hold with \(E_{n}^{(\mathcal{SR})}\) and \(E_{n}^{(\mathcal{PSR})}\) replaced by \(E^{(\mathcal{SR})}\) and \(E^{(\mathcal{PSR})}\), respectively.
#### b.1.2 Proof of Theorem 3.6
Proof of Theorem 3.6.: We give a proof for (a) and (b). Assertions (c) and (d) can be verified similarly.
For (a), consider the case where the inequality is strict, i.e., \(f^{(\mathcal{SR})}(\mathcal{F}(m^{\mathcal{PSR}}))<\min_{\Sigma\in S_{p}^{ \mathrm{lwr}}}f^{(\mathcal{SR})}(\Sigma)\). Then no scaling-rotation mean can lie in \(S_{p}^{\mathrm{lwr}}\), but since scaling-rotation means always exist, \(E^{(\mathcal{SR})}\subset S_{p}^{\mathrm{top}}\). By Theorem 3.5, \(\mathcal{F}(m^{\mathcal{PSR}})\in E^{(\mathcal{SR})}\). Now consider the situation where
\[f^{(\mathcal{SR})}(\mathcal{F}(m^{\mathcal{PSR}}))=\min_{\Sigma\in S_{p}^{ \mathrm{lwr}}}f^{(\mathcal{SR})}(\Sigma).\] (B.4)
Suppose that no scaling-rotation mean lies in \(S_{p}^{\mathrm{lwr}}\). Then \(E^{(\mathcal{SR})}\subset S_{p}^{\mathrm{top}}\) and, by Theorem 3.5, \(\mathcal{F}(m^{\mathcal{PSR}})\in E^{(\mathcal{SR})}\). Since this contradicts (B.4), there must be a scaling-rotation mean in \(S_{p}^{\mathrm{lwr}}\). Moreover, by (B.4), \(\mathcal{F}(m^{\mathcal{PSR}})\in E^{(\mathcal{SR})}\) as well.
The hypothesis of (b) implies that \(\mathcal{F}(m^{\mathcal{PSR}})\notin E^{(\mathcal{SR})}\). By the contrapositive of Theorem 3.5(b), \(E^{(\mathcal{SR})}\cap S_{p}^{\mathrm{top}}=\emptyset\).
#### b.1.3 Proof of Theorem 3.7
We need several technical lemmas. For \((U,D)\in M(p)\), define
\[\tilde{\delta}(U,D)=\inf\{d_{M}((U,D),(V,\Lambda)):(V,\Lambda)\in M(p)\setminus M ^{\mathrm{top}}(p)\},\]
where the infimum can be replaced by minimum. The minimum is achieved since \(M(p)\setminus M^{\mathrm{top}}(p)\) is closed in \(M(p)\), and as a finite-dimensional manifold, \(M(p)\) is locally compact.
**Lemma B.1**.: _For \((U,D)\in M(p)\), \(\tilde{\delta}(U,D)=\min\{d_{\mathcal{D}^{+}}(D,\Lambda):\Lambda\in\mathrm{ Diag}^{+}(p)\setminus D^{\mathrm{top}}_{+}(p)\}\), where \(D^{\mathrm{top}}_{+}(p)\) is the subset of \(\mathrm{Diag}^{+}(p)\) consisting of matrices with distinct diagonal entries._
Proof.: Write \(M^{\mathrm{lwr}}(p):=M(p)\setminus M^{\mathrm{top}}(p)\) and \(D^{\mathrm{lwr}}_{+}(p):=\mathrm{Diag}^{+}(p)\setminus D^{\mathrm{top}}_{+}(p)\). Clearly \(\inf\{d_{M}((U,D),(V,\Lambda)):(V,\Lambda)\in M^{\mathrm{lwr}}(p)\}\geq\inf\{d _{\mathcal{D}^{+}}(D,\Lambda):\Lambda\in D^{\mathrm{lwr}}_{+}(p)\}\). Conversely, if \(\Lambda\in D^{\mathrm{lwr}}_{+}(p)\), then \((U,\Lambda)\in M^{\mathrm{lwr}}(p)\). So,
\[\inf\{d_{M}((U,D),(V,\Lambda)):(V,\Lambda)\in M^{\mathrm{lwr}}(p)\} \leq\inf\{d_{M}((U,D),(U,\Lambda)):\Lambda\in D^{\mathrm{lwr}}_{+}(p)\}\] \[=\inf\{d_{\mathcal{D}^{+}}(D,\Lambda):\Lambda\in D^{\mathrm{lwr}} _{+}(p)\}\] \[=\min\{d_{\mathcal{D}^{+}}(D,\Lambda):\Lambda\in D^{\mathrm{lwr}} _{+}(p)\},\]
where the minimum is achieved since \(D^{\mathrm{lwr}}_{+}(p)\) is a closed subset of the locally compact metric space \((\mathrm{Diag}^{+}(p),d_{\mathcal{D}^{+}})\).
**Lemma B.2**.: _The function \(\tilde{\delta}(U,D)\) is constant on fibers of \(\mathcal{F}\). That is, for each \(S\in\mathrm{Sym}^{+}(p)\), the value of \(\tilde{\delta}(U,D)\) is independent of the choice of \((U,D)\in\mathcal{F}^{-1}(S)\)._

Proof.: Let \(S\in\mathrm{Sym}^{+}(p)\) and let \((U,D),(U_{1},D_{1})\in\mathcal{F}^{-1}(S)\). Then \(D_{1}=h\cdot D\) for some \(h\in\mathcal{G}(p)\). But the set \(D^{\mathrm{lwr}}_{+}(p)\) and the metric \(d_{\mathcal{D}^{+}}\) are invariant under the action of \(\mathcal{G}(p)\), defined in Section 2.3, so
\[\{d_{\mathcal{D}^{+}}(h\cdot D,\Lambda):\Lambda\in D^{\mathrm{lwr }}_{+}(p)\} =\{d_{\mathcal{D}^{+}}(h\cdot D,h\cdot\Lambda):\Lambda\in D^{ \mathrm{lwr}}_{+}(p)\}\] \[=\{d_{\mathcal{D}^{+}}(D,\Lambda):\Lambda\in D^{\mathrm{lwr}}_{+} (p)\}.\]
Hence by Lemma B.1, \(\tilde{\delta}(U_{1},D_{1})\) and \(\tilde{\delta}(U,D)\) are the infimum of the same set of real numbers.
The following lemma shows a relation between \(\delta(S)\) and \(\tilde{\delta}(U,D)\).
**Lemma B.3**.: _For any \(S\in\mathrm{Sym}^{+}(p)\), \(\delta(S)=\tilde{\delta}(U,D)\) for any \((U,D)\in\mathcal{F}^{-1}(S)\)._
Proof of Lemma b.3.: Recall that we write \(S^{\mathrm{lwr}}_{p}=\mathrm{Sym}^{+}(p)\setminus S^{\mathrm{top}}_{p}\).
\[\delta(S) =\inf\{d_{\mathcal{SR}}(S,S^{\prime}):S^{\prime}\in S^{\mathrm{ lwr}}_{p}\}\] \[=\inf\{\inf\{d_{M}((U,D),(V,\Lambda)):(U,D)\in\mathcal{F}^{-1}(S),( V,\Lambda)\in\mathcal{F}^{-1}(S^{\prime})\}:S^{\prime}\in S^{\mathrm{lwr}}_{p}\}\] \[=\inf\{d_{M}((U,D),(V,\Lambda)):(U,D)\in\mathcal{F}^{-1}(S),(V, \Lambda)\in M^{\mathrm{lwr}}(p)\}\] \[=\inf\{\inf\{d_{M}((U,D),(V,\Lambda)):(V,\Lambda)\in M^{\mathrm{ lwr}}(p)\}:(U,D)\in\mathcal{F}^{-1}(S)\}\] \[=\inf\{\tilde{\delta}(U,D):(U,D)\in\mathcal{F}^{-1}(S)\}.\]
The above and Lemma B.2 give the result.
By Lemmas B.1--B.3, we have \(\delta(S)>0\) if and only if \(S\in S^{\mathrm{top}}_{p}\).
**Lemma B.4**.: _Let \(S_{0}\in S_{p}^{\rm top}\), let \(r>0\), and write \(\bar{B}_{r}^{d_{\mathcal{SR}}}(S_{0})=\{Y\in\operatorname{Sym}^{+}(p):d_{ \mathcal{SR}}(Y,S_{0})\leq r\}\)._
1. _If_ \(S\in\bar{B}_{r}^{d_{\mathcal{S}\mathcal{R}}}(S_{0})\)_, then_ \(\delta(S)\geq\delta(S_{0})-r\)_._
2. _If_ \(r<\delta(S_{0})\)_, then_ \(\bar{B}_{r}^{d_{\mathcal{S}\mathcal{R}}}(S_{0})\subset S_{p}^{\rm top}\)_._
3. _If_ \(r<\delta(S_{0})/3\)_, then for any_ \(S,S^{\prime}\in\bar{B}_{r}^{d_{\mathcal{SR}}}(S_{0})\)_, and_ \(S_{\rm lwr}\in\operatorname{Sym}^{+}(p)\setminus S_{p}^{\rm top}\)_,_ \[d_{\mathcal{SR}}(S,S^{\prime})<d_{\mathcal{SR}}(S,S_{\rm lwr}).\]
Proof.: (a) Let \(S\in\bar{B}_{r}^{d_{\mathcal{S}\mathcal{R}}}(S_{0})\). If \((U,D)\in\mathcal{F}^{-1}(S)\) and \(\tilde{S}_{0}:=(U_{0},D_{0})\in\mathcal{F}^{-1}(S_{0})\), then by Lemmas B.1 and B.3, \(\tilde{\delta}(U,D)=\min\{d_{\mathcal{D}^{+}}(D,\Lambda):\Lambda\in D_{+}^{ \rm lwr}(p)\}\). Since \((\operatorname{Diag}^{+}(p),d_{\mathcal{D}^{+}})\) is a metric space, \(d_{\mathcal{D}^{+}}(D,\Lambda)\geq d_{\mathcal{D}^{+}}(D_{0},\Lambda)-d_{ \mathcal{D}^{+}}(D_{0},D)\) for any \(D,D_{0},\Lambda\in\operatorname{Diag}^{+}(p)\). Thus
\[\delta(S) \geq\inf\{d_{\mathcal{D}^{+}}(D_{0},\Lambda)-d_{\mathcal{D}^{+}}( D_{0},D):\Lambda\in D_{+}^{\rm lwr}(p)\}\] \[=\inf\{d_{\mathcal{D}^{+}}(D_{0},\Lambda):\Lambda\in D_{+}^{\rm lwr }(p)\}-d_{\mathcal{D}^{+}}(D_{0},D)\] \[=\delta(S_{0})-d_{\mathcal{D}^{+}}(D_{0},D)\] \[\geq\delta(S_{0})-d_{\mathcal{S}\mathcal{R}}(S_{0},S)\] \[\geq\delta(S_{0})-r.\]
(b) If \(r<\delta(S_{0})\) and \(S\in\bar{B}_{r}^{d_{\mathcal{SR}}}(S_{0})\), then by part (a), \(\delta(S)\geq\delta(S_{0})-r>0\), so \(S\in S_{p}^{\rm top}\).
(c) By part (b), since \(r<\delta(S_{0})/3<\delta(S_{0})\), \(\bar{B}:=\bar{B}_{r}^{d_{\mathcal{S}\mathcal{R}}}(S_{0})\subset S_{p}^{\rm top}\), and \(\bar{B}\) is a closed ball in the metric space \((S_{p}^{\rm top},d_{\mathcal{S}\mathcal{R}})\). Hence, for any \(S,S^{\prime}\in\bar{B}\),
\[d_{\mathcal{S}\mathcal{R}}(S,S^{\prime})\leq d_{\mathcal{S}\mathcal{R}}(S,S_{0 })+d_{\mathcal{S}\mathcal{R}}(S_{0},S^{\prime})\leq 2r<2\delta(S_{0})/3.\]
But by Lemma B.3 and part (a), for \(S_{\rm lwr}\in\operatorname{Sym}^{+}(p)\setminus S_{p}^{\rm top}\),
\[d_{\mathcal{S}\mathcal{R}}(S,S_{\rm lwr})\geq\delta(S)\geq\delta(S_{0})-r> \delta(S_{0})-\delta(S_{0})/3=2\delta(S_{0})/3.\]
Hence \(d_{\mathcal{S}\mathcal{R}}(S,S^{\prime})\leq 2\delta(S_{0})/3<d_{\mathcal{S} \mathcal{R}}(S,S_{\rm lwr})\).
The proof of Theorem 3.7 depends heavily on Lemma B.4(c).
Proof of Theorem 3.7.: The random variable \(X\) in the hypothesis of the theorem lies in \(\bar{B}:=\bar{B}_{r}^{d_{\mathcal{S}\mathcal{R}}}(S_{0})\) with probability \(1\). Thus, by Lemma B.4(c), for any \(S\in\bar{B}\) and \(S_{\rm lwr}\in\operatorname{Sym}^{+}(p)\setminus S_{p}^{\rm top}\),
\[f^{(\mathcal{S}\mathcal{R})}(S)<f^{(\mathcal{S}\mathcal{R})}(S_{\rm lwr}).\]
Hence, no element of \(\operatorname{Sym}^{+}(p)\setminus S_{p}^{\rm top}\) can be a minimizer of \(f^{(\mathcal{SR})}\). Since the set of minimizers of \(f^{(\mathcal{SR})}\) is exactly \(E^{(\mathcal{SR})}\), and \(E^{(\mathcal{SR})}\) is non-empty, \(E^{(\mathcal{SR})}\subset S_{p}^{\rm top}\).
The second part of the theorem can be shown similarly by Lemma B.4(c), but with the function \(f_{n}^{(\mathcal{SR})}(\cdot)\) defined with respect to the data \(X_{1},\ldots,X_{n}\).
### Proofs and technical details for Section 4.1
#### b.2.1 Proof of Lemma 4.1
Proof of Lemma 4.1.: (a) Define the map \(\rho:M(p)\times M(p)\to[0,\infty)\) as
\[\rho((U^{\prime},D^{\prime}),(U,D))=\min_{h\in\mathcal{G}(p)}d_{M}((U^{\prime}h ^{-1},h\cdot D^{\prime}),(U,D)).\]
Note that \(\rho((U^{\prime},D^{\prime}),(U,D))=\rho(h\cdot(U^{\prime},D^{\prime}),(U,D))\) for any \(h\in\mathcal{G}(p)\). Hence for each \((X,(U,D))\in S_{p}^{\mathrm{top}}\times M(p)\), the function \(\rho\) is constant on the set \(\mathcal{F}^{-1}(X)\times\{(U,D)\}\). Therefore the restriction of \(\rho\) to the domain \(M(p)^{\mathrm{top}}\times M(p)\) induces a function on \(S_{p}^{\mathrm{top}}\times M(p)\), which by definition is precisely our function \(d_{\mathcal{PSR}}\). (Here, \(M(p)^{\mathrm{top}}=\mathcal{F}^{-1}(S_{p}^{\mathrm{top}})\).) For each \(h\in\mathcal{G}(p)\), the function \(((U^{\prime},D^{\prime}),(U,D))\mapsto d_{M}((U^{\prime}h^{-1},h\cdot D^{ \prime}),(U,D))\) is continuous on \(M(p)\times M(p)\). Therefore \(\rho\) is also continuous on \(M(p)\times M(p)\) since it is the minimum of a finite number of continuous functions, which implies that the restriction of \(\rho\) to \(M(p)^{\mathrm{top}}\times M(p)\) is continuous. Hence the induced function \(d_{\mathcal{PSR}}\) on \(S_{p}^{\mathrm{top}}\times M(p)\) is also continuous.
(b) For any non-empty subset \(A\) of a metric space \((M,d)\), the triangle inequality implies that \(|d(A,y)-d(A,y^{\prime})|\leq d(y,y^{\prime})\) for any \(y,y^{\prime}\in M\), where \(d(A,y)=\inf_{x\in A}d(x,y)\). For any \(S\in\mathrm{Sym}^{+}(p)\), applying the above fact to the subset \(\mathcal{F}^{-1}(S)\) of the metric space \((M(p),d_{M})\), and noting that \(d_{\mathcal{PSR}}(S,m)=d_{M}(\mathcal{F}^{-1}(S),m)\), the conclusion follows.
#### b.2.2 Background work on semicontinuous functions
Recall Definition 4.2.
**Proposition B.5**.: _Let \(X\) be a topological space, let \(Y\) be a set, and let \(f:X\times Y\to\mathbb{R}\). Let \(J\subset\mathbb{R}\) be a set containing \(\mathrm{range}(f)\), and let \(g:J\to\mathbb{R}\) be a (non-strictly) increasing, uniformly continuous function._
1. _Assume that_ \(f:X\times Y\to\mathbb{R}\) _is LSC in its first variable, uniformly with respect to its second variable. Then so is_ \(g\circ f:X\times Y\to\mathbb{R}\)_._
2. _Assume that_ \(Y\) _is a topological space and that_ \(f:X\times Y\to\mathbb{R}\) _is LSC in its first variable, locally uniformly with respect to its second variable. Then so is_ \(g\circ f:X\times Y\to\mathbb{R}\)_._
Proof of Proposition b.5.: (a) Let \(x_{0}\in X\) and let \(\epsilon>0\). Since \(g\) is uniformly continuous, we may select \(\delta>0\) such that whenever \(z_{1},z_{2}\in J\) and \(|z_{1}-z_{2}|<\delta\), we have \(g(z_{2})>g(z_{1})-\epsilon\). By the hypothesis on \(f\), we may select an open neighborhood \(U\) of \(x_{0}\) such that \(f(x,y)>f(x_{0},y)-\delta\) for all \(x\in U\) and all \(y\in Y\).
Let \(x\in U\) and \(y\in Y\). Then either (i) \(f(x_{0},y)-\delta<f(x,y)\leq f(x_{0},y)\) or (ii) \(f(x,y)>f(x_{0},y)\). In case (i), \(|f(x,y)-f(x_{0},y)|<\delta\), so \(g(f(x,y))>g(f(x_{0},y))-\epsilon\). In case (ii), since \(g\) is increasing, \(g(f(x,y))\geq g(f(x_{0},y))>g(f(x_{0},y))-\epsilon\). Hence in both cases, \(g(f(x,y))>g(f(x_{0},y))-\epsilon\).
Thus \(g\circ f\) is LSC in its first variable, uniformly with respect to its second.
(b) Follows immediately from part (a) and Definition 4.2(ii).
**Corollary B.6**.: _Let \(X\) be a topological space, let \(Y\) be a set, and let \(f:X\times Y\to[0,\infty)\)._
1. _Assume that_ \(f:X\times Y\to\mathbb{R}\) _is LSC in its first variable, uniformly with respect to its second variable. Then so is_ \(\sqrt{f}\)_._
2. _Assume that_ \(Y\) _is a topological space and that_ \(f:X\times Y\to\mathbb{R}\) _is LSC in its first variable,_ locally _uniformly with respect to its second variable. Then so is_ \(\sqrt{f}\)_._
Proof of Corollary b.6.: The square-root function \([0,\infty)\to[0,\infty)\) is uniformly continuous (since \(\sqrt{x+\delta}-\sqrt{x}\leq\sqrt{\delta}\) for \(x,\delta\geq 0\)). Hence the results follow from Proposition B.5.
#### b.2.3 Background work on \(M(p)\) and \(\mathcal{F}\)
The strata of \(\operatorname{Diag}^{+}(p)\) (and the strata of \(M(p)\)) are partially ordered by identifying a stratum with the corresponding partition of \(\{1,2,\ldots,p\}\). If \(\mathcal{T}_{\mathsf{J}}\subset\operatorname{Diag}^{+}(p)\) denotes the stratum labeled by \(\mathsf{J}\), then we have the following relations (the first of which is a definition)
\[\mathcal{T}_{\mathsf{J}_{1}}\leq\mathcal{T}_{\mathsf{J}_{2}}\iff\mathsf{J}_{ 1}\leq\mathsf{J}_{2}\iff G_{\mathsf{J}_{1}}\supset G_{\mathsf{J}_{2}}.\] (B.5)
(See Groisser, Jung and Schwartzman, 2017, Section 2.2.)
In Lemma B.7 and throughout, \(B_{\delta}^{\mathcal{D}}(D)=\{\Lambda\in\operatorname{Diag}^{+}(p):d_{ \mathcal{D}^{+}}(D,\Lambda)<\delta\}\) denotes the open ball in the metric space \((\operatorname{Diag}^{+}(p),d_{\mathcal{D}^{+}})\).
**Lemma B.7**.: _(a) Every \(D\in\operatorname{Diag}^{+}(p)\) has an open neighborhood that intersects only strata that are at least as high as the stratum of \(D\). I.e., for any \(D\in\operatorname{Diag}^{+}(p)\) there is an open ball \(B_{\delta}^{\mathcal{D}}(D)\) such that_
\[\text{if $\mathcal{T}$ is a stratum of $\operatorname{Diag}^{+}(p)$ for which $\mathcal{T}\cap B_{\delta}^{\mathcal{D}}(D)\neq\emptyset$, then $\mathcal{T}\geq\mathcal{T}_{D}$;}\] (B.6)
_equivalently,_
\[\text{if $\mathsf{J}\in\operatorname{Part}(\{1,2,\ldots,p\})$ and $\mathcal{T}_{\mathsf{J}}\cap B_{\delta}^{\mathcal{D}}(D)\neq\emptyset$, then $\mathsf{J}\geq\mathsf{J}_{D}$.}\] (B.7)
_(b) There is a function \(\delta_{\mathrm{strat}}:\operatorname{Sym}^{+}(p)\to(0,\infty)\) such that for all \(S\in\operatorname{Sym}^{+}(p)\) and all \((U,D)\in\mathcal{F}^{-1}(S)\), (B.6) (equivalently, (B.7)) holds with \(\delta=\delta_{\mathrm{strat}}(S)\)._
Proof of Lemma b.7.: (a) This follows from the fact that any strict eigenvalue-inequalities holding at \(D\) persist on a small enough open neighborhood of \(D\).
(b) Let \(S\in\operatorname{Sym}^{+}(p)\), and let \(D\) be a diagonal matrix appearing in some eigendecomposition of \(S\). The set of such diagonal matrices is \(\{\pi\cdot D:\pi\in\mathcal{S}_{p}\}\). Because the action of \(\mathcal{S}_{p}\) on \(\operatorname{Diag}^{+}(p)\) is isometric, if \(\delta>0\) is such that (B.7) holds for a given \(D\), then for any \(\pi\in\mathcal{S}_{p}\), (B.7) holds with \(D\) replaced by \(\pi\cdot D\) (with the same \(\delta\)). Thus any such \(\delta\) depends only on \(S\), not on any chosen eigendecomposition.
**Definition B.8** (just notational).: For each \(S\in\mathrm{Sym}^{+}(p)\), let \(\delta_{\mathrm{strat}}(S)\) be as in Lemma B.7(b).
**Proposition B.9**.: \(\mathcal{F}\) _is a proper map (i.e. the preimage of any compact set is compact)._
Proof of Proposition b.9.: Let \(\lambda_{\max},\lambda_{\min}:\mathrm{Sym}^{+}(p)\to\mathbb{R}\) be the functions carrying \(S\in\mathrm{Sym}^{+}(p)\) to its largest and smallest eigenvalues, respectively. As is well known, these functions are continuous.
Let \(K\subset\mathrm{Sym}^{+}(p)\) be a nonempty compact set. Then \(K\) is closed, and since \(\mathcal{F}\) is continuous, \(\mathcal{F}^{-1}(K)\) is closed.
Let \(\lambda_{\max}^{K}\) (respectively \(\lambda_{\min}^{K}\)) denote the maximum (resp. minimum) value of \(\lambda_{\max}\) (resp. \(\lambda_{\min}\)) achieved on \(K\), and let \(\tilde{K}_{\mathcal{D}}=\{D\in\mathrm{Diag}^{+}(p):D_{ii}\in[\lambda_{\min}^{K },\lambda_{\max}^{K}],\ 1\leq i\leq p\}\). Note that \(\mathcal{F}^{-1}(K)\subset SO(p)\times\tilde{K}_{\mathcal{D}}\), a compact subset of \(M(p)\). Hence \(\mathcal{F}^{-1}(K)\) is a closed subset of a compact set, and is therefore compact.
The next few results are needed because \(\mathcal{F}\) is not an open map. (A map is _open_ if it carries open sets to open sets.)
**Lemma B.10** (**"Slice lemma"**).: _Let \(\Lambda\in\mathrm{Diag}^{+}(p)\), let \(\mathfrak{g}_{\Lambda}\subset\mathfrak{so}(p)\) be the Lie algebra of \(G_{\Lambda}\), and let \(\mathfrak{g}_{\Lambda}^{\perp}\subset\mathfrak{so}(p)\) be the orthogonal complement of \(\mathfrak{g}_{\Lambda}\) in \(\mathfrak{so}(p)\) (with respect to \(g_{SO(p)}\), a multiple of the Frobenius inner product). Define \(\nu_{\Lambda}:=\mathfrak{g}_{\Lambda}^{\perp}\oplus\mathfrak{d}(p)\) and \(n_{\Lambda}:=\dim(\nu_{\Lambda})=\dim(\mathfrak{g}_{\Lambda}^{\perp})+p\), and define \(\Psi:\nu_{\Lambda}\to\mathrm{Sym}^{+}(p)\) by_
\[\Psi(A,L)=e^{A}\Lambda e^{L}e^{-A}\ ;\]
_note that \(\Psi\) is \(C^{\infty}\) and that \(\Psi(0,0)=\Lambda\). On \(\mathfrak{gl}(p,\mathbb{R})\) or any of its subspaces let \(\|\ \|_{\mathrm{Fr}}\) denote the Frobenius norm; on \(\mathfrak{so}(p)\) let \(\|\ \|_{\mathfrak{so}}=\frac{1}{\sqrt{2}}\|\ \|_{\mathrm{Fr}}\) ; and on \(\nu_{\Lambda}\) let \(\|\ \|_{\tilde{g}_{e}}\) be the norm defined by \(\|(A,L)\|_{\tilde{g}_{e}}=(k\|A\|_{\mathfrak{so}}^{2}+\|L\|_{\mathrm{Fr}}^{2})^ {1/2}\)._
_There exist \(\delta_{2}>0,c>0\), and an open neighborhood \(\tilde{\mathcal{H}}_{\Lambda}\) of \((0,0)\) in \(\nu_{\Lambda}\), such that the \((\mathrm{Sym}(p),\|\ \|_{\mathrm{Fr}})\)-open ball \(B_{\delta_{2}}^{\mathrm{Frob}}(\Lambda)\) lies in \(\mathrm{Sym}^{+}(p)\) and_
1. \(\Psi|_{\tilde{\mathcal{H}}_{\Lambda}}\) _is an embedding;_
2. \(\mathcal{H}_{\Lambda}:=\Psi(\tilde{\mathcal{H}}_{\Lambda})\) _is an_ \(n_{\Lambda}\)_-dimensional submanifold of_ \(\mathrm{Sym}^{+}(p)\) _containing_ \(\Lambda\)_;_
3. \(\mathcal{H}_{\Lambda}=\mathrm{image}(\Psi)\cap B_{\delta_{2}}^{\mathrm{Frob}} (\Lambda)\);
4. _letting_ \(\Phi=\Psi|_{\tilde{\mathcal{H}}_{\Lambda}}\)_, viewed as a map_ \(\tilde{\mathcal{H}}_{\Lambda}\to\mathcal{H}_{\Lambda}\)_,_ \[\|\Phi^{-1}(S^{\prime})\|_{\tilde{g}_{e}}\leq c\|S^{\prime}-\Lambda\|_{ \mathrm{Fr}}\quad\text{for all $S^{\prime}\in\mathcal{H}_{\Lambda}$ };\] (B.8) _and_
5. _for every_ \(S^{\prime}\in B_{\delta_{2}}^{\mathrm{Frob}}(\Lambda)\subset\mathrm{Sym}^{+}(p)\)_, there exist_ \(R\in G_{\Lambda}^{0}\)_,_ \(A\in\mathfrak{g}_{\Lambda}^{\perp}\)_, and_
\(L\in\mathfrak{d}(p)\) such that_
\[S^{\prime} = R\,\Psi(A,L)\,R^{T},\] \[\|A\|_{\mathfrak{so}} = d_{SO}(e^{A},I),\ \ \text{and}\] \[\|(A,L)\|_{\tilde{g}_{e}} \leq c\|S^{\prime}-\Lambda\|_{\mathrm{Fr}}.\]
Proof of Lemma b.10.: For any \(p\times p\) symmetric matrix \(S\) and antisymmetric matrix \(A\), the commutator \([S,A]\) is antisymmetric. Hence for the diagonal matrix \(\Lambda\), the map \(\mathfrak{gl}(p,\mathbb{R})\rightarrow\mathfrak{gl}(p,\mathbb{R})\) defined by \(A\mapsto[\Lambda,A]\) restricts to a linear map \(\mathrm{ad}_{\Lambda}:\mathfrak{so}(p)\rightarrow\mathrm{Sym}(p)\). Recall that the subalgebra \(\mathfrak{g}_{\Lambda}\subset\mathfrak{so}(p)\) consists precisely of those elements of \(\mathfrak{so}(p)\) that commute with \(\Lambda\). Thus \(\mathfrak{g}_{\Lambda}=\ker(\mathrm{ad}_{\Lambda})\), and the further-restricted map \(\mathrm{ad}^{\prime}_{\Lambda}:=\mathrm{ad}_{\Lambda}|_{\mathfrak{g}_{ \Lambda}^{\perp}}\) is injective.
The derivative of \(\Psi\) at \((0,0)\) is the linear map \(d\Psi|_{(0,0)}:\nu_{\Lambda}\rightarrow\mathrm{Sym}(p)\) given by
\[d\Psi|_{(0,0)}(A,L)=[A,\Lambda]+\Lambda L=-\mathrm{ad}^{\prime}_{\Lambda}(A)+ \Lambda L.\] (B.9)
Let \(A\in\mathfrak{g}_{\Lambda}^{\perp}\) and \(L\in\mathfrak{d}(p)\). Since the diagonal entries of \(A\) are all zero, so are the diagonal entries of \(A\Lambda,\ \Lambda A\), and \([A,\Lambda]\). Hence \([A,\Lambda]\) is Frobenius-orthogonal to the diagonal matrix \(\Lambda L\). Thus if \(d\Psi|_{(0,0)}(A,L)=0\), equation (B.9) implies that \(\mathrm{ad}^{\prime}_{\Lambda}(A)=0\) and \(\Lambda L=0\). Since \(\mathrm{ad}^{\prime}_{\Lambda}\) is injective and \(\Lambda\) is invertible, the latter pair of equations implies \(A=0\) and \(L=0\). Thus \(d\Psi|_{(0,0)}\) is injective.
Since \(\Psi\) is continuously differentiable and \(d\Psi|_{(0,0)}\) is injective, and \(\mathrm{Sym}^{+}(p)\) is an open subset of the vector space \(\mathrm{Sym}(p)\), a standard application of the Inverse Function Theorem implies the existence of \(\delta_{2},c\), and \(\tilde{\mathcal{H}}_{\Lambda}\) for which properties (a)-(d) hold and for which \(B_{\delta_{2}}^{\mathrm{Frob}}(\Lambda)\subset\mathrm{Sym}^{+}(p)\). Note that, modulo the value of \(c\), conclusion (d) does not depend on our choices of norms, since all norms on a finite-dimensional vector space are equivalent.
For conclusion (e), let \(S^{\prime}\in B_{\delta_{2}}^{\mathrm{Frob}}(\Lambda)\) and let \((U,D)\in\mathcal{F}^{-1}(S^{\prime})\). Since \(G_{\Lambda}^{0}\) is compact, there exists an element \(R\in G_{\Lambda}^{0}\) achieving the \(d_{SO}\)-distance from \(U\) to \(G_{\Lambda}^{0}\). Since the Riemannian exponential map \(\exp_{R}:T_{R}(SO(p))\to SO(p)\) is surjective, and the tangent space \(T_{R}(G_{\Lambda}^{0})\) is \(\{RA\in\mathfrak{gl}(p,\mathbb{R}):A\in\mathfrak{g}_{\Lambda}\}\), the minimal-distance condition (together with our choice of inner product on \(\mathfrak{so}(p)\)) implies that \(U=Re^{A}\) for some \(A\in\mathfrak{g}_{\Lambda}^{\perp}\) with \(\|A\|=d_{SO}(e^{A},I)\). Letting \(L=\log(D\Lambda^{-1})\), we then have \(S^{\prime}=Re^{A}\Lambda e^{L}e^{-A}R^{-1}=R\Psi(A,L)R^{-1}\). Since the Frobenius norm on \(\mathrm{Sym}(p)\) is invariant under the action of \(SO(p)\) (the map \((R,S)\mapsto RSR^{T}\)),
\[\delta_{2}\ >\ \|S^{\prime}-\Lambda\|_{\mathrm{Fr}} = \|R\Psi(A,L)R^{-1}-\Lambda\|_{\mathrm{Fr}}\] (B.10) \[= \|\Psi(A,L)-R^{-1}\Lambda R\|_{\mathrm{Fr}}\] \[= \|\Psi(A,L)-\Lambda\|_{\mathrm{Fr}}\ \ \ \ \ (\text{since}\ R\in G_{ \Lambda}).\]
Thus \(\Psi(A,L)\in B_{\delta_{2}}^{\mathrm{Frob}}(\Lambda)\cap\mathrm{image}(\Psi)= \mathcal{H}_{\Lambda}\), and \((A,L)=\Phi^{-1}(\Psi(A,L))\). Hence (B.8) and (B.10) imply that
\[\|(A,L)\|_{\tilde{g}_{e}}\leq c\|\Psi(A,L)-\Lambda\|_{\mathrm{Fr}}=c\|S^{\prime}- \Lambda\|_{\mathrm{Fr}}.\qed\]
_Remark B.11_.: The geometric significance of the space \(\nu_{\Lambda}\) in Lemma B.10 is the following. The manifold \(M(p)=SO(p)\times\mathrm{Diag}^{+}(p)\) is a Lie group with identity element \(e=(I,I)\) and Lie algebra \(T_{e}(M(p))=\mathfrak{so}(p)\oplus\mathfrak{d}(p)\). For any \(S\in\mathrm{Sym}^{+}(p)\) and \((V,\Lambda)\in\mathcal{F}^{-1}(S)\), let \(\nu_{(V,\Lambda)}(\mathcal{F}^{-1}(S))\) be the normal space to the fiber \(\mathcal{F}^{-1}(S)\) at \((V,\Lambda)\)--i.e. the orthogonal complement of \(T_{(V,\Lambda)}(\mathcal{F}^{-1}(S))\subset T_{(V,\Lambda)}M(p)\) w.r.t. \(\tilde{g}_{(V,\Lambda)}\). The space \(\nu_{\Lambda}\subset\mathfrak{so}(p)\oplus\mathfrak{d}(p)=T_{e}(M(p))\) in Lemma B.10 is simply the image of \(\nu_{(V,\Lambda)}(\mathcal{F}^{-1}(S))\) under the map \(T_{(V,\Lambda)}(M(p))\to T_{e}(M(p))\) induced by left-translation by \((V^{-1},\Lambda^{-1})\).
_Notation B.12_.: Given any metric space \((X,d_{X})\), any \(Y\subset X\), and any \(\epsilon>0\), we let \(N_{\epsilon}(Y)\) denote the \(\epsilon\)-neighborhood of \(Y\) in \(X\):
\[N_{\epsilon}(Y)=\{x\in X:d_{X}(x,Y)<\epsilon\}\]
It is easily seen that
\[N_{\epsilon}(Y)=\bigcup_{y\in Y}B_{\epsilon}^{X}(y).\] (B.11)
**Corollary B.13**.: _Let \(S\in\mathrm{Sym}^{+}(p)\), let \((V,\Lambda)\in\mathcal{F}^{-1}(S)\), and let \(\mathcal{C}(V,\Lambda)\) denote the connected component of \(\mathcal{F}^{-1}(S)\) containing \((V,\Lambda)\). Let \(VG_{\Lambda}^{0}\) denote the set \(\{VR:R\in G_{\Lambda}^{0}\}\)._
_There exist \(\delta_{2}=\delta_{2}(\Lambda)>0\) and \(c_{1}=c_{1}(\Lambda)>0\) (depending only on \(\Lambda\), not \(V\)) such that for all \(\delta\in(0,\delta_{2}]\),_
\[\underbrace{B_{\delta}^{\mathrm{Frob}}(S)}_{\text{in $\mathrm{Sym}^{+}(p)$}} \subset F\big{(}\bigcup_{R\in G_{\Lambda}^{0}}\big{(}B_{c\delta/\sqrt{k}}^{ SO}(VR)\times B_{c\delta}^{\mathcal{D}}(\Lambda)\big{)}\ \big{)}\] (B.12) \[= F\big{(}\underbrace{N_{c\delta/\sqrt{k}}(VG_{\Lambda}^{0})}_{ \text{in $SO(p)$}}\times B_{c\delta}^{\mathcal{D}}(\Lambda)\big{)}\] (B.13) \[\subset F\big{(}\bigcup_{\tilde{S}\in\mathcal{C}(V,\Lambda)}B_{c_{1} \delta}^{d_{M}}(\tilde{S})\ \big{)}\] (B.14) \[= F(\underbrace{N_{c_{1}\delta}(\mathcal{C}(V,\Lambda))}_{\text{ in $M(p)$}})\] (B.15)
Proof of Corollary b.13.: Let \(\mathfrak{g}_{\Lambda}^{\perp},\delta_{2},c\), and all relevant norms be as in Lemma B.10, let \(c_{1}=c\sqrt{2}\), and let \(\delta\in(0,\delta_{2}]\).
We will prove (B.12) last. First, the equality (B.15) follows from (B.11), as does the equality
\[N_{\epsilon}(VG_{\Lambda}^{0})=\bigcup_{U\in VG_{\Lambda}^{0}}B_{\epsilon}^{SO }(U)=\bigcup_{R\in G_{\Lambda}^{0}}B_{\epsilon}^{SO}(VR)\] (B.16)
for any \(\epsilon>0\). But (B.16) implies that for any \(\epsilon,\epsilon^{\prime}>0\),
\[N_{\epsilon}(VG_{\Lambda}^{0})\times B_{\epsilon^{\prime}}^{\mathcal{D}}( \Lambda)=\big{(}\bigcup_{R\in G_{\Lambda}^{0}}B_{\epsilon}^{SO}(VR)\ \big{)}\times B_{\epsilon^{\prime}}^{\mathcal{D}}(\Lambda)=\bigcup_{R\in G_{ \Lambda}^{0}}\big{(}B_{\epsilon}^{SO}(VR)\times B_{\epsilon^{\prime}}^{ \mathcal{D}}(\Lambda)\big{)},\]
yielding the equality (B.13).
Next, \(\mathcal{C}(V,\Lambda)=\{(VR,\Lambda):R\in G_{\Lambda}^{0}\}\) (See Groisser, Jung and Schwartzman, 2021, Appendix A), and for any \(R\in G_{\Lambda}^{0}\) and \((U,D)\in B^{SO}_{c\delta/\sqrt{k}}(VR)\times B^{\mathcal{D}}_{c\delta}(\Lambda)\),

\[d_{M}((U,D),(VR,\Lambda))^{2}=kd_{SO}(U,VR)^{2}+d_{\mathcal{D}}(D,\Lambda)^{2}<2(c\delta)^{2}=(c_{1}\delta)^{2}.\]

Thus \(B^{SO}_{c\delta/\sqrt{k}}(VR)\times B^{\mathcal{D}}_{c\delta}(\Lambda)\ \subset\ B^{d_{M}}_{c_{1}\delta}(VR,\Lambda)\). Hence the RHS of (B.12) (and therefore the equal RHS of (B.13)) is contained in the RHS of (B.14).
It remains only to establish the inclusion (B.12). Let \(S^{\prime}\in B^{\mathrm{Frob}}_{\delta}(S)\). Then
\[\delta>\|S^{\prime}-S\|_{\mathrm{Fr}}=\|S^{\prime}-V\Lambda V^{T}\|_{\mathrm{ Fr}}=\|V^{-1}S^{\prime}V-\Lambda\|_{\mathrm{Fr}},\]
so \(S^{\prime\prime}:=V^{-1}S^{\prime}V\in B^{\mathrm{Frob}}_{\delta}(\Lambda)\). By Lemma B.10(e), there exist \(R\in G_{\Lambda}^{0}\), \(A\in\mathfrak{g}_{\Lambda}^{\perp}\) and
\(L\in\mathfrak{d}(p)\) such that \(S^{\prime\prime}=Re^{A}\Lambda e^{L}e^{-A}R^{T}\), \(\|A\|_{\mathfrak{so}}=d_{SO}(e^{A},I)\), and
\[(k\|A\|_{\mathfrak{so}}^{2}+\|L\|_{\mathrm{Fr}}^{2})^{1/2}=\|(A,L)\|_{\tilde{ g}_{e}}\leq c\|e^{A}\Lambda e^{L}e^{-A}-\Lambda\|_{\mathrm{Fr}}<c\delta.\]
Then \(S^{\prime}=VS^{\prime\prime}V^{T}=VRe^{A}D(VRe^{A})^{T}=F(VRe^{A},D)\), where \(D=\Lambda e^{L}\). Since \(d_{SO}(VRe^{A},VR)=d_{SO}(e^{A},I)=\|A\|_{\mathfrak{so}}<c\delta/\sqrt{k}\) and \(d_{\mathcal{D}}(D,\Lambda)=d_{\mathcal{D}}(\Lambda e^{L},\Lambda)=\|L\|_{ \mathrm{Fr}}<c\delta\), the pair \((VRe^{A},D)\) lies in \(B^{SO}_{c\delta/\sqrt{k}}(VR)\times B^{\mathcal{D}}_{c\delta}(\Lambda)\).
Hence \(S^{\prime}\in F\big{(}B^{SO}_{c\delta/\sqrt{k}}(VR)\times B^{\mathcal{D}}_{c \delta}(\Lambda)\big{)}\), establishing (B.12).
#### b.2.4 Proof of Lemma 4.4
Since the eigenvector and eigenvalue matrices \((U,D)\in M(p)\) have distinct roles in the proof of Lemma 4.4, we restate the lemma with a different notation:
**Lemma B.14** (Lemma 4.4).: _Let \(K\subset\mathrm{Sym}^{+}(p)\) be a compact set. Let \(\epsilon>0\) and let \(S\in\mathrm{Sym}^{+}(p)\). There exists \(\delta_{1}=\delta_{1}(S,K,\epsilon)>0\) such that for all \(S_{0}\in K\), all \(\tilde{S}_{0}\in\mathcal{F}^{-1}(S_{0})\), all \((V,\Lambda)\in\mathcal{F}^{-1}(S)\), and all \(S^{\prime}\in\mathcal{F}\big{(}B^{d_{M}}_{\delta_{1}}((V,\Lambda))\big{)}\),_
\[d_{\mathcal{SR}}(S^{\prime},S_{0})^{2}>d_{\mathcal{SR}}(S,S_{0})^{2}-\epsilon \tag{4.2}\]
_and_
\[d_{\mathcal{PSR}}(S^{\prime},\tilde{S}_{0})^{2}>d_{\mathcal{PSR}}(S,\tilde{S}_ {0})^{2}-\epsilon. \tag{4.3}\]
Proof of Lemma 4.4.: For any \(A\subset\mathrm{Sym}^{+}(p)\), let \(\tilde{A}_{\mathcal{D}}\) denote the image of \(\mathcal{F}^{-1}(A)\) under the natural projection \(M(p)\to\mathrm{Diag}^{+}(p)\). (Thus \(\tilde{A}_{\mathcal{D}}\) is the set of diagonal matrices occurring in eigendecompositions of elements of \(A\).) We will need this only when \(A\) is either \(K\) or a one-element set. In the latter case, for \(Y\in\mathrm{Sym}^{+}(p)\) we write \(\tilde{Y}_{\mathcal{D}}\) for \(\{\tilde{Y}\}_{\mathcal{D}}\).
For each \(h\in\mathcal{G}(p)\) the function \(\mathrm{Diag}^{+}(p)\times\mathrm{Diag}^{+}(p)\to\mathbb{R}\) given by \((D_{1},D_{2})\mapsto d_{\mathcal{D}}(D_{1},\pi_{h}\cdot D_{2})^{2}\) is locally uniformly continuous. Hence, for each \((\Lambda,h,D_{0})\in\tilde{S}_{\mathcal{D}}\times\mathcal{G}(p)\times\tilde{K}_ {\mathcal{D}}\) there are numbers \(\tilde{\delta}_{3}(\Lambda,h;D_{0})\in(0,\delta_{\mathrm{strat}}(S)]\) and \(\tilde{\delta}_{4}(\Lambda,h;D_{0})>0\) such that for all \(\Lambda^{\prime}\in B^{\mathcal{D}}_{\tilde{\delta}_{3}(\Lambda,h;D_{0})}(\Lambda)\) and \(D^{\prime}_{0}\in B^{\mathcal{D}}_{\tilde{\delta}_{4}(\Lambda,h;D_{0})}(D_{0})\),
\[d_{\mathcal{D}}(\Lambda^{\prime},\pi_{h}\cdot D_{0}^{\prime})^{2}>d_{\mathcal{D}}( \Lambda,\pi_{h}\cdot D_{0}^{\prime})^{2}-\epsilon/2.\] (B.17)
Choose such numbers \(\tilde{\delta}_{3}(\Lambda,h;D_{0})\), \(\tilde{\delta}_{4}(\Lambda,h;D_{0})\) for every \((\Lambda,h,D_{0})\in\tilde{S}_{\mathcal{D}}\times\mathcal{G}(p)\times\tilde{ K}_{\mathcal{D}}\).
Since \(\mathcal{G}(p)\) is finite, and the set \(\tilde{Y}_{\mathcal{D}}\) is finite for every \(Y\in\mathrm{Sym}^{+}(p)\), given any \(S_{0}\in K\) we may choose \(\delta_{3}(S_{0}),\delta_{4}(S_{0})>0\) such that (B.17) holds simultaneously for all \((\Lambda,h,D_{0})\in\tilde{S}_{\mathcal{D}}\times\mathcal{G}(p)\times(\tilde{ S}_{0})_{\mathcal{D}}\), \(\Lambda^{\prime}\in B^{\mathcal{D}}_{\delta_{3}(S_{0})}(\Lambda)\), and \(D_{0}^{\prime}\in B^{\mathcal{D}}_{\delta_{4}(S_{0})}(D_{0})\). (The numbers \(\delta_{3}(S_{0}),\delta_{4}(S_{0})\) depend on \(S\) and \(\epsilon\) as well, but \(S\) and \(\epsilon\) were fixed in the hypotheses of the lemma.) Without loss of generality, we impose the additional restriction \(\delta_{3}(S_{0})\leq\delta_{\mathrm{strat}}(S)\).
By Proposition 3.5 of Groisser, Jung and Schwartzman (2021), for any \((V^{\prime},\Lambda^{\prime}),(U_{0}^{\prime},D_{0}^{\prime})\in M(p)\),
\[d_{\mathcal{SR}}(F(V^{\prime},\Lambda^{\prime}),F(U_{0}^{\prime},D_{0}^{\prime }))^{2}=\min_{h\in\mathcal{G}(p)}\left\{k\,\hat{d}_{h}\big{(}(V^{\prime}, \Lambda^{\prime}),(U_{0}^{\prime},D_{0}^{\prime})\big{)}^{2}+d_{\mathcal{D}}( \Lambda^{\prime},\pi_{h}\cdot D_{0}^{\prime})^{2}\right\},\] (B.18)
where
\[\hat{d}_{h}\big{(}(V^{\prime},\Lambda^{\prime}),(U_{0}^{\prime},D_{0}^{\prime })\big{)}=\min_{R_{1}\in G^{0}_{\Lambda^{\prime}},R_{2}\in G^{0}_{D_{0}^{ \prime}}}\left\{d_{SO}(V^{\prime}R_{1},U_{0}^{\prime}R_{2}h^{-1})\right\}\, \leq\,\mathrm{diam}(SO(p)).\] (B.19)
Similarly,
\[d_{\mathcal{PSR}}(F(V^{\prime},\Lambda^{\prime}),(U_{0}^{\prime},D_{0}^{\prime }))^{2}=\min_{h\in\mathcal{G}(p)}\left\{k\,\hat{\hat{d}}_{h}\big{(}(V^{\prime },\Lambda^{\prime}),(U_{0}^{\prime},D_{0}^{\prime})\big{)}^{2}+d_{\mathcal{D}} (\Lambda^{\prime},\pi_{h}\cdot D_{0}^{\prime})^{2}\right\},\] (B.20)
where
\[\hat{\hat{d}}_{h}\big{(}(V^{\prime},\Lambda^{\prime}),(U_{0}^{\prime},D_{0}^{ \prime})\big{)}=\min_{R_{1}\in G^{0}_{\Lambda^{\prime}}}\left\{d_{SO}(V^{ \prime}R_{1},U_{0}^{\prime}h^{-1})\right\}\,\leq\,\mathrm{diam}(SO(p)).\] (B.21)
Let
\[\delta_{2}=\min\left\{\mathrm{diam}(SO(p)),\frac{\epsilon}{6k\,\mathrm{diam}( SO(p))}\right\}.\]
By definition of \(\delta_{\mathrm{strat}}(S)\), for all \((V,\Lambda)\in\mathcal{F}^{-1}(S)\) and all \(\Lambda^{\prime}\in B^{\mathcal{D}}_{\delta_{\mathrm{strat}}(S)}(\Lambda)\) we have \(\mathsf{J}_{\Lambda^{\prime}}\geq\mathsf{J}_{\Lambda}\), implying \(G^{0}_{\Lambda^{\prime}}\subset G^{0}_{\Lambda}.\) Hence for all \(S_{0}\in K\), \((U_{0},D_{0})\in\mathcal{F}^{-1}(S_{0})\), \((U_{0}^{\prime},D_{0}^{\prime})\in B^{SO}_{\delta_{2}}(U_{0})\times B^{ \mathcal{D}}_{\delta_{4}(S_{0})}(D_{0})\), \((V,\Lambda)\in\mathcal{F}^{-1}(S)\), \((V^{\prime},\Lambda^{\prime})\in B^{SO}_{\delta_{2}}(V)\times B^{\mathcal{D}}_ {\delta_{3}(S_{0})}(\Lambda)\), and \(h\in\mathcal{G}(p)\), we have
\[\min_{R_{1}\in G^{0}_{\Lambda^{\prime}},R_{2}\in G^{0}_{D_{0}^{\prime}}} \left\{d_{SO}(V^{\prime}R_{1},U_{0}^{\prime}R_{2}h^{-1})\right\} \geq \min_{R_{1}\in G^{0}_{\Lambda},R_{2}\in G^{0}_{D_{0}^{\prime}}} \left\{d_{SO}(V^{\prime}R_{1},U_{0}^{\prime}R_{2}h^{-1})\right\}\]
and
\[\min_{R_{1}\in G^{0}_{\Lambda^{\prime}}}\left\{d_{SO}(V^{\prime}R_{1},U_{0}^{ \prime}h^{-1})\right\} \geq \min_{R_{1}\in G^{0}_{\Lambda}}\left\{d_{SO}(V^{\prime}R_{1},U_{0}^{ \prime}h^{-1})\right\};\]
i.e.
\[\hat{d}_{h}\big{(}(V^{\prime},\Lambda^{\prime}),(U^{\prime}_{0},D^{\prime}_{0})\big{)} \geq\hat{d}_{h}\big{(}(V^{\prime},\Lambda),(U^{\prime}_{0},D^{\prime}_{0})\big{)}\] (B.22)
and
\[\hat{\hat{d}}_{h}\big{(}(V^{\prime},\Lambda^{\prime}),(U^{\prime}_{0},D^{\prime }_{0})\big{)}\geq\hat{\hat{d}}_{h}\big{(}(V^{\prime},\Lambda),(U^{\prime}_{0}, D^{\prime}_{0})\big{)}.\] (B.23)
With all data as above, observe that for all \(R_{1},R_{2}\in SO(p),\)
\[\big{|}d_{SO}(V^{\prime}R_{1},U^{\prime}_{0}R_{2}h^{-1})-d_{SO}(VR _{1},U^{\prime}_{0}R_{2}h^{-1})\big{|} \leq d_{SO}(V^{\prime}R_{1},VR_{1})\] \[= d_{SO}(V^{\prime},V)\] \[< \delta_{2}.\]
Hence
\[\big{|}\hat{d}_{h}\big{(}(V^{\prime},\Lambda),(U^{\prime}_{0},D^{ \prime}_{0})\big{)}-\hat{d}_{h}\big{(}(V,\Lambda),(U^{\prime}_{0},D^{\prime}_{ 0})\big{)}\big{|}=\] \[\big{|}\min_{R_{1}\in G^{0}_{\Lambda},R_{2}\in G^{0}_{D^{\prime}_{0}}} \big{\{}d_{SO}(V^{\prime}R_{1},U^{\prime}_{0}R_{2}h^{-1})\big{\}}-\min_{R_{1} \in G^{0}_{\Lambda},R_{2}\in G^{0}_{D^{\prime}_{0}}}\big{\{}d_{SO}(VR_{1},U^{\prime}_{ 0}R_{2}h^{-1})\big{\}}\big{|}\] \[<\delta_{2}\,,\]
implying \(\hat{d}_{h}\big{(}(V^{\prime},\Lambda),(U^{\prime}_{0},D^{\prime}_{0})\big{)} >\hat{d}_{h}\big{(}(V,\Lambda),(U^{\prime}_{0},D^{\prime}_{0})\big{)}-\delta_ {2}.\) Similarly, \(\hat{\hat{d}}_{h}\big{(}(V^{\prime},\Lambda),(U^{\prime}_{0},D^{\prime}_{0}) \big{)}>\hat{\hat{d}}_{h}\big{(}(V,\Lambda),(U^{\prime}_{0},D^{\prime}_{0}) \big{)}-\delta_{2}.\) Combining these last two inequalities with (B.22) and (B.23), we find
\[\hat{d}_{h}\big{(}(V,\Lambda),(U^{\prime}_{0},D^{\prime}_{0})\big{)}<\hat{d} _{h}\big{(}(V^{\prime},\Lambda^{\prime}),(U^{\prime}_{0},D^{\prime}_{0})\big{)}+\delta_{2}\] (B.24)
and
\[\hat{\hat{d}}_{h}\big{(}(V,\Lambda),(U^{\prime}_{0},D^{\prime}_{0})\big{)}< \hat{\hat{d}}_{h}\big{(}(V^{\prime},\Lambda^{\prime}),(U^{\prime}_{0},D^{\prime }_{0})\big{)}+\delta_{2}\,.\] (B.25)
Letting \(S^{\prime}_{0}=F(U^{\prime}_{0},D^{\prime}_{0}),\) the bounds (B.24) and (B.17) then yield
\[k\,\hat{d}_{h}\big{(}(V,\Lambda),(U^{\prime}_{0},D^{\prime}_{0}) \big{)}^{2}+d_{\mathcal{D}}(\Lambda,\pi_{h}\cdot D^{\prime}_{0})^{2}\] (B.26) \[< k\,[\hat{d}_{h}((V^{\prime},\Lambda^{\prime}),(U^{\prime}_{0},D^{ \prime}_{0}))+\delta_{2}]^{2}+d_{\mathcal{D}}(\Lambda^{\prime},\pi_{h}\cdot D^ {\prime}_{0})^{2}+\epsilon/2\quad\text{ (by (B.24) and (B.17))}\] \[= k\,\hat{d}_{h}((V^{\prime},\Lambda^{\prime}),(U^{\prime}_{0},D^{ \prime}_{0}))^{2}+d_{\mathcal{D}}(\Lambda^{\prime},\pi_{h}\cdot D^{\prime}_{0} )^{2}\] \[+k\delta_{2}\left(2\,\hat{d}_{h}((V^{\prime},\Lambda^{\prime}),(U^ {\prime}_{0},D^{\prime}_{0}))+\delta_{2}\right)+\epsilon/2\] \[\leq k\,\hat{d}_{h}((V^{\prime},\Lambda^{\prime}),(U^{\prime}_{0},D^{ \prime}_{0}))^{2}+d_{\mathcal{D}}(\Lambda^{\prime},\pi_{h}\cdot D^{\prime}_{0} )^{2}\] \[+k\delta_{2}\left(3\,\text{diam}(SO(p))\right)+\epsilon/2\] \[\leq k\,\hat{d}_{h}((V^{\prime},\Lambda^{\prime}),(U^{\prime}_{0},D^{ \prime}_{0}))^{2}+d_{\mathcal{D}}(\Lambda^{\prime},\pi_{h}\cdot D^{\prime}_{0} )^{2}+\epsilon\]
(by our definition of \(\delta_{2}\)). Since (B.26) holds for every \(h\in\mathcal{G}(p),\) it follows from (B.18) that \(d_{\mathcal{SR}}(S,S^{\prime}_{0})^{2}<d_{\mathcal{SR}}(S^{\prime},S^{\prime}_{ 0})^{2}+\epsilon,\) where \(S^{\prime}=\mathcal{F}(V^{\prime},\Lambda^{\prime}).\) Additionally writing \(\tilde{S}^{\prime}_{0}=(U^{\prime}_{0},D^{\prime}_{0}),\) the bounds (B.25) and (B.17) similarly imply that \(d_{\mathcal{PSR}}(S,\tilde{S}^{\prime}_{0})^{2}<d_{\mathcal{PSR}}(S^{\prime}, \tilde{S}^{\prime}_{0})^{2}+\epsilon.\)
Thus (4.2) and (4.3) hold for all \(S^{\prime}\in F\big{(}B^{SO}_{\delta_{2}}(V)\times B^{\mathcal{D}}_{\delta_{3}(S _{0})}(\Lambda)\big{)}\), \(S^{\prime}_{0}\in F\big{(}B^{SO}_{\delta_{2}}(U_{0})\times B^{\mathcal{D}}_{\delta_{4}(S_{0})}(D_{0})\big{)}\), and \(\tilde{S}^{\prime}_{0}\in B^{SO}_{\delta_{2}}(U_{0})\times B^{\mathcal{D}}_{ \delta_{4}(S_{0})}(D_{0})\).
Since \(\mathcal{F}\) is a proper map (Proposition B.9), \(\mathcal{F}^{-1}(K)\) is compact. The collection \(\big{\{}B^{SO}_{\delta_{2}}(U_{0})\times B^{\mathcal{D}}_{\delta_{4}(S_{0})}(D_ {0})\big{\}}_{(U_{0},D_{0})\in\mathcal{F}^{-1}(K)}\), where \(S_{0}=\mathcal{F}(U_{0},D_{0})\), is an open cover of \(\mathcal{F}^{-1}(K)\), and hence has a finite subcover \(\big{\{}B^{SO}_{\delta_{2}}(U_{0}^{(i)})\times B^{\mathcal{D}}_{\delta_{4}(S_{0 }^{(i)})}(D_{0}^{(i)})\big{\}}_{i=1}^{n}\) with "centers" \(\tilde{S}^{(i)}=(U_{0}^{(i)},D_{0}^{(i)})\), \(1\leq i\leq n\), where \(S_{0}^{(i)}=\mathcal{F}(\tilde{S}^{(i)})\).

Define \(\delta_{5}=\min\{\delta_{3}(S_{0}^{(i)}):1\leq i\leq n\}\). Then (4.2) and (4.3) hold whenever \((V,\Lambda)\in\mathcal{F}^{-1}(S)\), \(S^{\prime}\in\mathcal{F}\big{(}B^{SO}_{\delta_{2}}(V)\times B^{\mathcal{D}}_{ \delta_{5}}(\Lambda)\big{)}\), \(S_{0}\in K\), and \(\tilde{S}_{0}\in\mathcal{F}^{-1}(S_{0})\).
Finally, let \(\delta_{1}=\min\{\sqrt{k}\,\delta_{2},\delta_{5}\}\). Then \(B^{d_{M}}_{\delta_{1}}(V,\Lambda)\subset B^{SO}_{\delta_{2}}(V)\times B^{ \mathcal{D}}_{\delta_{5}}(\Lambda)\), so (4.2) and (4.3) hold for all \(S^{\prime}\in\mathcal{F}\big{(}B^{d_{M}}_{\delta_{1}}(V,\Lambda)\big{)}\), \(S_{0}\in K\), and \(\tilde{S}_{0}\in\mathcal{F}^{-1}(S_{0})\).
#### b.2.5 Proof of Theorem 4.3
Proof of Theorem 4.3.: As mentioned after the statement of the theorem, part (a) is a special case of part (b), so it suffices to prove part (b).
Observe that if \(\tilde{K}\subset M(p)\) is compact, then so are \(\mathcal{F}(\tilde{K})\) and (since \(\mathcal{F}\) is proper [Proposition B.9]) also \(\mathcal{F}^{-1}(\mathcal{F}(\tilde{K}))\). Since \(\tilde{K}\subset\mathcal{F}^{-1}(\mathcal{F}(\tilde{K}))\), any property that is uniform over \(\mathcal{F}^{-1}(\mathcal{F}(\tilde{K}))\) is uniform over \(\tilde{K}\). Hence, to prove the desired result for \(d_{\mathcal{PSR}}\), it suffices to consider compact subsets of \(M(p)\) of the form \(\mathcal{F}^{-1}(K)\), where \(K\subset\mathrm{Sym}^{+}(p)\) is compact.
Let \(S\in\mathrm{Sym}^{+}(p)\), let \(K\subset\mathrm{Sym}^{+}(p)\) be a compact set, and let \(\epsilon>0\). Let \(\delta_{1}=\delta_{1}(S,K,\epsilon)\) be as in Lemma 4.4. Let \((V,\Lambda)\in\mathcal{F}^{-1}(S)\), let \(c_{1}=c_{1}(\Lambda)\) and \(\delta_{2}=\delta_{2}(\Lambda)\) be as in Corollary B.13, and let \(\delta=\min\{\delta_{1}/c_{1},\delta_{2}\}\).
Let \(S^{\prime}\in B^{\mathrm{Frob}}_{\delta}(S)\), let \(S_{0}\in K\), and let \(\tilde{S}_{0}\in\mathcal{F}^{-1}(S_{0})\). Since \(\delta\leq\delta_{2}\), relations (B.12)-(B.14) in Corollary B.13 ensure that \(S^{\prime}\in\mathcal{F}\left(B^{d_{M}}_{c_{1}\delta}(\tilde{S})\right)\) for some \(\tilde{S}\in\mathcal{F}^{-1}(S)\). Since \(c_{1}\delta\leq\delta_{1}\), Lemma 4.4 implies that \(d_{\mathcal{SR}}(S^{\prime},S_{0})^{2}>d_{\mathcal{SR}}(S,S_{0})^{2}-\epsilon\) and \(d_{\mathcal{PSR}}(S^{\prime},\tilde{S}_{0})^{2}>d_{\mathcal{PSR}}(S,\tilde{S}_ {0})^{2}-\epsilon\).
This proves that \(d^{2}_{\mathcal{SR}}\) and \(d^{2}_{\mathcal{PSR}}\) are LSC in their first variables, locally uniformly with respect to their second variables. The analogous result for the unsquared functions \(d_{\mathcal{SR}}\) and \(d_{\mathcal{PSR}}\) then follows from Corollary B.6.
#### b.2.6 Proof of Lemma 4.6
We first develop an inequality for \(d_{\mathcal{SR}}\) which plays the role of the triangle inequality.
**Lemma B.15**.: _Let \(S_{2}\in\mathrm{Sym}^{+}(p)\). Then there exists a constant \(C=C(S_{2})\in(0,\infty)\), depending only on the eigenvalues of \(S_{2}\), such that for all \(S_{0},S_{1}\in\mathrm{Sym}^{+}(p)\),_
\[d_{\mathcal{SR}}(S_{1},S_{2})\leq d_{\mathcal{SR}}(S_{1},S_{0})+d_{\mathcal{SR}} (S_{0},S_{2})+C(S_{2}).\]
Proof.: Note from Proposition 3.5 of Groisser, Jung and Schwartzman (2021) that given any two \(S^{\prime},S^{\prime\prime}\in\mathrm{Sym}^{+}(p)\) and a connected component \(\mathcal{C}^{\prime}\) of \(\mathcal{F}^{-1}(S^{\prime})\), there is a minimal pair \(\left((U^{\prime},D^{\prime}),(U^{\prime\prime},D^{\prime\prime})\right)\in \mathcal{C}^{\prime}\times\mathcal{F}^{-1}(S^{\prime\prime})\). Note also that if
both \((U^{\prime},D^{\prime})\) and \((U,D)\) are in the same connected component \(\mathcal{C}^{\prime}\) of \(\mathcal{F}^{-1}(S^{\prime})\), then \(D=D^{\prime}\). Moreover, for any \((U^{\prime},D^{\prime}),(U,D)\in\mathcal{F}^{-1}(S^{\prime})\),
\[d_{M}((U^{\prime},D^{\prime}),(U,D)) \leq d_{M}((U^{\prime},D^{\prime}),(U,D^{\prime}))+d_{M}((U,D^{ \prime}),(U,D))\] \[\leq\sqrt{k}\mathrm{diam}(SO(p))+d_{\mathcal{D}^{+}}(D^{\prime},D).\] (B.27)
Let \(\mathcal{C}_{0}\) be a connected component of \(\mathcal{F}^{-1}(S_{0}).\) For \(i=1,2,\) let \(\big{(}(U_{i},D_{i}),(U_{0}^{(i)},D_{0}^{(i)})\big{)}\in\mathcal{F}^{-1}(S_{i} )\times\mathcal{F}^{-1}(S_{0})\) be minimal pairs with \((U_{0}^{(i)},D_{0}^{(i)})\) both lying in \(\mathcal{C}_{0}.\) Let \(\mathcal{C}_{1}\) be the connected component of \(\mathcal{F}^{-1}(S_{1})\) containing \((U_{1},D_{1}),\) and let \(\big{(}(U_{1}^{\prime},D_{1}^{\prime}),(U_{2}^{\prime},D_{2}^{\prime})\big{)} \in\mathcal{F}^{-1}(S_{1})\times\mathcal{F}^{-1}(S_{2})\) be a minimal pair for \((S_{1},S_{2})\) with \((U_{1}^{\prime},D_{1}^{\prime})\in\mathcal{C}_{1}.\) Then, \(D_{0}^{(1)}=D_{0}^{(2)}=:D_{0},\) and \(D_{1}^{\prime}=D_{1}.\) Moreover,
\[d_{\mathcal{SR}}(S_{i},S_{0})=d_{M}((U_{i},D_{i}),(U_{0}^{(i)},D_{0}^{(i)}))=d_ {M}((U_{i},D_{i}),(U_{0}^{(i)},D_{0})),\]
and
\[d_{\mathcal{SR}}(S_{1},S_{2})=d_{M}((U_{1}^{\prime},D_{1}^{\prime}),(U_{2}^{ \prime},D_{2}^{\prime}))=d_{M}((U_{1}^{\prime},D_{1}),(U_{2}^{\prime},D_{2}^{ \prime})).\]
Hence
\[d_{\mathcal{SR}} (S_{1},S_{2})=d_{M}((U_{1}^{\prime},D_{1}),(U_{2}^{\prime},D_{2}^{ \prime}))\] \[\leq d_{M}((U_{1}^{\prime},D_{1}),(U_{1},D_{1}))+d_{M}((U_{1},D_{ 1}),(U_{0}^{(1)},D_{0}))+d_{M}((U_{0}^{(1)},D_{0}),(U_{0}^{(2)},D_{0}))\] \[\quad+d_{M}((U_{0}^{(2)},D_{0}),(U_{2},D_{2}))+d_{M}((U_{2},D_{2} ),(U_{2}^{\prime},D_{2}^{\prime}))\] \[\leq\sqrt{k}\mathrm{diam}(SO(p))+d_{\mathcal{SR}}(S_{1},S_{0})+ \sqrt{k}\mathrm{diam}(SO(p))\] \[\quad+d_{\mathcal{SR}}(S_{0},S_{2})+(\sqrt{k}\mathrm{diam}(SO(p) )+d_{\mathcal{D}^{+}}(D_{2},D_{2}^{\prime}))\] \[\leq d_{\mathcal{SR}}(S_{1},S_{0})+d_{\mathcal{SR}}(S_{0},S_{2})+ 3\sqrt{k}\mathrm{diam}(SO(p))+d_{\mathcal{D}^{+}}(D_{2},D_{2}^{\prime}).\qed\]
Proof of Lemma 4.6.: (a) Suppose that for some \((U_{0},D_{0})\in M(p)\), \(f^{(\mathcal{PSR})}(U_{0},D_{0})<\infty\). Then for any given \((U,D)\in M(p)\),
\[f^{(\mathcal{PSR})}(U,D)=\int_{\mathrm{Sym}^{+}(p)}\inf_{(U_{X},D_{X})\in \mathcal{F}^{-1}(X)}[kd_{SO}^{2}(U_{X},U)+d_{\mathcal{D}^{+}}^{2}(D_{X},D)]dP.\]
By the triangle inequality, we have
\[d_{SO}^{2}(U_{X},U) \leq\{d_{SO}(U_{X},U_{0})+d_{SO}(U_{0},U)\}^{2}\] \[\leq 2d_{SO}^{2}(U_{X},U_{0})+2d_{SO}^{2}(U_{0},U),\]
and similarly \(d_{\mathcal{D}^{+}}^{2}(D_{X},D)\leq 2d_{\mathcal{D}^{+}}^{2}(D_{X},D_{0})+2d_{ \mathcal{D}^{+}}^{2}(D_{0},D).\) Thus,
\[f^{(\mathcal{PSR})}(U,D)\leq 2\int_{\mathrm{Sym}^{+}(p)}d_{\mathcal{PSR}}^{2}(X,(U_{0},D_{0}))dP+C<\infty,\]
where \(C=2kd_{SO}^{2}(U_{0},U)+2d_{\mathcal{D}^{+}}^{2}(D_{0},D)\). Moreover, for any given \(\Sigma\in\mathrm{Sym}^{+}(p)\), choosing any \((U^{\prime},D^{\prime})\in\mathcal{F}^{-1}(\Sigma)\), we have \(f^{(\mathcal{SR})}(\Sigma)\leq f^{(\mathcal{PSR})}(U^{\prime},D^{\prime})<\infty\) by (3.4).
(b) Suppose that for some \(S_{0}\in\operatorname{Sym}^{+}(p)\), \(f^{(\mathcal{SR})}(S_{0})<\infty\). For any given \(S\in\operatorname{Sym}^{+}(p)\), Lemma B.15 gives \(d_{\mathcal{SR}}(X,S)\leq d_{\mathcal{SR}}(X,S_{0})+C^{\prime}(S_{0},S)\) for any \(X\in\operatorname{Sym}^{+}(p)\), where \(C^{\prime}(S_{0},S)=d_{\mathcal{SR}}(S_{0},S)+C(S)<\infty\). Thus,
\[f^{(\mathcal{SR})}(S) =\int_{\operatorname{Sym}^{+}(p)}d_{\mathcal{SR}}^{2}(X,S)dP\leq \int_{\operatorname{Sym}^{+}(p)}2d_{\mathcal{SR}}^{2}(X,S_{0})dP+2C^{\prime}(S _{0},S)^{2}\] \[=2f^{(\mathcal{SR})}(S_{0})+2C^{\prime}(S_{0},S)^{2}<\infty.\qed\]
#### b.2.7 Proof of Lemma 4.7
Proof of Lemma 4.7.: (i) By Theorem 4.3, \(d_{\mathcal{SR}}^{2}\mid_{\operatorname{Sym}^{+}(p)\times K}\) is LSC in the first variable, uniformly with respect to the second. Let \(S_{0}\in\operatorname{Sym}^{+}(p)\) and \(\epsilon>0\) be arbitrary, and let \(U\) be the open set containing \(S_{0}\) as in Definition 4.2(i). Then for all \(S\in U\) and \(S^{\prime}\in K\), \(d_{\mathcal{SR}}^{2}(S,S^{\prime})>d_{\mathcal{SR}}^{2}(S_{0},S^{\prime})-\epsilon\). Hence
\[f^{(\mathcal{SR})}(S)=\int_{K}d_{\mathcal{SR}}^{2}(S,\cdot)dP>\int_{K}\left(d_ {\mathcal{SR}}^{2}(S_{0},\cdot)-\epsilon\right)dP=f^{(\mathcal{SR})}(S_{0})-\epsilon.\]
Hence \(f^{(\mathcal{SR})}\) is LSC at the arbitrary point \(S_{0}\).
(ii) We will show that \((f^{(\mathcal{PSR})})^{1/2}\) is Lipschitz with constant \(1\):

\[\left|f^{(\mathcal{PSR})}(m_{1})^{1/2}-f^{(\mathcal{PSR})}(m_{2})^{1/2}\right| \leq d_{M}(m_{1},m_{2})\quad\text{for all }m_{1},m_{2}\in M(p).\] (B.28)

This will imply uniform continuity of \((f^{(\mathcal{PSR})})^{1/2}\), and therefore continuity of \(f^{(\mathcal{PSR})}\). Let \(m_{1},m_{2}\in M(p)\). Utilizing Lemma 4.1(ii),

\[\left|f^{(\mathcal{PSR})}(m_{1})-f^{(\mathcal{PSR})}(m_{2})\right|\] (B.29) \[\leq d_{M}(m_{1},m_{2})\int_{\operatorname{Sym}^{+}(p)}\left[d_{ \mathcal{PSR}}(X,m_{1})+d_{\mathcal{PSR}}(X,m_{2})\right]P(dX)\] \[\leq d_{M}(m_{1},m_{2})\left[f^{(\mathcal{PSR})}(m_{1})^{1/2}+f^{( \mathcal{PSR})}(m_{2})^{1/2}\right].\]

If \(f^{(\mathcal{PSR})}(m_{1})=f^{(\mathcal{PSR})}(m_{2})=0\), then (B.28) is true trivially. Otherwise, dividing both sides of (B.29) by \(\left[f^{(\mathcal{PSR})}(m_{1})^{1/2}+f^{(\mathcal{PSR})}(m_{2})^{1/2}\right]\) yields (B.28).
### Proofs for Section 4.2
#### b.3.1 Proof of Proposition 4.8
Proof of Proposition 4.8.: For \(r>0\) let
\[K_{r}=\{S\in\operatorname{Sym}^{+}(p):\text{every eigenvalue }\lambda\text{ of }S\text{ satisfies }|\log\lambda|\leq r\},\]
and let \(\kappa_{r}=\operatorname{diam}(\mathcal{F}^{-1}(K_{r}))\). For each \(r\), the set \(K_{r}\) is compact, and hence so is \(\tilde{K}_{r}:=\mathcal{F}^{-1}(K_{r})\) (by Proposition B.9). Note also that \(I\in K_{r}\) and \((I,I)\in\tilde{K}_{r}\) for every \(r>0\).
Suppose \(r_{2}>r_{1}>0\) and that \(S_{2}\in\operatorname{Sym}^{+}(p)\backslash K_{r_{2}}\). Then for any \(m_{1}=(U_{1},D_{1})\in\tilde{K}_{r_{1}}\) and \(m_{2}=(U_{2},D_{2})\in\mathcal{F}^{-1}(S_{2})\), the matrix \(D_{2}\) has some eigenvalue \(\lambda_{2}\) with \(|\log\lambda_{2}|>r_{2}\), while every eigenvalue \(\lambda_{1}\) of \(D_{1}\) satisfies \(|\log\lambda_{1}|\leq r_{1}\), implying \(d_{M}(m_{1},m_{2})\geq d_{\mathcal{D}}(D_{1},D_{2})\geq r_{2}-r_{1}\). Hence \(d_{\mathcal{SR}}(S_{1},S_{2})\geq r_{2}-r_{1}\) for every \(S_{1}\in K_{r_{1}}\), and thus
\[f^{(\mathcal{S}\mathcal{R})}(S_{2})\geq\int_{K_{r_{1}}}d_{\mathcal{S}\mathcal{ R}}(S_{1},S_{2})^{2}P(dS_{1})\geq(r_{2}-r_{1})^{2}P(K_{r_{1}}).\] (B.30)
Now choose \(r_{1}\) large enough that \(P(K_{r_{1}})>0\); such \(r_{1}\) exists since \(\bigcup_{r>0}K_{r}=\operatorname{Sym}^{+}(p)\). Let \(r_{2}>r_{1}\) be large enough that \((r_{2}-r_{1})^{2}P(K_{r_{1}})>f^{(\mathcal{S}\mathcal{R})}(I)\). Then by (B.30), for every \(S\in\operatorname{Sym}^{+}(p)\backslash K_{r_{2}}\),
\[f^{(\mathcal{S}\mathcal{R})}(S)\geq(r_{2}-r_{1})^{2}P(K_{r_{1}})>f^{( \mathcal{S}\mathcal{R})}(I).\] (B.31)
Since \(I\in K_{r_{2}}\), (B.31) implies that (4.4) holds with \(K=K_{r_{2}}\), establishing part (a).
For (b), let \(r_{1}\) be as above, consider a (new) arbitrary \(r_{2}>r_{1}\), and let \(m_{1},m_{2}\) be as above. Then essentially the same argument as above shows that \(d_{\mathcal{PSR}}(S_{1},m_{2})\geq r_{2}-r_{1}\) and hence that \(f^{(\mathcal{PSR})}(m_{2})\geq(r_{2}-r_{1})^{2}P(K_{r_{1}})\). Now let \(r_{2}>r_{1}\) be large enough that \((r_{2}-r_{1})^{2}P(K_{r_{1}})>f^{(\mathcal{PSR})}(I,I)\). Then for every \(m\in M(p)\backslash\tilde{K}_{r_{2}}\) we have \(f^{(\mathcal{PSR})}(m)>f^{(\mathcal{PSR})}(I,I)\). Since \((I,I)\in\tilde{K}_{r_{2}}\), this implies that (4.5) holds with \(K=K_{r_{2}}\).
#### b.3.2 Proof of Theorem 4.9
Proof of Theorem 4.9.: (a) By Proposition 4.8(a), there exists a compact set \(K\subset\operatorname{Sym}^{+}(p)\) such that equation (4.4) holds. But by Lemma 4.7, \(f^{(\mathcal{S}\mathcal{R})}\) is LSC, hence achieves a minimum value on \(K\), say at \(S_{0}\). By (4.4), \(f^{(\mathcal{S}\mathcal{R})}(S_{0})\) is the minimum value of \(f^{(\mathcal{S}\mathcal{R})}\) on all of \(\operatorname{Sym}^{+}(p)\). Hence \(E^{(\mathcal{S}\mathcal{R})}\) is nonempty.
The proof for (b) is almost identical to the proof for (a), except that the finite PSR-variance condition actually ensures _continuity_ (not just semi-continuity) of \(f^{(\mathcal{PSR})}\).
### Proofs for Section 4.3
#### b.4.1 Proof of Lemma 4.12
Proof of Lemma 4.12.: (a) Since \(h\cdot(U,D)=(Uh^{-1},h\cdot D)\), and \(h\cdot D=hDh^{-1}\),
\[d_{M}((U,D),(Uh^{-1},hDh^{-1})) =\left\{d_{\mathcal{D}^{+}}^{2}(D,hDh^{-1})+kd_{SO}^{2}(U,Uh^{-1}) \right\}^{1/2}\] \[\geq\sqrt{k}d_{SO}(U,Uh^{-1})=\sqrt{k}d_{SO}(I_{p},h^{-1})\] \[\geq\sqrt{k}\beta_{\mathcal{G}(p)}.\]
(b) The set \(\mathcal{G}(p)\) contains the block-diagonal signed permutation matrix
\[B=\begin{pmatrix}I_{p-2}&0\\ 0&R(\frac{\pi}{2})\end{pmatrix}\]
where
\[R(\tfrac{\pi}{2})=\begin{pmatrix}\cos(\tfrac{\pi}{2})&-\sin(\tfrac{\pi}{2})\\ \sin(\tfrac{\pi}{2})&\cos(\tfrac{\pi}{2})\end{pmatrix}=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}.\]
It can be shown that
\[\operatorname{Log}(B)=\begin{pmatrix}0&0\\ 0&\operatorname{Log}(R(\tfrac{\pi}{2}))\end{pmatrix},\]
where
\[\operatorname{Log}\left(R(\tfrac{\pi}{2})\right)=\begin{pmatrix}0&-\tfrac{ \pi}{2}\\ \tfrac{\pi}{2}&0\end{pmatrix}.\]
Then we have that \(d_{SO}(I_{p},B)=\frac{\pi}{2},\) which implies that \(\beta_{\mathcal{G}(p)}\leq\frac{\pi}{2}.\)
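This computation is easy to check numerically; the sketch below (with \(p=3\) chosen for concreteness) evaluates \(d_{SO}(I_{p},B)=\|\operatorname{Log}(B)\|_{\mathfrak{so}}=\frac{1}{\sqrt{2}}\|\operatorname{Log}(B)\|_{\mathrm{Fr}}\) for the block-diagonal matrix \(B\) above:

```python
# Numerical check that d_SO(I_p, B) = pi/2 for the signed permutation B.
import numpy as np
from scipy.linalg import logm

p = 3
B = np.eye(p)
B[p-2:, p-2:] = [[0.0, -1.0],
                 [1.0,  0.0]]          # the R(pi/2) block
L = np.real(logm(B))                   # discard negligible imaginary round-off
d_SO = np.linalg.norm(L, 'fro') / np.sqrt(2)
print(d_SO, np.pi / 2)                 # both approximately 1.5707963
```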
(c) Given \(X\in S_{p}^{\text{top}}\), let \((U_{X},D_{X})\) and \((U_{X}^{\prime},D_{X}^{\prime})\) be two distinct eigen-decompositions of \(X\). From Theorem 3.3 of Jung, Schwartzman and Groisser (2015), there is an even signed-permutation \(h\) such that \((U_{X}^{\prime},D_{X}^{\prime})=(U_{X}h^{-1},h\cdot D_{X})\). Since \((U_{X},D_{X})\) and \((U_{X}^{\prime},D_{X}^{\prime})\) are distinct, \(h\neq I_{p}\). Applying part (a) gives the result.
#### b.4.2 Proof of Lemma 4.14
Proof of Lemma 4.14.: It is well known that \((\operatorname{Diag}^{+}(p),g_{\mathcal{D}^{+}})\) has non-positive sectional curvature and infinite injectivity radius, and \((SO(p),kg_{SO})\) has non-negative sectional curvature (bounded above by \(\Delta(SO(p),kg_{SO})=1/(4k)\)) and injectivity radius \(r_{\text{inj}}(SO(p),kg_{SO})=\sqrt{k}\pi\) (see Section 5 of Manton (2004)). Thus the injectivity radius of \((M,g_{M})\) is \(r_{\text{inj}}(M,g_{M})=r_{\text{inj}}(SO(p),kg_{SO}),\) and the sectional curvature of \((M,g_{M})\) is bounded by \(\Delta(M,g_{M})=\Delta(SO(p),kg_{SO}).\)
We apply Theorem 2.1 of Afsari (2011), which shows that the minimizer \(\bar{m}(P)\) of \(\int_{M(p)}d_{M}^{2}(\tilde{X},m)P(d\tilde{X})\) uniquely exists and lies in \(B_{r}^{d_{M}}(m_{0}),\) provided that
\[r\leq\min\{r_{\text{inj}}(M,g_{M}),\pi/\sqrt{\Delta(M,g_{M})}\}/2.\] (B.32)
Since
\[\min\{r_{\text{inj}}(M,g_{M}),\pi/\sqrt{\Delta(M,g_{M})}\}/2=\min\{\sqrt{k} \frac{\pi}{2},2\sqrt{k}\frac{\pi}{2}\}=\frac{\sqrt{k}\pi}{2}\]
and \(r\leq\sqrt{k}\beta_{\mathcal{G}(p)}\leq\frac{\sqrt{k}\pi}{2}\) (by Lemma 4.12(b)), we have the desired bound (B.32).
#### b.4.3 Proof of Theorem 4.15 and Corollary 4.16
Proof of Theorem 4.15.: By the condition (4.8), for any \(S_{1},S_{2}\in\operatorname{supp}(P)\),
\[d_{\mathcal{S}\mathcal{R}}(S_{1},S_{2})<r^{\prime}_{cx}.\] (B.33)
Since the complement of \(S_{p}^{\operatorname{top}}\) has volume zero in \(\operatorname{Sym}^{+}(p)\), we have for any \(m\in M(p)\),
\[f^{(\mathcal{PSR})}(m)=\int_{\operatorname{Sym}^{+}(p)}d_{\mathcal{PSR}}^{2}(X,m)P(dX)=\int_{S_{p}^{\operatorname{top}}}d_{\mathcal{PSR}}^{2}(X,m)P(dX).\] (B.34)
Fix an arbitrary \(S_{0}\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\). There are exactly \(v_{p}:=2^{p-1}p!\) distinct eigen-decompositions in \(\mathcal{F}^{-1}(S_{0})\), and we label them by \(\ell=1,\ldots,v_{p}\), that is, \(\mathcal{F}^{-1}(S_{0})=\{m_{\ell}(S_{0}):\ell=1,\ldots,v_{p}\}\). We claim the following:
**Claim B.16**.: _For each \(\ell\), and for each \(S^{\prime}\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\), one can uniquely choose \(m_{\ell}(S^{\prime})\in\mathcal{F}^{-1}(S^{\prime})\) that forms a minimal pair with \(m_{\ell}(S_{0})\). Thus, one can uniquely label all eigen-decompositions \(\{m_{\ell}(S^{\prime}):\ell=1,\ldots,v_{p}\}=\mathcal{F}^{-1}(S^{\prime})\) for all \(S^{\prime}\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\)._
Proof of Claim b.16.: The fact that both \(S_{0},S^{\prime}\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\) and (B.33) ensure that there exists an \(m^{\prime}\in\mathcal{F}^{-1}(S^{\prime})\) such that \(d_{M}(m^{\prime},m_{\ell}(S_{0}))=d_{\mathcal{S}\mathcal{R}}(S^{\prime},S_{0}) <r^{\prime}_{cx}\). Choose such an \(m^{\prime}\) and label it to be \(m_{\ell}(S^{\prime})\in\mathcal{F}^{-1}(S^{\prime})\). Let \(m_{k}(S^{\prime})\in\mathcal{F}^{-1}(S^{\prime})\) be such that \(m_{k}(S^{\prime})\neq m_{\ell}(S^{\prime})\). Then by the triangle inequality,
\[d_{M}(m_{k}(S^{\prime}),m_{\ell}(S_{0})) \geq d_{M}(m_{k}(S^{\prime}),m_{\ell}(S^{\prime}))-d_{M}(m_{\ell}( S^{\prime}),m_{\ell}(S_{0}))\] \[>\sqrt{k}\beta_{\mathcal{G}(p)}-r^{\prime}_{cx}=\frac{3}{4}\sqrt {k}\beta_{\mathcal{G}(p)}>r^{\prime}_{cx}.\] (B.35)
Therefore, \(m_{\ell}(S^{\prime})\in\mathcal{F}^{-1}(S^{\prime})\) is indeed the unique eigen-decomposition that forms a minimal pair with \(m_{\ell}(S_{0})\).
By Claim B.16, one can therefore label all eigen-decompositions \(m_{\ell}(S)\) of all \(S\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\), provided that an initial labeling of \(S_{0}\) is given. For \(\ell=1,\ldots,v_{p}\) and for \(r>0\), define a set \(H_{\ell}(r)\) by
\[H_{\ell}(r) =\{m\in M(p):d_{M}(m,m_{\ell}(S))<r\text{ for all }S\in \operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\}\] \[=\bigcap_{S\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top }}}B_{r}^{d_{M}}(m_{\ell}(S)).\] (B.36)
**Claim B.17**.: _(a) If \(r\leq 2r^{\prime}_{cx}\), then for \(\ell\neq\ell^{\prime}\), \(H_{\ell}(r)\cap H_{\ell^{\prime}}(r)=\emptyset\)._
_(b) If \(r\geq r^{\prime}_{cx}\), then for any \(S\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\), \(m_{\ell}(S)\in H_{\ell}(r)\), for any \(\ell=1,\ldots,v_{p}\)._
_(c) If \(r\leq 2r^{\prime}_{cx}\), then for any \(S\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\) and for any \(m\in H_{\ell}(r)\), the eigen-decomposition of \(S\) closest to \(m\) is \(m_{\ell}(S)\)._
_(d) For any \(S_{1},S_{2}\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\), \((m_{\ell}(S_{1}),m_{\ell^{\prime}}(S_{2}))\) is a minimal pair if and only if \(\ell=\ell^{\prime}\)._
Proof of Claim b.17.: Item (b) is immediate by the definition of \(H_{\ell}\) (B.36).
Let \(S\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\), \(m\in H_{\ell}(r)\) and note that for any \(\ell^{\prime}\neq\ell\),
\[d_{M}(m,m_{\ell^{\prime}}(S)) \geq d_{M}(m_{\ell^{\prime}}(S),m_{\ell}(S))-d_{M}(m,m_{\ell}(S))\] \[>4r_{cx}^{\prime}-d_{M}(m,m_{\ell}(S))>d_{M}(m,m_{\ell}(S)),\] (B.37)
in which we used the triangle inequality and Lemma 4.12 (c), and the fact that \(d_{M}(m,m_{\ell}(S))<2r_{cx}^{\prime}\) (given by the condition \(r\leq 2r_{cx}^{\prime}\) and the definition of \(H_{\ell}(r)\)). This shows (c).
Take \(r\in[r_{cx}^{\prime},2r_{cx}^{\prime}]\); then parts (b) and (c) hold. To verify (d), apply part (c) with \(m\) replaced by \(m_{\ell}(S_{1})\) and \(m_{\ell^{\prime}}(S)\) by \(m_{\ell^{\prime}}(S_{2})\) for \(\ell\neq\ell^{\prime}\).
To verify (a) it is sufficient to assume \(r=2r_{cx}^{\prime}\). Let \(m\in H_{\ell}(r)\), then there exists an \(S^{\prime}\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\) such that \(d_{M}(m,m_{\ell}(S^{\prime}))<2r_{cx}^{\prime}\). But
\[d_{M}(m,m_{\ell^{\prime}}(S^{\prime}))\geq d_{M}(m_{\ell^{\prime}}(S^{\prime} ),m_{\ell}(S^{\prime}))-d_{M}(m,m_{\ell}(S^{\prime}))>4r_{cx}^{\prime}-2r_{cx}^ {\prime}=2r_{cx}^{\prime},\]
thus yielding \(m\notin H_{\ell^{\prime}}(r)\).
For each \(\ell=1,\ldots,v_{p}\) write \(H_{\ell}^{\operatorname{top}}(r_{cx}^{\prime})\) for \(H_{\ell}(r_{cx}^{\prime})\cap M_{p}^{\operatorname{top}}\) for notational simplicity. Fix an \(\ell=1,\ldots,v_{p}\), and consider the eigen-composition map \(\mathcal{F}\) restricted to \(H_{\ell}^{\operatorname{top}}(r_{cx}^{\prime})\), \(\mathcal{F}|_{H_{\ell}^{\operatorname{top}}(r_{cx}^{\prime})}:H_{\ell}^{\operatorname{top}}(r_{cx}^{\prime})\to\operatorname{Sym}^{+}(p)\). Since for any \(S\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\), \(\mathcal{F}^{-1}(S)\) intersects \(H_{\ell}^{\operatorname{top}}(r_{cx}^{\prime})\) at a unique point, there exists the push-forward measure \(P\circ\mathcal{F}|_{H_{\ell}^{\operatorname{top}}(r_{cx}^{\prime})}\) supported on \(H_{\ell}^{\operatorname{top}}(r_{cx}^{\prime})\subset M(p)\). For each \(\ell\), write \(P_{\ell}\) for this push-forward probability measure on \(M(p)\). Since the support of \(P_{\ell}\) lies in \(H_{\ell}^{\operatorname{top}}(r_{cx}^{\prime})\subset B_{r_{cx}^{\prime}}(m_{\ell}(S))\) for any \(S\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\), by Lemma 4.14, there exists a unique Frechet mean \(\bar{m}_{\ell}:=\bar{m}(P_{\ell})\in M(p)\) of \(P_{\ell}\), and \(\bar{m}_{\ell}\in B_{r_{cx}^{\prime}}(m_{\ell}(S))\). Since the above holds for any \(S\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\), we have that
\[\bar{m}_{\ell}\in H_{\ell}(r_{cx}^{\prime}).\] (B.38)
We now show that the set \(\{\bar{m}_{\ell}:\ell=1,\ldots,v_{p}\}\) is exactly the PSR mean set, or, equivalently that \(\bar{m}_{\ell}\)'s are the only minimizers of (B.34). Let \(m\in M(p)\) be arbitrary. Choose any \(S\in\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}\).
If \(m\in H_{\ell}(2r_{cx}^{\prime})\) for some \(\ell\), but \(m\neq\bar{m}_{\ell}\) then
\[f^{(\mathcal{PSR})}(m) =\int_{\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}}\min _{l=1,\ldots,v_{p}}d_{M}^{2}(m_{l}(X),m)P(dX)\] \[=\int_{\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}}d_{M }^{2}(m_{\ell}(X),m)P(dX)\quad\text{(by Lemma B.17(c))}\] \[>\int_{\operatorname{supp}(P)\cap S_{p}^{\operatorname{top}}}d_{M }^{2}(m_{\ell}(X),\bar{m}_{\ell})P(dX)\] \[=f^{(\mathcal{PSR})}(\bar{m}_{\ell}),\]
in which the strict inequality is given by the fact that \(\bar{m}_{\ell}\) is the unique Frechet mean of \(P_{\ell}\).
Next, suppose that \(m\notin\bigcup_{l=1}^{v_{p}}H_{l}(2r^{\prime}_{cx})\). For any \(\ell=1,\ldots,v_{p}\), we have \(m\notin H_{\ell}(2r^{\prime}_{cx})\), and there exists an \(S^{\prime}_{\ell}\in\operatorname{supp}(P)\cap S^{\operatorname{top}}_{p}\) such that
\[d_{M}(m,m_{\ell}(S^{\prime}_{\ell}))>2r^{\prime}_{cx}.\]
Thus, for any \(S\in\operatorname{supp}(P)\cap S^{\operatorname{top}}_{p}\) and for any \(\ell=1,\ldots,v_{p}\), \(d_{M}(m_{\ell}(S^{\prime}_{\ell}),m_{\ell}(S))=d_{\mathcal{SR}}(S^{\prime}_{ \ell},S)\) by Lemma B.17(d), and
\[d_{M}(m,m_{\ell}(S))\geq d_{M}(m,m_{\ell}(S^{\prime}_{\ell}))-d_{M}(m_{\ell}(S^{\prime}_{\ell}),m_{\ell}(S))>2r^{\prime}_{cx}-r^{\prime}_{cx}=r^{\prime}_{cx}.\]
(We used the fact that for \(S^{\prime}_{\ell},S\in\operatorname{supp}(P)\), \(d_{\mathcal{SR}}(S^{\prime}_{\ell},S)<r^{\prime}_{cx}\) by (B.33).) Therefore,
\[f^{(\mathcal{PSR})}(m) =\int_{\operatorname{supp}(P)\cap S^{\operatorname{top}}_{p}} \min_{l=1,\ldots,v_{p}}d_{M}^{2}(m_{l}(X),m)P(dX)\] \[>\int_{\operatorname{supp}(P)\cap S^{\operatorname{top}}_{p}} \min_{l=1,\ldots,v_{p}}(r^{\prime}_{cx})^{2}P(dX)\] \[=(r^{\prime}_{cx})^{2}.\]
However, for any \(S\in\operatorname{supp}(P)\cap S^{\operatorname{top}}_{p}\)
\[f^{(\mathcal{PSR})}(\bar{m}_{\ell}) =\int_{\operatorname{supp}(P)\cap S^{\operatorname{top}}_{p}}d _{M}^{2}(m_{\ell}(X),\bar{m}_{\ell})P(dX)\] \[\leq\int_{\operatorname{supp}(P)\cap S^{\operatorname{top}}_{p}} d_{M}^{2}(m_{\ell}(X),m_{\ell}(S))P(dX)\] \[<\int_{\operatorname{supp}(P)\cap S^{\operatorname{top}}_{p}}(r^ {\prime}_{cx})^{2}P(dX)\] \[=(r^{\prime}_{cx})^{2}.\]
The above two results show that the set \(E:=\{\bar{m}_{\ell}:\ell=1,\ldots,v_{p}\}\) is exactly the partial scaling-rotation mean set \(E^{(\mathcal{PSR})}\) of \(P\). Since there are exactly \(v_{p}=2^{p-1}p!\) elements in \(E\), it follows from (4.6) that the elements of \(E\) must belong to the same orbit under the action of \(\mathcal{G}(p)\). Part (a) is now proved.
Proof of Corollary 4.16.: For part (a), the sample \(X_{1},\ldots,X_{n}\in S^{\operatorname{top}}_{p}\) satisfies the support condition (B.33). A proof of part (a) is given by following the proof of Theorem 4.15, with the probability measure \(P\) replaced by the empirical measure given by the sample \(X_{1},\ldots,X_{n}\).
To prove (b), for any given \(\ell=1,\ldots,v_{p}\), set the initial guess \(\hat{m}^{(0)}\) to be the eigen-decomposition \(m_{\ell}(X_{1})\) of \(X_{1}\). Then \(m_{\ell}(X_{i})\) forms a minimal pair with \(\hat{m}^{(0)}\) for \(i=1,\ldots,n\), and, as seen earlier, is the unique element of \(\mathcal{F}^{-1}(X_{i})\) with this property. Thus, for each \(i=1,\ldots,n\), \(m_{\ell}(X_{i})\) is the unique choice of \(m_{i}^{(0)}\) in Step 1 of the algorithm. Since \(\bar{m}_{\ell}\) is the unique Frechet mean of
\(\{m_{\ell}(X_{1}),\ldots,m_{\ell}(X_{n})\}\) by Lemma 4.14, Step 2 of the algorithm yields \(\hat{m}^{(1)}=\bar{m}_{\ell}\). Thus \(\hat{m}^{(1)}\) is exactly the sample PSR mean \(\bar{m}_{\ell}\), and the sample PSR mean set is the orbit \(\mathcal{G}(p)\cdot\hat{m}^{(1)}\). Since \(\hat{m}^{(1)}\in H_{\ell}(r^{\prime}_{cx})\) by (B.38), the unique choice of \(m_{i}^{(1)}\) in Step 2 of the procedure is \(m_{\ell}(X_{i})\), the same as in the previous iteration. Thus, \(\hat{m}^{(2)}=\hat{m}^{(1)}\) and the algorithm terminates.
#### b.4.4 Proof for the statements in Remark 4.17
Proof of "(i) yields (4.9)": Since \(d_{\mathcal{SR}}\) is a metric when restricted to \(S_{p}^{\mathrm{top}}\), and \(S_{0},X_{i}\in S_{p}^{\mathrm{top}}\), we have \(d_{\mathcal{SR}}(X_{i},X_{j})\leq d_{\mathcal{SR}}(X_{i},S_{0})+d_{\mathcal{ SR}}(X_{j},S_{0})<r^{\prime}_{cx}\) for all \(i,j=1,\ldots,n\).
Our proof of "(ii) yields (4.9)" consists of two parts.
Part (1): Suppose that \(m\in M^{\mathrm{top}}(p)\). Then by (3.4),
\[d_{\mathcal{SR}}(X_{i},\mathcal{F}(m))\leq d_{\mathcal{PSR}}(X_{i},m)<r^{ \prime}_{cx}/2.\]
(In fact, \(d_{\mathcal{SR}}(X_{i},\mathcal{F}(m))=d_{\mathcal{PSR}}(X_{i},m)\) in this case.) Since \(\mathcal{F}(m)\in S_{p}^{\mathrm{top}}\), the statement for (i) above yields (4.9).
Part (2): Suppose that \(m\notin M^{\mathrm{top}}(p)\). Since condition (ii) is true, we may choose an \(\epsilon\in(0,\min\{r^{\prime}_{cx}/2-d_{\mathcal{PSR}}(X_{i},m):i=1,\ldots,n\})\) so that
\[d_{\mathcal{PSR}}(X_{i},m)<r^{\prime}_{cx}/2-\epsilon\]
for all \(i=1,\ldots,n\). Since \(M^{\mathrm{top}}(p)\) is dense in \(M(p)\), one can choose \(m_{\epsilon}\in M^{\mathrm{top}}(p)\) such that \(d_{M}(m,m_{\epsilon})<\epsilon\). Recall that for each \(i\), \(X_{i}\in S_{p}^{\mathrm{top}}\), and we write \(\mathcal{F}^{-1}(X_{i})=\mathcal{G}(p)\cdot m_{i}\) for some \(m_{i}\in\mathcal{F}^{-1}(X_{i})\subset M^{\mathrm{top}}(p)\). Then,
\[d_{\mathcal{PSR}}(X_{i},m_{\epsilon}) =\inf_{h\in\mathcal{G}(p)}d_{M}(h\cdot m_{i},m_{\epsilon})\] \[\leq\inf_{h\in\mathcal{G}(p)}d_{M}(h\cdot m_{i},m)+d_{M}(m,m_{ \epsilon})\] \[=d_{\mathcal{PSR}}(X_{i},m)+d_{M}(m_{\epsilon},m)\] \[<(r^{\prime}_{cx}/2-\epsilon)+\epsilon=r^{\prime}_{cx}/2.\]
Since for \(m_{\epsilon}\in M^{\mathrm{top}}(p)\), \(d_{\mathcal{PSR}}(X_{i},m_{\epsilon})<r^{\prime}_{cx}/2\), Part (1) gives (4.9).
The following can be verified similarly: The condition (4.8) of Theorem 4.15 is guaranteed if either (i)\({}^{*}\) or (ii)\({}^{*}\) below is satisfied. Let \(X\) be a random variable following the absolutely continuous distribution \(P\) on \(\mathrm{Sym}^{+}(p)\).
1. (i)\({}^{*}\): There exists an \(S_{0}\in S_{p}^{\mathrm{top}}\) such that \(P(d_{\mathcal{SR}}(S_{0},X)<r^{\prime}_{cx}/2)=1\).
2. (ii)\({}^{*}\): There exists an \(m\in M(p)\) such that \(P(d_{\mathcal{PSR}}(X,m)<r^{\prime}_{cx}/2)=1\).
We also provide a toy example for the fact: "For an \(S_{0}\in S_{p}^{\mathrm{lwr}}\), even if the condition \(d_{\mathcal{SR}}(S_{0},X_{i})<\epsilon\) (\(i=1,\ldots,n\)) is satisfied for arbitrarily small \(\epsilon\), \(d_{\mathcal{SR}}(X_{i},X_{j})\) may be larger than \(r^{\prime}_{cx}\)." Fix \(\epsilon>0\). Let \(p=2\), \(S_{0}=I_{2}\), \(X_{1}=R(0)\mathrm{diag}(e^{\epsilon/2},e^{-\epsilon/2})R(0)^{\prime}\) and \(X_{2}=R(\pi/4)\mathrm{diag}(e^{\epsilon/2},e^{-\epsilon/2})R(\pi/4)^{\prime}\). Then \(d_{\mathcal{SR}}(S_{0},X_{i})=\epsilon/\sqrt{2}<\epsilon\) for \(i=1,2\). However, \(d_{\mathcal{SR}}(X_{1},X_{2})=\sqrt{k}\pi/4>r^{\prime}_{cx}\). (For \(p=2\), \(r^{\prime}_{cx}=\sqrt{k}\beta_{\mathcal{G}(p)}/4=\sqrt{k}\pi/8\).)
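(The numbers in this toy example are easy to reproduce; the following sketch — ours — checks \(d_{\mathcal{D}^{+}}(I_{2},D)=\epsilon/\sqrt{2}\) for \(D=\operatorname{diag}(e^{\epsilon/2},e^{-\epsilon/2})\), which equals \(d_{\mathcal{SR}}(S_{0},X_{i})\) here because \(S_{0}=I_{2}\) admits the eigen-decomposition \((U,I_{2})\) for every \(U\in SO(2)\), so the rotation component of a minimal pair vanishes.)

```
import numpy as np

eps = 0.01

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

D = np.diag([np.exp(eps / 2), np.exp(-eps / 2)])
X1 = rot(0.0) @ D @ rot(0.0).T
X2 = rot(np.pi / 4) @ D @ rot(np.pi / 4).T

# Log-eigenvalue (diagonal) component of the distance from S0 = I2:
for X in (X1, X2):
    d_diag = np.linalg.norm(np.log(np.linalg.eigvalsh(X)))
    assert np.isclose(d_diag, eps / np.sqrt(2))   # = eps / sqrt(2)
```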
#### b.4.5 Proof of Corollary 4.18
Proof of Corollary 4.18.: Since \(r<r^{\prime}_{cx}/2\), we have \(1=P(d_{\mathcal{SR}}(S_{0},X_{i})\leq r)\leq P(d_{\mathcal{SR}}(S_{0},X_{i})\leq r ^{\prime}_{cx}/2)\). By Remark 4.17 and Theorem 4.15 (more precisely, Condition (i)\({}^{*}\) in Appendix B.4.4 is satisfied, which in turn implies that the condition of Theorem 4.15 is satisfied), the PSR mean is unique up to the action of \(\mathcal{G}(p)\). Assertion (ii) is given by Theorem 3.7. Theorem 3.5 is applied with assertion (ii) to yield \(E^{(\mathcal{SR})}=\mathcal{F}(E^{(\mathcal{PSR})})\). By (i), \(E^{(\mathcal{PSR})}\) is the orbit \(\mathcal{G}(p)\cdot(U,D)\) for some \((U,D)\in M(p)\), and \(E^{(\mathcal{SR})}\) only contains the SPD matrix \(\bar{X}:=UDU^{\prime}\).
### Proofs for Section 4.4
#### b.5.1 Proofs of Theorem 4.19 and related results
Proof of Theorem 4.19.: We use the following lemma.
**Lemma B.18**.: _Under the condition of Theorem 4.19, the following holds with probability 1: For any \(m\in M(p)\) and for any sequence \(m_{n}\in M(p)\) satisfying \(d_{M}(m_{n},m)\to 0\) as \(n\to\infty\), we have_
\[\lim_{n\to\infty}f_{n}^{(\mathcal{PSR})}(m_{n})=f^{(\mathcal{PSR})}(m),\] (B.39)
_and in particular_
\[\lim_{n\to\infty}f_{n}^{(\mathcal{PSR})}(m)=f^{(\mathcal{PSR})}(m).\] (B.40)
Proof of Lemma b.18.: For any given \(m\in M(p)\), since the random variable \(d_{\mathcal{PSR}}^{2}(X,m)\) is integrable (Proposition 4.5), we have \(P(\lim_{n\to\infty}f_{n}^{(\mathcal{PSR})}(m)=f^{(\mathcal{PSR})}(m))=1\) by the strong law of large numbers. We shall extend this result to
\[P\left(\lim_{n\to\infty}f_{n}^{(\mathcal{PSR})}(m)=f^{(\mathcal{PSR})}(m)\ \ \text{for all}\ \ m\in M(p)\right)=1,\] (B.41)
thus showing (B.40).
Let \(m_{1},m_{2},\ldots\) be a countable dense sequence in \(M(p)\). Since for each \(k\), \(\lim_{n\to\infty}f_{n}^{(\mathcal{PSR})}(m_{k})=f^{(\mathcal{PSR})}(m_{k})\) almost surely, and \(\{m_{k}\}\) is countable,
\[P\left(\lim_{n\to\infty}f_{n}^{(\mathcal{PSR})}(m_{k})=f^{(\mathcal{PSR})}(m_{k})\ \ \text{for all}\ \ k=1,2,\ldots\right)=1.\] (B.42)
Moreover, an argument similar to above leads us to conclude that, for every \(k\),
\[\frac{1}{n}\sum_{i=1}^{n}d_{\mathcal{PSR}}(X_{i},m_{k})\to\int_{\text{Sym}^{+ }(p)}d_{\mathcal{PSR}}(X,m_{k})P(dX)<\infty\] (B.43)
as \(n\to\infty\) almost surely.
Observe that by Lemma 4.1(ii), for any \(X\in\text{Sym}^{+}(p)\), \(m,m^{\prime}\in M(p)\),
\[|d_{\mathcal{PSR}}^{2}(X,m)-d_{\mathcal{PSR}}^{2}(X,m^{\prime})|\leq d_{M}(m,m ^{\prime})(2d_{\mathcal{PSR}}(X,m)+d_{M}(m,m^{\prime})),\]
which in turn leads to
\[|f_{n}^{(\mathcal{PSR})}(m)- f_{n}^{(\mathcal{PSR})}(m^{\prime})|\leq\frac{1}{n}\sum_{i=1}^{n} \left|d_{\mathcal{PSR}}^{2}(X_{i},m)-d_{\mathcal{PSR}}^{2}(X_{i},m^{\prime})\right|\] \[\leq g_{n}(m,m^{\prime}):=d_{M}(m,m^{\prime})\left(\frac{2}{n} \sum_{i=1}^{n}d_{\mathcal{PSR}}(X_{i},m^{\prime})+d_{M}(m,m^{\prime})\right).\] (B.44)
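(For completeness, we spell out how the first of these bounds follows. Writing \(a=d_{\mathcal{PSR}}(X,m)\) and \(b=d_{\mathcal{PSR}}(X,m^{\prime})\), the Lipschitz bound \(|a-b|\leq d_{M}(m,m^{\prime})\) supplied by Lemma 4.1(ii) combines with the factorization \(|a^{2}-b^{2}|=|a-b|(a+b)\) to give

\[|a^{2}-b^{2}|=|a-b|(a+b)\leq d_{M}(m,m^{\prime})\bigl(2b+|a-b|\bigr)\leq d_{M}(m,m^{\prime})\bigl(2b+d_{M}(m,m^{\prime})\bigr),\]

which is exactly the bound defining \(g_{n}\); interchanging the roles of \(m\) and \(m^{\prime}\) gives the version displayed just above it.)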
Choose an arbitrary \(m_{0}\in M(p)\). Since \(\{m_{k}\}\) is dense in \(M(p)\), we can choose a subsequence \(m_{k_{i}}\) satisfying \(d_{M}(m_{k_{i}},m_{0})\to 0\) as \(i\to\infty\). For each \(i\), the inequality (B.44) with \((m,m^{\prime})\) replaced by \((m_{0},m_{k_{i}})\) is
\[f_{n}^{(\mathcal{PSR})}(m_{k_{i}})-g_{n}(m_{0},m_{k_{i}})\leq f_{n}^{(\mathcal{PSR })}(m_{0})\leq f_{n}^{(\mathcal{PSR})}(m_{k_{i}})+g_{n}(m_{0},m_{k_{i}}).\]
Taking the limit as \(n\to\infty\), by (B.42) and (B.43),
\[f^{(\mathcal{PSR})}(m_{k_{i}})-g(m_{0},m_{k_{i}}) \leq\liminf_{n\to\infty}f_{n}^{(\mathcal{PSR})}(m_{0})\] \[\leq\limsup_{n\to\infty}f_{n}^{(\mathcal{PSR})}(m_{0})\] \[\leq f^{(\mathcal{PSR})}(m_{k_{i}})+g(m_{0},m_{k_{i}}),\]
where \(g(m,m^{\prime})=d_{M}(m,m^{\prime})(2\int_{\operatorname{Sym}^{+}(p)}d_{ \mathcal{PSR}}(X,m^{\prime})P(dX)+d_{M}(m,m^{\prime}))\). Further taking the limit as \(i\to\infty\), since \(f^{(\mathcal{PSR})}\) is continuous (see Lemma 4.7), we have proven (B.41).
To show (B.39), let \(m_{n}\) be a sequence such that \(\lim_{n\to\infty}d_{M}(m_{n},m)=0\) for some \(m\in M(p)\). Again from the inequality (B.44), we have
\[f_{n}^{(\mathcal{PSR})}(m)-g_{n}(m_{n},m)\leq f_{n}^{(\mathcal{PSR})}(m_{n})\leq f_{n}^{(\mathcal{PSR})}(m)+g_{n}(m_{n},m).\]
By (B.41), \(f_{n}^{(\mathcal{PSR})}(m)\) converges to \(f^{(\mathcal{PSR})}(m)\) almost surely, while \(g_{n}(m_{n},m)\to 0\) almost surely as well. This proves (B.39).
We next show that with probability 1
\[\cap_{k=1}^{\infty}\overline{\cup_{n=k}^{\infty}E_{n}^{(\mathcal{PSR})}} \subset E^{(\mathcal{PSR})}.\] (B.45)
We assume \(\cap_{k=1}^{\infty}\overline{\cup_{n=k}^{\infty}E_{n}^{(\mathcal{PSR})}}\) is non-empty; otherwise, (B.45) holds.
Let \(\ell=\inf_{m\in M(p)}f^{(\mathcal{PSR})}(m)\) and \(\ell_{n}=\inf_{m\in M(p)}f_{n}^{(\mathcal{PSR})}(m)\) for \(n=1,2,\ldots\). By (B.40), we have for any \(m\in M(p)\) there exists \(\epsilon_{n}\to 0\) such that \(f^{(\mathcal{PSR})}(m)\geq f_{n}^{(\mathcal{PSR})}(m)-\epsilon_{n}\geq\ell_{n} -\epsilon_{n}\). Taking the limit superior of both sides, we have \(f^{(\mathcal{PSR})}(m)\geq\limsup_{n\to\infty}\ell_{n}\). Taking the infimum over \(m\in M(p)\), we get
\[\ell=\inf_{m\in M(p)}f^{(\mathcal{PSR})}(m)\geq\limsup_{n\to\infty}\ell_{n}.\] (B.46)
Thus, any subsequential limit of \(\ell_{n}\) is bounded above by \(\ell\).
For any \(m_{0}\in\cap_{k=1}^{\infty}\overline{\cup_{n=k}^{\infty}E_{n}^{(\mathcal{PSR})}}\), there exists a subsequence \(\{n_{k}:k=1,2,\ldots\}\) of \(1,2,\ldots\) such that \(m_{n_{k}}\in E_{n_{k}}^{(\mathcal{PSR})}\) and \(\lim_{k\to\infty}d_{M}(m_{n_{k}},m_{0})=0\). By (B.39),
\[\ell_{n_{k}}=f_{n_{k}}^{(\mathcal{PSR})}(m_{n_{k}})\to f^{(\mathcal{PSR})}(m_{0 })\geq\inf_{m\in M(p)}f^{(\mathcal{PSR})}(m)=\ell\] (B.47)
as \(k\to\infty\) almost surely. In view of (B.46), \(f^{(\mathcal{PSR})}(m_{0})\leq\ell\), which in turn gives \(f^{(\mathcal{PSR})}(m_{0})=\ell\), i.e., \(m_{0}\in E^{(\mathcal{PSR})}\). Since \(m_{0}\in\cap_{k=1}^{\infty}\overline{\cup_{n=k}^{\infty}E_{n}^{(\mathcal{PSR })}}\) was arbitrary, (B.45) is verified.
Let \(a_{n}:=\sup_{m\in E_{n}^{(\mathcal{PSR})}}d_{M}(m,E^{(\mathcal{PSR})})\). For each \(n\), choose \(m_{n}\in E_{n}^{(\mathcal{PSR})}\) such that
\[a_{n}(1-\frac{1}{n})<d_{M}(m_{n},E^{(\mathcal{PSR})})\leq a_{n}.\] (B.48)
Assume that the probability-\(1\) event in (B.45) has occurred. Then, every accumulation point of \(m_{n}\) lies in \(E^{(\mathcal{PSR})}\). Thus, either \(a_{n}\to 0\) or there is no accumulation point (equivalently, \(a_{n}\to\infty\) as \(n\to\infty\)). We will rule out the case \(a_{n}\to\infty\) by contradiction.
Suppose that \(\lim_{n\to\infty}a_{n}=\infty\). Then, for any choice \(m_{0}\in E^{(\mathcal{PSR})}\), we have by (B.48) \(d_{M}(m_{n},m_{0})\to\infty\).
For \(r>0\) let \(K_{r}=\{S\in\text{Sym}^{+}(p):\text{every eigenvalue $\lambda$ of $S$ satisfies }|\log\lambda|\leq r\}\). Choose \(r_{0}\) large enough so that \(P(K_{r_{0}})>0\) and \(m_{0}\in\mathcal{F}^{-1}(K_{r_{0}})\); such \(r_{0}\) exists since \(\cup_{r>0}K_{r}=\text{Sym}^{+}(p)\). Then for any \(X,Y\in K_{r_{0}}\),
\[\sup_{m\in\mathcal{F}^{-1}(X),m^{\prime}\in\mathcal{F}^{-1}(Y)}d_{M}(m,m^{\prime})\leq(k\,\mathrm{diam}(SO(p))^{2}+4pr_{0}^{2})^{1/2}=:C(r_{0}).\] (B.49)
We now claim that for any \(m_{n}\) with \(d_{M}(m_{n},m_{0})\to\infty\), there exists \(M_{n}\to\infty\) satisfying
\[d_{\mathcal{PSR}}(X,m_{n})\geq M_{n},\quad\text{for any $X\in K_{r_{0}}$}\] (B.50)
To verify this, for any \(X\in\text{Sym}^{+}(p)\) and \(m_{n}\in M(p)\), let \(m_{X}^{(n)}\in\mathcal{F}^{-1}(X)\) satisfy
\[d_{M}(m_{X}^{(n)},m_{n})=\min_{m\in\mathcal{F}^{-1}(X)}d_{M}(m,m_{n})=\inf_{m \in\mathcal{F}^{-1}(X)}d_{M}(m,m_{n})=d_{\mathcal{PSR}}(X,m_{n}).\]
By the triangle inequality, and by (B.49), for any \(X\in K_{r_{0}}\) we have
\[d_{M}(m_{n},m_{0}) \leq d_{M}(m_{n},m_{X}^{(n)})+d_{M}(m_{X}^{(n)},m_{X}^{(0)})+d_{M }(m_{X}^{(0)},m_{0})\] \[=d_{\mathcal{PSR}}(X,m_{n})+d_{M}(m_{X}^{(n)},m_{X}^{(0)})+d_{ \mathcal{PSR}}(X,m_{0})\] \[\leq d_{\mathcal{PSR}}(X,m_{n})+2C(r_{0}).\]
In particular, \(d_{\mathcal{PSR}}(X,m_{n})\geq d_{M}(m_{n},m_{0})-2C(r_{0})\). Taking \(M_{n}=d_{M}(m_{n},m_{0})-2C(r_{0})\), (B.50) is verified.
For each \(n\), choose a subsequence \(n_{1},\ldots,n_{k(n)}\) of \(1,2,\ldots,n\) so that \(X_{n_{j}}\in K_{r_{0}}\) for \(j=1,\ldots,k(n)\). Then by the strong law of large numbers, \(\lim_{n\to\infty}k(n)/n=P(K_{r_{0}})>0\). This fact, together with (B.50), gives
\[f_{n}^{(\mathcal{PSR})}(m_{n})=\frac{1}{n}\sum_{i=1}^{n}d_{\mathcal{PSR}}^{2}(X _{i},m_{n})\geq\frac{1}{n}\sum_{j=1}^{k(n)}d_{\mathcal{PSR}}^{2}(X_{n_{j}},m_{ n})\geq\frac{k(n)}{n}M_{n}^{2}\to\infty.\]
However, \(f_{n}^{(\mathcal{PSR})}(m_{n})=\inf_{m\in M(p)}f_{n}^{(\mathcal{PSR})}(m)\leq f_{n}^{(\mathcal{PSR})}(m_{0})\), and \(f_{n}^{(\mathcal{PSR})}(m_{n})\to\infty\) while \(f_{n}^{(\mathcal{PSR})}(m_{0})\to f^{(\mathcal{PSR})}(m_{0})<\infty\) by (B.40), yielding a contradiction. Thus \(a_{n}\to 0\) on the probability-one event on which (B.45) holds.
The following lemma shows that the conclusion of Theorem 4.19 is equivalent to the strong consistency of Huckemann (2011b) in the sense of Bhattacharya and Patrangenaru (2003).
**Lemma B.19**.: _Let \((M,d)\) be a metric space and let \(E,E_{1},E_{2},\ldots\) be non-empty sets in \(M\). We have \(\lim_{n\to\infty}\sup_{m\in E_{n}}d(m,E)=0\) if and only if, for any \(\epsilon>0\), there exists \(N(\epsilon)\) such that \(\cup_{n\geq N(\epsilon)}E_{n}\subset\{m\in M:d(E,m)\leq\epsilon\}\)._
Proof of Lemma b.19.: By definition, \(\lim_{n\to\infty}\sup_{m\in E_{n}}d(m,E)=0\) is equivalent to the statement that for any \(\epsilon>0\), there exists \(N(\epsilon)\) such that for all \(n\geq N(\epsilon)\), \(\sup_{m\in E_{n}}d(m,E)\leq\epsilon\). If \(\sup_{m\in E_{n}}d(m,E)\leq\epsilon\) for all \(n\geq N(\epsilon)\), then for any \(m\in\cup_{n\geq N(\epsilon)}E_{n}\), \(m\in E_{n}\) for some \(n\geq N(\epsilon)\), and \(d(m,E)\leq\sup_{m^{\prime}\in E_{n}}d(m^{\prime},E)\leq\epsilon\), which gives \(m\in\overline{B}_{\epsilon}(E):=\{m\in M:d(E,m)\leq\epsilon\}\). On the other hand, if \(N(\epsilon)\) is such that \(\cup_{n\geq N(\epsilon)}E_{n}\subset\overline{B}_{\epsilon}(E)\), then for any \(n\geq N(\epsilon)\) and for any \(m\in E_{n}\), \(d(m,E)\leq\epsilon\). Thus, \(\sup_{m\in E_{n}}d(m,E)\leq\epsilon\) as well.
Proof of Corollary 4.20.: By Theorem 4.19 there exists a probability \(1\) event \(A\) in which \(d_{M}(m_{n},E^{(\mathcal{PSR})})\to 0\) as \(n\to\infty\) for any sequence \(\{m_{n}\in E_{n}^{(\mathcal{PSR})}\}\). Assume the event \(A\) has occurred. Choose a sequence \(\{m_{n}\in E_{n}^{(\mathcal{PSR})}\}\). Since \(E^{(\mathcal{PSR})}=\mathcal{G}(p)\cdot\mu\), there exists a sequence \(\{h_{n}\in\mathcal{G}(p)\}\) such that \(d_{M}(m_{n},h_{n}\cdot\mu)\to 0\) as \(n\to\infty\).
Now fix an arbitrary element \(m_{0}\in E^{(\mathcal{PSR})}\), and for each \(n\) let \(h_{n}^{\prime}\in\mathcal{G}(p)\) be such that \(m_{0}=h_{n}^{\prime}\cdot h_{n}\cdot\mu\). Then, as \(n\to\infty\)
\[d_{M}(h_{n}^{\prime}\cdot m_{n},m_{0})=d_{M}(h_{n}^{\prime}\cdot m_{n},h_{n}^{ \prime}\cdot h_{n}\cdot\mu)=d_{M}(m_{n},h_{n}\cdot\mu)\to 0.\]
Since \(h_{n}^{\prime}\cdot m_{n}\in E_{n}^{(\mathcal{PSR})}\) as well as \(m_{n}\in E_{n}^{(\mathcal{PSR})}\) (as observed in (4.6)), we have
\[0\leq d_{M}(E_{n}^{(\mathcal{PSR})},m_{0})=\inf_{m\in E_{n}^{(\mathcal{PSR})}}d _{M}(m,m_{0})\leq d_{M}(h_{n}^{\prime}\cdot m_{n},m_{0}),\]
and in particular \(d_{M}(E_{n}^{(\mathcal{PSR})},m_{0})\to 0\) as \(n\to\infty\). Since \(m_{0}\in E^{(\mathcal{PSR})}\) was arbitrary, we have shown (4.12). Assertion (4.13) is verified by the conclusion of Theorem 4.19 and (4.12).
Proof of Corollary 4.21.: (i) The hypotheses of the corollary ensure that Theorem 4.19 applies, so the event that for every \(\epsilon>0\), there exists \(N(\epsilon)\) such that for all \(n\geq N(\epsilon)\)
\[\sup_{m\in E_{n}^{(\mathcal{PSR})}}d_{M}(m,E^{(\mathcal{PSR})})\leq\epsilon,\] (B.51)
occurs with probability \(1\). Assume this event has occurred. Fix an \(\epsilon>0\), and let \(N=N(\epsilon)\). For each \(n\geq N\), choose \(S_{n}\in\mathcal{F}(E_{n}^{(\mathcal{PSR})})\). Then one can choose \(m_{n}\in\mathcal{F}^{-1}(S_{n})\) such that \(m_{n}\in E_{n}^{(\mathcal{PSR})}\). Likewise for an arbitrary \(S_{0}\in\mathcal{F}(E^{(\mathcal{PSR})})\), let \(m_{0}\in M(p)\) satisfy \(m_{0}\in\mathcal{F}^{-1}(S_{0})\cap E^{(\mathcal{PSR})}\). Thus,
\[d_{\mathcal{SR}}(S_{n},\mathcal{F}(E^{(\mathcal{PSR})})) =\inf_{S_{0}\in\mathcal{F}(E^{(\mathcal{PSR})})}d_{\mathcal{SR}}(S_{n},S_{0})\] \[=\inf_{m_{0}\in E^{(\mathcal{PSR})}}d_{\mathcal{SR}}(S_{n},\mathcal{F}(m_{0}))\] \[\leq\inf_{m_{0}\in E^{(\mathcal{PSR})}}d_{\mathcal{PSR}}(S_{n},m_{0})\quad\text{(by (3.4))}\] \[\leq\inf_{m_{0}\in E^{(\mathcal{PSR})}}d_{M}(m_{n},m_{0})=d_{M}(m_{n},E^{(\mathcal{PSR})})\leq\epsilon.\]
The last inequality holds since we have assumed the event (B.51) has occurred. Since \(S_{n}\in\mathcal{F}(E_{n}^{(\mathcal{PSR})})\) was arbitrary, \(\lim_{n\to\infty}\sup_{S\in\mathcal{F}(E_{n}^{(\mathcal{PSR})})}d_{\mathcal{ SR}}(S,\mathcal{F}(E^{(\mathcal{PSR})}))=0\). Hence the statement that this limit equals \(0\) is a probability-one event.
(ii) and (iii). Since the probability measure \(P\) has finite PSR-variance, the condition \(E^{(\mathcal{SR})}\subset S_{p}^{\text{top}}\) implies that \(\mathcal{F}(E^{(\mathcal{PSR})})=E^{(\mathcal{SR})}\) (by Theorem 3.5). Conclusion (i) then implies conclusion (ii), which in turn implies conclusion (iii).
#### b.5.2 Proof of Theorem 4.22
Proof of Theorem 4.22.: By Assumption (A2), \(E^{(\mathcal{PSR})}=\mathcal{G}(p)\cdot m^{\prime}\) for some \(m^{\prime}\in E^{(\mathcal{PSR})}\), which implies that for any \(m,m^{\prime}\in E^{(\mathcal{PSR})}\) and any \(S\in\text{Sym}^{+}(p)\), \(d_{\mathcal{PSR}}(S,m)=d_{\mathcal{PSR}}(S,m^{\prime})\). Therefore, in the presence of Assumption (A2), Assumption (A3) implies that \(P(d_{\mathcal{PSR}}(X,m_{0})<r^{\prime}_{cx})=1\) for any \(m_{0}\in E^{(\mathcal{PSR})}\). Let \(m_{0}\in E^{(\mathcal{PSR})}\) be given.
Let \(A_{1}\) be the event that \(E_{n}^{(\mathcal{PSR})}\) is unique up to the action of \(\mathcal{G}(p)\) for all \(n\), and let \(A_{2}\) be the event that \(d_{\mathcal{PSR}}(X_{i},m_{0})<r^{\prime}_{cx}\) and \(X_{i}\in S_{p}^{\text{top}}\) for all \(i\in\mathbb{N}\). Assumption (A2) implies that \(P(A_{1})=1\). Assumptions (A1) and (A3) imply that \(P(A_{2})=1\) as well. In the rest of this proof, we assume that the probability \(1\) event \(A_{1}\cap A_{2}\) has occurred.
For each \(n\), let \(m^{\prime}_{n}\in E_{n}^{(\mathcal{PSR})}\) be arbitrary. Since the event \(A_{1}\) has occurred, \(E_{n}^{(\mathcal{PSR})}=\mathcal{G}(p)\cdot m^{\prime}_{n}\). Let \(m_{n}\) be any minimizer of the function \(d_{M}(\cdot,m_{0})\) over \(\mathcal{G}(p)\cdot m^{\prime}_{n}\). Thus, by definition, \(m_{n}\) is a sample PSR mean. We first show that such an \(m_{n}\) is unique.
Let \(\text{supp}_{1}(P)=\{S\in\text{Sym}^{+}(p):d_{\mathcal{PSR}}(S,m_{0})<r^{\prime}_{cx}\}\cap S_{p}^{\text{top}}\), so that \(X_{i}\in\text{supp}_{1}(P)\) for all \(i\in\mathbb{N}\) since the event \(A_{1}\cap A_{2}\) has occurred. For any \(S\in\text{supp}_{1}(P)\), one can choose a unique \(\tilde{m}:=\tilde{m}(m_{0};S)\in\mathcal{F}^{-1}(S)\) such that
\(d_{\mathcal{PSR}}(S,m_{0})<\min_{h\in\mathcal{G}(p)\setminus\{I_{p}\}}d_{M}(h\cdot\tilde{m},m_{0})\). Note that \(d_{M}(\tilde{m},m_{0})<r^{\prime}_{cx}\) since \(S\in\operatorname{supp}_{1}(P)\). To verify that such a choice is indeed unique, let \(m^{\prime}=h\cdot\tilde{m}\) for some \(h\in\mathcal{G}(p)\setminus\{I_{p}\}\). An application of Lemma 4.12(c), together with the triangle inequality, gives \(d_{M}(m^{\prime},m_{0})>3r^{\prime}_{cx}\), and thus the choice of \(\tilde{m}\) is unique.
For each \(i=1,\ldots,n\), \(X_{i}\in\operatorname{supp}_{1}(P)\), so we can set \(m_{{}_{X_{i}}}=\tilde{m}(m_{0};X_{i})\) to be the unique eigen-decomposition of \(X_{i}\), closest to \(m_{0}\). Then, the sample PSR mean objective function can be written as
\[f_{n}^{(\mathcal{PSR})}(m)=\frac{1}{n}\sum_{i=1}^{n}d_{\mathcal{PSR}}^{2}(X_{i },m)=\frac{1}{n}\sum_{i=1}^{n}d_{M}^{2}(m_{{}_{X_{i}}},m),\] (B.52)
which is exactly the Frechet objective function on \((M(p),d_{M})\) with data-points \(m_{{}_{X_{1}}},\ldots,m_{{}_{X_{n}}}\). Since \(d_{M}(m_{{}_{X_{i}}},m_{0})<r^{\prime}_{cx}\) for all \(i\), Lemma 4.14 implies that the Frechet mean \(\bar{m}_{n}\) of \(\{m_{{}_{X_{1}}},\ldots,m_{{}_{X_{n}}}\}\) is unique and satisfies \(d_{M}(\bar{m}_{n},m_{0})<r^{\prime}_{cx}\). Moreover, by (B.52), \(\bar{m}_{n}\in E_{n}^{(\mathcal{PSR})}\). Since \(\mathcal{F}(\bar{m}_{n})=\mathcal{F}(E_{n}^{(\mathcal{PSR})})\in\operatorname {supp}_{1}(P)\) as well, \(\bar{m}_{n}\) is the unique minimizer of \(\min_{m\in\mathcal{G}(p)\cdot\bar{m}_{n}}d_{M}(m,m_{0})\). Thus, \(m_{n}=\bar{m}_{n}\) is determined uniquely (if the probability \(1\) event \(A_{1}\cap A_{2}\) has occurred).
(a) By Corollary 4.20, \(\lim_{n\to\infty}d_{H}(E_{n}^{(\mathcal{PSR})},E^{(\mathcal{PSR})})=0\) with probability \(1\). Therefore, with probability \(1\) the sequence \(\{m_{n}\in E_{n}^{(\mathcal{PSR})}\}\) chosen above satisfies,
\[\lim_{n\to\infty}d_{M}(m_{n},m_{0})=0.\] (B.53)
(b) For the sample \(X_{1},\ldots,X_{n}\), the PSR mean \(m_{n}\) minimizes \(f_{n}^{(\mathcal{PSR})}(\cdot)\). Thus, for any neighborhood \(V\subset B_{r^{\prime}_{cx}}(0)\subset\mathbb{R}^{d}\) containing zero, the point \(x_{n}:=\phi_{m_{0}}(m_{n})\) is a minimizer of the function \(g_{n}:V\to[0,\infty)\) defined by
\[g_{n}(x)=\sum_{i=1}^{n}d_{\mathcal{PSR}}^{2}(X_{i},\phi_{m_{0}}^{-1}(x)),\] (B.54)
where \(\phi_{m_{0}}(\cdot)=\operatorname{vec}\circ\tilde{\varphi}_{m_{0}}(\cdot)\) (see (4.14) and (4.15)). Note that (4.15) implies that for any \(x\in B_{r^{\prime}_{cx}}(0)\), \(\phi_{m_{0}}^{-1}(x)\in B_{r^{\prime}_{cx}}^{d_{M}}(m_{0})\).
We now establish that for each \(S\in\operatorname{supp}_{1}(P)\), there exists a unique \(m_{S}\in\mathcal{F}^{-1}(S)\) such that
\[d_{\mathcal{PSR}}^{2}(S,\phi_{m_{0}}^{-1}(x))=d_{M}^{2}(m_{S},\phi_{m_{0}}^{-1 }(x))\] (B.55)
for all \(x\in B_{r^{\prime}_{cx}}(0)\). To verify this, fix \(S\in\operatorname{supp}_{1}(P)\) and let \(m_{S}\in\mathcal{F}^{-1}(S)\) be the unique point at which \(\min_{m\in\mathcal{F}^{-1}(S)}d_{M}(m,m_{0})\) is achieved. By the triangle inequality,
\[d_{M}(m_{S},\phi_{m_{0}}^{-1}(x))\leq d_{M}(m_{S},m_{0})+d_{M}(m_{0},\phi_{m_{0} }^{-1}(x))<2r^{\prime}_{cx}.\]
For \(m^{\prime}\in\mathcal{F}^{-1}(S)\) such that \(m^{\prime}\neq m_{S}\), we have \(d_{M}(m^{\prime},m_{S})\geq 4r^{\prime}_{cx}\) by Lemma 4.12(c). Again by the triangle inequality,
\[d_{M}(m^{\prime},\phi_{m_{0}}^{-1}(x))\geq d_{M}(m^{\prime},m_{S})-d_{M}(m_{S}, \phi_{m_{0}}^{-1}(x))>2r^{\prime}_{cx}>d_{M}(m_{S},\phi_{m_{0}}^{-1}(x)).\]
Thus, \(m_{S}\) is the unique element of \(\mathcal{F}^{-1}(S)\) satisfying
\[d_{\mathcal{PSR}}(S,\phi_{m_{0}}^{-1}(x))=\inf_{m\in\mathcal{F}^{-1}(S)}d_{M}(m, \phi_{m_{0}}^{-1}(x))=d_{M}(m_{S},\phi_{m_{0}}^{-1}(x)),\] (B.56)
as asserted.
Next, recall that for each \(i\), \(m_{{}_{X_{i}}}=\tilde{m}(m_{0};X_{i})\) is the eigen-decomposition of \(X_{i}\) closest to \(m_{0}\), and let \(x\in B_{r^{\prime}_{cx}}(0)\) be arbitrary. Using (B.56), we rewrite (B.54) as
\[g_{n}(x)=\sum_{i=1}^{n}d_{M}^{2}(m_{{}_{X_{i}}},\phi_{m_{0}}^{-1}(x)),\]
where for every \(i\), \(m_{{}_{X_{i}}}\in B_{r^{\prime}_{cx}}^{d_{M}}(m_{0})\), a ball that also contains \(\phi_{m_{0}}^{-1}(x)\). We shall now discuss the consequences of the bounded support \(B_{r^{\prime}_{cx}}^{d_{M}}(m_{0})\). It is well known that \((\mathrm{Diag}^{+}(p),g_{\mathcal{D}^{+}})\) has non-positive sectional curvature and infinite injectivity radius, and \((SO(p),kg_{SO})\) has non-negative sectional curvature (bounded above by \(\Delta(SO(p),kg_{SO})=1/(4k)\)) and injectivity radius \(r_{\mathrm{inj}}(SO(p),kg_{SO})=\sqrt{k}\pi\). Thus, for the product Riemannian manifold \((M(p),g_{M})\), it follows that \(r_{\mathrm{inj}}:=r_{\mathrm{inj}}(M,g_{M})=r_{\mathrm{inj}}(SO(p),kg_{SO})\), \(\Delta:=\Delta(M,g_{M})=\Delta(SO(p),kg_{SO})\), and that the radius \(r^{\prime}_{cx}\) of the ball \(B_{r^{\prime}_{cx}}^{d_{M}}(m_{0})\) satisfies
\[r^{\prime}_{cx}=\frac{\sqrt{k}\beta_{\mathcal{G}(p)}}{4}\leq\frac{\sqrt{k}\pi}{8}<\frac{\sqrt{k}\pi}{2}=\frac{1}{2}\min\{r_{\mathrm{inj}},\pi/\sqrt{\Delta}\},\] (B.57)
where the first inequality follows from Lemma 4.12(b). (The right-hand side of (B.57) equals the convexity radius of \((M,g_{M})\).)
By Afsari (2011) and Afsari, Tron and Vidal (2013), the inequality (B.57) ensures that (i) the open ball \(B_{r^{\prime}_{cx}}^{d_{M}}(m_{0})\) is _strongly convex_\({}^{2}\) in \(M(p)\); (ii) for any \(m\in B_{r^{\prime}_{cx}}^{d_{M}}(m_{0})\), the function \(d_{M}^{2}(m,\cdot)\) is a \(C^{\infty}\) function in \(B_{r^{\prime}_{cx}}^{d_{M}}(m_{0})\) (since the cut locus of \(m\) does not intersect \(B_{r^{\prime}_{cx}}^{d_{M}}(m_{0})\)), which in turn implies that \(g_{n}\) is \(C^{\infty}\); and (iii) for any \(m_{1},\ldots,m_{n}\in B_{r^{\prime}_{cx}}^{d_{M}}(m_{0})\), the function \(\sum_{i=1}^{n}d_{M}^{2}(m_{i},\cdot)\) (restricted to \(B_{r^{\prime}_{cx}}^{d_{M}}(m_{0})\)) is convex (strictly convex if at least two \(m_{i}\)'s are distinct). In particular, when the \(X_{i}\)'s are sampled from an absolutely continuous distribution \(P\) (as assumed in (A1)), with probability \(1\) the Hessian matrix of \(d_{M}^{2}(m_{{}_{X_{i}}},\phi_{m^{\prime}}^{-1}(x))\) at \(x=0\), for arbitrary \(m^{\prime}\in B_{r^{\prime}_{cx}}^{d_{M}}(m_{0})\), is well-defined and positive definite. Furthermore, thanks to the identification (B.55), we are assured that with probability \(1\), for any \(i=1,\ldots,n\), the function \(h_{X_{i}}(\cdot):=d_{\mathcal{PSR}}^{2}(X_{i},\phi_{m_{0}}^{-1}(\cdot))=d_{M}^{2}(m_{X_{i}},\phi_{m_{0}}^{-1}(\cdot))\) is \(C^{\infty}\).
Footnote 2: A set \(B\) in \((M,g)\) is strongly convex if any two points in \(B\) can be connected by a unique minimal-length geodesic in \(M\) and the geodesic segment entirely lies in \(B\).
For all \(n\geq 0\), recall that \(x_{n}=\phi_{m_{0}}(m_{n})\in\mathbb{R}^{d}\); in particular, \(x_{0}=0\). Observe that (B.53) implies that as \(n\to\infty\), \(\phi_{m_{0}}(m_{n})\to\phi_{m_{0}}(m_{0})\), i.e., that \(x_{n}\to 0\). By Theorem 2.1 of Afsari (2011) (or, equivalently, by Theorem 2.6 of Afsari, Tron and Vidal (2013)), the gradient vector field \(\mathrm{grad}_{x}\,g_{n}(x)\) has a unique zero in \(B_{r^{\prime}_{cx}}(0)\), and the location of this zero is \(x_{n}\).
By the Mean Value Theorem applied to each component of \(\operatorname{grad}_{x}g_{n}\),
\[0=n^{-1/2}\operatorname{grad}_{x}g_{n}(x_{n})=n^{-1/2}\operatorname{grad}_{x}g_{ n}(0)+n^{-1}\mathbf{H}g_{n}(t_{n})\cdot(\sqrt{n}x_{n}),\]
where the \(j\)th coordinate of \(t_{n}\) is \(t_{j}x_{n,j}\) for suitable \(t_{j}\in[0,1]\), with \(x_{n,j}\) the \(j\)th coordinate of \(x_{n}\in\mathbb{R}^{d}\).
Let \(x\in B_{r^{\prime}_{cx}}(0)\) be arbitrary. Since \(X_{1},X_{2},\ldots\) are i.i.d. with bounded support and \(d^{2}_{\mathcal{PSR}}(X_{i},\phi^{-1}_{m_{0}}(\cdot))\) is \(C^{\infty}\), the random vectors \(\operatorname{grad}_{x}d^{2}_{\mathcal{PSR}}(X_{i},\phi^{-1}_{m_{0}}(x))\)\((i=1,\ldots,n)\) are i.i.d. and bounded. This fact leads to
\[\int_{\operatorname{Sym}^{+}(p)}\frac{\partial}{\partial x_{i}}d^{2}_{ \mathcal{PSR}}(X,\phi^{-1}_{m_{0}}(x))P(dX)=\frac{\partial}{\partial x_{i}} \int_{\operatorname{Sym}^{+}(p)}d^{2}_{\mathcal{PSR}}(X,\phi^{-1}_{m_{0}}(x))P (dX)\]
(which equals \(0\) at \(x=0\) as \(m_{0}\in E^{(\mathcal{PSR})}\)), and thus \(E(\operatorname{grad}_{x}d^{2}_{\mathcal{PSR}}(X,\phi^{-1}_{m_{0}}(0)))=0\). Moreover, since the product of any two entries of \(\operatorname{grad}_{x}d^{2}_{\mathcal{PSR}}(X_{i},\phi^{-1}_{m_{0}}(x))\) is bounded as well, \(\Sigma_{P}:=\operatorname{Cov}(\operatorname{grad}_{x}d^{2}_{\mathcal{PSR}}(X,\phi^{-1}_{m_{0}}(0)))\) exists.
Since the first two moments of \(\operatorname{grad}_{x}d^{2}_{\mathcal{PSR}}(X_{i},\phi^{-1}_{m_{0}}(0))\) exist, the multivariate classical central limit theorem (_cf._Anderson, 1958) implies that as \(n\to\infty\),
\[n^{-1/2}\operatorname{grad}_{x}g_{n}(0)=\frac{\sqrt{n}}{n}\sum_{i=1}^{n} \operatorname{grad}_{x}d^{2}_{\mathcal{PSR}}(X_{i},\phi^{-1}_{m_{0}}(0))\]
weakly converges to \(N_{d}(0,\Sigma_{P})\).
Likewise, the continuity of \(\mathbf{H}g_{n}(\cdot)\) ensures that each entry of the matrix
\[H_{P}(x):=E\left(\mathbf{H}d^{2}_{\mathcal{PSR}}(X,\phi^{-1}_{m_{0}}(x))\right)\]
exists. Since, with probability \(1\), \(t_{n}\to 0\) (because \(x_{n}\to 0\)) and since \(\mathbf{H}g_{n}(\cdot)=\sum_{i=1}^{n}\mathbf{H}h_{X_{i}}(\cdot)\) is continuous, the law of large numbers implies that \(n^{-1}\mathbf{H}g_{n}(t_{n})\) converges in probability to \(H_{P}:=H_{P}(0)\) as \(n\to\infty\). Recall that, with probability \(1\), the function \(h_{X}(\cdot)\) is \(C^{\infty}\) and strictly convex on \(B_{r^{\prime}_{ex}(0)}\), and thus for any \(x\in B_{r^{\prime}_{ex}(0)}\), both \(\mathbf{H}d^{2}_{\mathcal{PSR}}(X,\phi^{-1}_{m_{0}}(x))\) and \(\mathbf{H}g_{n}(x)=\sum_{i=1}^{n}\mathbf{H}h_{X_{i}}(x)\) are positive definite almost surely. Therefore, \(H_{P}=H_{P}(0)\) is invertible, and so is \(\mathbf{H}g_{n}(t_{n})\) almost surely. Thus, by Slutsky's theorem, \(\sqrt{n}x_{n}=(n^{-1}\mathbf{H}g_{n}(t_{n}))^{-1}\times n^{-1/2}\operatorname {grad}_{x}g_{n}(0)\) converges in distribution to \(N_{d}(0,H_{P}^{-1}\Sigma_{P}H_{P}^{-1})\).
## Appendix C Additional numerical results
As referenced in Section 5.3, we plot the quantiles of the log-eigenvalues and rotation angles of the linearized PSR means against the quantiles of the standard normal distribution for each group in Figure 5, as a visual check of normality of the linearized PSR sampling distributions. With the exception of the tails, the normal QQ plots remain within the 95% confidence envelope, despite the small sample sizes (\(n_{i}=19,17\)).
## Acknowledgements
The first author was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1A2C2002256).

2303.06856 | Dynamic Neural Network for Multi-Task Learning Searching across Diverse Network Topologies | In this paper, we present a new MTL framework that searches for structures optimized for multiple tasks with diverse graph topologies and shares features among tasks. We design a restricted DAG-based central network with read-in/read-out layers to build topologically diverse task-adaptive structures while limiting search space and time. We search for a single optimized network that serves as multiple task adaptive sub-networks using our three-stage training process. To make the network compact and discretized, we propose a flow-based reduction algorithm and a squeeze loss used in the training process. We evaluate our optimized network on various public MTL datasets and show ours achieves state-of-the-art performance. An extensive ablation study experimentally validates the effectiveness of the sub-module and schemes in our framework. | Wonhyeok Choi, Sunghoon Im | 2023-03-13T05:01:50Z | http://arxiv.org/abs/2303.06856v1

# Dynamic Neural Network for Multi-Task Learning Searching across Diverse Network Topologies
###### Abstract
In this paper, we present a new MTL framework that searches for structures optimized for multiple tasks with diverse graph topologies and shares features among tasks. We design a restricted DAG-based central network with read-in/read-out layers to build topologically diverse task-adaptive structures while limiting search space and time. We search for a single optimized network that serves as multiple task adaptive sub-networks using our three-stage training process. To make the network compact and discretized, we propose a flow-based reduction algorithm and a squeeze loss used in the training process. We evaluate our optimized network on various public MTL datasets and show ours achieves state-of-the-art performance. An extensive ablation study experimentally validates the effectiveness of the sub-module and schemes in our framework.
## 1 Introduction
Multi-task learning (MTL), which learns multiple tasks simultaneously with a single model, has gained increasing attention [3, 13, 14]. MTL improves the generalization performance of tasks while keeping the total number of network parameters low by sharing representations across tasks. However, as the number of tasks increases, it becomes more difficult for the model to learn the shared representations, and improper sharing between less related tasks causes negative transfer that sacrifices the performance of multiple tasks [15, 36]. To mitigate the negative transfer in MTL, some works [6, 25, 32] separate the shared and task-specific parameters in the network.
More recent works [21, 29, 38] have been proposed to dynamically control the ratio of shared parameters across tasks using a Dynamic Neural Network (DNN) to construct a task-adaptive network. These works mainly apply cell-based architecture search [19, 27, 41] for fast search times, so the optimized sub-networks of each task consist of fixed or simple structures whose layers are simply branched, as shown in Fig. 1(a). They primarily focus on finding branching patterns in specific aspects of the architecture, and feature-sharing ratios across tasks. However, exploring optimized structures in restricted network topologies has the potential to cause performance degradation in heterogeneous MTL scenarios due to unbalanced task complexity.
We present a new MTL framework searching for sub-network structures, optimized for each task across diverse network topologies in a single network. To search the graph topologies from richer search space, we apply Directed Acyclic Graph (DAG) for the homo/heterogeneous MTL frameworks, inspired by the work in NAS [19, 27, 40]. The MTL in the DAG search space causes a scalability issue, where the number of parameters and search time increase quadratically as the number of hidden states increases.
To solve this problem, we design a restricted DAG-based central network with read-in/read-out layers that allow our MTL framework to search across diverse graph topologies while limiting the search space and search time. Our flow-restriction eliminates the low-importance long skip connections among network structures for each task, and reduces the required number of parameters from \(O(N^{2})\) to \(O(N)\). The read-in layer is the layer that directly connects all the hidden states from the input state, and the read-out layer is the layer that connects all the hidden states to the last feature layer. These are key to having various network topological representations, such as polytree structures, with early-exiting and multi-embedding.
Then, we optimize the central network to have compact task-adaptive sub-networks using a three-stage training procedure. To accomplish this, we propose a squeeze loss and a flow-based reduction algorithm. The squeeze loss limits the upper bound on the number of parameters. The reduction algorithm prunes the network based on the weighted adjacency matrix measured by the amount of information flow in each layer. In the end, our MTL framework constructs a compact single network that serves as multiple task-specific networks with unique structures drawn from diverse topologies such as chains, polytrees, and parallel structures, as presented in Fig. 1(b). It also dynamically controls the amount of shared representation among tasks.
The experiments demonstrate that our framework successfully searches the task-adaptive network topologies of each task and leverages the knowledge among tasks to make a generalized feature. The proposed method outperforms state-of-the-art methods on all common benchmark datasets for MTL. Our contributions can be summarized as follows:
* We present for the first time an MTL framework that searches both task-adaptive structures and sharing patterns among tasks. It achieves state-of-the-art performance on all public MTL datasets.
* We propose a new DAG-based central network composed of a flow restriction scheme and read-in/out layers, that has diverse graph topologies in a reasonably restricted search space.
* We introduce a new training procedure that optimizes the MTL framework for compactly constructing various task-specific sub-networks in a single network.
## 2 Related Works
**Neural Architecture Search (NAS)** Neural Architecture Search is a method that automates neural architecture engineering [8]. Early works [40, 2, 41] use reinforcement learning based on rewarding the model accuracy of the generated architecture. Alternative approaches [30, 24, 37] employ evolutionary algorithms to optimize both the neural architecture and its weights. These methods search for an adequate neural architecture in a large discrete space. Gradient-based NAS methods [4, 19, 39] of formulating operations in a differentiable search space are proposed to alleviate the scalability issues. They generally use the convex combination from a set of operations instead of determining a single operation. Most NAS approaches [19, 27, 40] adopt the complete DAG as a search space, to find the architecture in the various network topologies. However, DAG-based MTL frameworks have not been proposed, because of their considerably high computational demands.
**Multi-Task Learning (MTL)** Multi-task learning in deep neural networks can be categorized into hard and soft parameter sharing types [31]. Hard parameter sharing [13, 3, 14], also known as shared-bottom, is the most commonly used approach to MTL. This scheme improves generalization performance while reducing the computational cost of the network, by using shared hidden layers between all tasks. However, it typically struggles with the negative transfer problem [15, 36] which degrades performance due to improper sharing between less relevant tasks.
On the other hand, soft-parameter sharing [25, 32] alleviates the negative transfer problem by changing the shared parameter ratio. These approaches mitigate the negative transfer by flexibly modifying shared information, but they cannot maintain the computational advantage of the classic shared-bottom model. Recently, advanced approaches have been proposed to adjust shared parameters using a dynamic neural network [21, 22, 38, 29] and NAS [10].
**NAS-style MTL** MTL frameworks using a dynamic neural network (DNN) can be divided into two categories. One employs the Mixture-of-Experts (MoE) [33], which is designed for per-sample conditional computation, for MTL by determining the experts of each task [21, 9, 22]. Their finalized task-specific sub-networks have a fixed depth, because they choose experts from a fixed number of modular layers. This causes a critical task-balancing issue in heterogeneous MTL. The other adopts a skip-or-select policy to select task-specific blocks from a set of residual blocks [38] or a shared block per layer [12, 29]. These methods only create a simple serial path in the finalized sub-network of each task, and a parallel link cannot be reproduced. Moreover, they heuristically address the unbalanced task-wise complexity issues in heterogeneous MTL (_e.g._, manually changing the balancing parameters based on the task complexity [29, 38]). Thus, none of the aforementioned works focus on finding the optimal task-specific structure in the MTL scenario.

Figure 1: **Graph representation of various neural networks.** (a) Graph representation of existing dynamic neural networks for multi-task learning and ours. (b) Topologies of a complete Directed Acyclic Graph (DAG) and the output sub-graph of the DAG structure.
## 3 Method
We describe our MTL framework, which searches for optimized network structures tailored to each task across diverse graph topologies, while limiting search time. Sec. 3.1 describes the composition of the searchable space of our central network and our flow-restriction method for efficiently balancing the topological diversity of task-specific sub-networks and searching space. Sec. 3.2 introduces our mechanism to determine the task-adaptive sub-network in the central network and describes the overall training process and loss function. The overall pipeline of our method is illustrated in Fig. 2.
### The Central Network with Diverse Topologies
Our central network is a graph \(G=(V,E)\) with layers \(E\), in which the \(N\) hidden states \(V=\{v_{1},...,v_{N}\}\) are topologically sorted:
\[E=\{e_{ij}\}_{i,j\in\{1,...,N\}},\text{ where }i<j, \tag{1}\] \[e_{ij}(\,\cdot\,;\theta_{ij}):\mathbb{R}^{N^{v_{i}}}\to \mathbb{R}^{N^{v_{j}}}, \tag{2}\]
where \(e_{ij}\) is the operation that transfers the state \(v_{i}\) to \(v_{j}\) with the weight parameters \(\theta_{ij}\in\Theta\), and \(N^{v_{k}}\) is the number of elements of the hidden state \(v_{k}\). We adopt the DAG structure [19, 27, 40] for the network. However, the optimized structure of a complete DAG is searched from \(2^{N(N-1)/2}\) network topologies, a search space too large to be optimized in a reasonable time. To address the issue while maintaining diversity, we propose a flow-restriction and read-in/read-out layers.
**Flow-restriction** The flow-restriction eliminates the low-importance long skip connection among network structures for each task by restricting \(j-i\leq M\) where \(M\) is the flow constant. Regulating the searchable edges in the graph reduces the required number of parameters and searching time from \(O(N^{2})\) to \(O(N)\), but it sacrifices the diversity and capacity of the network topologies.
To explain the topological diversity and task capacity of sub-networks, we define the three components of network topology, as follows:
1. \(\mathcal{D}(G)=\max(\{\text{Distance}(v_{i},v_{j})\}_{v_{i},v_{j}\in V}),\)
2. \(\mathcal{W}(G)=\max(\{\text{Out}_{v_{i}}\}_{v_{i}\in V}),\)
3. \(\mathcal{S}(G_{s},G)=|E_{s}|/|E|,\)
where \(\text{Out}_{v_{i}}\) is the out-degree of the vertex \(v_{i}\) and \(\text{Distance}(\cdot)\) is the operation that counts the number of layers (or edges) between two connected vertices. The network depth \(\mathcal{D}(G)\) is equal to the longest distance between two vertices in the graph \(G\). The network width \(\mathcal{W}(G)\) is equal to the maximum value of the out-degrees of hidden states in the graph \(G\). The sparsity \(\mathcal{S}(G_{s},G)\) of the sub-graph \(G_{s}\) is the ratio of finalized edges \(|E_{s}|\) over entire edges \(|E|\). The first two components are measurements of the topological diversity of the finalized sub-network, while the last one is for the sub-network capacity. While a complete DAG has the full range of depth and width components, the flow-restricted DAG has the properties of depth and width components as follows:
1. \(\min(\{\mathcal{D}(G_{s})\}_{G_{s}\subseteq G})=\lceil(|V|/M)\rceil,\)
2. \(\max(\{\mathcal{W}(G_{s})\}_{G_{s}\subseteq G})=M,\)
where \(\{G_{s}\}\) is the set of all sub-graphs of \(G\). The min-depth property (Prop. 1) can cause the over-parameterized problem when the capacity of the task is extremely low. The max-width property (Prop. 2) directly sacrifices the diversity of network topologies in the search space.
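To make the \(O(N^{2})\to O(N)\) reduction above concrete, the following short sketch (our own illustration in plain Python; the variable names and the values \(N=16\), \(M=3\) are ours, not from the paper) counts the searchable layers of a complete DAG versus a flow-restricted one.

```
N, M = 16, 3  # hidden states, flow constant (illustrative values)

# Complete DAG over topologically sorted states: all pairs i < j.
complete = [(i, j) for i in range(1, N + 1) for j in range(i + 1, N + 1)]

# Flow-restricted DAG: additionally require j - i <= M.
restricted = [(i, j) for (i, j) in complete if j - i <= M]

print(len(complete))    # N(N-1)/2 = 120  -> O(N^2) searchable layers
print(len(restricted))  # M*N - M(M+1)/2 = 42 -> O(N) for fixed M
```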
**Read-in/Read-out layers** We design read-in/read-out layers to mitigate these problems. The read-in layer embeds the input state \(v_{0}\) into all hidden states \(v_{i}\in V\) with task-specific weights \(\alpha_{i}^{k}\in\mathcal{A}\) for all \(K\) tasks \(\mathcal{T}=\{T_{k}\}_{k\in\{1,...,K\}}\) as follows:
\[v_{i}^{k}=\sigma(\alpha_{i}^{k})\cdot v_{0}, \tag{3}\]
where \(\sigma(\cdot)\) is the sigmoid function. Then, the central network sequentially updates the hidden state \(v_{1}^{k}\) to \(v_{N}^{k}\) with the task-specific weights \(\gamma_{ij}^{k}\in\Gamma\) that correspond to \(e_{ij}^{k}\):
\[v_{j}^{k}=\frac{1}{\text{In}_{v_{j}^{k}}}\sum_{e_{ij}\in E}(\sigma(\gamma_{ij} ^{k})\cdot e_{ij}(v_{i}^{k};\theta_{ij})), \tag{4}\]
where \(\text{In}_{v_{j}^{k}}\) is the in-degree of \(v_{j}^{k}\). Note that \(\Gamma\) is the adjacency matrix of graph \(G\). Finally, the read-out layer aggregates all hidden state features \(\{v_{i}^{k}\}_{i\in\{1,...,N\}}\) with the task-specific weights \(\beta_{i}^{k}\in\mathcal{B}\) and produces the last layer feature \(v_{L}^{k}\) for each task \(k\) as follows:
\[v_{L}^{k}=\sum_{i\in\{1,...,N\}}(\sigma(\beta_{i}^{k})\cdot v_{i}^{k}). \tag{5}\]
The final prediction \(\hat{\mathbf{y}}_{k}\) for each task \(T_{k}\) is computed by passing the aggregated features \(v_{L}^{k}\) through the task-specific head \(H^{k}(\cdot)\) as follows:
\[\hat{\mathbf{y}}^{k}=H^{k}(v_{L}^{k}). \tag{6}\]
All upper-level parameters \(\mathcal{A},\mathcal{B}\), and \(\Gamma\) are learnable parameters, and their learning process is described in Sec. 3.2. The read-in/read-out layers enable the optimized network to have a multi-input/output sub-network. The read-out layer
aggregates all hidden states of the central network during the search stage, allowing a specific task to use the early hidden states to output predictions while ignoring the last few layers. These early-exit structures help alleviate the over-parameterized problem in simple tasks.
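A minimal PyTorch sketch of the read-in/read-out computation in Eqs. 3-5 follows. It is our own paraphrase, not the authors' code: the class and argument names are ours, every edge operation \(e_{ij}\) is simplified to a linear layer of one common width, and we treat the read-in embedding as one of the averaged incoming terms of Eq. 4 (an interpretation, since the paper does not spell out how Eqs. 3 and 4 combine for \(j>1\)).

```
import torch
import torch.nn as nn

class CentralNet(nn.Module):
    """Sketch of the flow-restricted central network (Eqs. 3-5)."""

    def __init__(self, n_states=8, width=64, flow=3, n_tasks=2):
        super().__init__()
        self.N, self.M = n_states, flow
        # One shared operation e_ij per flow-restricted edge (0 < j - i <= M).
        self.ops = nn.ModuleDict({
            f"{i}_{j}": nn.Linear(width, width)
            for i in range(1, n_states + 1)
            for j in range(i + 1, n_states + 1) if j - i <= flow
        })
        # Task-specific upper-level parameters alpha, beta, gamma (logits).
        self.alpha = nn.Parameter(torch.zeros(n_tasks, n_states))
        self.beta = nn.Parameter(torch.zeros(n_tasks, n_states))
        self.gamma = nn.Parameter(torch.zeros(n_tasks, n_states + 1, n_states + 1))

    def forward(self, v0, k):
        # Read-in (Eq. 3): embed the input into hidden state v_1.
        v = {1: torch.sigmoid(self.alpha[k, 0]) * v0}
        for j in range(2, self.N + 1):
            # Average of the read-in embedding and all weighted incoming
            # edges (our reading of the averaging over In_{v_j} in Eq. 4).
            terms = [torch.sigmoid(self.alpha[k, j - 1]) * v0]
            terms += [torch.sigmoid(self.gamma[k, i, j]) * self.ops[f"{i}_{j}"](v[i])
                      for i in range(max(1, j - self.M), j)]
            v[j] = sum(terms) / len(terms)
        # Read-out (Eq. 5): weighted aggregation of all hidden states.
        return sum(torch.sigmoid(self.beta[k, i - 1]) * v[i]
                   for i in range(1, self.N + 1))
```

A task-\(k\) forward pass is then `CentralNet()(torch.randn(4, 64), k=0)`, followed by the task head \(H^{k}\) of Eq. 6.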
### Network Optimization and Training Procedure
We describe the entire training process for our MTL framework, which consists of three stages, including warm-up, search, and fine-tuning stages.
**Warm-up stage** As with other gradient-based NAS, our framework has upper-level parameters that determine the network structure and parameters. This bilevel optimization with a complex objective function in an MTL setup makes the training process unstable. For better convergence, we initially train all network parameters across tasks for a few iterations. We train the weight parameters of the central network \(\Theta\) that shares all operations \(E\) across tasks. We fix all values of the upper-level parameters \(\mathcal{A},\mathcal{B}\), and \(\Gamma\) as \(0\), which becomes \(0.5\) after the sigmoid function \(\sigma(\cdot)\), and freeze them. We train the network parameters \(\Theta\) in Eq. 4 with a task loss as follows:
\[\mathcal{L}_{task}=\sum_{k=1}^{K}\mathcal{L}_{T_{k}}(\mathbf{\hat{y}}_{T_{k}},\mathbf{y}_{T_{k}}), \tag{7}\]
where \(\mathcal{L}_{T_{k}}\) is the task-specific loss, which is the unique loss function for each task.
**Search stage** After the warm-up stage, we unfreeze the upper-level parameters \(\mathcal{A},\mathcal{B}\), and \(\Gamma\) and search the network topologies appropriate to each task. We train all these parameters and network parameters \(\Theta\) simultaneously by minimizing the task loss and the proposed squeeze loss \(\mathcal{L}_{sq}\) as follows:
\[\mathcal{L}_{train}=\mathcal{L}_{task}+\lambda_{sq}\mathcal{L}_{sq}, \tag{8}\]
\[\mathcal{L}_{sq}=\sum_{k=1}^{K}\max\Big(\sum_{\gamma_{ij}^{k}\in\Gamma}\sigma(\gamma_{ij}^{k})-\kappa,\,0\Big), \tag{9}\]
where \(\lambda_{sq}\) is a balancing hyperparameter and \(\kappa\) is a constant called the budget, which directly limits the sparsity of the central network. This auxiliary loss encourages the model to save computational resources.
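Eq. (9) amounts to a few lines; a minimal sketch (the tensor shape is our assumption, with one edge-logit matrix per task):

```python
import torch

def squeeze_loss(gamma: torch.Tensor, kappa: float) -> torch.Tensor:
    """Eq. (9): penalize each task whose total sigmoid-activated edge mass
    exceeds the budget kappa; below the budget the penalty is zero."""
    per_task_mass = torch.sigmoid(gamma).sum(dim=(1, 2))   # shape (K,)
    return torch.clamp(per_task_mass - kappa, min=0.0).sum()

gamma = torch.zeros(3, 8, 8, requires_grad=True)  # 3 tasks, 8 hidden states
loss = squeeze_loss(gamma, kappa=20.0)            # sigmoid(0)=0.5 -> mass 32
loss.backward()                                   # gradients push mass down
```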
**Fine-tuning stage** Lastly, we perform a fine-tuning stage to construct a compact and discretized network structure using the trained upper-level parameters \(\mathcal{A},\mathcal{B}\), and \(\Gamma\). To do so, we design a flow-based reduction algorithm that allows the network to obtain high computational speed by omitting low-importance operations, as described in Alg. 1. It measures the amount of information flow through each layer \(e_{ij}\) of the central network by computing the ratio of an edge's weight to the weights of its neighboring edges. Then, it sequentially removes the edge with the lowest information flow. Alg. 1 stops when the edge selected for deletion is
Figure 2: **Overall pipeline. Our central network follows a DAG-based structure with read-in/out layers, and task-specific heads. The long skip connection is cut by our flow-restriction. Our framework with a 3-task MTL learning scenario consists of three stages including warm-up, search, and fine-tuning stages. The warm-up stage only learns the parameters of the main network \(\Theta\) and task-specific weights. The search stage learns the upper-level parameters \(\mathcal{A},\mathcal{B},\Gamma\), and task-specific weights. Then, flow-based reduction eliminates the low-importance edges from the network. The fine-tuning stage re-trains the network with the remaining important parameters.**
the only edge keeping the graph reachable, i.e., when deleting it would disconnect the read-in state from the read-out state. We use a simple depth-first search to check reachability from hidden state \(v_{N_{\alpha}}\) to \(v_{N_{\beta}}\). The outputs \(\hat{\mathcal{A}},\hat{\mathcal{B}},\hat{\Gamma}\) of Alg. 1, namely the discretized binary read-in/read-out vectors and adjacency matrix, represent the truncated task-adaptive sub-network. After the reduction, we fix the upper-level parameters, re-train only the network parameters \(\Theta\), and no longer apply the sigmoid function in Eqs. 3-5.
```
Input: \(\Gamma\in\mathbb{R}^{N\times N}\), \(\mathcal{A}\in\mathbb{R}^{N}\), \(\mathcal{B}\in\mathbb{R}^{N}\)
Output: \(\hat{\Gamma},\hat{\mathcal{A}},\hat{\mathcal{B}}\)   // discretized parameters
1  initialize zero matrices \(\Psi,\hat{\Psi}\in\mathbb{R}^{(N+2)\times(N+2)}\)
2  \(N_{\alpha}\leftarrow\operatorname{argmax}(\mathcal{A})\)
3  \(N_{\beta}\leftarrow\max(N_{\alpha}+1,\operatorname{argmax}(\mathcal{B}))\)
4  \(\Gamma[:N_{\alpha},:]\leftarrow 0\)   // remove edges before \(v_{N_{\alpha}}\)
5  \(\Gamma[:,N_{\beta}:]\leftarrow 0\)   // remove edges after \(v_{N_{\beta}}\)
6  \(\Psi[1:N,1:N]\leftarrow\Gamma\)   // merge \(\Gamma,\mathcal{A},\mathcal{B}\) into \(\Psi\)
7  \(\Psi[0,N_{\alpha}+1:N_{\beta}+1]\leftarrow\mathcal{A}[N_{\alpha}:N_{\beta}]\)
8  \(\Psi[N_{\alpha}+1:N_{\beta}+1,N+1]\leftarrow\mathcal{B}[N_{\alpha}:N_{\beta}]\)
9  while True do
10     initialize zero matrix \(S\in\mathbb{R}^{(N+2)\times(N+2)}\)
11     for \(i\leftarrow 0\) to \(N+1\) do
12        for \(j\leftarrow 0\) to \(N+1\) do
13           \(S[i,j]\leftarrow\psi_{ij}\big(\frac{1}{\text{In}_{v_{i}}}\sum_{k}\psi_{ki}/\sum_{k}\psi_{ik}+\frac{1}{\text{Out}_{v_{j}}}\sum_{k}\psi_{jk}/\sum_{k}\psi_{kj}\big)\)
14     \(\psi_{ij}\leftarrow 0\), where \(S[i,j]\) is the nonzero minimum value
15     if the graph built from \(\Psi\) is reachable then
16        \(\hat{\Psi}\leftarrow\Psi\)
17     else
18        break
19 \(\hat{\Psi}[\hat{\Psi}>0]\leftarrow 1\)   // discretization
20 \(\hat{\Gamma}\leftarrow\hat{\Psi}[1:N,1:N]\)   // split \(\hat{\Psi}\) into \(\hat{\Gamma},\hat{\mathcal{A}},\hat{\mathcal{B}}\)
21 \(\hat{\mathcal{A}}\leftarrow\hat{\Psi}[0,1:N]\)
22 \(\hat{\mathcal{B}}\leftarrow\hat{\Psi}[1:N,N+1]\)
23 return \(\hat{\Gamma},\hat{\mathcal{A}},\hat{\mathcal{B}}\)
```
**Algorithm 1** Flow-based Reduction
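For concreteness, a runnable numpy sketch of Alg. 1 is given below (function names are ours, and the edge score follows our reading of line 13; this is a sketch, not the authors' implementation):

```python
import numpy as np

def reachable(psi: np.ndarray) -> bool:
    """DFS check: can the read-in node 0 still reach the read-out node n-1?"""
    n, stack, seen = psi.shape[0], [0], {0}
    while stack:
        i = stack.pop()
        if i == n - 1:
            return True
        for j in np.nonzero(psi[i])[0]:
            if int(j) not in seen:
                seen.add(int(j))
                stack.append(int(j))
    return False

def flow_scores(psi: np.ndarray) -> np.ndarray:
    """Relative in/out flow of every existing edge (our reading of line 13)."""
    eps = 1e-12
    in_sum, out_sum = psi.sum(axis=0), psi.sum(axis=1)
    in_deg = np.maximum((psi > 0).sum(axis=0), 1)
    out_deg = np.maximum((psi > 0).sum(axis=1), 1)
    n = psi.shape[0]
    S = np.full((n, n), np.inf)
    for i in range(n):
        for j in range(n):
            if psi[i, j] > 0:
                S[i, j] = psi[i, j] * (in_sum[i] / (in_deg[i] * (out_sum[i] + eps))
                                     + out_sum[j] / (out_deg[j] * (in_sum[j] + eps)))
    return S

def flow_based_reduction(psi: np.ndarray) -> np.ndarray:
    """Delete the lowest-flow edge until the next deletion would disconnect."""
    psi = psi.copy()
    while True:
        S = flow_scores(psi)
        if np.isinf(S).all():
            break
        i, j = np.unravel_index(int(np.argmin(S)), S.shape)
        trial = psi.copy()
        trial[i, j] = 0.0
        if not reachable(trial):
            break
        psi = trial
    return (psi > 0).astype(float)  # discretization, cf. line 19

rng = np.random.default_rng(0)
psi0 = np.triu(rng.random((6, 6)), k=1)      # random upper-triangular DAG
print(flow_based_reduction(psi0).sum())      # number of surviving edges
```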
## 4 Experiments
We first describe the experimental setup in Sec. 4.1. We compare our method to state-of-the-art MTL frameworks on various benchmark datasets for MTL in Sec. 4.2. We also conduct extensive experiments and ablation studies to validate our proposed method in Sec. 4.3-4.5.
### Experimental Settings
**Dataset** We use four public datasets for multi-task scenarios including Omniglot [17], NYU-v2 [34], Cityscapes [7], and PASCAL-Context [26]. We use these datasets, configured by previous MTL works [29, 38], not their original sources.
* **Omniglot** Omniglot is a classification dataset consisting of 50 different alphabets, and each of them consists of a number of characters with 20 handwritten images per character.
* **NYU-v2** NYU-v2 comprises images of indoor scenes, fully labeled for joint semantic segmentation, depth estimation, and surface normal estimation.
* **Cityscapes** Cityscapes dataset collected from urban driving scenes in European cities consists of two tasks: joint semantic segmentation and depth estimation.
* **PASCAL-Context** PASCAL-Context datasets contain PASCAL VOC 2010 [34] with semantic segmentation, human parts segmentation, and saliency maps, as well as additional annotations for surface normals and edge maps.
**Competitive methods** We compare the proposed framework with state-of-the-art methods [1, 11, 12, 18, 20, 22, 23, 25, 28, 29, 32, 38] and various baselines including a single task and a shared-bottom. The single-task baseline trains each task independently using a task-specific encoder and task-specific head for each task. The shared-bottom baseline trains multiple tasks simultaneously with a shared encoder and separated task-specific heads.
We compare our method with MoE-based approaches, including Soft Ordering [23], Routing [28], and Gumbel-Matrix [22], as well as a NAS approach [18], on the Omniglot datasets. CMTR [18] can adjust its parameter count, similarly to our method. For the other three datasets, we compare our method with other soft-parameter sharing methods, including Cross-Stitch [25], Sluice network [32], and NDDR-CNN [11], and with dynamic neural network (DNN)-based methods, including MTAN [20], DEN [1], and AdaShare [38]. For the PASCAL-Context datasets, we also include the reported results of two recent works, LTB [12] and PHN [29], since their papers provide results but no source code.
**Multi-task scenarios** We set up multi-task scenarios with combinations of several tasks out of a total of seven tasks: classification \(\mathcal{T}_{cls}\), semantic segmentation \(\mathcal{T}_{sem}\), depth estimation \(\mathcal{T}_{dep}\), surface normal prediction \(\mathcal{T}_{norm}\), human-part segmentation \(\mathcal{T}_{part}\), saliency detection \(\mathcal{T}_{sal}\), and edge detection \(\mathcal{T}_{edge}\). We follow the MTL setup in [38] for three datasets (Omniglot, NYU-v2, and Cityscapes) and the setup in [29] for PASCAL-Context. Following [23], we simulate a homogeneous MTL scenario of 20-way classification tasks in a multi-task setup using the Omniglot datasets, where each task predicts a class of characters in a single alphabet set. We use the other three datasets for heterogeneous MTL: we set three tasks (segmentation, depth estimation, and surface normal estimation) for NYU-v2 and two tasks (segmentation and depth estimation) for Cityscapes.
We set five tasks \(\mathcal{T}_{sem}\), \(\mathcal{T}_{part}\), \(\mathcal{T}_{norm}\), \(\mathcal{T}_{sal}\), and \(\mathcal{T}_{edge}\) as used in [29] for PASCAL-Context datasets.
**Evaluation metrics** We follow the common evaluation metrics utilized in the competitive methods. We use an accuracy metric for the classification task. The semantic segmentation task is measured by mean Intersection over Union (mIoU) and pixel accuracy. For the depth estimation task we use the mean absolute and mean relative errors, and the relative difference, i.e., the percentage of pixels with \(\delta=\max(\hat{\mathbf{d}}/\mathbf{d},\mathbf{d}/\hat{\mathbf{d}})\) within the thresholds \(1.25^{\{1,2,3\}}\). For the evaluation of the PASCAL-Context datasets, we follow the same metrics used in [29] for all tasks. Following [38], we report a single relative performance \(\Delta_{\mathcal{T}_{i}}\) in Tab. 1-4 for each task \(\mathcal{T}_{i}\) with respect to the single-task baseline, which is defined as:
\[\Delta_{\mathcal{T}_{i}}=\frac{100}{|\mathcal{M}|}\sum_{j=1}^{|\mathcal{M}|}(-1)^{l_{j}}\frac{(\mathcal{M}_{\mathcal{T}_{i},j}-\mathcal{M}_{\mathcal{T}_{i},j}^{single})}{\mathcal{M}_{\mathcal{T}_{i},j}^{single}}, \tag{10}\]
where \(\mathcal{M}_{\mathcal{T}_{i},j}\) and \(\mathcal{M}_{\mathcal{T}_{i},j}^{single}\) are the \(j\)-th metric of the \(i\)-th task \(\mathcal{T}_{i}\) from each method and from the single-task baseline, respectively. The constant \(l_{j}\) is 1 if a lower value is better for the metric \(\mathcal{M}_{\mathcal{T}_{i},j}\) and 0 otherwise. The averaged relative performance over all tasks \(\mathcal{T}\) is defined as:
\[\Delta_{\mathcal{T}}=\frac{1}{|\mathcal{T}|}\sum_{i=1}^{|\mathcal{T}|}\Delta_ {\mathcal{T}_{i}}. \tag{11}\]
The absolute task performance for all metrics is reported in the supplementary material.
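Eqs. (10)-(11) reduce to a few lines of code; a sketch with hypothetical numbers, where `lower_is_better[j]` plays the role of \(l_{j}\):

```python
def relative_performance(metrics, single, lower_is_better):
    """Eq. (10): signed percent change of each metric w.r.t. the single-task
    baseline, averaged over the metrics of one task."""
    terms = [(-1) ** l * (m - s) / s
             for m, s, l in zip(metrics, single, lower_is_better)]
    return 100.0 * sum(terms) / len(terms)

# e.g. depth estimation: absolute error (lower better), accuracy (higher better)
delta_dep = relative_performance([0.50, 0.62], [0.55, 0.60], [1, 0])
deltas = [delta_dep]                      # one Delta_{T_i} per task
print(sum(deltas) / len(deltas))          # Eq. (11): averaged Delta_T
```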
**Network and training details** For our central network, we set 8 hidden states, the same as the existing MoE-based works [22, 28], and use the same classification head for the Omniglot datasets. For all the other datasets, we set 12 hidden states, matching VGG-16 [35] except for the max-pooled states, and use the Deeplab-v2 [5] decoder structure for all task heads. We use the Adam [16] optimizer to update both the upper-level parameters and the network parameters. We use the cross-entropy loss for semantic segmentation and the L2 loss for the other tasks. For a fair comparison, we train our central network from scratch without pre-training for all experiments. We describe more details on the network structure and hyperparameter settings in the supplementary material.
Our model outperforms AdaShare [38] for both the NYU-v2 and Cityscapes datasets, while keeping almost the same number of parameters (NYU-v2: 1.00 vs. 1.04 and Cityscapes: 1.00 vs. 0.96). With the flow constant \(M=7,9\), our method outperforms all the baselines by a large margin. The results on the PASCAL-Context datasets in Tab. 4 show that all baselines suffer from negative transfer on several tasks as the number of tasks increases. Only AdaShare and Cross-Stitch slightly outperform the single-task baseline (see the performance \(\Delta_{\mathcal{T}}\)). On the other hand, ours with \(M=9\) achieves the best performance without any negative transfer on any task.
Interestingly, the number of parameters of the search space increases almost in proportion to the flow constant, but there is no significant difference in the number of parameters of the finalized networks. For example, the required number of parameters for the network with the flow constant \(M=3,5,7\) is 2.77, 4.23, and 5.38, respectively. This demonstrates that the proposed flow-based reduction algorithm is effective in removing low-importance parameters while maintaining performance. Specifically, we observe in Tab. 2 that the total performance of our framework with \(M=7\) is slightly better than the \(M=9\) setup despite its smaller architecture search space. To investigate this, we further analyze the tendency in performance and computational complexity with respect to the flow constant in Sec. 4.4.
### Analysis of Topologies and Task Correlation
To demonstrate the effectiveness of the proposed learning mechanism, we visualize our finalized sub-network topologies in Fig. 3-(a-c) and the adjacency matrix for NYU-v2 3-task learning in Fig. 3-(d). We also analyze the diversity and capacity of task-adaptive network topologies in Tab. 5 with network depth \(\mathcal{D}\), width \(\mathcal{W}\), and sparsity \(\mathcal{S}\) described in Sec. 3.1. These analyses provide three key observations on our task-adaptive network and the correlation among tasks.
First, _the tasks of segmentation and surface normal estimation hardly share network parameters_. Various task-sharing patterns are configured at the edge level, but there is only one shared layer between these two tasks. This indicates a weak correlation between them, as it is widely known that the proportion of parameters shared between tasks indicates task correlation [22, 29, 38].
Second, _long skip connections are mostly lost_. The length of the longest skip connection in the finalized network is 5, and the number of these connections is 2 out of 18 layers, even with the flow constant of 7. This phenomenon is observed not only in NYU-v2 datasets but also in the other MTL datasets. This can be evidence that the proposed flow-restriction reduces search time while maintaining high performance even by eliminating the long skip connection in the DAG-based central network.
Lastly, _the depth estimation task requires more network resources than the segmentation and surface normal estimation tasks_. We analyze the network topologies of the finalized sub-network of each task on the NYU-v2 datasets using the three components defined in Sec. 3. The depth \(\mathcal{D}\) and width \(\mathcal{W}\) of the sub-network increase in the order of semantic segmentation, surface normal prediction, and depth estimation. Likewise, the sparsity \(\mathcal{S}\) of the depth sub-network is the highest. This experiment shows that depth estimation is the task that requires the most network resources.
### Performance w.r.t. Flow-restriction
We analyze performance and computational complexity with respect to the flow constant \(M\) for the NYU-v2 and Cityscapes datasets. We report the rate of performance degradation with respect to the complete DAG search space in Fig. 4. We observe that the reduction rate of the final performance does not exceed 3% even with considerably lower flow constants. The performance is saturated at a flow constant of about 7 or more. This means that the proposed method optimizes the task adaptive sub-network regardless of the size of the network search space, if it satisfies the minimum required size.
Figure 3: **Graph Representation of Task-adaptive Sub-networks.** The finalized sub-network topologies (\(M=7\)) trained on the NYU-v2 datasets are illustrated as graphs. (a-c) The task-adaptive sub-networks of semantic segmentation, depth estimation, and surface normal estimation, respectively. (d) The adjacency matrix, where color represents the discretized value of the activated edges of each task.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Task & \(\mathcal{D}\) & \(\mathcal{W}\) & \(\mathcal{S}\) \\ \hline Semantic Seg. & 5 & 3 & 0.103 \\ Depth & 7 & 3 & 0.192 \\ Surface normal & 7 & 2 & 0.128 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Topologies analysis on **NYU-v2 datasets**.
To demonstrate the effectiveness of our flow-based reduction (FBR) algorithm, we compare it to two other reduction algorithms (random and threshold) on the Cityscapes datasets in Fig. 5. The random reduction removes edges uniformly at random, and the thresholding method sequentially removes the edge with the lowest value in the adjacency matrix \(\Gamma\). We measure the rate of performance degradation of the pruned network of each reduction algorithm with respect to the non-reduced network while varying the sparsity \(\mathcal{S}\). Note that our method automatically determines the sparsity, so for this experiment only, we add a termination condition that stops the network search when a given sparsity is reached. The results show that the proposed flow-based reduction retains performance even at a low sparsity rate. This means that our method prunes low-importance edges of the network more efficiently than the other methods.
### Ablation Study on Proposed Modules
We conduct ablation studies on the four key components of our framework: the flow-restriction, read-in/out layers, flow-based reduction, and squeeze loss. We report the relative task performance and the number of parameters of the finalized network with/without each component in Tab. 6. The results show that our framework including all components achieves the lowest number of parameters and the second-best performance. Our method without flow-based reduction achieves the best performance; however, the finalized network from this setup has about five times more parameters than ours because the network is never pruned during training. This demonstrates that our restricted DAG-based central network is optimized to build compact task-adaptive sub-networks with performance close to the optimized sub-network from a complete DAG-based network.
## 5 Conclusions
In this paper, we present a new MTL framework to search for task-adaptive network structures across diverse network topologies in a single network. We propose the flow-restriction to solve the scalability issue of a complete DAG search space, while maintaining the diverse topological representation of the DAG search space by adopting read-in/out layers. We also introduce a flow-based reduction algorithm that prunes the network efficiently while maintaining overall task performance, and a squeeze loss that limits the upper bound on the number of network parameters. Extensive experiments demonstrate that the sub-modules and schemes of our framework efficiently improve both the performance and the compactness of the network. Our method compactly constructs various task-specific sub-networks in a single network and achieves the best performance among all the competitive methods on four MTL benchmark datasets.
## Acknowledgement
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00210908).
\begin{table}
\begin{tabular}{c|c c c|c|c} \hline \hline Method & \(\Delta_{\mathcal{T}_{sem}}\uparrow\) & \(\Delta_{\mathcal{T}_{dep}}\uparrow\) & \(\Delta_{\mathcal{T}_{norm}}\uparrow\) & \(\Delta_{\mathcal{T}}\uparrow\) & \# of Param \(\downarrow\) \\ \hline Ours (M=7) & +13.4 & **+9.2** & +10.7 & +11.1 & **1.31** \\ \hline w/o flow-restriction & +13.2 & **+9.2** & +10.4 & +11.0 & 1.80 \\ w/o read-in/out & +11.7 & +8.3 & +10.4 & +10.1 & 1.43 \\ w/o flow-based reduction & **+14.2** & **+9.2** & **+11.1** & **+11.5** & 6.50 \\ w/o \(\mathcal{L}_{sq}\) & +13.2 & +8.8 & +10.7 & +10.9 & 1.38 \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Ablation study on the proposed modules (NYU-v2).**
Figure 4: **Model performance with respect to the proposed flow-restriction.** We plot the degradation ratio of the performance (left y-axis) and parameter (right y-axis) by changing the flow constant \(M\). We measure the final averaged task performance with NYU-v2 and Cityscapes datasets marked by purple and pink circle markers, respectively. We also measure the number of parameters marked by gray square markers.
Figure 5: **Model Performance with respect to the network sparsity.** We plot the performance degradation rate by changing network sparsity. We compare our flow-based reduction algorithm to two other schemes; random selection and thresholding. |
2304.03751 | Ideal category of a Noetherian ring | In this paper we describe the categories $\mathbb{L}_R$ , [$\mathbb{R}_R$]
whose objects are left [right] ideals of a Noetherian ring $R$ with unity and
morphisms are appropriate $R$-linear transformations. Further it is shown that
these are preadditive categories with zero object and are full subcategories of
the $R$-module category with the property that these are categories with
subobjects and the morphisms admit a factorization property. | P G Romeo, Minnumol P K | 2023-04-07T17:43:44Z | http://arxiv.org/abs/2304.03751v1 | # Ideal category of a Noetherian ring
###### Abstract.
In this paper we describe the categories \(\mathbb{L}_{R}\), \([\mathbb{R}_{R}]\) whose objects are left [right] ideals of a Noetherian ring \(R\) with unity and morphisms are appropriate \(R\)-linear transformations. Further it is shown that these are preadditive categories with zero object and are full subcategories of the \(R\)-module category with the property that these are categories with subobjects and the morphisms admit a factorization property.
Key words and phrases:Ring, Ideals, Finitely generated ideals, Category of ideals of ring.
First author wishes to thank Council of Scientific and Industrial Research(CSIR) INDIA, for providing financial support.
a morphism \(g\circ f:\operatorname{dom}f\,\to\,\operatorname{cod}g\) is the composition \(\circ\), and for each object \(A\) there exists a unique morphism \(1_{A}\in\mathcal{C}(A,A)\) called the identity morphism on \(A\). Further, the composition satisfies \(h\circ(g\circ f)=(h\circ g)\circ f\) whenever defined and \(f\circ 1_{A}=f=1_{B}\circ f\) for all \(f\in\mathcal{C}(A,B)\).
**Example 2.1**.: **Set** : objects are sets and morphisms are functions between sets.
\(\mathbf{Grp}\,\): groups as objects and homomorphisms as morphisms
\(\mathbf{Vct_{K}}\,\): objects are the vector spaces over a fixed field \(K\) and morphisms are linear maps between them.
If a subcollection \(\mathcal{S}\) of objects and morphisms of \(\mathcal{C}\) itself constitutes a category, then \(\mathcal{S}\) is called a subcategory of \(\mathcal{C}\).
Let \(\mathcal{C}\) and \(\mathcal{D}\) be two categories. A _covariant functor_\(F:\mathcal{C}\to\mathcal{D}\) consists of a vertex map which assigns to each \(A\in\nu\mathcal{C}\) an object \(F(A)\in\nu\mathcal{D}\) and a morphism map which assigns to each morphism \(f:A\to B\) a morphism \(F(f):F(A)\to F(B)\in\mathcal{D}\) such that \(F(1_{A})=1_{F(A)}\) for all \(A\in\nu\mathcal{C}\) and \(F(f\circ g)=F(f)\circ F(g)\) for all morphisms \(f,g\in\mathcal{C}\) for which the composition \(f\circ g\) exists.
A functor \(F:\mathcal{C}\to\mathcal{D}\) is said to be _full_ if for every pair of objects \(A,B\) in \(\mathcal{C}\) the morphism set \(\mathcal{C}(A,B)\) is mapped surjectively by \(F\) onto \(\mathcal{D}(F(A),F(B))\). A subcategory \(\mathcal{S}\) of \(\mathcal{C}\) is said to be _full_ if the inclusion functor from \(\mathcal{S}\) to \(\mathcal{C}\) is full.
**Definition 2.2**.: A morphism \(m:A\to B\) in a category \(\mathcal{C}\) is a _monomorphism_ if for all \(f_{1},f_{2}:D\to A\) in \(\mathcal{C}\), the equality \(m\circ f_{1}=m\circ f_{2}\) implies \(f_{1}=f_{2}\); that is, \(m\) is a monomorphism if it is left cancellable. Dually, a morphism \(e:A\to B\) is an _epimorphism_ if it is right cancellable, that is, if for all \(g_{1},g_{2}:B\to C\), \(g_{1}\circ e=g_{2}\circ e\Rightarrow g_{1}=g_{2}\).
Note that in the category \(\mathbf{Set}\) monomorphisms are precisely the injections and epimorphisms are precisely the surjections.
**Definition 2.3**.: An object \(T\) is _terminal_ in a category \(\mathcal{C}\) if for each object \(A\) in \(\mathcal{C}\) there is exactly one arrow \(A\to T\). An object \(S\) is _initial_ in \(\mathcal{C}\) if for each object \(A\) there is exactly one arrow \(S\to A\).
A _zero object_\(0\) in a category \(\mathcal{C}\) is an object which is both initial and terminal. For any two objects \(A\) and \(B\) in \(\mathcal{C}\) there is a unique arrow \(0_{A,B}\,:\,A\to 0\to B\) called the _zero arrow_ from \(A\) to \(B\). In the category \(\mathbf{Set}\), the empty set is an initial object and any one point set is a terminal object.
**Definition 2.4**.: Let \(\mathcal{C}\) be a category with zero objects. A _kernel_ of a morphism \(f:A\to B\) in \(\mathcal{C}\) is a pair \((K,i)\) of an object \(K\) and a morphism \(i:K\to A\) such that \(f\circ i=0\), satisfying the _universal property_: for any other morphism \(i^{\prime}:K^{\prime}\to A\) with \(f\circ i^{\prime}=0\) there exists a unique arrow \(h:K^{\prime}\to K\) such that \(i\circ h=i^{\prime}\).
Dually a _cokernel_ of a morphism \(f:A\to B\) is a pair \((E,p)\) of an object \(E\) and a morphism \(p:B\to E\) such that \(p\circ f=0\) satisfying the universal property.
**Definition 2.5**.: A _product_ of two objects \(A\) and \(B\) in a category \(\mathcal{C}\) is an object \(A\,\Pi\,B\) together with morphisms \(p_{1}:A\,\Pi\,B\to A\) and \(p_{2}:A\,\Pi\,B\to B\) that satisfies the universal property: for any object \(C\) and any two morphisms \(f_{1}:C\to A,f_{2}:C\to B\), there exists a unique morphism \(h:C\to A\,\Pi\,B\) such that \(p_{i}\circ h=f_{i}\) for \(i=1,2\).
Dually, a _coproduct_ of two objects \(A\) and \(B\) in a category \(\mathcal{C}\) is an object \(A\amalg B\) together with morphisms \(i_{1}:A\to A\amalg B\) and \(i_{2}:B\to A\amalg B\) that satisfies the universal property: for any object \(C\) and any two morphisms \(f_{1}:A\to C,f_{2}:B\to C\), there exists a unique morphism \(h:A\amalg B\to C\) such that \(h\circ i_{i}=f_{i}\) for \(i=1,2\).
**Definition 2.6**.: A category \(\mathcal{C}\) is called _preadditive category_ or _Ab-category_ if each hom-set \(\mathcal{C}(a,b)\) is an additive abelian group and composition is bilinear: i.e.,
\[(g\,+\,g^{\prime})\circ(f\,+\,f^{\prime})=(g\circ f)\,+\,(g\circ f^{\prime})\, +\,(g^{\prime}\circ f)\,+\,(g^{\prime}\circ f^{\prime})\]
where \(f,f^{\prime}:a\to b\quad\text{and}\quad g,g^{\prime}:b\to c\).
An _additive category_ is a preadditive category with a zero object in which every pair of objects admits a product and a coproduct, and an _abelian category_ is an additive category where every morphism admits a kernel and a cokernel, every monomorphism is a kernel and every epimorphism is a cokernel. It is easy to see that the category of abelian groups \(\mathbf{Ab}\), the category of left \(R\)-modules \(\mathbf{R}\) - \(\mathbf{Mod}\), and the category of right \(R\)-modules \(\mathbf{Mod}\) - \(\mathbf{R}\) are abelian categories.
A morphism \(e:A\to A\) in the category \(\mathcal{C}\) is called _idempotent_ if \(e^{2}=e\). An idempotent \(e:A\to A\) is said to be a _split idempotent_ if there exist morphisms \(f:B\to A\) and \(g:A\to B\) in \(\mathcal{C}\) such that \(g\circ f=1_{B}\) and \(f\circ g=e\).
**Definition 2.7**.: (cf.[5]) A category \(\mathcal{C}\) is called _idempotent complete_ if all idempotents are split idempotents.
A _preorder_ \(\mathcal{P}\) is a category such that for any \(p,p^{\prime}\in\nu\mathcal{P}\), \(\mathcal{P}(p,p^{\prime})\) contains at most one morphism. In this case there is a quasi-order relation \(\subseteq\) on \(\nu\mathcal{P}\) such that \(p\subseteq p^{\prime}\iff\mathcal{P}(p,p^{\prime})\neq\phi\). \(\mathcal{P}\) is said to be a strict preorder if \(\subseteq\) is a partial order (cf.[4]).
**Definition 2.8**.: (cf.[4]) Let \(\mathcal{C}\) be a category and \(\mathcal{P}\) be a sub category of \(\mathcal{C}\). The pair \((\mathcal{C},\,\mathcal{P})\) is called _category with subobjects_ if the following conditions hold:
* \(\mathcal{P}\) is a strict preorder with \(\nu\mathcal{C}=\nu\mathcal{P}\).
* Every \(f\in\mathcal{P}\) is a monomorphism.
* If \(f,g\in\mathcal{P}\) and \(f=gh\) for some \(h\in\mathcal{C}\) then \(h\in\mathcal{P}\).
For \(C,D\in\nu\mathcal{C}\) we denote the unique morphism in \(\mathcal{P}\) from \(C\) to \(D\) by \(j_{(C,D)}\), called the _inclusion_. In this case \(C\) is referred to as a _subobject_ of \(D\).
**Definition 2.9**.: (cf.[4]) Let \(\mathcal{C}\) be a category with subobjects. A _canonical factorization_ of a morphism \(f\) in \(\mathcal{C}\) is a factorization of the form \(f=jq\) where \(q\) is an epimorphism and \(j\) is an inclusion.
**Definition 2.10**.: (cf.[3]) Let \(R\) be a ring. A _left R - module_ is an abelian group \((M,+)\) together with a scalar multiplication \(R\,\times\,M\to M,\,(r,x)\mapsto rx\) such that:
* \(r(x+y)=rx+ry,\,\,\forall r\,\in R\,\text{and}\,x,y\,\in M\)
* \((r+r^{\prime})x=rx+r^{\prime}x,\,\,\forall r,r^{\prime}\in R,x\in M\)
* \((rr^{\prime})x=r(r^{\prime}x),\,\,\forall r,r^{\prime}\in R,x\in M\)
Similarly, we can define a right \(R\)-module. If \(R\) is commutative, left \(R\)-modules and right \(R\)-modules coincide.
## 3. Category of left ideals of a Noetherian ring
Let \(R\) be a Noetherian ring with unity and \(\mathbb{L}_{R}\) be the collection of all left ideals of \(R\). Since ideals of Noetherian rings are finitely generated, each left ideal in \(\mathbb{L}_{R}\) is of the form \(A=\langle a_{1},a_{2},...,a_{n}\rangle_{l},\quad a_{i}\in R\) for all \(i=1,2,...,n\). It is easy to observe that \(\mathbb{L}_{R}\) is a category whose objects are the left ideals of \(R\) and whose morphisms are \(R\)-linear transformations; i.e., for any \(A,B\in\nu\mathbb{L}_{R}\) and \(f\in\mathbb{L}_{R}(A,B)\), \(f\) satisfies the conditions
\[f(x+y)=f(x)+f(y)\]
\[f(rx)=rf(x)\,\,\forall x,y\in A,r\,\in R.\]
Since the composition of \(R\)-linear transformations is again \(R\)-linear, the composition of morphisms in the category is the usual composition of \(R\)-linear maps, and \(1_{A}\) is the identity map on \(A\).
**Theorem 3.1**.: _Let \(R\) be a Noetherian ring with unity. The category \(\mathbb{L}_{R}\), of all left ideals of \(R\) is a preadditive category with zero object._
Proof.: Consider \(A,B\in\nu\mathbb{L}_{R}\) and \(f,g\in\mathbb{L}_{R}(A,B)\). Define
\[(f+g)(x)=f(x)+g(x)\quad\text{for all}\quad x\in A\]
then
\[(f+g)(x+y) =f(x+y)+g(x+y)=f(x)+f(y)+g(x)+g(y)\] \[=f(x)+g(x)+f(y)+g(y)\] \[=(f+g)(x)+(f+g)(y)\] \[(f+g)(rx) =f(rx)+g(rx)=rf(x)+rg(x)=r(f+g)(x)\]
that is, \(f+g\in\mathbb{L}_{R}(A,B)\). Since the zero map is \(R\)-linear and belongs to \(\mathbb{L}_{R}(A,B)\), it is the identity element, and for each \(f\in\mathbb{L}_{R}(A,B)\), setting \((-f)(x)=-f(x)\) gives \(-f\in\mathbb{L}_{R}(A,B)\), which is the inverse element. Hence \(\mathbb{L}_{R}(A,B)\) is an abelian group under the addition defined above. For any \(f_{1},f_{2}\in\mathbb{L}_{R}(A,B)\) and \(g_{1},g_{2}\in\mathbb{L}_{R}(B,C)\),
\[(g_{1}\,+\,g_{2})\circ(f_{1}\,+\,f_{2})=(g_{1}\circ f_{1})\,+\,(g_{1}\circ f_{2})\,+\,(g_{2}\circ f_{1})\,+\,(g_{2}\circ f_{2})\]
i.e., the composition is bilinear. Hence \(\mathbb{L}_{R}\) is a preadditive category.
Let \(O\) be the zero ideal. For any \(A\in\nu\mathbb{L}_{R}\) there is exactly one arrow in \(\mathbb{L}_{R}(A,O)\) and exactly one arrow in \(\mathbb{L}_{R}(O,A)\) (the zero maps), so \(O\) is the zero object in \(\mathbb{L}_{R}\); that is, \(\mathbb{L}_{R}\) is a preadditive category with zero object.
This category \(\mathbb{L}_{R}\) is a subcategory of the category of left \(R\)-modules \(R-Mod\) and it is easy to see that the inclusion functor \(i:\mathbb{L}_{R}\to R-Mod\) is full. Similarly, it is seen that \(\mathbb{R}_{R}\), the collection of all right ideals of \(R\), is a preadditive category with zero object and is a full subcategory of the category of right \(R\)-modules \(Mod-R\).
**Theorem 3.2**.: _Let \(R\) be a Noetherian ring. In the category \(\mathbb{L}_{R}\), of all left ideals of \(R\) biproduct exist only for ideals with trivial intersection._
Proof.: Let \(A,B\in\nu\mathbb{L}_{R}\) with \(A\cap B=\{0\}\). Then \(A+B\in\nu\mathbb{L}_{R}\), and since \(A\cap B=\{0\}\), every element \(x\in A+B\) can be uniquely expressed as \(x=a+b\) where \(a\in A\) and \(b\in B\). Define \(p_{1}:A+B\to A\) and \(p_{2}:A+B\to B\) respectively by \(p_{1}(x)=a\) and \(p_{2}(x)=b\) for all \(x=a+b\in A+B\). Clearly \(A+B\) together with \(p_{1}\) and \(p_{2}\) constitutes the product in the left ideal category \(\mathbb{L}_{R}\): for any object \(C\in\nu\mathbb{L}_{R}\) and morphisms \(f_{1}:C\to A\) and \(f_{2}:C\to B\) there exists a unique map \(h:C\to A+B\), \(h(x)=f_{1}(x)+f_{2}(x)\) for all \(x\in C\), such that \(p_{i}\circ h=f_{i}\) for \(i=1,2\).
Dually we can define the morphisms \(i_{1}:A\to A+B\) and \(i_{2}:B\to A+B\) by \(i_{1}(a)=a\) and \(i_{2}(b)=b\) for all \(a\in A\), \(b\in B\). Then \(A+B\) together with \(i_{1}\) and \(i_{2}\) constitutes the coproduct in the left ideal category \(\mathbb{L}_{R}\): for any object \(D\in\nu\mathbb{L}_{R}\) and morphisms \(g_{1}:A\to D\) and \(g_{2}:B\to D\) there exists a unique map \(h^{\prime}:A+B\to D\), \(h^{\prime}(x)=g_{1}(a)+g_{2}(b)\) for all \(x=a+b\in A+B\), such that \(h^{\prime}\circ i_{1}=g_{1}\) and \(h^{\prime}\circ i_{2}=g_{2}\).
Hence in the category \(\mathbb{L}_{R}\) the product and coproduct (i.e., the biproduct) exist only for pairs of ideals with trivial intersection.
The following proposition is recalled as it is of interest in the context of \(R\)-modules categories.
**Proposition 3.3**.: _(cf.[7]) Let \(\mathcal{C}\) be an additive category and \(\mathcal{D}\) be a full subcategory of \(\mathcal{C}\). If \(\mathcal{D}\) has a zero object and is closed under binary biproduct, then \(\mathcal{D}\) with morphism addition inherited from \(\mathcal{C}\) is an additive category._
Since the category \(\mathbb{L}_{R}\) is a full subcategory of the \(R-Mod\) category and is preadditive with a zero object, but by Theorem 3.2 is not closed under binary biproducts, we conclude in view of Proposition 3.3 that \(\mathbb{L}_{R}\) is only a preadditive category.
**Theorem 3.4**.: _Let \(R\) be a Noetherian ring. Then every morphism in category \(\mathbb{L}_{R}\) admits a kernel._
Proof.: Let \(f:A\to B\) be an arrow in \(\mathbb{L}_{R}\). Then \(\ker f=\{x\in A:f(x)=0\}\) is an ideal of \(R\) and \(\ker f\in\nu\mathbb{L}_{R}\). Consider the inclusion map \(i:\ker f\to A\). Clearly \(f\circ i=0\), and the pair \((\ker f,i)\) is a kernel admitting the universal property: for any other pair \((K,j)\), where \(K\) is an object in \(\mathbb{L}_{R}\) and \(j:K\to A\) is a morphism with \(f\circ j=0\), the image of \(j\) lies in \(\ker f\), so there exists a unique morphism \(h:K\to\ker f\) defined by \(h(x)=j(x)\) for all \(x\in K\) such that \(i\circ h=j\).
**Theorem 3.5**.: _Let \(R\) be a Noetherian ring and \(\mathbb{L}_{R}\) be the category of all left ideals of \(R\). Then only the zero maps and the surjective morphisms of \(\mathbb{L}_{R}\) admit cokernels._
Proof.: Let \(f:A\to B\) be an arrow in \(\mathbb{L}_{R}\). If \(f=0\), then \(B/f(A)\cong B\) is an ideal and so \(B/f(A)\in\nu\mathbb{L}_{R}\). If \(f\) is surjective, then \(B/f(A)\) is isomorphic to the trivial ideal, so again \(B/f(A)\in\nu\mathbb{L}_{R}\). In these two cases the pair \((B/f(A),p)\), where \(p:B\to B/f(A)\) is the usual projection map, gives the cokernel. It has the universal property, since for any pair \((E,q)\), where \(E\) is an object in \(\mathbb{L}_{R}\) and \(q:B\to E\) is a morphism with \(q\circ f=0\), there exists a unique morphism \(h:B/f(A)\to E\) defined by \(h(x)=q(b)\) for all \(x=b+f(A)\in B/f(A)\) such that \(h\circ p=q\).
**Lemma 3.6**.: _(cf.[5]) If \(\mathcal{C}\) is a preadditive category then the following are equivalent:_

1. \(\mathcal{C}\) _is idempotent complete._

2. _All idempotents have kernels._

3. _All idempotents have cokernels._
**Theorem 3.7**.: _Let \(R\) be a Noetherian ring. Then the category \(\mathbb{L}_{R}\) of all left ideals of \(R\) is idempotent complete._
Proof.: We have already proved in Theorem 3.4 that every morphism in \(\mathbb{L}_{R}\) has a kernel. In particular, every idempotent arrow has a kernel. Hence by Lemma 3.6, \(\mathbb{L}_{R}\) is idempotent complete.
**Theorem 3.8**.: _The category \(\mathbb{L}_{R}\), of all left ideals of a Noetherian ring \(R\) is a category with subobjects and every morphisms have canonical factorization._
Proof.: To prove that \(\mathbb{L}_{R}\) is a category with subobjects, it suffices to construct a subcategory \(\mathcal{P}\) of \(\mathbb{L}_{R}\) which satisfies the conditions in Definition 2.8. For this, define a partial order on \(\nu\mathbb{L}_{R}\) as follows:
\[A\subseteq B\iff a_{i}=r_{i1}b_{1}+\dots+r_{in_{2}}b_{n_{2}},\quad r_{i1},\dots,r_{in_{2}}\in R,\;i=1,\dots,n_{1}\]
where \(A=<a_{1},a_{2},...,a_{n1}>_{l},\quad B=<b_{1},b_{2},...,b_{n2}>_{l}\in\nu \mathbb{L}_{R}\). Then the morphism \(j_{(A,B)}:A\to B\) defined by \(j_{(A,B)}(x)=x,\,\,\forall x\in A\) is a unique monomorphism and the subcategory \(\mathcal{P}\) of \(\mathbb{L}_{R}\) with
and morphisms of \(\mathcal{P}\) are the inclusions\(j_{(A,B)}\) is a strict preorder.
Suppose that \(j_{(A,C)},j_{(B,C)}\in\mathcal{P}\) and \(j_{(A,C)}=j_{(B,C)}h\) for some \(h\in\mathbb{L}_{R}\). Then \(j_{(A,C)}(x)=j_{(B,C)}h(x)\) for every \(x\in A\), and since both \(j_{(A,C)}\) and \(j_{(B,C)}\) are inclusions, \(h\) is also an inclusion. Hence \((\mathbb{L}_{R},\mathcal{P})\) is a category with subobjects.
Consider any morphism \(f:A\to B\) in \(\mathbb{L}_{R}\); since \(A\) is a left ideal and \(f\) is \(R\)-linear, \(f(A)\) is a left ideal. Let \(q:A\to f(A)\) be \(f\) with its codomain restricted to \(\operatorname{im}(f)\); then it is easy to observe that \(q\) is an epimorphism, \(f(A)\subseteq B\), and \(j_{(f(A),B)}:f(A)\to B\) is an inclusion, i.e., \(f=j_{(f(A),B)}q\) is a canonical factorization. Thus every morphism in \(\mathbb{L}_{R}\) admits a canonical factorization.
## 4. Examples of ideal category of some rings
In the following we provide some examples of the ideal categories of some Noetherian rings.
**Example 4.1**.: **Ideal category of \(\mathbb{Z}\)**
Consider the category \(\mathbb{L}_{\mathbb{Z}}\left[\mathbb{R}_{\mathbb{Z}}\right]\) of left [right] ideals of the ring of integers \(\mathbb{Z}\). Then
\[\nu\mathbb{L}_{\mathbb{Z}}=\left\{\left\langle n\right\rangle:n\in\mathbb{Z}\right\}\]
A \(\mathbb{Z}\)-linear map \(\left\langle n\right\rangle\to\left\langle m\right\rangle\) is determined by the image of the generator: it sends \(n\) to \(sm\) for some \(s\in\mathbb{Z}\), and we denote it by \(\rho_{(n,s,m)}\). For \(\rho_{(n,s,m)}:\left\langle n\right\rangle\to\left\langle m\right\rangle\) and \(\rho_{(m,t,p)}:\left\langle m\right\rangle\to\left\langle p\right\rangle\) their composition is \(\rho_{(m,t,p)}\circ\rho_{(n,s,m)}=\rho_{(n,st,p)}:\left\langle n\right\rangle\to\left\langle p\right\rangle\). For \(\rho_{(n,s,m)},\rho_{(n,t,m)}\in\mathbb{L}_{\mathbb{Z}}(\left\langle n\right\rangle,\left\langle m\right\rangle)\), let \(\rho_{(n,s,m)}+\rho_{(n,t,m)}=\rho_{(n,s+t,m)}\); with respect to this addition \(\mathbb{L}_{\mathbb{Z}}(\left\langle n\right\rangle,\left\langle m\right\rangle)\) is an abelian group whose zero element is the zero map \(\rho_{(n,0,m)}\). Moreover the zero ideal \(\left\langle 0\right\rangle\) is a zero object, hence \(\mathbb{L}_{\mathbb{Z}}\) is a preadditive category with zero object.
For any two nonzero ideals \(\left\langle n\right\rangle,\left\langle m\right\rangle\) of \(\mathbb{Z}\), the element \(mn\) always belongs to \(\left\langle n\right\rangle\cap\left\langle m\right\rangle\), so \(\left\langle n\right\rangle\cap\left\langle m\right\rangle=\{0\}\) only when \(n=0\) or \(m=0\). Hence in \(\mathbb{L}_{\mathbb{Z}}\) the biproduct exists only for those pairs of objects in which one of them is the zero ideal. A morphism \(\rho_{(n,s,m)}:\left\langle n\right\rangle\to\left\langle m\right\rangle\) is a monomorphism for \(s\neq 0\), and the zero object together with the zero arrow gives the kernel of this morphism.
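Since every morphism \(\rho_{(n,s,m)}\) is determined by the integer \(s\), the hom-sets of \(\mathbb{L}_{\mathbb{Z}}\) can be modelled directly; a small sketch (class name is ours, and we assume \(n\neq 0\)):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rho:
    """Morphism rho_{(n,s,m)}: <n> -> <m> in L_Z, sending x = k*n to k*s*m."""
    n: int
    s: int
    m: int

    def __call__(self, x: int) -> int:
        assert self.n != 0 and x % self.n == 0, "x must lie in <n>, n nonzero"
        return (x // self.n) * self.s * self.m

    def compose(self, other: "Rho") -> "Rho":   # self o other
        assert other.m == self.n, "codomain/domain mismatch"
        return Rho(other.n, other.s * self.s, self.m)   # rho_{(n, st, p)}

    def add(self, other: "Rho") -> "Rho":       # addition on L_Z(<n>, <m>)
        assert (self.n, self.m) == (other.n, other.m)
        return Rho(self.n, self.s + other.s, self.m)    # rho_{(n, s+t, m)}

f = Rho(4, 3, 6)                   # <4> -> <6>, sends 4 to 18
g = Rho(6, 2, 5)                   # <6> -> <5>, sends 6 to 10
print(g.compose(f))                # Rho(n=4, s=6, m=5) = rho_{(4, 3*2, 5)}
print(f.add(Rho(4, -3, 6))(8))     # the zero map: prints 0
```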
**Example 4.2**.: **Ideal category of \(\mathbb{Z}_{6}\)**
Let \(R=\mathbb{Z}_{6}\) and \(\mathbb{L}_{R}\) be the category whose objects are left ideals of the ring and morphisms are \(R\) - linear transformations, i.e.,
\[\nu\mathbb{L}_{\mathbb{Z}_{6}}=\left\{\left\langle 0\right\rangle,\left\langle 1\right\rangle,\left\langle 2\right\rangle,\left\langle 3\right\rangle\right\}\]
The composition and addition are defined as in the case of \(\mathbb{Z}\). Hence \(\mathbb{L}_{\mathbb{Z}_{6}}\) is a preadditive category with zero object \(\left\langle 0\right\rangle\). In \(\mathbb{L}_{\mathbb{Z}_{6}}\) the biproduct exists for the pair \((\left\langle 2\right\rangle,\left\langle 3\right\rangle)\) and for the pairs \((\left\langle 0\right\rangle,\left\langle n\right\rangle)\) where \(n=1,2,3\).
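The objects of \(\mathbb{L}_{\mathbb{Z}_{6}}\) and the pairs admitting biproducts can be checked by brute force; a short sketch:

```python
def ideal(n: int, mod: int) -> frozenset:
    """The principal ideal <n> of Z_mod."""
    return frozenset((n * k) % mod for k in range(mod))

mod = 6
print(sorted(map(sorted, {ideal(n, mod) for n in range(mod)})))
# [[0], [0, 1, 2, 3, 4, 5], [0, 2, 4], [0, 3]]  -> <0>, <1>, <2>, <3>

# biproducts exist exactly for pairs with trivial intersection:
for a in [0, 1, 2, 3]:
    for b in [0, 1, 2, 3]:
        if a <= b and ideal(a, mod) & ideal(b, mod) == {0}:
            print(f"(<{a}>, <{b}>) has trivial intersection")
```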
|
2308.12589 | Stability threshold of the 2D Couette flow in a homogeneous magnetic
field using symmetric variables | We consider a 2D incompressible and electrically conducting fluid in the
domain $\mathbb{T}\times\mathbb{R}$. The aim is to quantify stability
properties of the Couette flow $(y,0)$ with a constant homogenous magnetic
field $(\beta,0)$ when $|\beta|>1/2$. The focus lies on the regime with small
fluid viscosity $\nu$, magnetic resistivity $\mu$ and we assume that the
magnetic Prandtl number satisfies
$\mu^2\lesssim\mathrm{Pr}_{\mathrm{m}}=\nu/\mu\leq 1$. We establish that small
perturbations around this steady state remain close to it, provided their size
is of order $\varepsilon\ll\nu^{2/3}$ in $H^N$ with $N$ large enough.
Additionally, the vorticity and current density experience a transient growth
of order $\nu^{-1/3}$ while converging exponentially fast to an $x$-independent
state after a time-scale of order $\nu^{-1/3}$. The growth is driven by an
inviscid mechanism, while the subsequent exponential decay results from the
interplay between transport and diffusion, leading to the dissipation
enhancement. A key argument to prove these results is to reformulate the system
in terms of symmetric variables, inspired by the study of inhomogeneous fluid,
to effectively characterize the system's dynamic behavior. | Michele Dolce | 2023-08-24T06:31:30Z | http://arxiv.org/abs/2308.12589v1 | Stability threshold of the 2D Couette flow in a homogeneous magnetic field using symmetric variables
###### Abstract.
We consider a 2D incompressible and electrically conducting fluid in the domain \(\mathbb{T}\times\mathbb{R}\). The aim is to quantify stability properties of the Couette flow \((y,0)\) with a constant homogenous magnetic field \((\beta,0)\) when \(|\beta|>1/2\). The focus lies on the regime with small fluid viscosity \(\nu\), magnetic resistivity \(\mu\) and we assume that the magnetic Prandtl number satisfies \(\mu^{2}\lesssim\Pr_{\mathrm{m}}=\nu/\mu\leqslant 1\). We establish that small perturbations around this steady state remain close to it, provided their size is of order \(\varepsilon\ll\nu^{2/3}\) in \(H^{N}\) with \(N\) large enough. Additionally, the vorticity and current density experience a transient growth of order \(\nu^{-1/3}\) while converging exponentially fast to an \(x\)-independent state after a time-scale of order \(\nu^{-1/3}\). The growth is driven by an inviscid mechanism, while the subsequent exponential decay results from the interplay between transport and diffusion, leading to the _dissipation enhancement_. A key argument to prove these results is to reformulate the system in terms of _symmetric variables_, inspired by the study of inhomogeneous fluid, to effectively characterize the system's dynamic behavior.
###### Contents
* 1 Introduction
* 2 Linearized problem
* 3 Nonlinear problem
* 4 Proof of the bootstrap proposition
## 1. Introduction
The 2D incompressible Navier-Stokes magnetohydrodynamics (NS-MHD) equations on the domain \(\mathbb{T}\times\mathbb{R}\) are given by
\[\begin{cases}\partial_{t}\tilde{u}+\tilde{u}\cdot\nabla\tilde{u}-\tilde{b} \cdot\nabla\tilde{b}=\nu\Delta\tilde{u}-\nabla\tilde{p},\qquad t>0,\,x\in \mathbb{T},\,y\in\mathbb{R},\\ \partial_{t}\tilde{b}+\tilde{u}\cdot\nabla\tilde{b}-\tilde{b}\cdot\nabla \tilde{u}=\mu\Delta\tilde{b},\\ \nabla\cdot\tilde{u}=\nabla\cdot\tilde{b}=0\\ \tilde{u}|_{t=0}=\tilde{u}^{in},\qquad\tilde{b}|_{t=0}=\tilde{b}^{in}.\end{cases} \tag{1.1}\]
Here \(\tilde{u},\tilde{b}\) are the velocity and magnetic fields, \(\tilde{p}\) the pressure, \(\nu\) and \(\mu\) are the fluid viscosity and magnetic resistivity, which are proportional to the inverse Reynolds number and inverse magnetic Reynolds number respectively. We consider a nearly ideal system in the regime
\[0<\nu\leqslant\mu\ll 1,\qquad\Longrightarrow\qquad\Pr_{\mathrm{m}}=\nu/\mu \leqslant 1,\]
where \(\Pr_{\mathrm{m}}\) is the magnetic Prandtl number, observed to be of order \(10^{-7}-10^{-2}\) in physically relevant cases [40, 42].
A steady state of (1.1) is the Couette flow with a constant background magnetic field, that is
\[u_{E}=(y,0),\qquad b_{E}=(\beta,0),\qquad\beta\in\mathbb{R}. \tag{1.2}\]
This is probably one of the simplest settings in which to understand some quantitative hydromagnetic stability properties of shear flows, which is a problem of significant physical interest [17, 22, 12]. The presence of a background magnetic field can dramatically change the stability features of the shear flow under consideration: i) it can have a destabilizing effect for shear flows that are linearly stable without the magnetic field (such as the Couette flow) [21, 22, 23, 15, 47]; ii) it can suppress instabilities such as the Kelvin-Helmholtz one [34] or lift-up effects in 3D fluids [33].
In this paper, we focus on quantifying a _stability threshold_ in Sobolev spaces. Following [6], the problem can be formulated as follows:
**Stability threshold**: let \(N\geq 0\), \(0<\nu\leq\mu<1\), \((\tilde{u}^{in},\tilde{b}^{in})=(u_{E},b_{E})+(u^{in},b^{in})\). What is the smallest \(\gamma=\gamma(N)\geq 0\) such that if \(\big\|(u^{in},b^{in})\big\|_{H^{N}}=\varepsilon<\nu^{\gamma}\) then \(\|(u(t),b(t))\|_{L^{2}}\ll 1\) and \((u(t),b(t))\) converges back to a laminar flow as \(t\to\infty\)?
Let us briefly review the literature about related problems. Since Reynolds's famous experiment [39], it is a classical problem in fluid dynamics to understand under which circumstances a laminar flow transitions to a turbulent state. Estimating a stability threshold is a quantitative way to establish when turbulence does not develop. The idea that the laminar regime persists if the size of the perturbation decreases at large Reynolds number was already predicted by Kelvin in 1887 [28]. The quantification in terms of powers of the Reynolds number was also linked to the non-normal behavior of the linearized operator around a shear flow in the influential paper by Trefethen et al. [43] and we refer to the book [41] for further developments and references. In the last decade, there has been a significant effort in rigorously proving estimates for the Sobolev stability threshold in many different fluid problems involving the Couette flow [6, 10, 30, 31, 33, 36, 38, 45, 51] and recently strictly monotone shear flows as well [32]. Results in this direction are known also for the Poiseuille flow in \(\mathbb{T}\times\mathbb{R}\)[18, 16] or in \(\mathbb{T}\times[-1,1]\) with Navier-slip boundary conditions [19], the Lamb-Oseen vortex [20] and the Taylor-Couette flow [1]. For Gevrey-regular perturbations, one can improve the stability threshold [5, 7, 9] and even study problems in absence of viscosity [35, 37, 4, 24, 25, 4, 8]. In fact, the groundbreaking result by Bedrossian and Masmoudi [8], proving the _nonlinear inviscid damping_ around Couette in 2D Euler, inspired many of the subsequent works involving strictly monotone shear flows.
For electrically conducting fluids, a stability threshold was first proved by Liss in [33] in 3D NS-MHD. For the 2D case, recently Zhao and Zi [49] studied the stability of (1.2) with \(\nu=0\) (2D Euler-MHD system) with \(|\beta|\) sufficiently large and perturbations of size \(O(\mu)\) in the Gevrey-\(1/2^{-}\) space. The latter regularity requirement might be necessary for the inviscid problem [29]. For what concerns the 2D NS-MHD system considered here, a Sobolev stability threshold \(O(\nu^{5/6^{+}})\) was first proved by Chen and Zi [14] for shear close to Couette when \(\nu=\mu\), about which we comment more later on.
To state the main result for the problem studied in this paper, we first need to introduce the vorticity and the current density of the perturbation
\[\omega=\nabla^{\perp}\cdot u,\qquad j=\nabla^{\perp}\cdot b.\]
The system satisfied by \((\omega,j)\) is:
\[\begin{cases}\partial_{t}\omega+y\partial_{x}\omega-\beta\partial_{x}j-\nu \Delta\omega=\mathrm{NL}_{\omega},\\ \partial_{t}j+y\partial_{x}j-\beta\partial_{x}\omega-\mu\Delta j+2\partial_{ xy}\phi=\mathrm{NL}_{j},\\ u=\nabla^{\perp}\psi,\qquad b=\nabla^{\perp}\phi,\\ \Delta\psi=\omega,\qquad\Delta\phi=j,\\ \omega|_{t=0}=\omega^{in},\qquad j|_{t=0}=j^{in},\end{cases} \tag{1.3}\]
where
\[\mathrm{NL}_{\omega} :=-u\cdot\nabla\omega+b\cdot\nabla j,\] \[\mathrm{NL}_{j} :=-u\cdot\nabla j+b\cdot\nabla\omega+2\partial_{xy}\phi(\omega-2 \partial_{xx}\psi)-2\partial_{xy}\psi(j-2\partial_{xx}\phi).\]
In the following, we denote
\[f_{0}(y)=\frac{1}{2\pi}\int_{\mathbb{T}}f(x,y)\mathrm{d}x,\qquad f_{\neq}=f-f_ {0},\qquad\langle a\rangle:=\sqrt{1+a^{2}}\]
We are ready to state the main result.
**Theorem 1.1**.: _Let \(0<\nu\leqslant\mu\ll 1\), \(|\beta|>1/2\), \(N>10\), and assume that \(\nu\geqslant(16\mu/\beta^{2})^{3}\). Let \((\omega^{in},j^{in})\) be the initial data of (1.3). Then, there exists a universal constant \(0<\delta_{0}<1\) and \(0<\varepsilon_{0}=\varepsilon_{0}(N,\beta)<\nu^{\frac{2}{3}}\) such that for all \(\varepsilon<\varepsilon_{0}\) the following holds true: if_
\[\left\|(\omega^{in},j^{in})\right\|_{H^{N}}\leqslant\varepsilon,\]
_denoting \((\Omega,J)(t,x+yt,y)=(\omega,j)(t,x,y)\), we have_
\[\left\|(\Omega_{\neq},J_{\neq})(t)\right\|_{H^{N}}\lesssim \varepsilon\,\langle t\rangle\,\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t} \tag{1.4}\] \[\left\|(u_{\neq}^{1},b_{\neq}^{1})(t)\right\|_{L^{2}}+\langle t \rangle\left\|(u_{\neq}^{2},b_{\neq}^{2})(t)\right\|_{L^{2}}\lesssim \varepsilon\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t},\] (1.5) \[\left\|(u_{0},b_{0})(t)\right\|_{H^{N}}+\nu^{\frac{1}{2}}\left\| \partial_{y}(u_{0},b_{0})\right\|_{L^{2}([0,t];H^{N})}\lesssim\varepsilon. \tag{1.6}\]
The bound in (1.4) combines the linear-in-time transient growth with the dissipation enhancement of the vorticity and current density. The growth is an inviscid linear mechanism generated by the background magnetic field, resulting in a transient amplification of order \(\nu^{-1/3}\) in the viscous case. The exponential decay for times larger than \(\nu^{-1/3}\) is a common feature of perturbations around the Couette flow [31, 32, 33, 36, 38, 45]. This is caused by the interaction of the advection by \((y,0)\) with the diffusion: the Couette flow sends information towards high vertical frequencies where dissipation is more efficient, leading to the accelerated decay of the nonzero horizontal frequencies. The estimates (1.5) are a direct consequence of (1.4). They quantify the _inviscid damping_ [8] of the second component of the velocity field, but we do not expect any decay of \(u^{1}\) in view of the possible growth of the vorticity. Finally, from (1.6) we deduce that the \(x\)-averages of the solution remain small, so that the dynamics effectively converges to a shear flow near the steady state (1.2). Let us first sketch the strategy of proof and then we make a few remarks.
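Before turning to the proof strategy, let us record the elementary computation behind the \(\nu^{-1/3}\) amplification (using \(\langle t\rangle\approx t\) for \(t\gtrsim 1\)): the envelope in (1.4) satisfies

\[\frac{\mathrm{d}}{\mathrm{d}t}\Big(t\,\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\Big)=\big(1-\delta_{0}\nu^{\frac{1}{3}}t\big)\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}=0\quad\Longleftrightarrow\quad t_{*}=\frac{\nu^{-\frac{1}{3}}}{\delta_{0}},\qquad t_{*}\,\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t_{*}}=\frac{\nu^{-\frac{1}{3}}}{\delta_{0}\,\mathrm{e}},\]

so the bound allows a transient growth of size \(O(\nu^{-1/3})\) around the time scale \(t\sim\nu^{-1/3}\), after which the enhanced dissipation takes over.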
**Proof strategy:** a key point in the proof of Theorem 1.1 is the use of the _symmetric variables_:
\[z=\left(\partial_{xx}(-\Delta)\right)^{-1/2}\omega,\qquad q=\left(\partial_{ xx}(-\Delta)\right)^{-1/2}j. \tag{1.7}\]
These unknowns are inspired by an energy method introduced in [2] for compressible fluids, and further improved and refined in [4, 11]. The main observation is that it is possible to "symmetrize" the linearized system to get a new system that enjoys a better energy structure1, as explained in detail in Section 2. For the linearized problem, we see that for \(|\beta|>1/2\) we have a coercive energy functional for \((z,q)\), see (2.15). Using the good properties of this energy, we prove that the results stated in Theorem 1.1 are true at the linearized level, see Proposition 2.3.
Footnote 1: The method allows to effectively capture some oscillations that are stabilizing the system.
The idea is then to bootstrap the control of the linear energy functional to the nonlinear case, which is done in Sections 3-4. There are two main difficulties to overcome:
1. It is not straightforward to obtain bounds for the nonlinear system associated to \((z,q)\) because the inverse of \((\partial_{xx}(-\Delta))^{-1/2}\) is not a uniformly bounded Fourier multiplier. This imply that we might encounter some derivatives losses when reconstructing the symmetric variables in the nonlinear terms.
2. The symmetric variables do not provide enough information over the \(x\)-averages of the solution.
To overcome these issues, we follow a strategy similar to the one used in [4]. For the first problem, we exploit the nice structure of the nonlinearity and \((\partial_{xx}(-\Delta))^{-1/2}\). The main idea is that, by performing a frequency decomposition, we can exchange derivative losses with time-growth. This decomposition considers not only interactions between high-low (or low-high) frequencies but it also accounts for what occurs near Orr's critical time \(t=\eta/k\), where \(\eta\) and \(k\) are the Fourier coefficients associated with the variables \(x\) and \(y\), respectively. The dissipation enhancement plays a crucial role in avoiding the use of technically involved Fourier multipliers needed in inviscid problems, e.g. [4, 8]. In fact, it will suffice to capture the dissipation enhancement along with a form of inviscid damping using standard (by now) Fourier multipliers that are uniformly bounded in \(\nu\).
The second problem instead can be explained as follows. The transport nonlinearity will generate terms containing the \(k=0\)-modes, for instance
\[u\cdot\nabla\omega=u_{\neq}\cdot\nabla\omega_{\neq}+u_{0}^{1}\partial_{x} \omega_{\neq}+u_{\neq}^{2}\partial_{y}\omega_{0},\]
where \(u_{0}=(u_{0}^{1},0)\) thanks to the incompressibility condition. Heuristically, \(u_{0}^{1}\) has roughly the same regularity as \(z\) (one derivative less than vorticity), and therefore, we can hope to control it using information on \(z\). On the other hand, \(\partial_{y}\omega_{0}\) has higher regularity with respect to \(z\), and even dissipation cannot assist us since \(\partial_{y}\omega_{0}=-\partial_{yy}u_{0}^{1}\), which involves two derivatives more than \(z\). This suggests the need to directly control the system for \((\omega,j)\), which has a worst energy structure. Nevertheless, through the control on \((z,q)\), we show that a dangerous linear term can be easily controlled leading to bounds in agreement with the linearized behavior of \((\omega_{\neq},j_{\neq})\). The energy associated to \((\omega,j)\) is at the highest level of regularity but is allowed to grow in time, meaning that the control of the nonlinearity is somewhat easier. It is important to note that in Theorem 1.1 we have not stated the bounds for \(\left\|(\omega_{0},j_{0})\right\|_{L^{\infty}([0,t];H^{N})}=\left\|(u_{0}^{1}, b_{0}^{1})\right\|_{L^{\infty}([0,t];H^{N+1})}\), which are indeed of order \(\varepsilon\left\langle t\right\rangle\), even though there is no growth mechanism for the \(x\)-averages in the linearized problem.
**Remark 1.2**.: The standard auxiliary variables for the NS-MHD system are the Elsasser variables [3, 44], corresponding to \(e^{\pm}=\omega\pm j\). The system satisfied by \(e^{\pm}\) also has a nice structure where one could exploit the integration-in-time trick used by Liss in the 3D problem [33]. This strategy is followed in the 2D case by Chen and Zi [14] as well, where there are also the additional complications given by the more general form of the shear flow considered. It appears that using \((z,q)\) has certain technical advantages, particularly in achieving the \(\nu^{2/3}\) threshold and handling cases where \(\nu\neq\mu\). We also mention that, in the result for the 2D Euler-MHD obtained by Zhao and Zi [49], the main energy functional introduced by the authors uses an approximated version of \((z,q)\). In particular, \((\partial_{xx}(-\Delta))^{-1/2}\) is replaced by a Fourier multiplier whose inverse is bounded with a \(\mu\)-dependent constant (the weight \(m\) in [49]). The use of symmetric variables has proven to be a flexible approach [2, 4, 11, 51], which is in essence a carefully weighted Kawashima's type energy argument [27].
**Remark 1.3**.: The \(\nu^{2/3}\) threshold in Sobolev spaces2 can be heuristically justified as in [48]. Namely, for the 2D NS case the best available threshold is \(\nu^{1/3}\)[38]. Here, the vorticity and current density are experiencing a growth of order \(\nu^{-1/3}\) after which the dissipation enhancement kicks in. We would then require an extra \(\nu^{1/3}\) smallness to keep everything in a perturbative regime even with this transient growth, which is why we need to assume \(\varepsilon\ll\nu^{1/3+1/3}\).
Footnote 2: The use of \(H^{N}\) with \(N>10\) is certainly not optimal and it might be of interest to understand what are the critical Sobolev spaces, in a similar spirit of [30, 32, 36]
On the other hand, the threshold \(\nu^{5/6^{+}}\) obtained in [14] can be related to the method of proof. Specifically, the control of nonlinear terms is inspired by [10] where a \(\nu^{1/2}\) threshold is obtained in the 2D NS setting. In this paper, we need to treat the nonlinear terms in a more refined way compared to [10] to improve the threshold, relying on estimates that are closer to inviscid problems. We believe that the nice methods introduced in [14] to handle shear flows close to Couette, could be combined with the strategy we use here to obtain the \(\nu^{2/3^{+}}\) threshold for shear near Couette as well.
**Remark 1.4**.: The case of different viscosity coefficients \(\mu\neq\nu\) is generally more challenging to study compared to the case \(\mu=\nu\), see for instance [44]. For the energy method employed here, having \(\mu\neq\nu\) does not pose any significant difficulty because we can exploit the dissipation enhancement to handle some linear errors arising from this anisotropy. This is precisely why we need to assume that \(\mu^{3}\ll\nu\leqslant\mu\), and we anticipate that the problem becomes much more intricate in the opposite regime, as hinted by the limiting case \(\nu=0\) investigated in [49].
**Remark 1.5**.: All the constants hidden in the symbol \(\lesssim\) degenerate as \(|\beta|\to 1/2\), meaning that \(\varepsilon_{0}\to 0\) as \(|\beta|\to 1/2\). This is related to the coercivity of the energy functional we use in the linearized problem, for which we need \(|\beta|>1/2\). In fact, when \(\beta=0\) the linearized system is not coupled anymore and the current density will have a growth of order \(\left\langle t\right\rangle^{2}\) instead of \(\left\langle t\right\rangle\), see also Remark 2.2.
### Notation
We introduce some notation used throughout the paper. For \(a,b\in\mathbb{R}\), we define
\[|a,b|:=|a|+|b|,\qquad\left\langle a\right\rangle=\sqrt{1+a^{2}}.\]
We use the notation \(a\lesssim b\) to indicate that there is a constant \(C>0\), independent of the relevant parameters \(\nu,\mu\) such that \(a\leq Cb\). Similarly, we say \(a\approx b\) if \(a\lesssim b\) and \(b\lesssim a\).
We define the Fourier transform of a function \(f\) as

\[\hat{f}_{k}(\eta)=\mathcal{F}(f)_{k}(\eta)=\frac{1}{2\pi}\iint\limits_{\mathbb{T}\times\mathbb{R}}\mathrm{e}^{-i(kx+\eta y)}f(x,y)\mathrm{d}x\mathrm{d}y,\]
and the inverse Fourier transform as
\[\mathcal{F}^{-1}(\hat{f})(x,y)=\frac{1}{2\pi}\sum_{k\in\mathbb{Z}}\int_{ \mathbb{R}}\mathrm{e}^{i(kx+\eta y)}\hat{f}_{k}(\eta)\mathrm{d}\eta.\]
We identify Fourier multipliers \(w(\nabla)\) with their symbol \(w_{k}(t,\eta)\), except for standard derivatives \(\partial_{x},\partial_{y}\) where we use the symbols \(ik,i\eta\). We denote the \(L^{2}\) scalar product as
\[\left\langle f,g\right\rangle_{L^{2}}=\left\langle\hat{f},\hat{g}\right\rangle _{L^{2}}=\sum_{k\in\mathbb{Z}}\int_{\mathbb{R}}\hat{f}_{k}(\eta)\bar{\hat{g}} _{k}(\eta)\mathrm{d}\eta,\]
and the norm in \(H^{N}\) as
\[\|f\|_{H^{N}}^{2}=\sum_{k\in\mathbb{Z}}\int_{\mathbb{R}}\left\langle|k,\eta| \right\rangle^{2N}|\hat{f}_{k}(\eta)|^{2}\mathrm{d}\eta.\]
We use the following convention
\[\left\langle\partial_{xx}(f\partial_{xx}g),h\right\rangle_{L^{2}}=\left\langle k ^{2}(\hat{f}*(\ell^{2}\hat{g})),\hat{h}\right\rangle_{L^{2}}=\sum_{k,\ell\in \mathbb{Z}}\iint\limits_{\mathbb{R}^{2}}k^{2}\hat{f}_{k-\ell}(\eta-\xi)\ell^{2 }\hat{g}_{\ell}(\xi)\bar{\hat{h}}_{k}(\eta)\mathrm{d}\eta\mathrm{d}\xi. \tag{1.8}\]
We define the frequency decomposition as in [6, 33]: let \(\chi:\mathbb{R}^{4}\to\mathbb{R}\) be
\[\chi(k,\eta,\ell,\xi)=\begin{cases}1&\text{if }|k-\ell,\eta-\xi|\leq 2|\ell,\xi| \\ 0&\text{otherwise}.\end{cases}\]
We use the paraproduct decomposition
\[\mathcal{F}(fg)_{k}(\eta) = \sum_{\ell\in\mathbb{Z}}\int_{\mathbb{R}}\hat{f}_{k-\ell}(\eta-\xi)\hat{g}_{\ell}(\xi)\chi(k,\eta,\ell,\xi)\mathrm{d}\xi \tag{1.9}\] \[+\sum_{\ell\in\mathbb{Z}}\int_{\mathbb{R}}\hat{f}_{k-\ell}(\eta-\xi)\hat{g}_{\ell}(\xi)(1-\chi(k,\eta,\ell,\xi))\mathrm{d}\xi\] \[=:\mathcal{F}(f^{Lo}g^{Hi})_{k}(\eta)+\mathcal{F}(f^{Hi}g^{Lo})_{k}(\eta).\]
Notice that \(|k,\eta|\leqslant 3|\ell,\xi|\) on the support of \(\chi\) and \(|k,\eta|\leqslant 3|k-\ell,\eta-\xi|/2\) on the support of \(1-\chi\).
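Indeed, both claims follow from the triangle inequality \(|k,\eta|\leqslant|k-\ell,\eta-\xi|+|\ell,\xi|\): on the support of \(\chi\) we have \(|k-\ell,\eta-\xi|\leqslant 2|\ell,\xi|\), hence \(|k,\eta|\leqslant 3|\ell,\xi|\), while on the support of \(1-\chi\) we have \(|\ell,\xi|<|k-\ell,\eta-\xi|/2\), hence \(|k,\eta|\leqslant 3|k-\ell,\eta-\xi|/2\).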
## 2. Linearized problem
In this section, we study in detail the simple linearized dynamics. First of all, we introduce the change of coordinates
\[X=x-yt,\qquad Y=y.\]
We denote the variables in the _moving frame_ with capital letters
\[\Omega(t,X,Y)=\omega(t,x,y),\qquad J(t,X,Y)=j(t,x,y), \tag{2.1}\]
and
\[\nabla_{L}=(\partial_{X},\partial_{Y}-t\partial_{X}),\qquad\Delta_{L}= \partial_{XX}+(\partial_{Y}-t\partial_{X})^{2}. \tag{2.2}\]
The linearized problem in the moving frame is
\[\partial_{t}\Omega=\nu\Delta_{L}\Omega+\beta\partial_{X}J,\] \[\partial_{t}J=\mu\Delta_{L}J-2\partial_{X}(\partial_{Y}-t \partial_{X})\Delta_{L}^{-1}J+\beta\partial_{X}\Omega.\]
Taking the Fourier transform in both variables, defining the symbol associated to \(-\Delta_{L}\) as
\[p_{k}(t,\eta):=k^{2}+(\eta-kt)^{2},\]
we get
\[\partial_{t}\hat{\Omega}=-\nu p\hat{\Omega}+\beta ik\hat{J}, \tag{2.3}\] \[\partial_{t}\hat{J}=-\mu p\hat{J}+\frac{\partial_{t}p}{p}\hat{J} +\beta ik\hat{\Omega}, \tag{2.4}\]
where we omit the subscript \(k\) to ease the notation. Multiplying the equation for \(\hat{J}\) by \(p^{-1}\), notice that
\[\partial_{t}\hat{\Omega}=-\nu p\hat{\Omega}+\beta ikp(p^{-1}\hat{J}), \tag{2.5}\] \[\partial_{t}(p^{-1}\hat{J})=-\mu p(p^{-1}\hat{J})+\frac{\beta ik} {p}\hat{\Omega}. \tag{2.6}\]
We briefly comment below on the inviscid case. Then we study the viscous problem with a flexible energy method that will be useful in the nonlinear analysis.
\(\bullet\)_Case \(\nu=\mu=0\):_ in the absence of viscosity, we see that the \(2\times 2\) non-autonomous system (2.5)-(2.6) has almost an antisymmetric structure, but the time-dependence of the factor \(p\) prevents the existence of an exact conserved quantity. To overcome this problem, we apply the _symmetrization scheme_ introduced in [2] and further developed in [4, 11]. This amounts to finding two good unknowns for which we have an _almost conserved_ quantity. In this case, the symmetrization procedure suggests the use of the variables
\[\begin{cases}Z_{k}(t,\eta)=\sqrt{\frac{k^{2}}{p_{k}(t,\eta)}}\hat{\Omega}_{k} (t,\eta),\qquad Q_{k}(t,\eta)=\sqrt{\frac{k^{2}}{p_{k}(t,\eta)}}\hat{J}_{k}(t,\eta),\qquad\text{for }k\neq 0,\\ Z_{0}(t,\eta)=Q_{0}(t,\eta)=0.\end{cases} \tag{2.7}\]
In the original reference frame, these variables are exactly the \((z,q)=(\partial_{xx}(-\Delta))^{-\frac{1}{2}}(\omega,j)\) discussed in the Introduction (1.7). The system satisfied by \((Z,Q)\) is
\[\frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix}Z\\ Q\end{pmatrix}=\begin{pmatrix}-\frac{1}{2}\frac{\partial_{t}p}{p}&\beta ik\\ \beta ik&\frac{1}{2}\frac{\partial_{t}p}{p}\end{pmatrix}\begin{pmatrix}Z\\ Q\end{pmatrix}.\]
The energy functional is then given by
\[\tilde{E}_{\text{sym}}(t):=\frac{1}{2}\left(\left|Z\right|^{2}+\left|Q\right|^{2}-\mathrm{Re}\left(\frac{\partial_{t}p}{\beta ikp}Z\bar{Q}\right)\right)(t).\]
**Remark 2.1**.: If \(\partial_{t}p\) and \(p\) were constants, one would have that \(\tilde{E}_{\text{sym}}(t)\) is a conserved quantity, meaning that the dynamics lie in an ellipse in the \(Z\)-\(Q\) plane. With time-dependent coefficients we only aim at showing that the dynamics remains in an annular region in the \(Z\)-\(Q\) plane.
Having that
\[\frac{|\partial_{t}p_{k}(t,\eta)|}{p_{k}(t,\eta)}=\frac{2|k(\eta-kt)|}{k^{2}+(\eta-kt)^{2}}\leqslant 1,\]

which follows from Young's inequality \(2|ab|\leqslant a^{2}+b^{2}\), we see that the energy functional is coercive only when \(|\beta|>1/2\). In particular,

\[\frac{1}{2}\left(1-\frac{1}{2|\beta|}\right)\left(|Z|^{2}+|Q|^{2}\right)\leqslant\tilde{E}_{\text{sym}}\leqslant\frac{1}{2}\left(1+\frac{1}{2|\beta|}\right)\left(|Z|^{2}+|Q|^{2}\right).\]
In fact, the coercivity of the energy functional is the only reason why we need to assume \(|\beta|>1/2\).
Taking the time derivative of \(\tilde{E}_{\text{sym}}\) and using a Gronwall type estimate, it is not difficult to show that
\[\tilde{E}_{\text{sym}}(0)\approx_{\beta-\frac{1}{2}}\tilde{E}_{\text{sym}}(t),\]
meaning that all the constants degenerate when \(|\beta|\to 1/2\).
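Although not part of the argument, the annulus picture described in Remark 2.1 can be illustrated numerically. The following minimal sketch integrates the inviscid \((Z,Q)\) system above at a fixed frequency by forward Euler and checks that \(|Z|^{2}+|Q|^{2}\) stays confined in an annulus when \(|\beta|>1/2\); the values of \(\beta\), \(k\), \(\eta\), the time step and the horizon are illustrative choices, not taken from the paper.

```python
# Numerical sanity check (illustrative only, not part of the proof):
# integrate the inviscid (Z, Q) system at a fixed frequency (k, eta)
# and record the range of |Z|^2 + |Q|^2, which should stay in a
# bounded annulus when |beta| > 1/2.
import numpy as np

def evolve(beta, k=1.0, eta=10.0, T=100.0, dt=1e-3):
    Z, Q = 1.0 + 0j, 0.0 + 0j
    t, lo, hi = 0.0, np.inf, 0.0
    while t < T:
        # dp/p with p = k^2 + (eta - k t)^2, dp = -2 k (eta - k t)
        dp_over_p = -2.0 * k * (eta - k * t) / (k**2 + (eta - k * t) ** 2)
        dZ = -0.5 * dp_over_p * Z + 1j * beta * k * Q
        dQ = +0.5 * dp_over_p * Q + 1j * beta * k * Z
        Z, Q = Z + dt * dZ, Q + dt * dQ
        t += dt
        e = abs(Z) ** 2 + abs(Q) ** 2
        lo, hi = min(lo, e), max(hi, e)
    return lo, hi

for beta in (0.6, 1.0, 2.0):
    lo, hi = evolve(beta)
    print(f"beta = {beta}: |Z|^2 + |Q|^2 stays in [{lo:.3f}, {hi:.3f}]")
```

Consistent with the coercivity discussion, the annulus widens as \(|\beta|\) approaches \(1/2\) from above.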
**Remark 2.2**.: It might be natural that \(|\beta|=1/2\) is a somewhat sharp threshold to observe the linear-in-time growth. For instance, when \(\beta=0\) one can explicitly solve the system and obtain that \(\Omega_{\neq}\) is conserved in time whereas \(J_{\neq}\approx\left<t\right>^{2}\). It seems reasonable that for \(0<|\beta|<1/2\) one simply interpolates between the behavior at \(\beta=0\) and \(|\beta|>1/2\), in a similar fashion to what happens in the Boussinesq case at small Richardson number [46].
\(\bullet\)_Case \(0<\nu\leqslant\mu\):_ When viscosity is present, we aim at capturing the dissipation enhancement, that is the exponential decay on a time-scale of order \(O(\nu^{-\frac{1}{3}})\). This could be proved by using the energy functional \(\tilde{E}_{\text{sym}}\) and some algebraic manipulation. However, with the idea in mind of addressing the nonlinear problem, we prove the enhanced dissipation estimate with the help of some Fourier multipliers, which are by now standard in the literature. We use the following weights: the first one, introduced in [50], is to control error terms which are integrable in time and is given by
\[\begin{cases}\partial_{t}m_{k}^{d}=\frac{C_{\beta}}{1+(\eta/k-t)^{2}}m_{k}^{d },\qquad\text{for }k\neq 0,\\ m_{k}^{d}(0,\eta)=1\\ m_{0}^{d}(t,\eta)=1,\end{cases} \tag{2.8}\]
where \(C_{\beta}>0\) is a fixed constant that can be chosen to be \(C_{\beta}=\max\{1,4/|\beta|\}\) for example. This weight is needed to recover some time-decay from the inviscid damping, that is generated by inverse powers of the Laplacian in the moving frame. Notice that
\[m_{k}^{d}(t,\eta)\approx 1\qquad\text{for all }t>0,\;\eta\in\mathbb{R},\;k \neq 0.\]
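In fact, (2.8) can be integrated explicitly:

\[m_{k}^{d}(t,\eta)=\exp\left(C_{\beta}\left(\arctan\left(\frac{\eta}{k}\right)-\arctan\left(\frac{\eta}{k}-t\right)\right)\right)\in[1,\mathrm{e}^{\pi C_{\beta}}],\]

so the implicit constants depend only on \(C_{\beta}\).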
The next weight, introduced in [7], is needed to capture the dissipation enhancement and is defined as
\[\begin{cases}\partial_{t}m_{k}^{\nu}=\frac{\nu^{\frac{1}{3}}}{1+\nu^{\frac{2 }{3}}(\eta/k-t)^{2}}m_{k}^{\nu},\qquad\text{for }k\neq 0,\;\nu>0,\\ m_{k}^{\nu}(0,\eta)=1\\ m_{k}^{0}(t,\eta)=m_{0}^{\nu}(t,\eta)=1.\end{cases}\]
Also in this case we have
\[m_{k}^{\nu}(t,\eta)\approx 1\qquad\text{for all }t>0,\;\eta\in\mathbb{R},\,k \neq 0. \tag{2.9}\]
The key property of the weight \(m^{\nu}\) is
\[\nu p_{k}(t,\eta)+\frac{\partial_{t}m_{k}^{\nu}(t,\eta)}{m_{k}^{\nu}(t,\eta)} \geqslant\frac{1}{4}\nu^{\frac{1}{3}},\qquad\text{for all }t>0,\;\eta\in\mathbb{R},\,k\neq 0, \tag{2.10}\]
which can be easily checked by considering \(|\eta/k-t|\leq\nu^{-\frac{1}{3}}\) or \(|\eta/k-t|\geq\nu^{-\frac{1}{3}}\) separately. This weight is compensating the inefficiency of the dissipation enhancement close to the critical time \(t=\eta/k\).
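For the reader's convenience: if \(|\eta/k-t|\leqslant\nu^{-\frac{1}{3}}\) then \(\nu^{\frac{2}{3}}(\eta/k-t)^{2}\leqslant 1\), so

\[\frac{\partial_{t}m_{k}^{\nu}(t,\eta)}{m_{k}^{\nu}(t,\eta)}=\frac{\nu^{\frac{1}{3}}}{1+\nu^{\frac{2}{3}}(\eta/k-t)^{2}}\geqslant\frac{1}{2}\nu^{\frac{1}{3}},\]

while if \(|\eta/k-t|\geqslant\nu^{-\frac{1}{3}}\) and \(k\neq 0\) then

\[\nu p_{k}(t,\eta)\geqslant\nu k^{2}\left(\frac{\eta}{k}-t\right)^{2}\geqslant\nu^{\frac{1}{3}}.\]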
Finally, we need a last weight to absorb some error terms given by the mixed scalar product in the energy functional,
\[\begin{cases}\partial_{t}m_{k}^{s}=\frac{\gamma_{\beta}C_{\beta}}{(1+(\eta/k-t)^{2})^{\frac{3}{2}}}m_{k}^{s},\qquad\text{for }k\neq 0,\\ m_{k}^{s}(0,\eta)=1,\\ m_{0}^{s}(t,\eta)=1,\end{cases}\]
where \(\gamma_{\beta}\) is a fixed constant such that
\[\frac{1}{|\beta|}\left(\frac{1}{2}+\frac{1}{\gamma_{\beta}}\right)<1.\]
Notice that \(\gamma_{\beta}\to+\infty\) as \(|\beta|\to 1/2\). This weight is again bounded above and below, namely
\[m_{k}^{s}(t,\eta)\approx 1\qquad\text{ for all }t>0,\,\eta\in\mathbb{R},\,k\neq 0, \tag{2.11}\]
and satisfies
\[\frac{\partial_{t}m^{s}}{m^{s}}=\gamma_{\beta}\sqrt{\frac{k^{2}}{p}}\,\frac{\partial_{t}m^{d}}{m^{d}}. \tag{2.12}\]
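The identity (2.12) is a direct computation: since \(k^{2}/p_{k}(t,\eta)=(1+(\eta/k-t)^{2})^{-1}\), we have

\[\gamma_{\beta}\sqrt{\frac{k^{2}}{p}}\,\frac{\partial_{t}m^{d}}{m^{d}}=\frac{\gamma_{\beta}}{(1+(\eta/k-t)^{2})^{\frac{1}{2}}}\cdot\frac{C_{\beta}}{1+(\eta/k-t)^{2}}=\frac{\gamma_{\beta}C_{\beta}}{(1+(\eta/k-t)^{2})^{\frac{3}{2}}}=\frac{\partial_{t}m^{s}}{m^{s}}.\]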
Aiming at obtaining a bound in Sobolev spaces, we then define the weight
\[m_{k}(t,\eta)=\begin{cases}\mathrm{e}^{\delta_{0}\nu^{\frac{1}{3}}t}\left<|k, \eta|\right>^{N}(m^{d}m^{\nu}m^{s})_{k}^{-1}(t,\eta),&\text{for }k\neq 0,\\ \left<\eta\right>^{N}&\text{for }k=0,\end{cases} \tag{2.13}\]
where \(0<\delta_{0}<1/64\) is a sufficiently small constant chosen later in the proof. When \(k\neq 0\) we have
\[\frac{\partial_{t}m_{k}}{m_{k}}=\delta_{0}\nu^{\frac{1}{3}}-\sum_{\iota\in\{ \nu,d,s\}}\frac{\partial_{t}m_{k}^{\iota}}{m_{k}^{\iota}} \tag{2.14}\]
The good unknowns are still given by (2.7). The system for the weighted variables \((mZ,mQ)\) reads as
\[\partial_{t}(mZ) =-\left(\nu p-\frac{\partial_{t}m}{m}\right)mZ-\frac{1}{2}\frac{ \partial_{t}p}{p}mZ+\beta ikmQ,\] \[\partial_{t}(mQ) =-\left(\mu p-\frac{\partial_{t}m}{m}\right)mQ+\frac{1}{2}\frac{ \partial_{t}p}{p}mQ+\beta ikmZ.\]
The energy functional associated to the system is
\[E_{\mathsf{sym}}(t):=\frac{1}{2}\left(|mZ|^{2}+|mQ|^{2}-\mathrm{Re}\left( \frac{\partial_{t}p}{\beta ikp}(mZm\bar{Q})\right)\right)(t). \tag{2.15}\]
We have the following.
**Proposition 2.3**.: _Let \(0<\nu\leq\mu\ll 1\), \(|\beta|>1/2\) and assume that \(\nu^{\frac{1}{3}}\geq 16\mu/\beta^{2}\). Then_
\[E_{\mathsf{sym}}(t)+\frac{1}{16}\int_{0}^{t}D_{\mathsf{sym}}(\tau)\mathrm{d} \tau\leq E_{\mathsf{sym}}(0), \tag{2.16}\]
_where_
\[D_{\mathsf{sym}}(t):=\left(\nu p|mZ|^{2}+\mu p|mQ|^{2}+\left(\frac{\partial_{ t}m^{\nu}}{m^{\nu}}+\frac{\partial_{t}m^{d}}{m^{d}}\right)(|mZ|^{2}+|mQ|^{2}) \right)(t). \tag{2.17}\]
_As a consequence of this bound, the following inequalities hold true:_
\[\left\|Z(t)\right\|_{H^{N}}+\left\|Q(t)\right\|_{H^{N}}\lesssim_{ \beta-\frac{1}{2}}\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left(\left\|Z^{in} \right\|_{H^{N}}+\left\|Q^{in}\right\|_{H^{N}}\right), \tag{2.18}\] \[\left\|\Omega_{\neq}(t)\right\|_{H^{N}}+\left\|J_{\neq}(t) \right\|_{H^{N}}\lesssim_{\beta-\frac{1}{2}}\left\langle t\right\rangle \mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left(\left\|\omega_{\neq}^{in} \right\|_{H^{N}}+\left\|j_{\neq}^{in}\right\|_{H^{N}}\right),\] (2.19) \[\left\|U_{\neq}^{1}(t)\right\|_{H^{N}}+\left\langle t\right\rangle \left\|U_{\neq}^{2}(t)\right\|_{H^{N-1}}\lesssim_{\beta-\frac{1}{2}}\mathrm{e }^{-\delta_{0}\nu^{\frac{1}{3}}t}\left(\left\|\omega_{\neq}^{in}\right\|_{H^ {N}}+\left\|j_{\neq}^{in}\right\|_{H^{N}}\right). \tag{2.20}\]
Proof.: Compute that
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}|mZ|^{2} =-\left(\nu p+\sum_{\iota\in\{\nu,d,s\}}\frac{\partial_{t}m^{\iota}}{m^{\iota}}\right)|mZ|^{2}+\delta_{0}\nu^{\frac{1}{3}}|mZ|^{2}-\frac{1}{2}\frac{\partial_{t}p}{p}|mZ|^{2}+\beta\mathrm{Re}(ikmQm\bar{Z}), \tag{2.21}\] \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}|mQ|^{2} =-\left(\mu p+\sum_{\iota\in\{\nu,d,s\}}\frac{\partial_{t}m^{\iota}}{m^{\iota}}\right)|mQ|^{2}+\delta_{0}\nu^{\frac{1}{3}}|mQ|^{2}+\frac{1}{2}\frac{\partial_{t}p}{p}|mQ|^{2}+\beta\mathrm{Re}(ikmZm\bar{Q}). \tag{2.22}\]
When adding these two equations the last terms on the right-hand side cancel out. For the mixed product instead, we have
\[-\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial_{ t}p}{\beta ikp}(mZm\bar{Q})\right)= -\frac{1}{2}\frac{\partial_{t}p}{p}\left(|mQ|^{2}-|mZ|^{2}\right) \tag{2.23}\] \[+\frac{\partial_{t}p}{2\beta ikp}\left((\nu+\mu)p-2\delta_{0}\nu ^{\frac{1}{3}}+2\sum_{\iota\in\{\nu,d,s\}}\frac{\partial_{t}m^{\iota}}{m^{ \iota}}\right)\!(mZm\bar{Q})\] \[-\frac{1}{2}\left(\frac{p\partial_{tt}p-(\partial_{t}p)^{2}}{ \beta ikp^{2}}\right)(mZm\bar{Q}).\]
Observe that the first term on the right-hand side of (2.23) cancels out with the second-to-last terms in (2.21)-(2.22) when computing the time derivative of \(E_{\mathsf{sym}}\). Thanks to the energy identities above, the property (2.14) and the definition (2.17), we arrive at the following inequality
\[\frac{\mathrm{d}}{\mathrm{d}t}E_{\mathsf{sym}}+D_{\mathsf{sym}}+ \frac{\partial_{t}m^{s}}{m^{s}}(|mZ|^{2}+|mQ|^{2})\leqslant\sum_{i=0}^{5} \mathcal{L}_{i}, \tag{2.24}\]
where we define the linear error terms as:
\[\mathcal{L}_{0} :=\delta_{0}\nu^{\frac{1}{3}}\left(1+\frac{|\partial_{t}p|}{| \beta||k|p}\right)(|mZ|^{2}+|mQ|^{2}),\] \[\mathcal{L}_{1} :=(\nu+\mu)\frac{|\partial_{t}p|}{2|\beta||k|}|mZ||mQ|,\] \[\mathcal{L}_{2} :=\frac{|\partial_{t}p|}{|\beta||k|p}\frac{\partial_{t}m^{\nu}}{ m^{\nu}}|mZ||mQ|,\] \[\mathcal{L}_{3} :=\left(\frac{p|\partial_{tt}p|+(\partial_{t}p)^{2}}{2|\beta||k|p ^{2}}\right)|mZ||mQ|,\] \[\mathcal{L}_{4} :=\frac{|\partial_{t}p|}{|\beta||k|p}\frac{\partial_{t}m^{d}}{m^ {d}}|mZ||mQ|,\] \[\mathcal{L}_{5} :=\frac{|\partial_{t}p|}{|\beta||k|p}\frac{\partial_{t}m^{s}}{m^ {s}}|mZ||mQ|.\]
Using (2.10), we get
\[\mathcal{L}_{0}\leqslant 8\delta_{0}D_{\mathsf{sym}}, \tag{2.25}\]
where we also used that \(\mu\geqslant\nu\). For \(\mathcal{L}_{1}\), since
\[\frac{|\partial_{t}p|}{|k|}\leqslant 2\sqrt{p},\]
(this follows from \(|\partial_{t}p|=2|k||\eta-kt|\) and \(|\eta-kt|\leqslant\sqrt{p}\)), we have
\[\mathcal{L}_{1}\leqslant\left(\frac{\nu}{4}p+\frac{\mu}{\beta^{2}}\right)|mZ|^{2} +\left(\frac{\mu}{4}p+\frac{\nu}{\beta^{2}}\right)|mQ|^{2}.\]
Now, we combine the hypothesis \(\mu/\beta^{2}\leqslant\nu^{\frac{1}{3}}/16\) and the property (2.10) to get
\[\mathcal{L}_{1}\leqslant\frac{1}{2}D_{\mathsf{sym}}. \tag{2.26}\]
For \(\mathcal{L}_{2}\), combining
\[\frac{|\partial_{t}p|}{|\beta||k|p}\leqslant\frac{2}{|\beta|\sqrt{p}} \leqslant\frac{2}{|\beta|C_{\beta}}\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}, \tag{2.27}\]
with \(\partial_{t}m^{\nu}/m^{\nu}\leqslant\nu^{\frac{1}{3}}\), we obtain
\[\mathcal{L}_{2}\leqslant\frac{1}{64}\frac{\partial_{t}m^{\nu}}{m^{\nu}}|mZ|^{ 2}+\frac{64\nu^{\frac{1}{3}}}{|\beta|^{2}C_{\beta}^{2}}\frac{\partial_{t}m^{ d}}{m^{d}}|mQ|^{2}.\]
Since \(\nu\ll 1\), we have
\[\mathcal{L}_{2}\leqslant\frac{1}{64}D_{\mathsf{sym}}. \tag{2.28}\]
Turning our attention to \(\mathcal{L}_{3}\), observe that
\[\frac{p|\partial_{tt}p|+(\partial_{t}p)^{2}}{2|\beta||k|p^{2}}\leqslant\frac{ |k|}{|\beta|p}\leqslant\frac{1}{|\beta|C_{\beta}}\frac{\partial_{t}m^{d}}{m^{d}}.\]
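Both inequalities use \(\partial_{tt}p=2k^{2}\), the bound \((\partial_{t}p)^{2}=4k^{2}(\eta-kt)^{2}\leqslant 4k^{2}p\), and the identity \(\frac{k^{2}}{p}=\frac{1}{C_{\beta}}\frac{\partial_{t}m^{d}}{m^{d}}\) together with \(|k|\leqslant k^{2}\) for \(k\neq 0\), up to a harmless absolute constant.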
Hence
\[\mathcal{L}_{3}\leqslant\frac{1}{|\beta|C_{\beta}}D_{\mathsf{sym}}. \tag{2.29}\]
To control \(\mathcal{L}_{4}\), combining the first bound in (2.27) with the property (2.12) we have
\[\mathcal{L}_{4}\leqslant\frac{2}{|\beta|\sqrt{p}}\frac{\partial_{t}m^{d}}{m^ {d}}|mZ||mQ|\leqslant\frac{2}{|\beta|\gamma_{\beta}}\frac{\partial_{t}m^{s}}{ m^{s}}|mZ||mQ|\leqslant\frac{1}{|\beta|\gamma_{\beta}}\frac{\partial_{t}m^{s}}{m^{s}}(| mZ|^{2}+|mQ|^{2}). \tag{2.30}\]
On the other hand, for \(\mathcal{L}_{5}\) we use \(|\partial_{t}p|/(|k\beta|p)\leqslant 1/|\beta|\) to get
\[\mathcal{L}_{5}\leqslant\frac{1}{2|\beta|}\frac{\partial_{t}m^{s}}{m^{s}}(| mZ|^{2}+|mQ|^{2}). \tag{2.31}\]
Choosing \(\delta_{0},C_{\beta},\gamma_{\beta}\) such that
\[\delta_{0}<\frac{1}{64},\qquad\frac{1}{|\beta|C_{\beta}}\leqslant\frac{1}{4},\qquad\frac{1}{|\beta|}\left(\frac{1}{2}+\frac{1}{\gamma_{\beta}}\right)<1,\]
which is always possible since \(|\beta|>1/2\), we can combine (2.25), (2.26), (2.28), (2.29), (2.30) and (2.31) with (2.24) to get
\[\frac{\mathrm{d}}{\mathrm{d}t}E_{\mathsf{sym}}+\frac{1}{16}D_{\mathsf{sym}}+ \left(1-\frac{1}{2|\beta|}-\frac{1}{|\beta|\gamma_{\beta}}\right)\frac{ \partial_{t}m^{s}}{m^{s}}(|mZ|^{2}+|mQ|^{2})\leqslant 0, \tag{2.32}\]
thus proving (2.16).
The bound (2.18) is a straightforward consequence of the coercivity of \(E_{\mathsf{sym}}\) and (2.16). To prove (2.19), we can perform an energy estimate directly on (2.3)-(2.4) to get
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}(|m\hat{\Omega}_{\neq}|^{2}+|m\hat{J }_{\neq}|^{2})\leqslant\frac{|\partial_{t}p|}{p}|m\hat{J}_{\neq}|^{2}=\frac{| \partial_{t}p|}{|k|\sqrt{p}}|mQ||m\hat{J}_{\neq}|\leqslant 2|mQ||m\hat{J}_{\neq}|. \tag{2.33}\]
This inequality implies that
\[(|m\hat{\Omega}_{\neq}|+|m\hat{J}_{\neq}|)(t) \lesssim(|m\hat{\Omega}_{\neq}|+|m\hat{J}_{\neq}|)(0)+\int_{0}^{t} |mQ|(\tau)\mathrm{d}\tau\] \[\lesssim(|m\hat{\Omega}_{\neq}|+|m\hat{J}_{\neq}|)(0)+t\sqrt{E_{ \mathsf{sym}}(0)}\] \[\lesssim\langle t\rangle\,(|m\hat{\Omega}_{\neq}|+|m\hat{J}_{ \neq}|)(0),\]
where we used the coercivity of \(E_{\mathsf{sym}}\) and the fact that \(|\sqrt{k^{2}/p}\hat{F}|\leq|\hat{F}|\). Integrating in space and exploiting the definition of \(m\) (2.13), we deduce (2.19).
The estimate (2.20) follows from
\[|\hat{U}^{1}_{\neq}| =\frac{|\eta-kt|}{k^{2}+(\eta-kt)^{2}}|\hat{\Omega}_{\neq}|=\frac{|\eta-kt|}{|k|\sqrt{p}}|Z|\leq|Z|, \tag{2.34}\] \[|\hat{U}^{2}_{\neq}| =\frac{|k|}{k^{2}+(\eta-kt)^{2}}|\hat{\Omega}_{\neq}|=\frac{1}{\sqrt{p}}|Z|\lesssim\frac{\langle|k,\eta|\rangle}{\langle t\rangle}|Z|, \tag{2.35}\]
where in the last bound we used the general bound \(\langle a-b\rangle\langle b\rangle\gtrsim\langle a\rangle\). Integrating in space we obtain the desired bound and conclude the proof.
## 3. Nonlinear problem
For the nonlinear problem, the idea is to propagate the linearized behavior for the symmetric variables \((Z,Q)\), see (2.7), proved in Proposition 2.3. As explained in the introduction, to overcome problems related to the \(x\)-averages (especially for \(\partial_{y}(\omega_{0},j_{0})\)), we need to directly control also \((\Omega,J)\). As shown in the proof of Proposition 2.3, we can use the bounds on \((Z,Q)\) to handle the problematic linear error term associated to \(2\partial_{xy}\Delta^{-1}j\) in the equation for \(j\). Indeed, from the linearized problem (2.3)-(2.4), as in (2.33) we notice that
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left(\|\Omega\|_{L^{2}}^{2}+\|J\|_{ L^{2}}^{2}\right)\leq\left|\left\langle\frac{\partial_{t}p}{p}\hat{J}, \hat{J}\right\rangle\right|=\left|\left\langle\frac{\partial_{t}p}{|k|\sqrt{p }}Q,\hat{J}\right\rangle\right|\leq 2\left\|Q\right\|_{L^{2}}\left\|J\right\|_{L^{2} }, \tag{3.1}\]
where we used \(|\partial_{t}p/(|k|\sqrt{p})|\leq 2\). If we are able to propagate smallness on \(Q\), say \(\|Q\|\lesssim\varepsilon\) for some norm, and \(\|J\|\lesssim\varepsilon\left\langle t\right\rangle\), we can treat this term as a forcing term of order \(\varepsilon^{2}\left\langle t\right\rangle\), which would lead to bounds on \((\Omega,J)\) of order \(\varepsilon\left\langle t\right\rangle\) when integrating in time. This behavior is consistent with the growth observed in the linearized problem.
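Schematically, writing \(X(t):=(\|\Omega\|_{L^{2}}^{2}+\|J\|_{L^{2}}^{2})^{1/2}\), the estimate (3.1) gives

\[\frac{\mathrm{d}}{\mathrm{d}t}X\leqslant 2\left\|Q\right\|_{L^{2}}\lesssim\varepsilon,\]

so that integrating in time produces exactly the growth \(X(t)\lesssim X(0)+\varepsilon\left\langle t\right\rangle\).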
Before introducing the main ingredients for the proof of Theorem 1.1, we first rewrite the system (1.3) in the moving frame \(X=x-yt,\,Y=y\). Recalling the notation (2.1)-(2.2), we get
\[\begin{cases}\partial_{t}\Omega-\beta\partial_{X}J-\nu\Delta_{L}\Omega=\mathrm{NL}_{\Omega},\\ \partial_{t}J-\beta\partial_{X}\Omega-\mu\Delta_{L}J+2\partial_{X}(\partial_{Y}-t\partial_{X})\Phi=\mathrm{NL}_{J},\\ U=\nabla_{L}^{\perp}\Psi,\qquad B=\nabla_{L}^{\perp}\Phi,\\ \Delta_{L}\Psi=\Omega,\qquad\Delta_{L}\Phi=J,\end{cases} \tag{3.2}\]
where
\[\mathrm{NL}_{\Omega} =-\nabla^{\perp}\Psi\cdot\nabla\Omega+\nabla^{\perp}\Phi\cdot \nabla J,\] \[\mathrm{NL}_{J} =-\nabla^{\perp}\Psi\cdot\nabla J+\nabla^{\perp}\Phi\cdot\nabla\Omega\] \[\quad+(2\partial_{X}(\partial_{Y}-t\partial_{X})\Phi)(\Omega-2 \partial_{XX}\Psi)-(2\partial_{X}(\partial_{Y}-t\partial_{X})\Psi)(J-2 \partial_{XX}\Phi). \tag{3.3}\]
**Remark 3.1**.: Observe that we used the following crucial cancellation
\[\nabla_{L}^{\perp}F\cdot\nabla_{L}G=\nabla F\cdot\nabla G,\]
which holds true for any functions \(F,G\).
From Proposition 2.3, it is clear that the proof of Theorem 1.1 reduces to obtaining bounds for energy functionals controlling \((Z,Q)\) and \((\Omega,J)\).
### Energy functionals and the bootstrap scheme
To introduce the energy functionals needed to prove the main Theorem 1.1, we recall the definitions of the symmetric variables \((Z,Q)\) and the weight \(m\) respectively given in (2.7) and (2.13).
The first energy functional is the one used in the linearized problem to control the symmetric variables. Here we cannot do estimates at fixed frequencies \((k,\eta)\) and therefore we define
\[\mathsf{E}_{\mathsf{sym}}(t)=\frac{1}{2}\left(\|mZ\|_{L^{2}}^{2}+\|mQ\|_{L^{2 }}^{2}-\frac{1}{\beta}\mathrm{Re}\left\langle\frac{1}{ik}\frac{\partial_{t}p} {p}mZ,mQ\right\rangle_{L^{2}}\right). \tag{3.4}\]
The goal is to propagate the smallness of this energy, namely \(\mathsf{E}_{\mathsf{sym}}\lesssim\varepsilon^{2}\) where \(\varepsilon\) is the size of the initial data in Theorem 1.1.
Then, we need the higher order energy to control directly the vorticity and current density, that is
\[\mathsf{E}_{\mathsf{h.o.}}(t)=\frac{1}{2}\left(\|m\Omega\|_{L^{2}}^{2}+\|mJ\|_ {L^{2}}^{2}\right) \tag{3.5}\]
From the estimate (3.1) and (2.19), we expect that \(\mathsf{E}_{\mathsf{h.o.}}\lesssim\left\langle t\right\rangle^{2}\varepsilon^ {2}\).
Finally, to control the \(x\)-averages (which is also the only reason why we introduce \(\mathsf{E}_{\mathsf{h.o.}}\)), we define
\[\mathsf{E}_{0}(t):=\frac{1}{2}\left(\left\|U_{0}^{1}\right\|_{H^{N}}^{2}+\left\| B_{0}^{1}\right\|_{H^{N}}^{2}+\frac{1}{\left\langle t\right\rangle^{2}}\left(\| \Omega_{0}\|_{H^{N}}^{2}+\|J_{0}\|_{H^{N}}^{2}\right)\right) \tag{3.6}\]
We aim at propagating smallness for this functional, that is \(\mathsf{E}_{0}\lesssim\varepsilon^{2}\).
**Remark 3.2**.: Notice that we allow the higher order zero modes \((\Omega_{0},J_{0})\), controlled by \(\mathsf{E}_{\mathsf{h.o.}}\) and \(\mathsf{E}_{0}\), to grow linearly in time in \(H^{N}\). One might expect to achieve uniform boundedness for \((\Omega_{0},J_{0})\). However, since they are at the highest level of regularity, controlling them requires using bounds on \((Z,Q)\), which are at lower regularity. Essentially, we are trading regularity for time-growth, which is a standard argument in these types of energy estimates. The inclusion of the term \(\left\langle t\right\rangle^{-1}(\Omega_{0},J_{0})\) in \(\mathsf{E}_{0}\) is for technical purposes and does not provide any substantial information beyond what we already know from \(\mathsf{E}_{\mathsf{h.o.}}\).
Before computing the time-derivative of the energy functionals, we introduce some notation. We define the _good terms_ as
\[\mathsf{G}_{\nu}[F]:=\left\|\sqrt{\frac{\partial_{t}m^{\nu}}{m^{\nu}}}mF \right\|_{L^{2}}^{2},\qquad\mathsf{G}_{d}[F]:=\left\|\sqrt{\frac{\partial_{t} m^{d}}{m^{d}}}mF\right\|_{L^{2}}^{2},\]
which naturally arise from the time-derivative of the weight \(m\). Associated to each energy functional, we have the dissipation functionals defined as
\[\mathsf{D}_{\mathsf{sym}}(t) :=\nu\left\|\nabla_{L}mZ\right\|_{L^{2}}^{2}+\mu\left\|\nabla_{L} mQ\right\|_{L^{2}}^{2}+\sum_{\iota\in\{\nu,d\}}\mathsf{G}_{\iota}[Z]+\mathsf{G}_{ \iota}[Q], \tag{3.7}\] \[\mathsf{D}_{\mathsf{h.o.}}(t) :=\nu\left\|\nabla_{L}m\Omega\right\|_{L^{2}}^{2}+\mu\left\| \nabla_{L}mJ\right\|_{L^{2}}^{2}+\sum_{\iota\in\{\nu,d\}}\mathsf{G}_{\iota}[ \Omega]+\mathsf{G}_{\iota}[J],\] \[\mathsf{D}_{0}(t) :=\left(\nu\left\|\partial_{Y}U_{0}^{1}\right\|_{H^{N}}^{2}+\mu \left\|\partial_{Y}B_{0}^{1}\right\|_{H^{N}}^{2}+\frac{1}{\left\langle t \right\rangle^{2}}\left(\nu\left\|\partial_{Y}\Omega_{0}\right\|_{H^{N}}^{2}+ \mu\left\|\partial_{Y}J_{0}\right\|_{H^{N}}^{2}\right)\right). \tag{3.8}\]
We are now ready to compute some basic energy inequalities where we exploit the bounds obtained in the linearized problem and we introduce the nonlinear error terms.
**Lemma 3.3**.: _Let \(0<\nu\leqslant\mu\ll 1\), \(|\beta|>1/2\) and assume that \(\nu^{\frac{1}{3}}\geqslant 16\mu/\beta^{2}\). Let \(\mathsf{E}_{\iota},\mathsf{D}_{\iota}\) with \(\iota\in\{\mathsf{sym},\mathsf{h.o.},0\}\) be the energy and dissipation functionals defined in (3.4)-(3.6) and (3.7)-(3.8). Then,_
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathsf{E}_{\mathsf{sym}}+\frac{1}{ 16}\mathsf{D}_{\mathsf{sym}} \leqslant\mathsf{T}_{\mathsf{sym}}+\mathsf{S}_{\mathsf{sym}}, \tag{3.9}\] \[\frac{\mathrm{d}}{\mathrm{d}t}\mathsf{E}_{\mathsf{h.o.}}+\frac{1} {16}\mathsf{D}_{\mathsf{h.o.}} \leqslant 4\sqrt{\mathsf{E}_{\mathsf{sym}}}\sqrt{\mathsf{E}_{ \mathsf{h.o.}}}+\mathsf{T}_{\mathsf{h.o.}}+\mathsf{S}_{\mathsf{h.o.}},\] (3.10) \[\frac{\mathrm{d}}{\mathrm{d}t}\mathsf{E}_{0}+\mathsf{D}_{0} \leqslant\mathsf{R}_{\neq}, \tag{3.11}\]
_where, with the convention introduced in (1.8), we define the following error terms: the error for the symmetric variables are given by_
\[\mathsf{T}_{\mathsf{sym}}:= \left|\left\langle\sqrt{\frac{k^{2}}{p}}m\mathcal{F}\left(\nabla ^{\perp}\Psi\cdot\nabla\Omega\right),mZ+\frac{1}{i\beta k}\frac{\partial_{t}p} {p}mQ\right\rangle\right| \tag{3.12}\] \[+\left|\left\langle\sqrt{\frac{k^{2}}{p}}m\mathcal{F}\left( \nabla^{\perp}\Psi\cdot\nabla J\right),mQ+\frac{1}{i\beta k}\frac{\partial_{t} p}{p}mZ\right\rangle\right|\] \[+\left|\left\langle\sqrt{\frac{k^{2}}{p}}m\mathcal{F}\left( \nabla^{\perp}\Phi\cdot\nabla\Omega\right),mQ-\frac{1}{i\beta k}\frac{\partial _{t}p}{p}mZ\right\rangle\right|,\]
_and_
\[\mathsf{S}_{\mathsf{sym}}:= \left|\left\langle\sqrt{\frac{k^{2}}{p}}m\bigg{(}\frac{\partial _{t}p}{p}\hat{J}*\big{(}\hat{\Omega}-2\frac{\ell^{2}}{p}\hat{\Omega}\big{)} \bigg{)},mQ-\frac{1}{i\beta k}\frac{\partial_{t}p}{p}mZ\right\rangle\right| \tag{3.13}\] \[+\left|\left\langle\sqrt{\frac{k^{2}}{p}}m\bigg{(}\frac{\partial _{t}p}{p}\hat{\Omega}*\big{(}\hat{J}-2\frac{\ell^{2}}{p}\hat{J}\big{)}\bigg{)},mQ-\frac{1}{i\beta k}\frac{\partial_{t}p}{p}mZ\right\rangle\right|.\]
_The errors for the higher-order terms are_
\[\mathsf{T}_{\mathsf{h.o.}}:= \left|\left\langle m\mathcal{F}\left(\nabla^{\perp}\Psi\cdot \nabla\Omega\right),m\Omega\right\rangle\right|+\left|\left\langle m\mathcal{F }\left(\nabla^{\perp}\Phi\cdot\nabla J\right),m\Omega\right\rangle\right| \tag{3.14}\] \[+\left|\left\langle m\mathcal{F}\left(\nabla^{\perp}\Psi\cdot \nabla J\right),mJ\right\rangle\right|+\left|\left\langle m\mathcal{F}\left( \nabla^{\perp}\Phi\cdot\nabla\Omega\right),mJ\right\rangle\right|.\]
_and_
\[\mathsf{S}_{\mathsf{h.o.}}:=\left|\left\langle m\bigg{(}\frac{\partial_{t}p} {p}\hat{J}*\big{(}\hat{\Omega}-2\frac{\ell^{2}}{p}\hat{\Omega}\big{)}\bigg{)},mJ\right\rangle\right|+\left|\left\langle m\bigg{(}\frac{\partial_{t}p}{p} \hat{\Omega}*\big{(}\hat{J}-2\frac{\ell^{2}}{p}\hat{J}\big{)}\bigg{)},mJ \right\rangle\right|.\]
_The error term for the zero-mode functional is_
\[\mathsf{R}_{\neq}:=\left|\left\langle\langle\partial_{Y}\rangle^{N}\left(U_{\neq}^{2}U_{\neq}^{1}\right)_{0},\langle\partial_{Y}\rangle^{N}\partial_{Y}U_{0}^{1}\right\rangle\right|+\left|\left\langle\langle\partial_{Y}\rangle^{N}\left(B_{\neq}^{2}B_{\neq}^{1}\right)_{0},\langle\partial_{Y}\rangle^{N}\partial_{Y}U_{0}^{1}\right\rangle\right| \tag{3.15}\] \[\qquad+\left|\left\langle\langle\partial_{Y}\rangle^{N}\left(U_{\neq}^{2}B_{\neq}^{1}\right)_{0},\langle\partial_{Y}\rangle^{N}\partial_{Y}B_{0}^{1}\right\rangle\right|+\left|\left\langle\langle\partial_{Y}\rangle^{N}\left(B_{\neq}^{2}U_{\neq}^{1}\right)_{0},\langle\partial_{Y}\rangle^{N}\partial_{Y}B_{0}^{1}\right\rangle\right|\] \[\qquad+\frac{1}{\langle t\rangle^{2}}\bigg{(}\left|\left\langle\langle\partial_{Y}\rangle^{N}\left(U_{\neq}^{2}\Omega_{\neq}\right)_{0},\langle\partial_{Y}\rangle^{N}\partial_{Y}\Omega_{0}\right\rangle\right|+\left|\left\langle\langle\partial_{Y}\rangle^{N}\left(B_{\neq}^{2}J_{\neq}\right)_{0},\langle\partial_{Y}\rangle^{N}\partial_{Y}\Omega_{0}\right\rangle\right|\] \[\qquad\qquad+\left|\left\langle\langle\partial_{Y}\rangle^{N}\left(U_{\neq}^{2}J_{\neq}\right)_{0},\langle\partial_{Y}\rangle^{N}\partial_{Y}J_{0}\right\rangle\right|+\left|\left\langle\langle\partial_{Y}\rangle^{N}\left(B_{\neq}^{2}\Omega_{\neq}\right)_{0},\langle\partial_{Y}\rangle^{N}\partial_{Y}J_{0}\right\rangle\right|\] \[\qquad\qquad+\left|\left\langle\langle\partial_{Y}\rangle^{N}\left(2\partial_{X}B_{\neq}^{1}\big{(}\Omega_{\neq}-2\partial_{XX}\Delta_{L}^{-1}\Omega_{\neq}\big{)}\right)_{0},\langle\partial_{Y}\rangle^{N}J_{0}\right\rangle\right|\] \[\qquad\qquad+\left|\left\langle\langle\partial_{Y}\rangle^{N}\left(2\partial_{X}U_{\neq}^{1}\big{(}J_{\neq}-2\partial_{XX}\Delta_{L}^{-1}J_{\neq}\big{)}\right)_{0},\langle\partial_{Y}\rangle^{N}J_{0}\right\rangle\right|\bigg{)}.\]
**Remark 3.4**.: In the transport error term we could easily introduce commutators by exploiting the divergence free condition on \(U\) and \(B\). However, since we are not worried about losing derivatives thanks to the dissipation, we will see that commutators are not necessary to close the argument with the threshold \(\nu^{\frac{2}{3}}\), which is expected as explained in Remark 1.3.
Proof.: In the proof, we omit the subscript \(k\) for simplicity of notation. First of all, taking the Fourier transform of (3.2), we compute that
\[\partial_{t}(mZ) =-\left(\nu p-\frac{\partial_{t}m}{m}\right)mZ-\frac{1}{2}\frac{\partial_{t}p}{p}mZ+\beta ikmQ+m\sqrt{\frac{k^{2}}{p}}\mathcal{F}(\mathrm{NL}_{\Omega}),\] \[\partial_{t}(mQ) =-\left(\mu p-\frac{\partial_{t}m}{m}\right)mQ+\frac{1}{2}\frac{\partial_{t}p}{p}mQ+\beta ikmZ+m\sqrt{\frac{k^{2}}{p}}\mathcal{F}(\mathrm{NL}_{J}).\]
Therefore, when computing the time-derivative of \(\mathsf{E}_{\mathsf{sym}}\) we readily see that the contributions from the linear part of the equations are controlled as in the proof of Proposition 2.3 and give us the left-hand side of (3.9). Indeed, all the linear estimates are valid point-wise in frequency and here we are just integrating in space. The definition of the nonlinear terms follows by the triangle inequality and Plancherel's theorem, thus proving (3.9).
To prove (3.10), by (3.2) we get
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathsf{E}_{\mathsf{h.o.}}+\mathsf{ D}_{\mathsf{h.o.}}= \delta_{0}\nu^{\frac{1}{3}}(\left\|m\Omega\right\|^{2}+\left\|mJ \right\|^{2}) \tag{3.16}\] \[+\beta(\langle\partial_{X}mJ,m\Omega\rangle+\langle\partial_{X} m\Omega,mJ\rangle)\] (3.17) \[+\left\langle\frac{\partial_{t}p}{p}m\hat{J},m\hat{J}\right\rangle\] (3.18) \[+\langle m(\mathrm{NL}_{\Omega}),m\Omega\rangle+\langle m( \mathrm{NL}_{J}),mJ\rangle\,.\]
Appealing to (2.9), we have
\[\delta_{0}\nu^{\frac{1}{3}}\left\|m(\Omega,J)\right\|^{2}\leqslant 4\delta_{0} \mathsf{D}_{\mathsf{h.o.}},\]
where we used \(\mu\geqslant\nu\). Hence, for \(\delta_{0}\) sufficiently small we can absorb the term on the right-hand side of (3.16) into the left-hand side and remain with \(\mathsf{D}_{\mathsf{h.o.}}/16\) as in (3.10). The term in (3.17) is clearly zero. For the term in (3.18), reasoning as done in (3.1), we get
\[\left|\left\langle\frac{\partial_{t}p}{p}m\hat{J},m\hat{J}\right\rangle\right| \leqslant 2|\langle mQ,m\hat{J}\rangle|\leqslant 4\sqrt{\mathsf{E}_{ \mathsf{sym}}}\sqrt{\mathsf{E}_{\mathsf{h.o.}}}.\]
For the nonlinear terms we only apply the triangle inequality.
It remains to compute the errors for the zero modes. We first write down the equations for the \(x\)-average of the velocity and magnetic fields. Since both \(U\) and \(B\) are divergence free, we have \(U_{0}^{2}=B_{0}^{2}=0\). Hence, it is not difficult to check that the equations of \((U_{0}^{1},B_{0}^{1})\) are given by (see for instance [49, eq. (2.11)])
\[\partial_{t}U_{0}^{1}-\nu\partial_{YY}U_{0}^{1} =-(\nabla^{\perp}\Psi_{\neq}\cdot\nabla U_{\neq}^{1})_{0}+(\nabla ^{\perp}\Phi_{\neq}\cdot\nabla B_{\neq}^{1})_{0} \tag{3.19}\] \[\partial_{t}B_{0}^{1}-\mu\partial_{YY}B_{0}^{1} =-(\nabla^{\perp}\Psi_{\neq}\cdot\nabla B_{\neq}^{1})_{0}+(\nabla ^{\perp}\Phi_{\neq}\cdot\nabla U_{\neq}^{1})_{0}, \tag{3.20}\]
where we also used the identity
\[(FG)_{0}=(F_{\neq}G_{\neq})_{0}.\]
The equations for \((\Omega_{0},J_{0})\) are like (3.19)-(3.20) with the changes \((U_{0}^{1},B_{0}^{1})\to(\Omega_{0},J_{0})\), \((U_{\neq},B_{\neq})\to(\Omega_{\neq},J_{\neq})\), together with the \(x\)-average of the stretching terms of \(\mathrm{NL}_{J}\) in (3.3). To prove that \(\mathsf{R}_{\neq}\) only involves some specific components of the nonlinearity, we observe the following general cancellations: for any multiplier \(q\) and functions \(F,G,H\), after a few integrations by parts we obtain
\[\left\langle q(\nabla^{\perp}F_{\neq}\cdot\nabla G_{\neq})_{0},qH _{0}\right\rangle=-\left\langle q(\partial_{Y}F_{\neq}\partial_{X}G_{\neq}),qH _{0}\right\rangle+\left\langle q(\partial_{X}F_{\neq}\partial_{Y}G_{\neq}),qH _{0}\right\rangle\] \[\qquad=\left\langle q((\partial_{X}F_{\neq})G_{\neq}),q\partial_{ Y}H_{0}\right\rangle.\]
It is then enough to recall that \(\partial_{X}(\Psi_{\neq},\Phi_{\neq})=(U_{\neq}^{2},B_{\neq}^{2})\) to obtain all the transport type terms in \(\mathsf{R}_{\neq}\). For the stretching nonlinearity in (3.3), we use \((\partial_{Y}-t\partial_{X})(\Psi_{\neq},\Phi_{\neq})=-(U_{\neq}^{1},B_{\neq} ^{1})\) to conclude the proof of the lemma.
With the energy identities at hand, we are ready to set up the bootstrap argument. First, we assume the following.
**Bootstrap hypothesis:** Assume that there exists \(T_{\star}\geq 1\) such that for all \(1/2\leq t\leq T_{\star}\) the following inequalities hold true:
\[\mathsf{E}_{\mathsf{sym}}(t)+\frac{1}{16}\int_{0}^{t}\mathsf{D}_{ \mathsf{sym}}(\tau)\mathrm{d}\tau\leq 10\varepsilon^{2},\] ( \[\mathsf{H}_{\mathsf{sym}}\] ) \[\mathsf{E}_{\mathsf{h.o.}}(t)+\frac{1}{16}\int_{0}^{t}\mathsf{D}_ {\mathsf{h.o.}}(\tau)\mathrm{d}\tau\leq C_{1}\varepsilon^{2}\left\langle t \right\rangle^{2},\] ( \[\mathsf{H}_{\mathsf{h.o.}}\] ) \[\mathsf{E}_{0}(t)+\frac{1}{16}\int_{0}^{t}\mathsf{D}_{0}(\tau) \mathrm{d}\tau\leq 100\varepsilon^{2},\] ( \[\mathsf{H}_{0}\] ) with \[C_{1}=4000\].
By a standard local well-posedness argument (which can be obtained from the bounds in Lemma 3.3), we know that for \(\varepsilon_{0}\) sufficiently small the hypotheses (\(\mathrm{H}_{\mathsf{sym}}\))-(\(\mathrm{H}_{0}\)) hold true with \(T_{\star}=1\) and all the constants on the right-hand side divided by \(4\). Then, we aim at improving the bounds (\(\mathrm{H}_{\mathsf{sym}}\))-(\(\mathrm{H}_{0}\)) so that, by continuity and the fact that the set of times on which they hold is open, closed and connected, we get \(T_{\star}=+\infty\). In particular, our goal is to prove the following.
**Proposition 3.5** (Bootstrap improvement).: _Under the hypothesis of Theorem 1.1, there exists \(0<\varepsilon_{0}=\varepsilon_{0}(N,\beta)<1/2\) with the following property. If \(\varepsilon<\varepsilon_{0}\) and (\(\mathrm{H}_{\mathsf{sym}}\))-(\(\mathrm{H}_{0}\)) hold on \([1/2,T_{\star}]\), then for any \(t\in[1/2,T_{\star}]\) the estimates (\(\mathrm{H}_{\mathsf{sym}}\))-(\(\mathrm{H}_{0}\)) are true with all the constants on the right-hand side of (\(\mathrm{H}_{\mathsf{sym}}\))-(\(\mathrm{H}_{0}\)) divided by a factor \(2\)._
From this proposition, which we prove in the next section, the proof of Theorem 1.1 readily follows by the definition of the energies and the bounds (2.34)-(2.35).
## 4. Proof of the bootstrap proposition
This section is dedicated to the proof of Proposition 3.5, which implies Theorem 1.1, and constitutes the core of this paper. To improve the bounds (\(\mathrm{H}_{\mathsf{sym}}\))-(\(\mathrm{H}_{0}\)), we need to introduce some useful technical results.
### Toolbox
We introduce the _resonant intervals_ as follows: we say that
\[t\in I_{k,\eta}\qquad\text{ if }\qquad\Big{|}t-\frac{\eta}{k}\Big{|}\leqslant \frac{|\eta|}{2k^{2}}.\]
These intervals are usually defined in a slightly more precise way in inviscid problems, e.g. [8]. This definition is sufficient for us since we never have to define weights using the resonant intervals. In fact, we only need them for notational purposes when splitting integrals.
We recall the following properties of the weight \(p_{k}(t,\eta)\).
**Lemma 4.1**.: _For any \(t,k,\eta,\ell,\xi\), the following inequalities hold true_
\[\sqrt{\frac{p_{\ell}(\xi)}{p_{k}(\eta)}}\leqslant\langle|k-\ell,\eta-\xi| \rangle^{3}\begin{cases}\frac{|\eta|}{k^{2}(1+|\frac{\eta}{k}-t|)},&\text{ if }\,t\in I_{k,\eta}\cap I_{\ell,\xi}^{c}\\ 1&\text{ otherwise}\end{cases} \tag{4.1}\]
_When \(k=\ell\) we have the improved estimate_
\[\sqrt{\frac{p_{k}(\xi)}{p_{k}(\eta)}}\leqslant 1+\frac{|\eta-\xi|}{|k|(1+| \frac{\eta}{k}-t|)}. \tag{4.2}\]
Proof.: This lemma is a version of [4, Lemma 4.14], where (4.1) is proved. The bound (4.2) is in the proof of [4, Lemma 4.14] as well, which follows by
\[\sqrt{\frac{p_{k}(\xi)}{p_{k}(\eta)}}=\frac{1+|\frac{\xi}{k}-t|}{1+|\frac{\eta }{k}-t|}\leqslant 1+\frac{|(\frac{\xi}{k}-\frac{\eta}{k})+(\frac{\eta}{k}-t)|-| \frac{\eta}{k}-t|}{1+|\frac{\eta}{k}-t|}\leqslant 1+\frac{|\eta-\xi|}{|k|(1+| \frac{\eta}{k}-t|)}.\]
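Strictly speaking, the first equality in the display above holds up to an absolute multiplicative constant, since \(\frac{1}{2}(1+x)^{2}\leqslant 1+x^{2}\leqslant(1+x)^{2}\) for \(x\geqslant 0\); this is harmless, as the bounds of Lemma 4.1 are only ever used up to such constants.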
The following _lossy elliptic estimate_ enables us to exploit the inviscid damping by paying regularity.
**Lemma 4.2**.: _For any \(s\geqslant 0\)_
\[\big{\|}(-\Delta_{L})^{-1}F\big{\|}_{H^{s}}\lesssim\frac{1}{\langle t\rangle^ {2}}\,\|F\|_{H^{s+2}}\,.\]
The proof of this lemma is an application of the inequality \(\langle a-b\rangle\,\langle b\rangle\gtrsim\langle a\rangle\), see [8]. We also record the following bounds, which follow directly from the definition of \(m^{d}\), see (2.8),
\[\sqrt{\frac{k^{2}}{p_{k}(t,\eta)}}=\frac{1}{\sqrt{C_{\beta}}}\sqrt{\frac{ \partial_{t}m_{k}^{d}(t,\eta)}{m_{k}^{d}(t,\eta)}},\qquad\frac{|k|}{p_{k}(t, \eta)}\lesssim\sqrt{\frac{\partial_{t}m_{k}^{d}(t,\eta)}{m_{k}^{d}(t,\eta)}} \sqrt{\frac{k^{2}}{p_{k}(t,\eta)}}. \tag{4.3}\]
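Both relations in (4.3) are immediate consequences of the identity

\[\frac{k^{2}}{p_{k}(t,\eta)}=\frac{1}{1+(\eta/k-t)^{2}}=\frac{1}{C_{\beta}}\frac{\partial_{t}m_{k}^{d}(t,\eta)}{m_{k}^{d}(t,\eta)},\qquad k\neq 0,\]

combined with \(|k|/p_{k}\leqslant k^{2}/p_{k}\), valid since \(|k|\geqslant 1\).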
### Bounds on the symmetric variables
In this section, we aim at proving that (H\({}_{\text{sym}}\)) holds true with \(10\) replaced by \(5\). Looking at the energy identity (3.9) and the definition of \(\mathsf{T}_{\text{sym}}\) (3.12) and \(\mathsf{S}_{\text{sym}}\) (3.13), we see that all the nonlinear error terms are of the following type:
\[\mathcal{T}_{\text{sym}}(F,G,H) :=\left|\left\langle\sqrt{\frac{k^{2}}{p}}m\mathcal{F}\left( \nabla^{\perp}\Delta_{L}^{-1}F\cdot\nabla G\right),\hat{H}\right\rangle\right| \tag{4.4}\] \[\mathcal{S}_{\text{sym}}(F,G,H) :=\left|\left\langle\sqrt{\frac{k^{2}}{p}}m\bigg{(}\frac{ \partial_{t}p}{p}\hat{F}*\big{(}\hat{G}-2\frac{\ell^{2}}{p}\hat{G}\big{)} \bigg{)},H\right\rangle\right|.\]
Moreover, in terms of the bounds to be performed, thanks to the definition of the functionals (3.4)-(3.8) and the bootstrap hypotheses (H\({}_{\text{sym}}\))-(H\({}_{0}\)), we see that there is actually no difference between \((\Psi,\Omega)\) and \((\Phi,J)\). In the next lemma we collect the bounds we need for the transport and stretching nonlinearities respectively.
**Lemma 4.3**.: _Let \(m\) be the Fourier multiplier defined in (2.13) with \(N>10\). The following inequalities hold true:_
\[\mathcal{T}_{\mathsf{sym}}(F_{\neq},G_{\neq},H)\lesssim \,\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|mF_{\neq} \right\|_{L^{2}}\left\|\sqrt{\frac{k^{2}}{p}}mG_{\neq}\right\|_{L^{2}}\left\| \sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}H\right\|_{L^{2}} \tag{4.5}\] \[+\frac{1}{\left\langle t\right\rangle}\mathrm{e}^{-\delta_{0}\nu ^{\frac{1}{3}}t}\left\|mF_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{k^{2}}{p}}m |\nabla_{L}|G_{\neq}\right\|_{L^{2}}\left\|H\right\|_{L^{2}}\] \[+\left\langle t\right\rangle\mathrm{e}^{-\delta_{0}\nu^{\frac{1} {3}}t}\left\|mG_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m ^{d}}}\sqrt{\frac{k^{2}}{p}}mF_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{ \partial_{t}m^{d}}{m^{d}}}H\right\|_{L^{2}}\] \[+\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|mG_{\neq} \right\|_{L^{2}}\left\|\sqrt{\frac{k^{2}}{p}}mF_{\neq}\right\|_{L^{2}}\left\| \sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}H\right\|_{L^{2}}.\]
_Denoting \((\nabla^{\perp}\Delta_{L}^{-1}F)_{0}=(V_{F,0}^{1},0)\), one has_
\[\mathcal{T}_{\mathsf{sym}}(F_{0},G_{\neq},H)\lesssim \,\left\|V_{F,0}^{1}\right\|_{H^{N}}\left\|\sqrt{\frac{k^{2}}{p}} m\partial_{X}G_{\neq}\right\|_{L^{2}}\left\|H\right\|_{L^{2}} \tag{4.6}\] \[+\left\|\partial_{Y}V_{F,0}^{1}\right\|_{H^{N}}\left\|\sqrt{ \frac{k^{2}}{p}}mG_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d} }{m^{d}}}H\right\|_{L^{2}}.\]
_Moreover_
\[\mathcal{T}_{\mathsf{sym}}(F_{\neq},G_{0},H)\lesssim \,\left\|G_{0}\right\|_{H^{3}}\left\|\sqrt{\frac{\partial_{t}m^{ d}}{m^{d}}}\sqrt{\frac{k^{2}}{p}}mF_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{ \partial_{t}m^{d}}{m^{d}}}H\right\|_{L^{2}} \tag{4.7}\] \[+\frac{1}{\left\langle t\right\rangle^{2}}\left\|mF_{\neq}\right\| _{L^{2}}\left\|\partial_{Y}G_{0}\right\|_{H^{N}}\left\|\sqrt{\frac{\partial_{t }m^{d}}{m^{d}}}H\right\|_{L^{2}}.\]
_For the stretching nonlinearities we have the following_
\[\mathcal{S}_{\mathsf{sym}}(F,G_{\neq},H)\lesssim \,\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|\sqrt{\frac{k^ {2}}{p}}mF_{\neq}\right\|_{L^{2}}\left\|mG_{\neq}\right\|_{L^{2}}\left\|\sqrt{ \frac{\partial_{t}m^{d}}{m^{d}}}H\right\|_{L^{2}}, \tag{4.8}\] \[\mathcal{S}_{\mathsf{sym}}(F,G_{0},H)\lesssim \,\left(\left\|\sqrt{\frac{k^{2}}{p}}mF_{\neq}\right\|_{L^{2}} \left\|G_{0}\right\|_{H^{3}}+\frac{1}{\left\langle t\right\rangle}\left\|mF_ {\neq}\right\|_{L^{2}}\left\|G_{0}\right\|_{H^{N}}\right)\left\|\sqrt{\frac{ \partial_{t}m^{d}}{m^{d}}}H\right\|_{L^{2}}. \tag{4.9}\]
**Remark 4.4**.: The bounds on \(\mathcal{T}_{\mathsf{sym}}(F_{0},G_{\neq},H)\) are not optimal since we could exploit commutators to avoid losing an \(x\)-derivative on \(G_{\neq}\). However, for the threshold \(\varepsilon\ll\nu^{\frac{2}{3}}\) this does not seem necessary.
Before proving the key lemma above, we first show how to improve the bootstrap hypothesis (\(\mathrm{H}_{\mathsf{sym}}\)) with the estimates in Lemma 4.3.
Proof (improvement of (\(\mathrm{H}_{\mathsf{sym}}\))).: Since \(|\partial_{t}p|/p\leqslant 1\), in the nonlinear term \(\mathsf{T}_{\mathsf{sym}}\) (see (3.12)) we can just study, for example, the term
\[\mathcal{T}_{\mathsf{sym}}(\Omega,\Omega,mZ)\leq\mathcal{T}_{\mathsf{sym}}( \Omega_{\neq},\Omega_{\neq},mZ)+\mathcal{T}_{\mathsf{sym}}(\Omega_{0},\Omega_{ \neq},mZ)+\mathcal{T}_{\mathsf{sym}}(\Omega_{\neq},\Omega_{0},mZ)\]
where we used that \(\Delta_{L}^{-1}\Omega=\Psi\). Recalling the definition of \(Z\) given in (2.7), applying (4.5) we deduce that
\[\mathcal{T}_{\mathsf{sym}}(\Omega_{\neq},\Omega_{\neq},mZ)\lesssim \,\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|m\Omega_{\neq}\right\|_{L^{2}}\left\|mZ\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}mZ\right\|_{L^{2}}\] \[+\frac{1}{\left\langle t\right\rangle}\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|m\Omega_{\neq}\right\|_{L^{2}}\left\||\nabla_{L}|mZ\right\|_{L^{2}}\left\|mZ\right\|_{L^{2}}\] \[+\left\langle t\right\rangle\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|m\Omega_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}mZ\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}mZ\right\|_{L^{2}}.\]
From the definitions of \(\mathsf{E}_{\mathsf{sym}},\mathsf{E}_{\mathsf{h.o.}}\) and \(\mathsf{D}_{\mathsf{sym}}\), see respectively (3.4), (3.5) and (3.7), we rewrite this bound as
\[\mathcal{T}_{\mathsf{sym}}(\Omega_{\neq},\Omega_{\neq},mZ)\lesssim\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\sqrt{\mathsf{E}_{\mathsf{h.o.}}}\sqrt{\mathsf{D}_{\mathsf{sym}}}\left(\sqrt{\mathsf{E}_{\mathsf{sym}}}+\frac{1}{\left\langle t\right\rangle}\nu^{-\frac{1}{2}}\sqrt{\mathsf{E}_{\mathsf{sym}}}+\left\langle t\right\rangle\sqrt{\mathsf{D}_{\mathsf{sym}}}\right).\]
Appealing to the boostrap hypothesis (H\({}_{\mathsf{sym}}\))-(H\({}_{\mathsf{h.o.}}\)), we get
\[\mathcal{T}_{\mathsf{sym}}(\Omega_{\neq},\Omega_{\neq},mZ) \lesssim\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\varepsilon\left\langle t\right\rangle\sqrt{\mathsf{D}_{\mathsf{sym}}}\left(\varepsilon+\frac{1}{\left\langle t\right\rangle}\nu^{-\frac{1}{2}}\varepsilon+\left\langle t\right\rangle\sqrt{\mathsf{D}_{\mathsf{sym}}}\right)\] \[\lesssim(\varepsilon^{2}\nu^{-\frac{1}{3}}+\varepsilon^{2}\nu^{-\frac{1}{2}})\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t/2}\sqrt{\mathsf{D}_{\mathsf{sym}}}+\varepsilon\nu^{-\frac{2}{3}}\mathsf{D}_{\mathsf{sym}}\] \[\lesssim\varepsilon\nu^{-\frac{2}{3}}\mathsf{D}_{\mathsf{sym}}+\varepsilon^{2}(\varepsilon\nu^{-\frac{2}{3}})\nu^{\frac{1}{3}}\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}.\]
Integrating in time and using the bootstrap hypotheses, we have
\[\int_{0}^{t}\mathcal{T}_{\mathsf{sym}}(\Omega_{\neq},\Omega_{\neq},mZ)\mathrm{d}\tau\lesssim(\varepsilon\nu^{-\frac{2}{3}})\varepsilon^{2}. \tag{4.10}\]
Since \(\nabla^{\perp}\Delta_{L}^{-1}\Omega_{0}=U_{0}^{1}\) and \(\left|\partial_{X}\right|\leqslant\left|\nabla_{L}\right|\), from (4.6) we get
\[\mathcal{T}_{\mathsf{sym}}(\Omega_{0},\Omega_{\neq},mZ) \lesssim\left\|U_{0}^{1}\right\|_{H^{N}}\left\|\sqrt{\frac{k^{2}}{p}}m\partial_{X}\Omega_{\neq}\right\|_{L^{2}}\left\|mZ\right\|_{L^{2}}+\left\|\partial_{Y}U_{0}^{1}\right\|_{H^{N}}\left\|\sqrt{\frac{k^{2}}{p}}m\Omega_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}mZ\right\|_{L^{2}}\] \[\lesssim \nu^{-\frac{1}{2}}\sqrt{\mathsf{E}_{0}}\sqrt{\mathsf{E}_{\mathsf{sym}}}\sqrt{\mathsf{D}_{\mathsf{sym}}}+\nu^{-\frac{1}{2}}\sqrt{\mathsf{D}_{0}}\sqrt{\mathsf{E}_{\mathsf{sym}}}\sqrt{\mathsf{D}_{\mathsf{sym}}}.\]
From the property (2.10), we know that
\[\sqrt{\mathsf{E}_{\mathsf{sym}}}\lesssim\nu^{-\frac{1}{6}}\sqrt{\mathsf{D}_{ \mathsf{sym}}}. \tag{4.11}\]
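Indeed, since \(\nu\left\|\nabla_{L}mZ\right\|_{L^{2}}^{2}+\mathsf{G}_{\nu}[Z]\) controls \(\int(\nu p+\frac{\partial_{t}m^{\nu}}{m^{\nu}})|mZ|^{2}\) (and similarly for \(Q\), using \(\mu\geqslant\nu\)), the property (2.10) gives

\[\mathsf{D}_{\mathsf{sym}}\geqslant\frac{\nu^{\frac{1}{3}}}{4}\left(\|mZ\|_{L^{2}}^{2}+\|mQ\|_{L^{2}}^{2}\right)\gtrsim\nu^{\frac{1}{3}}\mathsf{E}_{\mathsf{sym}}.\]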
Using the bootstrap hypotheses we then deduce
\[\int_{0}^{t}\mathcal{T}_{\mathsf{sym}}(\Omega_{0},\Omega_{\neq},mZ)\lesssim \varepsilon\nu^{-\frac{2}{3}}\int_{0}^{t}\mathsf{D}_{\mathsf{sym}}\mathrm{d} \tau+\varepsilon\nu^{-\frac{1}{2}}\int_{0}^{t}\mathsf{D}_{0}\mathrm{d}\tau \lesssim(\varepsilon\nu^{-\frac{2}{3}})\varepsilon^{2}. \tag{4.12}\]
For the last term of the transport nonlinearity, since \(\left\|\Omega_{0}\right\|_{H^{3}}\lesssim\left\|U_{0}^{1}\right\|_{H^{4}} \lesssim\sqrt{\mathsf{E}_{0}}\), applying (4.7) we have
\[\mathcal{T}_{\mathsf{sym}}(\Omega_{\neq},\Omega_{0},mZ) \lesssim\left\|\Omega_{0}\right\|_{H^{3}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}\sqrt{\frac{k^{2}}{p}}m\Omega_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}mZ\right\|_{L^{2}}\] \[\quad+\frac{1}{\left\langle t\right\rangle^{2}}\left\|m\Omega_{\neq}\right\|_{L^{2}}\left\|\partial_{Y}\Omega_{0}\right\|_{H^{N}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}mZ\right\|_{L^{2}}\] \[\lesssim \sqrt{\mathsf{E}_{0}}\mathsf{D}_{\mathsf{sym}}+\frac{\nu^{-\frac{1}{2}}}{\left\langle t\right\rangle}\sqrt{\mathsf{E}_{\mathsf{h.o.}}}\sqrt{\mathsf{D}_{0}}\sqrt{\mathsf{D}_{\mathsf{sym}}}.\]
Using the bootstrap assumptions we deduce
\[\int_{0}^{t}\mathcal{T}_{\mathsf{sym}}(\Omega_{\neq},\Omega_{0},mZ)\mathrm{d} \tau\lesssim(\varepsilon\nu^{-\frac{1}{2}})\int_{0}^{t}(\mathsf{D}_{\mathsf{sym }}+\mathsf{D}_{0})\mathrm{d}\tau\lesssim(\varepsilon\nu^{-\frac{1}{2}}) \varepsilon^{2} \tag{4.13}\]
The structure of all the other transport nonlinearities enables us to apply exactly the same procedure as for the terms we have just controlled. Therefore, from the bounds (4.10), (4.12) and (4.13) we conclude that
\[\int_{0}^{t}\mathsf{T}_{\mathsf{sym}}\mathrm{d}\tau\lesssim(\varepsilon\nu^{- \frac{2}{3}})\varepsilon^{2}. \tag{4.14}\]
Turning our attention to the stretching nonlinearities, we can again explicitly handle just one of them, say
\[\mathcal{S}_{\mathsf{sym}}(J,\Omega,mQ)\leq\mathcal{S}_{\mathsf{sym}}(J, \Omega_{\neq},mQ)+\mathcal{S}_{\mathsf{sym}}(J,\Omega_{0},mQ).\]
From (4.8) we get
\[\mathcal{S}_{\mathsf{sym}}(J,\Omega_{\neq},mQ) \lesssim\,\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|mQ\right\|_{L^{2}}\left\|m\Omega_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}mQ\right\|_{L^{2}}\] \[\lesssim\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\sqrt{\mathsf{E}_{\mathsf{sym}}}\sqrt{\mathsf{E}_{\mathsf{h.o.}}}\sqrt{\mathsf{D}_{\mathsf{sym}}}\lesssim\varepsilon^{2}\left\langle t\right\rangle\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\sqrt{\mathsf{D}_{\mathsf{sym}}},\]
where we used the bootstrap hypotheses. Therefore,
\[\int_{0}^{t}\mathcal{S}_{\mathsf{sym}}(J,\Omega_{\neq},mQ)\mathrm{d}\tau \lesssim(\varepsilon\nu^{-\frac{1}{2}})\int_{0}^{t}(\mathsf{D}_{\mathsf{sym }}+\varepsilon^{2}\nu^{\frac{1}{3}}\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}} \tau})\mathrm{d}\tau\lesssim(\varepsilon\nu^{-\frac{1}{2}})\varepsilon^{2} \tag{4.15}\]
Looking at (4.9), we get
\[\mathcal{S}_{\mathsf{sym}}(J,\Omega_{0},mQ)\lesssim\,\left(\left\|mQ\right\|_{L^{2}}\left\|\Omega_{0}\right\|_{H^{3}}+\frac{1}{\left\langle t\right\rangle}\left\|mJ_{\neq}\right\|_{L^{2}}\left\|\Omega_{0}\right\|_{H^{N}}\right)\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}mQ\right\|_{L^{2}}.\]
Now we observe that
\[\left\|\Omega_{0}\right\|_{H^{N}}=\left\|\partial_{Y}U_{0}^{1}\right\|_{H^{N}}\lesssim\nu^{-\frac{1}{2}}\sqrt{\mathsf{D}_{0}}.\]
Thus, using again (4.11), we have
\[\mathcal{S}_{\mathsf{sym}}(J,\Omega_{0},mQ) \lesssim\sqrt{\mathsf{D}_{\mathsf{sym}}}(\sqrt{\mathsf{E}_{ \mathsf{sym}}}\sqrt{\mathsf{E}_{0}}+\frac{\nu^{-\frac{1}{2}}}{\left\langle t \right\rangle}\sqrt{\mathsf{E}_{\mathsf{h.o.}}}\sqrt{\mathsf{D}_{0}}) \tag{4.16}\] \[\lesssim(\varepsilon\nu^{-\frac{1}{6}}+\varepsilon\nu^{-\frac{1 }{2}})\mathsf{D}_{\mathsf{sym}}+\varepsilon\nu^{-\frac{1}{2}}\mathsf{D}_{0}\]
Arguing similarly for the other stretching term, combining (4.15) with (4.16) we get
\[\int_{0}^{t}\mathsf{S}_{\mathsf{sym}}\mathrm{d}\tau\lesssim(\varepsilon\nu^{- \frac{1}{2}})\varepsilon^{2}\]
Finally, using the bound above and (4.14), integrating in time the energy inequality (3.9) we get
\[\mathsf{E}_{\mathsf{sym}}+\frac{1}{16}\int_{0}^{t}\mathsf{D}_{\mathsf{sym}} \mathrm{d}\tau\leq\varepsilon^{2}+C(\varepsilon\nu^{-\frac{2}{3}}) \varepsilon^{2},\]
where \(C=C(N,\delta_{0},\beta)>1\). By choosing \(\varepsilon_{0}\ll\nu^{\frac{2}{3}}\), we improve the bound (\(\mathsf{H}_{\mathsf{sym}}\)) and conclude the proof.
We finally present the proof of Lemma 4.3.
Proof of Lemma 4.3.: We split the proof for each of the bounds (4.5), (4.6), (4.7), (4.8) and (4.9).
\(\bullet\)_Proof of (4.5)_: appealing to the paraproduct decomposition (1.9), we see that
\[\mathcal{T}_{\mathsf{sym}}(F_{\neq},G_{\neq},H)\leqslant\mathcal{T}_{\mathsf{ sym}}(F_{\neq}^{Lo},G_{\neq}^{Hi},H)+\mathcal{T}_{\mathsf{sym}}(F_{\neq}^{Hi},G_{ \neq}^{Lo},H)\]
We study separately the low-high and the high-low terms.
\(\circ\)_Control of the low-high term_. We split the integral to handle separately the resonant and non-resonant case (\(t\in I_{k,\eta}\cap I_{\ell,\xi}^{c}\) or not), that is
\[\mathcal{T}_{\mathsf{sym}}(F_{\neq}^{Lo},G_{\neq}^{Hi},H)\leqslant\mathcal{T}_{\mathsf{sym}}^{R}(F_{\neq}^{Lo},G_{\neq}^{Hi},H)+\mathcal{T}_{\mathsf{sym}}^{NR}(F_{\neq}^{Lo},G_{\neq}^{Hi},H),\]
where we define
\[\mathcal{T}_{\mathsf{sym}}^{R}(F_{\neq}^{Lo},G_{\neq}^{Hi},H):= \sum_{k,\ell\in\mathbb{Z}}\iint_{\mathbb{R}^{2}} \mathbbm{1}_{\{t\in I_{k,\eta}\cap I_{\ell,\xi}^{c}\}}\sqrt{\frac{k^{2}}{p_{k}(t,\eta)}}m_{k}(t,\eta)\frac{|k-\ell,\eta-\xi|}{p_{k-\ell}(t,\eta-\xi)}|\hat{F}_{\neq}^{Lo}|_{k-\ell}(\eta-\xi)\] \[\times|\ell,\xi||\hat{G}^{Hi}|_{\ell}(\xi)|\hat{H}|_{k}(\eta)\mathrm{d}\eta\mathrm{d}\xi,\] \[\mathcal{T}_{\mathsf{sym}}^{NR}(F_{\neq}^{Lo},G_{\neq}^{Hi},H):= \sum_{k,\ell\in\mathbb{Z}}\iint_{\mathbb{R}^{2}} \mathbbm{1}_{\{t\notin I_{k,\eta}\cap I_{\ell,\xi}^{c}\}}\sqrt{\frac{k^{2}}{p_{k}(t,\eta)}}m_{k}(t,\eta)\frac{|k-\ell,\eta-\xi|}{p_{k-\ell}(t,\eta-\xi)}|\hat{F}_{\neq}^{Lo}|_{k-\ell}(\eta-\xi)\] \[\times|\ell,\xi||\hat{G}^{Hi}|_{\ell}(\xi)|\hat{H}|_{k}(\eta)\mathrm{d}\eta\mathrm{d}\xi.\]
By definition of the paraproduct (1.9) we know that \(|k,\eta|\leqslant 3|\ell,\xi|\). Hence, since \(m^{d},m^{\nu},m^{s}\) are uniformly bounded Fourier multipliers, we deduce that
\[m_{k}(t,\eta)\lesssim m_{\ell}(t,\xi) \tag{4.17}\]
For the non-resonant term, thanks to (4.1), we also know that
\[\mathbbm{1}_{\{t\notin I_{k,\eta}\cap I_{\ell,\xi}^{c}\}}\sqrt{\frac{k^{2}}{p _{k}(t,\eta)}}\lesssim\langle|k-\ell,\eta-\xi|\rangle^{4}\,\mathbbm{1}_{\{t \notin I_{k,\eta}\cap I_{\ell,\xi}^{c}\}}\sqrt{\frac{\ell^{2}}{p_{\ell}(t,\xi) }}, \tag{4.18}\]
where we paid an extra derivative on the low-frequency piece since \(|k|/|\ell|\lesssim\langle k-\ell\rangle\). Moreover, having that
\[|\ell,\xi|\lesssim\langle t\rangle\,\langle\ell\rangle\,|\ell,\xi-\ell t|,\]
combining the bound above with (4.17), (4.18), Cauchy-Schwartz and Young's convolution inequality we arrive at
\[\mathcal{T}_{\mathsf{sym}}^{NR}(F_{\neq}^{Lo},G_{\neq}^{Hi},H) \lesssim\langle t\rangle\left\|(-\Delta_{L})^{-1}F_{\neq}\right\| _{H^{7}}\left\|\sqrt{\frac{k^{2}}{p}}m|\nabla_{L}|G_{\neq}\right\|_{L^{2}} \left\|H\right\|_{L^{2}}\] \[\lesssim\frac{1}{\langle t\rangle}\mathrm{e}^{-\delta_{0}\nu^{ \frac{1}{3}}t}\left\|mF_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{k^{2}}{p}}m| \nabla_{L}|G_{\neq}\right\|_{L^{2}}\left\|H\right\|_{L^{2}}.\]
In the last inequality we used Lemma 4.2 combined with the fact that \(\langle|k,\eta|\rangle^{9}\lesssim\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}m_{k}(t,\eta)\) since \(N>10\). This bound is in agreement with (4.5).
We now turn our attention to the resonant part. From Lemma 4.1 we deduce
\[\mathbbm{1}_{\{teI_{k,\eta}\cap I_{\ell,\xi}^{c}\}}\sqrt{\frac{k^{2}}{p_{k}(t, \eta)}}|\ell,\xi|\lesssim\langle|k-\ell,\eta-\xi|\rangle^{4}\,\mathbbm{1}_{\{ teI_{k,\eta}\cap I_{\ell,\xi}^{c}\}}\frac{|\eta|}{k^{2}(1+|\frac{\eta}{k}-t|)}|\ell,\xi| \sqrt{\frac{\ell^{2}}{p_{\ell}(t,\xi)}}. \tag{4.19}\]
Since \(t\in I_{k,\eta}\) we know that \(t\approx|\eta|/|k|\), so we get
\[\frac{|\eta||\ell,\xi|}{k^{2}}\lesssim\langle\eta-\xi\rangle\,\frac{|\eta|^{2} }{k^{2}}+\langle k-\ell\rangle\,\frac{|\eta|}{k}\frac{\langle\xi\rangle}{k^{2 }}\lesssim\langle|k-\ell,\eta-\xi|\rangle^{2}\,\langle t\rangle^{2}\,.\]
Recalling the definition of \(m^{d}\) (2.8), combining the bound above with (4.19) we obtain
\[\mathbbm{1}_{\{t\in I_{k,\eta}\cap I_{\ell,\xi}^{c}\}}\sqrt{\frac{k^{2}}{p_{k}(t, \eta)}}|\ell,\xi|\lesssim\langle t\rangle^{2}\left\langle|k-\ell,\eta-\xi \right\rangle^{6}\sqrt{\frac{\partial_{t}m^{d}_{k}(t,\eta)}{m^{d}_{k}(t,\eta)} }\sqrt{\frac{\ell^{2}}{p_{\ell}(t,\xi)}}.\]
Therefore, appealing again to Lemma 4.2, we have
\[\mathcal{T}^{R}_{\mathsf{sym}}(F^{Lo}_{\neq},G^{Hi}_{\neq},H) \lesssim\langle t\rangle^{2}\left\|(-\Delta_{L})^{-1}F_{\neq}\right\|_{H^{8}}\left\|\sqrt{\frac{k^{2}}{p}}mG_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}H\right\|_{L^{2}}\] \[\lesssim\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|mF_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{k^{2}}{p}}mG_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}H\right\|_{L^{2}},\]
which is consistent with (4.5).
\(\circ\) _Control of the high-low term_. By the definition of (4.4) and a change of variables, observe that
\[\mathcal{T}_{\mathsf{sym}}(F^{Hi}_{\neq},G^{Lo}_{\neq},H)=\mathcal{T}_{ \mathsf{sym}}(\Delta_{L}G^{Lo}_{\neq},\Delta_{L}^{-1}F^{Hi}_{\neq},H).\]
Writing down this term explicitly, we obtain the bound
\[\mathcal{T}_{\mathsf{sym}}(\Delta_{L}G^{Lo}_{\neq},\Delta_{L}^{- 1}F^{Hi}_{\neq},H) \lesssim\sum_{k,\ell\in\mathbb{Z}}\iint\limits_{\mathbb{R}^{2}} \sqrt{\frac{k^{2}}{p_{k}(t,\eta)}}m_{k}(t,\eta)|k-\ell,\eta-\xi||\hat{G}^{Lo}_ {\neq}|_{k-\ell}(\eta-\xi)\] \[\qquad\times(\mathbbm{1}_{\{t\in I_{\ell,\xi}\}}+\mathbbm{1}_{\{ t\notin I_{\ell,\xi}\}})\frac{|\ell,\xi|}{p_{\ell}(t,\xi)}|\hat{F}^{Hi}|_{\ell}(\xi)| \hat{H}|_{k}(\eta)\mathrm{d}\eta\mathrm{d}\xi\] \[:=\mathcal{J}^{R}+\mathcal{J}^{NR},\]
where \(\mathcal{J}^{R}\) is the integral containing \(\mathbbm{1}_{\{t\in I_{\ell,\xi}\}}\) and \(\mathcal{J}^{NR}\) the other one. Notice that with the change of variables we now have \(\langle|k,\eta|\rangle\lesssim 3\left\langle|\ell,\xi|\right\rangle/2\). When \(t\in I_{\ell,\xi}\), since \(m^{d},m^{\nu},m^{s}\) are bounded Fourier multipliers, we observe that
\[\mathbbm{1}_{\{t\in I_{\ell,\xi}\}}\frac{|\ell,\xi|}{p_{\ell}(t,\xi)}m_{k}(t, \eta) \lesssim\mathbbm{1}_{\{t\in I_{\ell,\xi}\}}\frac{|\xi|}{|\ell|^{2}} \frac{1}{1+|\frac{\xi}{\ell}-t|^{2}}\lesssim\langle t\rangle\sqrt{\frac{ \partial_{t}m^{d}_{\ell}(t,\xi)}{m^{d}_{\ell}(t,\xi)}}\sqrt{\frac{\ell^{2}}{p _{\ell}(t,\xi)}}m_{\ell}(t,\xi).\]
Since \(\sqrt{k^{2}/p}\approx\sqrt{\partial_{t}m^{d}/m^{d}}\), moving this factor to \(H\), we then deduce the bound
\[\mathcal{J}^{R} \lesssim\langle t\rangle\,\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|mG_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}\sqrt{\frac{k^{2}}{p}}mF_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}H\right\|_{L^{2}}.\]
When \(t\notin I_{\ell,\xi}\) we have \(|\xi/\ell-t|\gtrsim|\xi|/|\ell|^{2}\), hence
\[\mathbbm{1}_{\{t\notin I_{\ell,\xi}\}}\frac{|\ell,\xi|}{p_{\ell}(t,\xi)}m_{k}( t,\eta)\lesssim\mathbbm{1}_{\{t\notin I_{\ell,\xi}\}}\frac{|\xi|}{|\ell|^{2}} \frac{1}{1+|\frac{\xi}{\ell}-t|^{2}}m_{\ell}(t,\xi)\lesssim\sqrt{\frac{\ell^{2 }}{p_{\ell}(t,\xi)}}m_{\ell}(t,\xi).\]
Thus
\[\mathcal{J}^{NR} \lesssim\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|mG_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{k^{2}}{p}}mF_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}H\right\|_{L^{2}}.\]
The bound (4.5) is proved.
\(\bullet\) _Proof of (4.6):_ first we notice that
\[(\nabla^{\perp}\Delta_{L}^{-1}F)_{0}\cdot\nabla G_{\neq}=V^{1}_{F,0}\partial_{ X}G_{\neq}.\]
Hence
\[\mathcal{T}_{\mathsf{sym}}(F_{0},G_{\neq},H)\leq\sum_{k\in\mathbb{Z}}\iint\limits_ {\mathbb{R}^{2}}\sqrt{\frac{k^{2}}{p_{k}(t,\eta)}}m_{k}(t,\eta)|\hat{V}^{1}_{F,0 }|(\eta-\xi)|k||\hat{G}_{\neq}|_{k}(\xi)|\hat{H}|_{k}(\eta)\mathrm{d}\eta \mathrm{d}\xi.\]
Since \(\langle|k,\eta|\rangle\lesssim\langle|k,\xi|\rangle+\langle\eta-\xi\rangle\) and \(m^{d},m^{\nu},m^{s}\) are uniformly bounded, we deduce that
\[m_{k}(t,\eta)\lesssim m_{k}(t,\xi)+\frac{\langle\eta-\xi\rangle^{N}}{\langle|k,\xi|\rangle^{N}}m_{k}(t,\xi).\]
Hence
\[\mathcal{T}_{\mathsf{sym}}(F_{0},G_{\neq},H)\leq\sum_{k\in \mathbb{Z}}\iint\limits_{\mathbb{R}^{2}}\sqrt{\frac{k^{2}}{p_{k}(t,\eta)}}| \hat{V}^{1}_{F,0}|(\eta-\xi)|k||m(t)\hat{G}_{\neq}|_{k}(\xi)|\hat{H}|_{k}(\eta )\mathrm{d}\eta\mathrm{d}\xi\] \[\quad+\sum_{k\in\mathbb{Z}}\iint\limits_{\mathbb{R}^{2}}\sqrt{ \frac{k^{2}}{p_{k}(t,\eta)}}\left\langle\eta-\xi\right\rangle^{N}|\hat{V}^{1} _{F,0}|(\eta-\xi)\frac{|k|}{\left\langle|k,\xi|\right\rangle^{N}}|m(t)\hat{G}_ {\neq}|_{k}(\xi)|\hat{H}|_{k}(\eta)\mathrm{d}\eta\mathrm{d}\xi\] \[\quad:=\mathcal{I}_{1}+\mathcal{I}_{2}.\]
Using (4.2) and the definition of \(m^{d}\), we deduce
\[\sqrt{\frac{k^{2}}{p_{k}(t,\eta)}}\leq\left(1+|\eta-\xi|\frac{1}{1+|\frac{\eta }{k}-t|}\right)\sqrt{\frac{k^{2}}{p_{k}(t,\xi)}}\lesssim\left(1+|\eta-\xi| \sqrt{\frac{\partial_{t}m^{d}_{k}(t,\eta)}{m^{d}_{k}(t,\eta)}}\right)\sqrt{ \frac{k^{2}}{p_{k}(t,\xi)}}.\]
Hence
\[\mathcal{I}_{2}\lesssim \left\|V^{1}_{F,0}\right\|_{H^{N}}\left\|\sqrt{\frac{k^{2}}{p}} \left\langle\cdot\right\rangle^{-N}m\partial_{X}G_{\neq}\right\|_{H^{3}}\left\| H\right\|_{L^{2}}\] \[+\left\|\partial_{Y}V^{1}_{F,0}\right\|_{H^{N}}\left\|\sqrt{ \frac{k^{2}}{p}}mG_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d} }{m^{d}}}H\right\|_{L^{2}},\]
which is in agreement with (4.6) since \(N>10\). On \(\mathcal{I}_{1}\) we can pay regularity on \(V^{1}_{F,0}\) and obtain the bound
\[\mathcal{I}_{1}\lesssim\ \left\|V^{1}_{F,0}\right\|_{H^{3}}\left\|\sqrt{\frac{k^{2} }{p}}m\partial_{X}G_{\neq}\right\|_{L^{2}}\left\|H\right\|_{L^{2}},\]
so (4.6) is proved.
\(\bullet\)_Proof of (4.7):_ in this case we have
\[(\nabla^{\perp}\Delta^{-1}_{L}F)_{\neq}\cdot\nabla G_{0}=(\partial_{X}\Delta ^{-1}_{L}F_{\neq})\partial_{Y}G_{0}.\]
Then we do a paraproduct decomposition to get
\[\mathcal{T}_{\mathsf{sym}}(F_{\neq},G_{0},H)\leq\mathcal{T}_{\mathsf{sym}}(F^ {Hi}_{\neq},G^{Lo}_{0},H)+\mathcal{T}_{\mathsf{sym}}(F^{Lo}_{\neq},G^{Hi}_{0},H).\]
For the high-low term, we move the factor \(\sqrt{k^{2}/p}\) on \(H\) and use (4.3) to get
\[\mathcal{T}_{\mathsf{sym}}(F^{Hi}_{\neq},G^{Lo}_{0},H)\lesssim\left\langle \left(\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}\sqrt{\frac{k^{2}}{p}}m|\hat{F}^{Hi }_{\neq}|\ast|\mathcal{F}(\partial_{Y}G^{Lo}_{0})|\right),\sqrt{\frac{ \partial_{t}m^{d}}{m^{d}}}|H|\right\rangle.\]
Applying the Cauchy-Schwarz and Young convolution inequalities we get a bound in agreement with (4.7).
For the low-high term instead, we need to be careful in order to recover time-decay from \(\partial_{X}\Delta_{L}^{-1}\). This is because \(x\)-derivatives can be high in the \(F^{Lo}\) piece since \(G_{0}\) is concentrated on the zero \(x\)-frequencies. We then argue as follows: since \(\xi\) is the high-frequency, we have
\[\langle|k,\eta|\rangle^{N}\lesssim\langle k\rangle^{N}+\langle\eta\rangle^{N} \lesssim\langle|k,\eta-\xi|\rangle^{N}+\langle\xi\rangle^{N}\,.\]
This implies
\[m_{k}(t,\eta)\lesssim m_{k}(t,\eta-\xi)+\frac{\langle\xi\rangle^{N}}{\langle|k,\eta-\xi|\rangle^{N}}m_{k}(t,\eta-\xi).\]
Hence, using (4.3) we deduce that
\[\mathcal{T}_{\text{sym}}(F_{\neq}^{Lo},G_{0}^{Hi},H)\lesssim\mathcal{J}_{1}+ \mathcal{J}_{2}\]
where
\[\mathcal{J}_{1} :=\sum_{k}\iint\left(\sqrt{\frac{\partial_{t}m_{k}^{d}(t)}{m_{k }^{d}(t)}}\sqrt{\frac{k^{2}}{p_{k}(t)}}m_{k}(t)|\hat{F}_{\neq}^{Lo}|_{k}\right) (\eta-\xi)|\xi||\hat{G}_{0}^{Hi}|(\xi)\left(\sqrt{\frac{\partial_{t}m_{k}^{d} (t)}{m_{k}^{d}(t)}}|\hat{H}|_{k}\right)(\eta)\mathrm{d}\eta\mathrm{d}\xi\] \[\mathcal{J}_{2} :=\sum_{k}\iint\frac{|k|m_{k}(t,\eta-\xi)}{\langle|k,\eta-\xi| \rangle^{N}p_{k}(t,\eta-\xi)}|\hat{F}_{\neq}^{Lo}|_{k}(\eta-\xi)\langle\xi \rangle^{N}\,|\xi||\hat{G}_{0}^{Hi}|(\xi)\left(\sqrt{\frac{\partial_{t}m_{k}^{ d}(t)}{m_{k}^{d}(t)}}|\hat{H}|_{k}\right)(\eta)\mathrm{d}\eta\mathrm{d}\xi\]
For \(\mathcal{J}_{1}\) it is not difficult to get
\[\mathcal{J}_{1}\lesssim\|G_{0}\|_{H^{3}}\left\|\sqrt{\frac{\partial_{t}m^{d} }{m^{d}}}\sqrt{\frac{k^{2}}{p}}mF_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{ \partial_{t}m^{d}}{m^{d}}}H\right\|_{L^{2}}.\]
For \(\mathcal{J}_{2}\) instead, we have
\[\mathcal{J}_{2} \lesssim\left\|\partial_{X}(-\Delta_{L})^{-1}mF_{\neq}\right\|_{ H^{-N+2}}\left\|\partial_{Y}G_{0}\right\|_{H^{N}}\left\|\sqrt{\frac{\partial_{t}m^{d} }{m^{d}}}H\right\|_{L^{2}}\] \[\lesssim\frac{1}{\langle t\rangle^{2}}\left\|mF_{\neq}\right\|_{ L^{2}}\left\|\partial_{Y}G_{0}\right\|_{H^{N}}\left\|\sqrt{\frac{\partial_{t}m^{ d}}{m^{d}}}H\right\|_{L^{2}},\]
where in the last line we used (4.2) and \(N>10\). The bound (4.7) is then proved.
\(\bullet\)_Proof of (4.8):_ notice that, since we have \(\partial_{t}p/p\) in front of \(\hat{F}\), we always have \(F_{\neq}\). It is also enough to prove the bound for
\[\mathcal{S}^{1}_{\text{sym}}(F,G_{\neq},H)=\left|\left\langle\sqrt{\frac{k^{2}}{p}}m\left(\frac{\partial_{t}p}{p}\hat{F}_{\neq}*\hat{G}_{\neq}\right),H\right\rangle\right|.\]
Using that \(|\partial_{t}p/p|\leq 2\sqrt{k^{2}/p}\) and the algebra property of \(H^{N}\), we get
\[\mathcal{S}^{1}_{\text{sym}}(F,G_{\neq},H)\lesssim\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|\sqrt{\frac{k^{2}}{p}}mF_{\neq}\right\|_{L^{2}}\left\|mG_{\neq}\right\|_{L^{2}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}H\right\|_{L^{2}},\]
whence proving (4.8).
\(\bullet\)_Proof of (4.9):_ in this case \(\mathcal{S}=\mathcal{S}^{1}\) defined above. Then, analogously to what we have done to treat \(\mathcal{T}_{\text{sym}}(F_{\neq},G_{0},H)\), we use the paraproduct decomposition first
\[\mathcal{S}^{1}_{\text{sym}}(F,G_{0},H)\leq\mathcal{S}^{1}_{\text{sym}}(F^{Hi },G_{0}^{Lo},H)+\mathcal{S}^{1}_{\text{sym}}(F^{Lo},G_{0}^{Hi},H).\]
For the high-low piece, we can proceed as done in the proof of (4.8) to get
\[\mathcal{S}^{1}_{\text{sym}}(F^{Hi},G_{0}^{Lo},H)\lesssim\left\|\sqrt{\frac{k^ {2}}{p}}mF_{\neq}\right\|_{L^{2}}\left\|G_{0}\right\|_{H^{3}}\left\|\sqrt{\frac {\partial_{t}m^{d}}{m^{d}}}H\right\|_{L^{2}} \tag{4.20}\]
For the low-high piece, we argue as done for the low-high term in the proof of (4.7). Namely, we can split the derivatives with higher-order in \(x\) and \(y\). In the first case, namely the term corresponding to \(\mathcal{J}_{1}\) in the proof of (4.7), we argue as done for the low-high term and we prove the same bound as in (4.20). In the other case, we proceed as done for \(\mathcal{J}_{2}\) in the proof of (4.7). Overall, we get
\[\mathcal{S}_{\text{sym}}^{1}(F^{Lo},G_{0}^{Hi},H) \lesssim\left\|\sqrt{\frac{k^{2}}{p}}mF_{\neq}\right\|_{L^{2}} \left\|G_{0}\right\|_{H^{3}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}H \right\|_{L^{2}}\] \[\quad+\left\|\frac{\partial_{t}p}{p}mF_{\neq}\right\|_{H^{-N+2}} \left\|G_{0}\right\|_{H^{N}}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}H \right\|_{L^{2}}.\]
Since \(|\partial_{t}p|\lesssim\left\langle t\right\rangle\left\langle|k,\eta|\right\rangle^{2}\), using again Lemma 4.2 we have
\[\left\|\frac{\partial_{t}p}{p}mF_{\neq}\right\|_{H^{-N+2}}\lesssim\left\langle t \right\rangle\left\|(-\Delta_{L})^{-1}mF_{\neq}\right\|_{H^{-N+4}}\lesssim \frac{1}{\left\langle t\right\rangle}\left\|mF_{\neq}\right\|_{H^{-N+6}},\]
which proves (4.9) since \(N>6\).
### Bounds for the higher-order energy
The structure of the proof for the higher-order energy is analogous to what we have done for \(\mathsf{E}_{\text{sym}}\). However, bounds will be simpler because we do not have to exchange frequencies for the unbounded multiplier \(\sqrt{k^{2}/p}\). We define the transport and stretching nonlinear terms as
\[\mathcal{T}_{\mathsf{h.o.}}(F,G,H) :=\left|\left\langle m\mathcal{F}\left(\nabla^{\perp}\Delta_{L}^ {-1}F\cdot\nabla G\right),\hat{H}\right\rangle\right|,\] \[\mathcal{S}_{\mathsf{h.o.}}(F,G,H) :=\left|\left\langle m\bigg{(}\frac{\partial_{t}p}{p}\hat{F}* \big{(}\hat{G}-2\frac{\ell^{2}}{p}\hat{G}\big{)}\bigg{)},H\right\rangle\right|.\]
We have the following.
**Lemma 4.5**.: _Let \(m\) be the Fourier multiplier defined in (2.13) with \(N>10\). Then:_
\[\mathcal{T}_{\mathsf{h.o.}}(F_{\neq},G_{\neq},H)\lesssim\mathrm{e}^{-\delta_ {0}\nu^{\frac{1}{3}}t}\left\|\sqrt{\frac{k^{2}}{p}}mF_{\neq}\right\|_{L^{2}} \left\|\nabla_{L}mG_{\neq}\right\|_{L^{2}}\left\|H\right\|_{L^{2}}. \tag{4.21}\]
_Denoting \((\nabla^{\perp}\Delta_{L}^{-1}F)_{0}=(V_{F,0}^{1},0)\), one has_
\[\mathcal{T}_{\mathsf{h.o.}}(F_{0},G_{\neq},H) \lesssim\left\|V_{F,0}^{1}\right\|_{H^{N}}\left\|m\partial_{X}G_{ \neq}\right\|_{L^{2}}\left\|H_{\neq}\right\|_{L^{2}} \tag{4.22}\] \[\mathcal{T}_{\mathsf{h.o.}}(F_{\neq},G_{0},H) \lesssim\left\|H\right\|_{L^{2}}\bigg{(}\left\|G_{0}\right\|_{H^{3 }}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}\sqrt{\frac{k^{2}}{p}}mF_{\neq }\right\|_{L^{2}}\] (4.23) \[\qquad\qquad+\frac{1}{\left\langle t\right\rangle^{2}}\left\|mF _{\neq}\right\|_{L^{2}}\left\|\partial_{Y}G_{0}\right\|_{H^{N}}\bigg{)}\]
_For the stretching nonlinearities we have:_
\[\mathcal{S}_{\mathsf{h.o.}}(F,G_{\neq},H) \lesssim\,\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|\sqrt{\frac{k^{2}}{p}}mF_{\neq}\right\|_{L^{2}}\left\|mG_{\neq}\right\|_{L^{2}}\left\|H\right\|_{L^{2}}, \tag{4.24}\] \[\mathcal{S}_{\mathsf{h.o.}}(F,G_{0},H) \lesssim\,\left(\left\|\sqrt{\frac{k^{2}}{p}}mF_{\neq}\right\|_{L^{2}}\left\|G_{0}\right\|_{H^{3}}+\frac{1}{\left\langle t\right\rangle}\left\|mF_{\neq}\right\|_{L^{2}}\left\|G_{0}\right\|_{H^{N}}\right)\left\|H\right\|_{L^{2}}. \tag{4.25}\]
Proof of Lemma 4.5.: To prove (4.21), we simply observe that
\[\nabla^{\perp}\Delta_{L}^{-1}F_{\neq}\cdot\nabla G_{\neq}=\nabla_{L}^{\perp} \Delta_{L}^{-1}F_{\neq}\cdot\nabla_{L}G_{\neq},\qquad\text{and}\qquad\left\| \nabla_{L}\Delta_{L}^{-1}F_{\neq}\right\|_{L^{2}}\lesssim\left\|\sqrt{\frac{k ^{2}}{p}}F_{\neq}\right\|_{L^{2}}.\]
Since \(m^{d},m^{\nu},m^{s}\) are bounded Fourier multipliers, using the algebra property of \(H^{N}\) we deduce that
\[\mathcal{T}_{\text{h.o.}}(F_{\neq},G_{\neq},H)\lesssim\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|\nabla_{L}\Delta_{L}^{-1}mF_{\neq}\right\|_{L^{2}}\left\|\nabla_{L}mG_{\neq}\right\|_{L^{2}}\left\|H\right\|_{L^{2}}\]
whence proving (4.21).
Turning our attention to (4.22), we first observe that
\[\mathcal{T}_{\text{h.o.}}(F_{0},G_{\neq},H)=\left|\left\langle m\mathcal{F} \left(V_{F,0}^{1}\partial_{X}G_{\neq}\right),\hat{H}_{\neq}\right\rangle\right|\]
since \(\left\langle m\mathcal{F}\left(V_{F,0}^{1}\partial_{X}G_{\neq}\right),\hat{H}_{0}\right\rangle=-\left\langle V_{F,0}^{1}G,m\partial_{X}H_{0}\right\rangle=0\). The proof of (4.22) then easily follows as an application of the Cauchy-Schwarz and Young inequalities.
The proof of the bounds (4.23)-(4.25) is identical to the ones for (4.7)-(4.9). This is because in the latter bounds we have only moved the factor \(\sqrt{k^{2}/p}\) on the function \(H\).
With Lemma 4.5 at hand, we show how to improve (H\({}_{\mathsf{h.o.}}\)).

Proof: improvement of (H\({}_{\mathsf{h.o.}}\)).: For the transport nonlinearity (3.14), recall that
\[\left\|m(\Omega_{\neq},J_{\neq})\right\|_{L^{2}}\lesssim\nu^{-\frac{1}{6}} \sqrt{\mathsf{D}_{\text{h.o.}}}.\]
Hence, combining (4.21)-(4.23), since \(\sqrt{k^{2}/p}(\hat{\Omega},\hat{J})=(Z,Q)\), we have
\[\mathsf{T}_{\text{h.o.}}\lesssim \nu^{-\frac{1}{2}}\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}t}} \sqrt{\mathsf{E}_{\text{sym}}}\sqrt{\mathsf{E}_{\text{h.o.}}}\sqrt{\mathsf{D }_{\text{h.o.}}}+\nu^{-\frac{1}{2}-\frac{1}{6}}\sqrt{\mathsf{E}_{0}}\mathsf{D }_{\text{h.o.}}\] \[+\left(\sqrt{\mathsf{E}_{0}}\sqrt{\mathsf{D}_{\text{sym}}}+\frac {1}{\left\langle t\right\rangle}\nu^{-\frac{1}{6}}\sqrt{\mathsf{D}_{\text{h.o.}}}\sqrt{\mathsf{D}_{0}}\right)\sqrt{\mathsf{E}_{\text{h.o.}}}.\]
Similarly, from (4.24)-(4.25), using that
\[\frac{1}{\left\langle t\right\rangle}\left\|m(\Omega_{\neq},J_{ \neq})\right\|_{L^{2}}\left\|(\Omega_{0},J_{0})\right\|_{H^{N}}\left\|m(\Omega, J)\right\|_{L^{2}}\] \[=\frac{1}{\left\langle t\right\rangle}\left\|m(\Omega_{\neq},J_{ \neq})\right\|_{L^{2}}\left\|\partial_{Y}(U_{0}^{1},B_{0}^{1})\right\|_{H^{N} }\left\|m(\Omega,J)\right\|_{L^{2}}\lesssim\frac{1}{\left\langle t\right\rangle }\nu^{-\frac{1}{6}-\frac{1}{2}}\sqrt{\mathsf{D}_{\text{h.o.}}}\sqrt{\mathsf{D }_{0}}\sqrt{\mathsf{E}_{\text{h.o.}}}\]
we have
\[\mathsf{S}_{\text{h.o.}}\lesssim \nu^{-\frac{1}{6}}\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}t}} \sqrt{\mathsf{E}_{\text{sym}}}\sqrt{\mathsf{E}_{\text{h.o.}}}\sqrt{\mathsf{D }_{\text{h.o.}}}+\sqrt{\mathsf{E}_{\text{h.o.}}}\sqrt{\mathsf{E}_{\text{sym}}} \sqrt{\mathsf{E}_{0}}+\frac{1}{\left\langle t\right\rangle}\nu^{-\frac{2}{3}} \sqrt{\mathsf{D}_{\text{h.o.}}}\sqrt{\mathsf{D}_{0}}\sqrt{\mathsf{E}_{\text{ h.o.}}}.\]
Integrating in time and applying the bootstrap hypothesis, we get
\[\int_{0}^{t}\mathsf{T}_{\text{h.o.}}+\mathsf{S}_{\text{h.o.}} \mathrm{d}\tau\lesssim \varepsilon^{2}\nu^{-\frac{1}{2}}\int_{0}^{t}\left\langle\tau \right\rangle\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}\tau}}\sqrt{\mathsf{D}_{ \text{h.o.}}}\mathrm{d}\tau+\varepsilon\nu^{-\frac{2}{3}}\int_{0}^{t}\mathsf{D }_{\text{h.o.}}\mathrm{d}\tau\] \[+\varepsilon^{2}\int_{0}^{t}\left\langle\tau\right\rangle\sqrt{ \mathsf{D}_{\text{sym}}}\mathrm{d}\tau+\varepsilon\nu^{-\frac{1}{6}}\int_{0}^{t} \sqrt{\mathsf{D}_{\text{h.o.}}}\sqrt{\mathsf{D}_{0}}\mathrm{d}\tau\] \[+\varepsilon^{3}\int_{0}^{t}\left\langle\tau\right\rangle\mathrm{d }\tau+\varepsilon\nu^{-\frac{2}{3}}\int_{0}^{t}\sqrt{\mathsf{D}_{\text{h.o.}}} \sqrt{\mathsf{D}_{0}}\mathrm{d}\tau.\]
Applying the Cauchy-Schwarz inequality and the bootstrap hypotheses, we get
\[\varepsilon^{2}\nu^{-\frac{1}{2}}\int_{0}^{t}\langle\tau\rangle\, \mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}\tau}\sqrt{\mathsf{D}_{\mathsf{h.o.}}} \mathrm{d}\tau\lesssim\varepsilon^{3}\left\langle t\right\rangle\nu^{-\frac{1} {2}}\left(\int_{0}^{t}\langle\tau\rangle^{2}\,\mathrm{e}^{-2\delta_{0}\nu^{ \frac{1}{3}}\tau}\mathrm{d}\tau\right)^{\frac{1}{2}}\lesssim(\varepsilon\nu^{- \frac{2}{3}})\varepsilon^{2}\left\langle t\right\rangle^{2},\] \[\varepsilon^{2}\int_{0}^{t}\langle\tau\rangle\sqrt{\mathsf{D}_{ \mathsf{sym}}}\mathrm{d}\tau\lesssim\varepsilon^{3}\left\langle t\right\rangle^ {\frac{3}{2}},\] \[\varepsilon\nu^{-\frac{1}{6}}\int_{0}^{t}\sqrt{\mathsf{D}_{ \mathsf{h.o.}}}\sqrt{\mathsf{D}_{0}}\mathrm{d}\tau\lesssim(\varepsilon\nu^{- \frac{1}{6}})\varepsilon^{2}\left\langle t\right\rangle,\] \[\varepsilon\nu^{-\frac{2}{3}}\int_{0}^{t}\sqrt{\mathsf{D}_{ \mathsf{h.o.}}}\sqrt{\mathsf{D}_{0}}\mathrm{d}\tau\lesssim(\varepsilon\nu^{- \frac{2}{3}})\varepsilon^{2}\left\langle t\right\rangle.\]
Integrating (3.10) in time and using again the bootstrap hypotheses, when \(\varepsilon\ll\nu^{\frac{2}{3}}\) we get
\[\mathsf{E}_{\mathsf{h.o.}}+\frac{1}{16}\int_{0}^{t}\mathsf{D}_{\mathsf{h.o.}}\mathrm{d}\tau \lesssim 4\int_{0}^{t}\sqrt{\mathsf{E}_{\mathsf{sym}}}\sqrt{\mathsf{E}_{\mathsf{h.o.}}}\mathrm{d}\tau+\varepsilon^{2}\left\langle t\right\rangle^{2}\leqslant 4\sqrt{10}\sqrt{C_{1}}\varepsilon^{2}\int_{0}^{t}\left\langle\tau\right\rangle\mathrm{d}\tau+\varepsilon^{2}\left\langle t\right\rangle^{2}\] \[\leqslant(8\sqrt{10}\sqrt{C_{1}}+1)\varepsilon^{2}\left\langle t\right\rangle^{2}.\]
It is then enough that
\[8\sqrt{10}\sqrt{C_{1}}+1\leqslant\frac{C_{1}}{2},\]
which is certainly true for \(C_{1}=4000\).
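Indeed, since \(\sqrt{10}\sqrt{4000}=\sqrt{40000}=200\), one checks the arithmetic directly:
\[8\sqrt{10}\sqrt{C_{1}}+1\Big{|}_{C_{1}=4000}=8\cdot 200+1=1601\leqslant 2000=\frac{C_{1}}{2}.\]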
### Bounds on the zero modes
We finally show how to improve (H\({}_{0}\)).
Proof.: Using the uniform boundedness of \(m^{d},m^{\nu},m^{s}\) and (4.3), we first observe that
\[\left\|(U_{\neq}^{2},B_{\neq}^{2})\right\|_{H^{N}}=\left\|\partial_{X}\Delta_ {L}^{-1}(\Omega_{\neq},J_{\neq})\right\|_{H^{N}}\lesssim\mathrm{e}^{-\delta_ {0}\nu^{\frac{1}{3}}t}\left\|\sqrt{\frac{\partial_{t}m^{d}}{m^{d}}}m(Z,Q) \right\|_{L^{2}}, \tag{4.26}\]
\[\left\|\partial_{X}(U_{\neq}^{1},B_{\neq}^{1})\right\|_{H^{N}}=\left\| \partial_{X}(\partial_{Y}-t\partial_{X})\Delta_{L}^{-1}(\Omega_{\neq},J_{\neq })\right\|_{H^{N}}\lesssim\mathrm{e}^{-\delta_{0}\nu^{\frac{1}{3}}t}\left\|m(Z,Q)\right\|_{L^{2}}. \tag{4.27}\]
where we also used \(|\partial_{t}p/p|\lesssim\sqrt{k^{2}/p}\). Applying Cauchy-Schwarz and the algebra property of \(H^{N}\) we see that we can bound \(\mathsf{R}_{\neq}\) in (3.15) by
\[\mathsf{R}_{\neq} \lesssim\left\|\left(U_{\neq}^{2},B_{\neq}^{2})\right\|_{H^{N}} \left\|(U_{\neq}^{1},B_{\neq}^{1})\right\|_{H^{N}}\left\|\partial_{Y}(U_{0}^{1 },B_{0}^{1})\right\|_{H^{N}}\] \[\quad+\frac{1}{\left\langle t\right\rangle^{2}}\left\|(U_{\neq}^{ 2},B_{\neq}^{2})\right\|_{H^{N}}\left\|(\Omega_{\neq},J_{\neq})\right\|_{H^{N} }\left\|\partial_{Y}(\Omega_{0},J_{0})\right\|_{H^{N}}\] \[\quad+\frac{1}{\left\langle t\right\rangle^{2}}\left\|\partial_{X} (U_{\neq}^{1},B_{\neq}^{1})\right\|_{H^{N}}\left\|(\Omega_{\neq},J_{\neq}) \right\|_{H^{N}}\left\|J_{0}\right\|_{H^{N}}.\]
Combining the bound above with (4.26)-(4.27) and using the bootstrap hypothesis, we get
\[\mathsf{R}_{\neq} \lesssim\mathrm{e}^{-2\delta_{0}\nu^{\frac{1}{3}}t}\bigg{(}\nu^ {-\frac{1}{2}}\sqrt{\mathsf{E}_{\mathsf{sym}}}\sqrt{\mathsf{D}_{\mathsf{sym}}} \sqrt{\mathsf{D}_{0}}+\frac{\nu^{-\frac{1}{2}}}{\left\langle t\right\rangle} \sqrt{\mathsf{E}_{\mathsf{h.o.}}}\sqrt{\mathsf{D}_{\mathsf{sym}}}\sqrt{\mathsf{ D}_{0}}+\frac{1}{\left\langle t\right\rangle}\sqrt{\mathsf{E}_{\mathsf{sym}}}\sqrt{ \mathsf{E}_{\mathsf{h.o.}}}\sqrt{\mathsf{E}_{0}}\bigg{)}\] \[\lesssim(\varepsilon\nu^{-\frac{1}{2}})(\mathsf{D}_{\mathsf{sym }}+\mathsf{D}_{0})+\varepsilon^{3}\mathrm{e}^{-2\delta_{0}\nu^{\frac{1}{3}}t}.\]
Integrating (3.11) in time, we get
\[\mathsf{E}_{0}+\int_{0}^{t}\mathsf{D}_{0}\mathrm{d}\tau\leqslant\varepsilon^{2 }(1+C(\varepsilon\nu^{-\frac{1}{2}}+\varepsilon\nu^{-\frac{1}{3}})),\]
for some constant \(C>0\). Therefore, when \(\varepsilon\ll\nu^{\frac{2}{3}}\) we see that we can improve from the constant \(100\) in (H\({}_{0}\)) to \(50\) as desired, whence completing the proof of Proposition 3.5.
**Acknowledgments.** The author would like to thank Ruizhao Zi for sharing their result [14]. The research of MD was supported by the SNSF Grant 182565, by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number M822.00034 and by the GNAMPA-INdAM.
|
2306.03128 | Dark Matter Through the Axion-Gluon Portal | Axion-like-particles are a well-motivated extension of the Standard Model
that can mediate interactions between the dark matter and ordinary matter. Here
we consider an axion portal between the two sectors, where the axion couples to
dark matter and to QCD gluons. We establish the relevant processes of interest
across the scales of dark matter and axion masses and couplings, identify the
distinct mechanisms that control the dark matter relic abundance in each case,
and extract the resulting experimental signatures of the gluonic axion portal
to dark matter. | Patrick J. Fitzpatrick, Yonit Hochberg, Eric Kuflik, Rotem Ovadia, Yotam Soreq | 2023-06-05T18:00:01Z | http://arxiv.org/abs/2306.03128v1 | # Dark Matter Through the Axion-Gluon Portal
###### Abstract
Axion-like-particles are a well-motivated extension of the Standard Model that can mediate interactions between the dark matter and ordinary matter. Here we consider an axion portal between the two sectors, where the axion couples to dark matter and to QCD gluons. We establish the relevant processes of interest across the scales of dark matter and axion masses and couplings, identify the distinct mechanisms that control the dark matter relic abundance in each case, and extract the resulting experimental signatures of the gluonic axion portal to dark matter.
## I Introduction
Dark matter (DM) makes up the vast majority of the matter in our universe, but we still know little about its particle nature. Many processes can be considered to set its relic abundance in the early universe, including \(2\to 2\) annihilations [1; 2; 3; 4; 5; 6; 7], \(n\to 2\) annihilations [8; 9; 10; 11], as well as decays and inverse decays [12; 13]. (For recent reviews, see _e.g._ Refs. [14; 15].) In many frameworks, couplings between the DM and the Standard Model (SM) should be present in order to mediate interactions between the dark and visible particles, serving as a 'portal' between the sectors.
A well-motivated portal is that of the axion, or axion-like-particle (ALP). ALPs as mediators to the dark sector have been studied in Refs. [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. In this paper, we consider ALP couplings to SM gluons. Via this coupling, the ALP essentially couples to the QCD states of the SM, whose degrees of freedom change as one flows through the QCD confining phase transition in the early universe. Throughout the cosmological history, one can study the relative importance of the various processes that can occur, and determine their impact on the DM relic abundance. The DM abundance can result from either freeze-out or freeze-in processes, with a variety of existing experiments placing important constraints and many future experiments set to probe novel regions of the parameter space.
This paper is organized as follows. In Section II, we introduce the model and in Section III we study the axion and DM relic abundance. Sections IV and V analyze the DM phases of the theory, which include freeze-in and freeze-out, respectively. Current experimental constraints and future probes are presented in Section VI. We present our results in Section VII and conclude in Section VIII. Appendix A presents the thermally averaged rate calculations. In Appendix B, we elaborate on the analytical estimates for freeze-in. Appendix C adds further details about the model. Appendix D explains in detail the experimental bounds presented in this work.
## II Model
We begin by presenting the theory that we will consider in this work. The model is an extension of the SM with an axion, \(a\), and a Dirac fermion DM candidate, \(\chi\). Generalizing to a Majorana fermion is straightforward. We consider the axion, at some UV scale, to couple only to gluons and to the DM. This is similar to a KSVZ axion [33; 34] where the heavy quarks are electroweak singlets.
The effective Lagrangian at the UV scale \(\Lambda=8\pi f_{a}\) is given by
\[\mathcal{L}= \mathcal{L}_{\text{SM}}+\frac{1}{2}\partial^{\mu}a\partial_{\mu} a-\frac{m_{a}^{2}}{2}a^{2}+i\overline{\chi}\not{D}\chi-m_{\chi}\overline{\chi}\chi\] \[-ic_{\chi}m_{\chi}\frac{a}{f_{a}}\overline{\chi}\gamma_{5}\chi- \frac{\alpha_{s}}{8\pi}\frac{a}{f_{a}}G^{a\mu\nu}\tilde{G}_{a\mu\nu}\,. \tag{1}\]
Here \(G^{a\mu\nu}\) is the gluon field strength, \(\tilde{G}^{a}_{\mu\nu}\equiv\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}G^{a\rho\sigma}\) is its dual, \(\alpha_{s}\) is the coupling strength of the strong force, \(m_{a}\) and \(m_{\chi}\) are the axion and DM masses, respectively, \(f_{a}\) is the axion decay constant defined by its coupling to gluons as above, and \(c_{\chi}\) is a dimensionless coefficient parameterizing the coupling of the axion to the DM relative to its coupling to the SM gluons. For simplicity, we take \(c_{\chi}=1\) throughout this work. The RG flow will create loop-induced couplings to other SM particles at lower scales as one flows below \(\Lambda\). We limit ourselves to cases where \(m_{\chi},m_{a}<8\pi f_{a}\) for validity of our computations. The axion mass term is the sum of a dynamical term (which determines the mass of the QCD axion) \(m_{a,\text{QCD}}^{2}\simeq m_{\pi}^{2}f_{\pi}^{2}/f_{a}^{2}\) and a bare term \(m_{a,0}^{2}\). To avoid fine-tuning of the contributions to the axion mass we consider only \(m_{a}\geq m_{a,\text{QCD}}\). Note that an axion that solves the CP problem but is heavier than the standard QCD axion could realize the framework presented here, as in _e.g._ Refs. [35; 36; 37; 38; 39].
At temperatures and axion masses well above the QCD confinement scale \(\Lambda_{\text{QCD}}\sim 200\,\text{MeV}\), QCD can be treated perturbatively and therefore we need only take into account axion-gluon interactions. However, close to and below the QCD scale, one must consider axion-hadron interactions. These interactions have been studied in detail for scales less than \(4\pi f_{\pi}\) using chiral perturbation theory (\(\chi\)PT) in Ref. [40]. This analysis has been extended to scales in the range of \(1-3\,\mathrm{GeV}\) in Ref. [41] by using data-driven methods. Close to \(\Lambda_{\mathrm{QCD}}\), the dynamics of the axion are governed by kinetic and mass mixing with the \(\pi^{0}\), \(\eta\), \(\eta^{\prime}\) mesons. At scales much below \(\Lambda_{\mathrm{QCD}}\), the leading order dynamics stem from a loop-induced coupling to photons,
\[\mathcal{L}\supset-\frac{c_{\gamma}\alpha_{\mathrm{EM}}}{8\pi}\frac{a}{f_{a}}F _{\mu\nu}\tilde{F}^{\mu\nu}\,, \tag{2}\]
with \(F_{\mu\nu}\) and \(\tilde{F}_{\mu\nu}\) the photon field strength and its dual and \(\alpha_{\mathrm{EM}}\) the electromagnetic coupling strength. We take \(c_{\gamma}=1.92\) at low energy, matching the KSVZ axion [40; 42].
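Since Eq. (2) fixes the low-energy photon coupling, the corresponding two-photon width follows from the standard ALP result \(\Gamma_{a\to\gamma\gamma}=c_{\gamma}^{2}\alpha_{\mathrm{EM}}^{2}m_{a}^{3}/(256\pi^{3}f_{a}^{2})\). A minimal transcription is sketched below; it is only valid where the photon mode dominates the decay (\(m_{a}\lesssim 3m_{\pi}\)), and the sample inputs are illustrative, not values used elsewhere in this work.

```python
import numpy as np

ALPHA_EM = 1.0 / 137.036  # electromagnetic coupling at low energy

def gamma_a_to_photons(m_a, f_a, c_gamma=1.92):
    """Two-photon width implied by Eq. (2), in GeV:
    Gamma = c_gamma^2 alpha_EM^2 m_a^3 / (256 pi^3 f_a^2)."""
    return c_gamma**2 * ALPHA_EM**2 * m_a**3 / (256.0 * np.pi**3 * f_a**2)

# example: a 100 MeV axion with f_a = 1 TeV (illustrative numbers)
print(gamma_a_to_photons(0.1, 1.0e3))
```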
## III Axion and Dark Matter Production
We begin by writing down the Boltzmann equations governing the evolution of the axion and DM abundances. The relevant Boltzmann equations are
\[\dot{n}_{a}+3Hn_{a}= -\Gamma_{a\,\mathrm{SM}\to\mathrm{SM}}\left(n_{a}-n_{a}^{\mathrm{eq}}\right)\] \[-\Gamma_{a\to\chi\bar{\chi}}\left(n_{a}-n_{\chi}^{2}\frac{n_{a}^{\mathrm{eq}}}{n_{\chi}^{\mathrm{eq}2}}\right)\] \[+\left\langle\sigma v\right\rangle_{\chi\bar{\chi}\to aa}\left(n_{\chi}^{2}-n_{a}^{2}\frac{n_{\chi}^{\mathrm{eq}2}}{n_{a}^{\mathrm{eq}2}}\right)\,, \tag{3}\]
and
\[\dot{n}_{\chi}+3Hn_{\chi}= -\left\langle\sigma v\right\rangle_{\chi\bar{\chi}\to\mathrm{SM}}^{\mathrm{sub}}\left(n_{\chi}^{2}-n_{\chi}^{\mathrm{eq}2}\right)\] \[-\left\langle\sigma v\right\rangle_{\chi\bar{\chi}\to aa}\left(n_{\chi}^{2}-n_{a}^{2}\frac{n_{\chi}^{\mathrm{eq}2}}{n_{a}^{\mathrm{eq}2}}\right)\] \[+\Gamma_{a\to\chi\bar{\chi}}\left(n_{a}-n_{\chi}^{2}\frac{n_{a}^{\mathrm{eq}}}{n_{\chi}^{\mathrm{eq}2}}\right)\] \[+\sum_{P=\pi^{0},\eta,\eta^{\prime}}\Gamma_{P\to\chi\bar{\chi}}\left(n_{P}^{\mathrm{eq}}-n_{\chi}^{2}\frac{n_{P}^{\mathrm{eq}}}{n_{\chi}^{\mathrm{eq}2}}\right)\,. \tag{4}\]
Here \(n_{i}^{\mathrm{eq}}\) are the equilibrium abundances for the different species and \(H\) is the Hubble parameter. For meson densities we take the densities to vanish at temperatures above the QCD phase-transition temperature \(T_{\mathrm{QCD}}\) and as a Bose-Einstein distribution below it, \(n_{P}^{\mathrm{eq}}=n_{P}^{\mathrm{BE}}(T)\Theta(T_{\mathrm{QCD}}-T)\)[43]. We discuss the contribution of each collision term in what follows, and note that \(\left\langle\sigma v\right\rangle_{\chi\bar{\chi}\to\mathrm{SM}}\) includes both \(2\to 2\) annihilations (such as \(\chi\bar{\chi}\to gg\)) and \(3\to 2\) annihilations (such as \(\chi\bar{\chi}\,g\to gg\)). Depending on the dominant process, the DM abundance may be produced in the early universe in a variety of ways, including freeze-in and freeze-out processes. For the convenience of the reader, throughout this section we provide analytical approximations for various cross-sections and rates. We note that all figures presented in this work are obtained via full numerical computations of the relevant quantities.
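To make the structure of these equations concrete, the following minimal sketch integrates the \(\chi\) equation in yield variables \(Y=n/s\) with \(x=m_{\chi}/T\), keeping only the \(\chi\bar{\chi}\to aa\) term (i.e. assuming the axion remains in equilibrium, as in Section V) and using the low-temperature approximation of Eq. (6) below. The parameter values, the fixed \(g_{\star}\), and the starting point \(x_{0}=10\) are illustrative choices, not those used for the figures in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn  # modified Bessel function K_n

# Illustrative inputs (GeV); chosen near the Eq. (22) freeze-out relation.
M_PL, m_chi, f_a, g_star = 1.22e19, 110.0, 100.0, 100.0

def entropy(T):   # s(T) with g_star held fixed for simplicity
    return 2.0 * np.pi**2 / 45.0 * g_star * T**3

def hubble(T):    # radiation-dominated Hubble rate
    return 1.66 * np.sqrt(g_star) * T**2 / M_PL

def Y_eq(x):      # Maxwell-Boltzmann equilibrium yield, g = 2
    return 45.0 * 2.0 / (4.0 * np.pi**4 * g_star) * x**2 * kn(2, x)

def sigma_v(T):   # Eq. (6): chi chibar -> aa for m_a, T << m_chi, c_chi = 1
    return m_chi * T / (64.0 * np.pi * f_a**4)

def rhs(x, Y):
    T = m_chi / x
    return -entropy(T) * sigma_v(T) / (hubble(T) * x) * (Y[0]**2 - Y_eq(x)**2)

sol = solve_ivp(rhs, (10.0, 1000.0), [Y_eq(10.0)], method="LSODA",
                rtol=1e-8, atol=1e-18)
print("m_chi * Y_chi(infinity) ~ %.2e eV" % (m_chi * sol.y[0, -1] * 1e9))
```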
### Axion thermalization: \(\Gamma_{a\,\mathrm{SM}\to\mathrm{SM}}\)
To understand the cosmological dynamics of the model, it is important to establish when the \(a\) particles are in equilibrium with the SM bath and when they are not. Since the DM \(\chi\) couples to the SM bath particles only via its interactions with the axion \(a\), thermal decoupling of \(a\) from the SM bath necessarily implies the thermal decoupling of \(\chi\) as well.
The question of thermal axion production resulting from the axion coupling to gluons has been discussed in detail in the literature [44; 45], including the use of thermal field theory to account for many-body plasma effects (_e.g._ thermal masses due to screening). Ref. [46] extends the analysis to temperatures below the QCD phase transition, where \(\pi\pi\to\pi a\) is the dominant process. The latter is calculated in chiral perturbation theory, which is valid up to temperatures of \(T\sim 62\,\mathrm{MeV}\)[47]. This issue is addressed using the prescription presented in Ref. [46], where interpolation is used to match between the rate \(\Gamma_{\pi\pi\to\pi a}\) at \(T<62\,\mathrm{MeV}\) and the rate \(\Gamma_{\mathrm{UV}}=\Gamma_{gg\to ag}+\Gamma_{q\bar{q}\to ag}+\Gamma_{qg\to aq}+\Gamma_{\bar{q}g\to a\bar{q}}\) at \(T>2\,\mathrm{GeV}\). The rates in the interpolation span 10 orders of magnitude; therefore, the axion production rates at temperatures between \(\sim 60\,\mathrm{MeV}-2\,\mathrm{GeV}\) should be taken with caution. However, as we will see below, the final DM abundance is sensitive to this only for axion masses close to the QCD phase transition. Additionally, we consider the rates of the QED Primakoff processes \(\Gamma_{e\gamma\to ea}+\Gamma_{\bar{e}\gamma\to\bar{e}a}+\Gamma_{e\bar{e}\to\gamma a}\) provided in Refs. [48; 49], which yield the dominant contributions at \(T\ll 62\,\mathrm{MeV}\).
Importantly, all the analyses mentioned above are oriented towards a massless pseudo-scalar--motivated by the QCD-axion--and assume a relativistic axion where \(m_{a}\ll T\). For \(m_{a}>T\), the dominant rate of axion production comes from decays and inverse decays. The axion decay rate was estimated in Ref. [41] (see Fig. S1 therein). Above \(m_{a}\gtrsim 2\,\mathrm{GeV}\), the decay can be calculated perturbatively to two gluons, while for \(3m_{\pi}\lesssim m_{a}\lesssim 2\,\mathrm{GeV}\), the decays occurs to hadrons and photons and for \(m_{a}\lesssim 3m_{\pi}\) predominantly to photons. We denote the axion decay rate to the bath as \(\Gamma_{a\to\mathrm{SM}}\).
We add up all the calculated rates in both the relativistic and non-relativistic regimes and use these for \(\Gamma_{a\,\mathrm{SM}\to\mathrm{SM}}\). Note that \(\Gamma_{a\,\mathrm{SM}\to\mathrm{SM}}\) includes in it \(\Gamma_{a\to\mathrm{SM}}\). The rate is therefore a function of the axion mass and the temperature. However, for \(T\sim m_{a}\), the previously calculated rates in the relativistic and non-relativistic regimes are not expected to be precise. In particular, there may be small corrections to the total rate, where the finite temperature rate calculated by Refs. [44; 45; 46] in the relativistic regime may still dominate over the decay
rate even as \(T\) approaches \(m_{a}\). This effect mostly occurs at temperatures right above the QCD phase transition, where \(\alpha_{s}\) is large, and higher order effects become more important.
### Axion decay to dark matter: \(\Gamma_{a\to\chi\bar{\chi}}\)
For \(m_{a}>2m_{\chi}\), the axion can decay directly into the DM. If allowed, this will be the dominant source of DM freeze-in. The axion decay rate into DM is given by
\[\Gamma_{a\to\chi\bar{\chi}}=m_{a}\frac{c_{\chi}^{2}m_{\chi}^{2}}{8\pi f_{a}^{ 2}}\sqrt{1-\frac{4m_{\chi}^{2}}{m_{a}^{2}}}\,. \tag{5}\]
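As a minimal illustration, Eq. (5) translates directly into code (the threshold behavior below \(m_{a}=2m_{\chi}\) is made explicit; units are GeV, and the sample values are illustrative):

```python
import numpy as np

def gamma_a_to_chichi(m_a, m_chi, f_a, c_chi=1.0):
    """Eq. (5): axion partial width to DM, in GeV; vanishes below threshold."""
    if m_a <= 2.0 * m_chi:
        return 0.0
    return (m_a * c_chi**2 * m_chi**2 / (8.0 * np.pi * f_a**2)
            * np.sqrt(1.0 - 4.0 * m_chi**2 / m_a**2))

# example: a 1 GeV axion decaying to 0.1 GeV DM with f_a = 1 TeV
print(gamma_a_to_chichi(1.0, 0.1, 1.0e3))
```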
### Axion annihilation to dark matter: \(\langle\sigma v\rangle_{\chi\bar{\chi}\to aa}\)
At leading order the DM annihilation into axions can be calculated from the tree-level \(u\)- and \(t\)-channel diagrams. A detailed calculation of the thermally averaged cross-section related to this process is given in Appendix A.3. For \(m_{a},T\ll m_{\chi}\)--relevant for much of the freeze-in and freeze-out parameter space--the thermally averaged cross section is well-approximated by
\[\langle\sigma v\rangle_{\chi\bar{\chi}\to aa}\simeq\frac{c_{\chi}^{4}m_{\chi}^ {2}}{64\pi f_{a}^{4}}\frac{T}{m_{\chi}}\,. \tag{6}\]
### Dark matter bath annihilation and production: \(\langle\sigma v\rangle_{\chi\bar{\chi}\to\mathrm{SM}}^{\mathrm{sub}}\)
Alternatively, the DM can be produced directly from the bath, or annihilate directly into the bath, bypassing on-shell axions. To leading order, the process proceeds via an off-shell axion \(a^{*}\). Thus, the thermally averaged cross section can be calculated directly from the axion production rates already presented:
\[\langle\sigma v\rangle_{\chi\bar{\chi}\to\mathrm{SM}}=\\ \frac{1}{(n_{\chi}^{\mathrm{eq}})^{2}}\int\frac{dm_{a^{*}}^{2}}{ \pi}\frac{m_{a^{*}}\Gamma_{a^{*}\to\chi\bar{\chi}}\Gamma_{a^{*}\mathrm{SM}\to \mathrm{SM}}n_{a^{*}}^{\mathrm{eq}}}{(m_{a^{*}}^{2}-m_{a}^{2})^{2}+(m_{a} \Gamma_{a})^{2}}\,, \tag{7}\]
where \(\Gamma_{a}\) is the total axion decay rate. Within the integral, the rates (\(\Gamma\)'s) should be evaluated at the off-shell axion mass, \(m_{a^{*}}\). A full derivation can be found in Appendix A.1.
In the narrow-width approximation, where the axion is produced on-shell, one can verify that
\[\langle\sigma v\rangle_{\chi\bar{\chi}\to\mathrm{SM}}^{\mathrm{on-shell}}\; \Rightarrow\;\frac{n_{a}^{\mathrm{eq}}}{(n_{\chi}^{\mathrm{eq}})^{2}}\;\Gamma _{a\,\mathrm{SM}\to\mathrm{SM}}\;\mathrm{BR}(a\to\chi\bar{\chi})\,, \tag{8}\]
where \(\mathrm{BR}(a\to\chi\bar{\chi})\) is the branching ratio for the axion decay into DM. The possibility that the DM produces an on-shell axion which then decays back to the SM bath is, in fact, already included in the other terms in the Boltzmann equations and must then be subtracted from this term to avoid double counting. The on-shell subtracted thermally averaged cross section is therefore given by
\[\langle\sigma v\rangle_{\chi\bar{\chi}\to\mathrm{SM}}^{\mathrm{sub}}=\langle \sigma v\rangle_{\chi\bar{\chi}\to\mathrm{SM}}-\langle\sigma v\rangle_{\chi \bar{\chi}\to\mathrm{SM}}^{\mathrm{on-shell}}\,. \tag{9}\]
Far from the axion resonance and for \(T\ll m_{\chi}\), we have the simple relationship:
\[\langle\sigma v\rangle_{\chi\bar{\chi}\to\mathrm{SM}}^{\mathrm{sub}}\Big{|}_ {T\ll m_{\chi}}\!\!\!\!\!\simeq\frac{16c_{\chi}^{2}m_{\chi}^{6}}{(m_{a}^{2}-4 m_{\chi}^{2})^{2}f_{a}^{2}}\frac{\Gamma_{a^{*}\to\mathrm{SM}}(2m_{\chi})}{(2m_{ \chi})^{3}}\,. \tag{10}\]
In the other regime, for high temperatures, \(T\gg m_{a},m_{\chi}\)--which will be relevant for freeze-in of the DM--the thermally averaged cross section takes on the simple form
\[\langle\sigma v\rangle_{\chi\bar{\chi}\to\mathrm{SM}}^{\mathrm{sub}} \Big{|}_{T\gg m_{a}>2m_{\chi}}\\ =\frac{n_{a}^{\mathrm{eq}}}{(n_{\chi}^{\mathrm{eq}})^{2}}\frac{2c _{\chi}^{2}m_{\chi}^{2}T^{3}}{\pi^{2}f_{a}^{2}}\frac{\Gamma_{a^{*}\mathrm{SM} \to\mathrm{SM}}(\tilde{m}_{a})}{\tilde{m}_{a}^{3}}\,, \tag{11}\]
where \(\Gamma_{a^{*}\mathrm{SM}\to\mathrm{SM}}(\tilde{m}_{a})/\tilde{m}_{a}^{3}\) should be evaluated at \(\tilde{m}_{a}\simeq 1.8T\) where the integral in Eq. (7) is dominated. As this value falls within the region of uncertainty of the rate \(\Gamma_{a^{*}\mathrm{SM}\to\mathrm{SM}}\), we approximate it by using only the decay rate contribution \(\Gamma_{a^{*}\mathrm{SM}\to\mathrm{SM}}\simeq\Gamma_{a^{*}\to\mathrm{SM}}\). This introduces an \(\mathcal{O}(10-100)\) uncertainty in Eq. (11), which corresponds to a correction of \(\mathcal{O}(1-3)\) in the required \(f_{a}\) for UV-dominated freeze-in through this process.
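The following is a minimal numerical sketch (not the implementation used for the figures) of the spectral integral in Eq. (7) together with the on-shell subtraction of Eqs. (8)-(9). The rate and density functions are generic placeholders passed in as callables; in practice one would supply the rates assembled in this section.

```python
import numpy as np
from scipy.integrate import quad

def sigma_v_sub(m_a, gamma_tot, rate_dm, rate_sm, n_a_eq, n_chi_eq, s_max):
    """Eqs. (7)-(9): off-shell <sigma v> with the on-shell piece subtracted.
    rate_dm(m) ~ Gamma_{a*->chi chibar}, rate_sm(m) ~ Gamma_{a* SM->SM},
    n_a_eq(m) ~ equilibrium density of an axion of mass m (all placeholders)."""
    def integrand(s):
        m = np.sqrt(s)
        bw = 1.0 / ((s - m_a**2)**2 + (m_a * gamma_tot)**2)  # Breit-Wigner
        return m * rate_dm(m) * rate_sm(m) * n_a_eq(m) * bw / np.pi
    full, _ = quad(integrand, 0.0, s_max, points=[m_a**2], limit=200)
    full /= n_chi_eq**2
    # Eq. (8): on-shell (narrow-width) contribution, subtracted as in Eq. (9)
    on_shell = n_a_eq(m_a) / n_chi_eq**2 * rate_sm(m_a) * rate_dm(m_a) / gamma_tot
    return full - on_shell

# toy usage with schematic, dimensionless placeholder rates
print(sigma_v_sub(m_a=1.0, gamma_tot=1e-3,
                  rate_dm=lambda m: 1e-4 * m, rate_sm=lambda m: 1e-3 * m,
                  n_a_eq=lambda m: np.exp(-m), n_chi_eq=1e-2, s_max=25.0))
```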
### Meson decay: \(\Gamma_{P\to\chi\bar{\chi}}\)
The neutral pseudoscalar mesons \(P=\pi^{0},\eta,\eta^{\prime}\) can decay to DM via mixing with the axion. The mixing can be calculated in the chiral Lagrangian. Following Ref. [41] we find the simple relation
\[\Gamma_{P\to\chi\bar{\chi}}\simeq\big{|}\theta_{aP}^{2}\big{|}\Gamma_{a^{*} \to\chi\bar{\chi}}\,, \tag{12}\]
where the rate on the right-hand side is evaluated at \(m_{a^{*}}=m_{P}\). In the numerical computations presented in this work, the mixing angle \(\theta_{aP}\) is determined by diagonalizing the kinetic and mass terms of \(a\) and \(P\). We note that our obtained mixing angle is inaccurate when the mass difference between \(a\) and \(P\) is of order their decay widths or smaller.
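For orientation, a minimal sketch of the mass-mixing-only limit is given below: diagonalizing a symmetric \(2\times 2\) mass-squared matrix with off-diagonal entry \(\delta^{2}\) gives the familiar mixing angle. This neglects the kinetic mixing included in the full computation, and \(\delta^{2}\) here is a generic placeholder rather than the \(\chi\)PT value.

```python
import numpy as np

def mixing_angle(m_a_sq, m_P_sq, delta_sq):
    """Mixing angle of the 2x2 mass-squared matrix [[m_a_sq, delta_sq],
    [delta_sq, m_P_sq]]; kinetic mixing is neglected in this sketch."""
    return 0.5 * np.arctan2(2.0 * delta_sq, m_P_sq - m_a_sq)

# Eq. (12): Gamma_{P -> chi chibar} ~ theta^2 * Gamma_{a* -> chi chibar}(m_P)
theta = mixing_angle(m_a_sq=0.01, m_P_sq=0.0182, delta_sq=1e-4)  # toy numbers
print(theta**2)
```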
## IV Freeze-in
Having addressed the relevant interactions of the DM, axion and SM bath, we now move to discuss how the DM abundance is set in the early universe, starting with freeze-in. Freeze-in is a dynamic process for producing a thermal relic of DM that assumes the DM abundance at early times to be negligibly small compared to its equilibrium abundance [12]. A simple realization of such a case is post-inflationary reheating that reheats the SM bath, but not the DM. We take the initial temperature at which the axion and the DM can begin to be produced to be the reheating temperature \(T_{\rm RH}\), although we remain agnostic to the exact mechanism leading to such initial conditions. We consider that \(T_{\rm RH}<8\pi f_{a}\), such that the production is not sensitive to the physics above the cut-off.
As the \(aG\tilde{G}\) coupling we consider is non-renormalizable, freeze-in of \(a\) and \(\chi\) is prone to being UV-dominated [12], largely determined by physics at \(T_{\rm RH}\). In general, when the majority of \(\chi\) particles are produced directly from the SM bath, the production of \(\chi\) will be dominated near \(T_{\rm RH}\). Likewise, if the axion never thermalizes, the production will depend on the UV-sensitive frozen-in axion abundance. Otherwise, if the \(\chi\) particles are produced from thermal axions or meson decays, then the production is determined by the renormalizable \(a\chi\bar{\chi}\) interaction and production will be IR-dominated, mostly occurring at \(T=\max(m_{a},m_{\chi})\). In this section, we shall identify the processes governing UV-dominated and IR-dominated freeze-in, and describe the validity of each regime.
For the axion portal, freeze-in of the DM can be separated into two regimes. The first is when \(m_{a}\geq 2m_{\chi}\). In this case, freeze-in will always be dominated by the decay \(a\to\chi\bar{\chi}\) regardless of the reheating temperature and whether or not the axion reaches chemical equilibrium with the bath. The second regime is when the decay is kinematically forbidden, \(m_{a}<2m_{\chi}\). Here freeze-in will be dominated by axion-axion annihilation, SM bath annihilation or via meson decay into \(\chi\) particles, depending on the reheat temperature and axion mass. Of the processes mentioned, only SM bath annihilation \(\mathrm{SM}\to\chi\bar{\chi}\) is UV-dominated, whereas the rest are dominated at temperatures similar to the mass of the constituents.
The left panel of Fig. 1 presents solutions to the Boltzmann equations along contours of fixed \(m_{\chi}\) which produce the observed DM relic abundance through freeze-in. Below we discuss the solutions to the Boltzmann equations and give analytical estimates of the results. For the figure we chose \(T_{\rm RH}=10\,\mathrm{TeV}\).
### \(m_{a}>2m_{\chi}\): \(a\to\chi\bar{\chi}\) freeze-in
We begin with the case in which the axion can decay into the DM. Here the freeze-in of the DM proceeds via the production of the axion and then its decay into DM. There are three relevant regimes, dependent on the thermal history of the axion:
1. _The axion never thermalizes with the bath._ If the axion is too feebly interacting to thermalize with the bath, then its abundance will be populated by freeze-in. This process is UV-dominated so it reaches an asymptotic co-moving number density near \(T_{\rm RH}\), given simply by integrating over the rate of production, \[Y_{a}^{\rm FI} = \int_{0}^{T_{\rm RH}}dT\frac{n_{a}^{\rm eq}(T)\Gamma_{a\,\mathrm{ SM}\to\mathrm{SM}}(T)}{TH(T)s(T)}\] (13) \[\simeq \frac{135\sqrt{5}m_{\rm Pl}\Gamma_{a\,\mathrm{SM}\to\mathrm{SM}}( T_{\rm RH})}{\sqrt{2}\pi^{5}\sqrt{g_{\star}(T_{\rm RH})}g_{\star s}(T_{\rm RH})T_{ \rm RH}^{2}}\,,\] where \(g_{\star}\left(g_{\star s}\right)\) is the effective number of relativistic (entropy) degrees of freedom. The integral was performed assuming \(\Gamma_{a\,\mathrm{SM}\to\mathrm{SM}}\propto T^{3}\). The subsequent \(\chi\) abundance is just the fraction of these axions that decay into DM, \[Y_{\chi}(\infty)=2Y_{a}^{\rm FI}{\rm BR}(a\to\chi\bar{\chi})\,.\] (14) This regime is demonstrated in the left panel of Fig. 1 by the green parts of the curves, labeled A1. The shape is controlled by the branching ratio of axions to the DM, and the shape can be matched to Fig. 5 in Appendix C.1.1.
2. _The axion thermalizes but decouples while relativistic._ Decoupling from the bath occurs when \[\Gamma_{a\,\mathrm{SM}\to\mathrm{SM}}\simeq H\Big{|}_{T=T_{\rm dec}}\,.\] (15) After \(T_{\rm dec}\), the comoving abundance is fixed to its equilibrium value at \(T_{\rm dec}\) until it decays. The \(\chi\) abundance is then given by \[Y_{\chi}(\infty)=2Y_{a}^{\rm eq}(T_{\rm dec}){\rm BR}(a\to\chi\bar{\chi})\,.\] (16) Corrections to the axion production rate, \(\Gamma_{a\,\mathrm{SM}\to\mathrm{SM}}\), in the region interpolated between the chiral Lagrangian calculation and the QCD calculation may change \(T_{\rm dec}\) in Eq. (15). In terms of the relic abundance of the DM, the effect will be a change in \(g_{\star s}\) at the time of axion decoupling that appears in \(Y_{a}^{\rm eq}(T_{\rm dec})\). Near the QCD phase transition, this can alter the relic abundance up to a factor of six, which would correspond to \(\mathcal{O}(1)\) corrections to the \(m_{\chi}\), \(m_{a}\) and \(f_{a}\) values needed to match the observed abundance. This regime is shown in the left panel of Fig. 1 by the orange parts of the curves labeled A2.
3. _The axions decay in equilibrium with the bath._ This occurs when the axions stay thermalized until they become non-relativistic, and then decay. The relic abundance of DM is then given simply by the rate of thermal axion decays into the DM, \[Y_{\chi}(\infty) = \int_{0}^{T_{\rm RH}}dT\frac{n_{a}^{\rm eq}(T)\Gamma_{a\to\chi \bar{\chi}}}{TH(T)s(T)}\] (17) \[\simeq \frac{135\sqrt{5}c_{\chi}^{2}m_{\rm Pl}m_{\chi}^{2}}{2\sqrt{2} \pi^{6}\sqrt{g_{\star}(m_{a})}g_{\star s}(m_{a})m_{a}f_{a}^{2}}\,.\]
This regime is shown in the left panel of Fig. 1 by the blue curves labeled A3. A short numerical evaluation of the closed-form estimates (13) and (17) is sketched below.
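The sketch below evaluates the estimates in Eqs. (13) and (17); the inputs (\(\Gamma_{a\,\mathrm{SM}\to\mathrm{SM}}(T_{\rm RH})\), masses, couplings and degrees-of-freedom values) are illustrative placeholders, not the values used for the curves in Fig. 1.

```python
import numpy as np

M_PL = 1.22e19  # Planck mass [GeV]

def Y_a_freeze_in(gamma_TRH, T_RH, g_star, g_star_s):
    """Eq. (13): UV-dominated axion freeze-in yield (assumes Gamma ~ T^3)."""
    return (135.0 * np.sqrt(5.0) * M_PL * gamma_TRH
            / (np.sqrt(2.0) * np.pi**5 * np.sqrt(g_star) * g_star_s * T_RH**2))

def Y_chi_A3(m_chi, m_a, f_a, g_star, g_star_s, c_chi=1.0):
    """Eq. (17): DM from thermal axions decaying in equilibrium (regime A3)."""
    return (135.0 * np.sqrt(5.0) * c_chi**2 * M_PL * m_chi**2
            / (2.0 * np.sqrt(2.0) * np.pi**6 * np.sqrt(g_star) * g_star_s
               * m_a * f_a**2))

# illustrative: compare against m_chi * Y ~ 0.43 eV for the observed abundance
Y = Y_chi_A3(m_chi=1.0, m_a=10.0, f_a=5.0e11, g_star=100.0, g_star_s=100.0)
print("m_chi * Y ~ %.2f eV" % (1.0 * Y * 1e9))
```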
### \(m_{a}\leq 2m_{\chi}\): axion annihilation freeze-in
Next, we consider the case where the axion decay to DM is forbidden. In this case, there are three different sources of DM freeze-in production:
1. _Production directly from the bath._ For much of the parameter space considered here, the dominant contribution of DM is direct production from the bath. This is a UV-dominated process and will be sensitive to the reheat temperature. The freeze-in abundance is then \[Y_{\chi}(\infty) = \int_{0}^{T_{\rm RH}}dT\frac{\left(n_{\chi}^{\rm eq}\right)^{2} \left<\sigma v\right>_{\chi\bar{\chi}\to\rm SM}}{TH(T)s(T)}\] (18) \[\simeq T\frac{\left(n_{\chi}^{\rm eq}\right)^{2}\left<\sigma v\right>_{ \chi\bar{\chi}\to\rm SM}}{TH(T)s(T)}\Bigg{|}_{T=T_{\rm RH}}\.\] This regime is shown in the left panel of Fig. 1 by the brown curves labeled B1. This will be the dominant source of production of the DM for masses between the curves \(m_{\chi}=m_{\eta^{\prime}}/2=478\,\mathrm{MeV}\) and \(m_{\chi}\simeq 20\,\mathrm{GeV}\). The \(m_{\chi}\simeq 20\,\mathrm{GeV}\) boundary is a result of the choice of \(T_{\rm RH}=10\,\mathrm{TeV}\). Increasing \(T_{\rm RH}\) would enlarge this regime.
2. _Meson decays._ The decay of the pseudoscalar mesons \(\pi^{0}\), \(\eta\) and \(\eta^{\prime}\) freezes in the majority of the DM when the decay is kinematically allowed. For this case, \[Y_{\chi}(\infty)= \int_{0}^{T_{\rm QCD}}dT\frac{n_{P}^{\rm eq}(T)\sum_{P=\pi^{0},\eta,\eta^{\prime}}\Gamma_{P\to\chi\bar{\chi}}}{TH(T)s(T)}\] \[\simeq \sum_{P=\pi^{0},\eta,\eta^{\prime}}\frac{135\sqrt{5}}{8\pi^{11/2}}\frac{m_{\rm Pl}\Gamma_{P\to\chi\bar{\chi}}}{\sqrt{g_{\star}(m_{\pi})}\,g_{\star s}(m_{\pi})\,m_{P}^{2}}\] (19) \[\qquad\times\left(\frac{m_{P}}{\Lambda_{\rm QCD}}\right)^{3}K_{3}\bigg{(}\frac{m_{P}}{\Lambda_{\rm QCD}}\bigg{)}\,,\] where \(\Lambda_{\rm QCD}\) is identified with the QCD phase-transition temperature \(T_{\rm QCD}\) and \(K_{3}(x)\) is the third modified Bessel function of the second kind. To achieve this nice closed form we have evaluated the effective number of degrees of freedom of the SM bath at \(m_{\pi}\), since freeze-in through such decays is dominated by temperatures just below \(\Lambda_{\rm QCD}\sim m_{\pi}\). This production is largely insensitive to the reheat temperature, unlike the direct bath production described above, which grows with the reheat temperature. This is shown in the left panel of Fig. 1 by the purple curves labeled B2.
Figure 1: **Dark matter freeze-in and freeze-out.**_Left:_ Contours of fixed \(m_{\chi}\) indicate the values of \((m_{a},f_{a})\) at which the observed DM relic abundance is obtained through freeze-in. The different colors illustrate the six different freeze-in regimes. For the invisibly decaying axion: regime A1 (green), in which the axion never thermalizes with the bath; regime A2 (orange), in which the axion thermalizes but decouples while relativistic; regime A3 (blue), in which the axions decay in equilibrium with the bath. For the visibly decaying axion: regime B1 (brown), UV-dominated bath production; regime B2 (purple), pseudoscalar decay; and regime B3 (red), axion annihilations. _Right:_ Contours of fixed \(m_{\chi}\) indicate the values of \((m_{a},f_{a})\) at which the observed DM relic abundance is obtained through freeze-out. The different colors along each solid curve illustrate each of the three freeze-out regimes in our model. Blue: \(\chi\bar{\chi}\to aa\) controls freeze-out for \(m_{\chi}\gtrsim m_{a}\) (except where meson resonances enhance annihilation to SM); orange: axion resonance \(\chi\bar{\chi}\to a\to\rm SM\) controls \(m_{\chi}\lesssim m_{a}\lesssim 3m_{\chi}\); green: \(\chi\bar{\chi}\to\rm SM\) controls \(m_{a}\gtrsim 3m_{\chi}\). _In both panels_, grey vertical lines mask the regions where our numerical calculations do not accurately describe the effects of meson resonances due to finite resolution.
3. _Axion annihilations._ At low enough reheat temperatures and for DM masses large enough (\(m_{\chi}\gtrsim 20\,\)GeV for \(T_{\rm RH}=10\,\)TeV as shown in the left panel of Fig. 1), the majority of the DM freezes in from axion annihilation. This contribution will only compete with the direct production of DM from the SM bath when the axions are thermalized. Therefore, we assume a thermal distribution of axions when calculating the abundance from this process. Freeze-in here is IR-dominated and most of the DM is produced soon after the axions become non-relativistic and deplete. Therefore, we can integrate Eq. (6) to approximately obtain the freeze-in abundance: \[Y_{\chi}(\infty) = \int_{0}^{T_{\rm RH}}dT\frac{\left(n_{\chi}^{\rm eq}\right)^{2}\left<\sigma v\right>_{\chi\bar{\chi}\to aa}}{TH(T)s(T)}\] (20) \[\simeq 6\times 10^{-4}\frac{c_{\chi}^{4}m_{\rm Pl}m_{\chi}^{3}}{f_{a}^{4}\sqrt{g_{*}(m_{\chi})}g_{*s}(m_{\chi})}\,,\] where we numerically perform the integral for \(m_{a}<2m_{\chi}\) and \(T_{\rm RH}\gg m_{a},m_{\chi}\). This is outlined by the red curves in the left panel of Fig. 1 labeled B3; an inversion of this estimate for the required \(f_{a}\) is sketched below.
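As a minimal sketch (assuming the fixed degrees-of-freedom values below; the figures use the full temperature dependence), one can invert Eq. (20) for the \(f_{a}\) that gives \(m_{\chi}Y_{\chi}=0.43\,\mathrm{eV}\):

```python
import numpy as np

M_PL  = 1.22e19   # Planck mass [GeV]
Y_OBS = 0.43e-9   # observed m_chi * Y_chi today [GeV]

def f_a_regime_B3(m_chi, g_star, g_star_s, c_chi=1.0):
    """Invert Eq. (20) for the f_a reproducing the observed abundance [GeV]."""
    Y_target = Y_OBS / m_chi
    return (6e-4 * c_chi**4 * M_PL * m_chi**3
            / (Y_target * np.sqrt(g_star) * g_star_s))**0.25

print("f_a ~ %.1e GeV for m_chi = 100 GeV" % f_a_regime_B3(100.0, 100.0, 100.0))
```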
## V Freeze-out
In this section we consider freeze-out, where the DM is in thermal equilibrium with the SM bath at early times and its relic abundance is set by the decoupling of DM number-changing processes. In all regions of parameter space where the \(\chi\) freezes out from the bath, the axions are also in equilibrium with the bath. Therefore, one only needs to study the Boltzmann equation for \(\chi\):
\[\dot{n}_{\chi}+3Hn_{\chi}= -\left(\left<\sigma v\right>_{\chi\bar{\chi}\to aa}+\left< \sigma v\right>^{\rm sub}_{\chi\bar{\chi}\to\rm SM}+\left<\sigma v\right>_{ \chi\bar{\chi}\to a}\right)\] \[\times\left(n_{\chi}^{2}-n_{\chi}^{\rm eq}{}^{2}\right). \tag{21}\]
The parameter space where the DM relic abundance is obtained via freeze-out can be understood in three main regimes, each corresponding to the dominance of a different term in Eq. (21). The right panel of Fig. 1 presents solutions to the Boltzmann Eq. (21) for fixed \(m_{\chi}\) leading to the observed DM relic abundance today, \(m_{\chi}Y_{\chi}(\infty)\simeq 0.43\,\mathrm{eV}\), through freeze-out. The different colored segments along each curve illustrate each of the three freeze-out regimes:
1. _DM annihilation to axions._\(\chi\bar{\chi}\to aa\) dominates for \(m_{\chi}>m_{a}\) (with the exception of \(2m_{\chi}\sim m_{\pi^{0}},m_{\eta},m_{\eta^{\prime}}\) where mesons resonances enhance annihilation to SM); depicted in blue. For \(m_{\chi}<m_{a}\), the annihilation into axions becomes kinematically suppressed.
2. _Axion resonance (inverse decay)._\(\chi\bar{\chi}\to a\to\rm SM\) dominates for \(2m_{\chi}\lesssim m_{a}\lesssim 3m_{\chi}\); depicted in orange.
3. _Annihilation into bath particles._\(\chi\bar{\chi}\to\rm SM\) through an off-shell axion dominates for \(m_{a}\gtrsim 3m_{\chi}\); illustrated in green. When \(2m_{\chi}\) is close to hadronic resonances, the annihilation into bath particles can dominate for \(m_{a}<3m_{\chi}\) as well.
Following Ref. [7], the mass-coupling relationship to match the observed abundance can be obtained semi-analytically. For \(m_{\chi}>m_{a}\), using Eq. (6) we find
\[m_{\chi}\simeq 11\,\text{TeV}\times\left(\frac{f_{a}}{\text{TeV}}\right)^{2}. \tag{22}\]
For \(m_{\chi}\ll m_{a}\) (far from the axion resonance), by using Eq. (10) we find
\[m_{\chi}\simeq 2\,\text{TeV}\left(\frac{f_{a}m_{a}}{\text{TeV}^{2}}\right)^{\frac{2}{3}}\left(\frac{f_{a}^{2}\Gamma_{a\to\rm SM}/m_{a}^{3}}{10^{-5}}\right)^{-\frac{1}{6}} \tag{23}\]
where we have taken \(g_{\star}(m_{\chi})=g_{\star s}(m_{\chi})=106.75\).
For \(m_{a}\simeq 2m_{\chi}\), the annihilation is dominated by the axion resonance. Using Eq. (8), we find the relic abundance is mostly \(m_{\chi}\) and \(m_{a}\) independent in this case (up to \(g_{\star}\) and \(g_{\star s}\) corrections) and
\[f_{a}\simeq 30\,\text{TeV}\,, \tag{24}\]
reproduces the observed abundance.
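These relations are straightforward to evaluate; the sketch below transcribes Eqs. (22)-(23), with the default \(\Gamma\)-ratio normalization of Eq. (23) as a placeholder input (on the axion resonance, Eq. (24) simply fixes \(f_{a}\simeq 30\,\mathrm{TeV}\)).

```python
def m_chi_from_aa(f_a_TeV):
    """Eq. (22): freeze-out via chi chibar -> aa (m_chi > m_a); returns TeV."""
    return 11.0 * f_a_TeV**2

def m_chi_from_sm(f_a_TeV, m_a_TeV, gamma_ratio=1e-5):
    """Eq. (23): freeze-out via an off-shell axion into the SM bath;
    gamma_ratio = f_a^2 Gamma_{a->SM} / m_a^3 (dimensionless input)."""
    return (2.0 * (f_a_TeV * m_a_TeV) ** (2.0 / 3.0)
            * (gamma_ratio / 1e-5) ** (-1.0 / 6.0))

print(m_chi_from_aa(1.0), m_chi_from_sm(1.0, 1.0))  # both in TeV
```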
## VI Experimental constraints
Having analyzed the DM relic abundance in various regions of parameter space, we move to address experimental constraints of the ALP-gluon portal discussed in this work. The shaded regions of Figs. 2 and 3 consolidate existing bounds on the axion coupling, \(f_{a}^{-1}\), as a function of the axion mass, \(m_{a}\).
The constraints fall into three categories:
1. _Robust terrestrial bounds_: terrestrial experiments that are either based on invisible signatures or independent of the ALP decay final states (shown in pink).
2. _Visible terrestrial bounds_: terrestrial experiments that place constraints on visibly decaying ALPs (shown in brown, turquoise and orange).
3. _Cosmological and astrophysical bounds_: cosmological (shown in purple) and astrophysical (shown in dark blue) bounds.
Details regarding the casting of the bounds, including in case of the invisibly decaying ALP, are provided in Appendix D. Below we describe the various constraints.
In category (i) of robust terrestrial bounds, the measurement of the rare decay \(K^{+}\to\pi^{+}\nu\bar{\nu}\) performed by the NA62 Collaboration [50] and analyzed in Ref. [51] places constraints on ALP masses in the regime
\(m_{a}<m_{K^{+}}-m_{\pi^{+}}\). These constraints are constructed by bounding the number of ALPs that escape detection; as such they provide a robust limit independent of the final state, with the limit strengthening further when the ALP can decay invisibly. For larger couplings, the \(K^{+}\to\pi^{+}a\) decay modifies the \(K^{+}\) lifetime beyond the bound \(\text{BR}(K^{+}\to\text{SM}+\text{new physics})\leq 3\times 10^{-3}\) placed by Ref. [52]. Present data from \(K_{L}\to\pi^{0}\nu\bar{\nu}\) searches [107] place weaker bounds and are not presented here. (Here we do not show limits in the region \(m_{a}<m_{B}-m_{K}\) derived in Ref. [66] from the analysis of the inclusive branching ratio \(\text{BR}(b\to sa)\), nor the limits derived in Ref. [51] for \(m_{a}<m_{t}\) from the measurements of the chromomagnetic dipole moment of the top quark, as both should arise from an RG flow from an \(f_{a}\)-dependent UV scale.)
The visible terrestrial bounds, category (ii), contain constraints from beam dumps, meson decays and other collider searches. Ref. [53] summarizes existing constraints from proton beam dumps, including limits from the NuCal [54; 55] and CHARM [56; 57; 58] collaborations, which place some of the strongest limits in the region of \(m_{a}\lesssim 1\,\text{GeV}\). We also consider constraints from the electron beam dumps E137 [59] and E141 [60], presented in Ref. [53]. The beam dump bounds are complemented at larger couplings by the constraints derived from the CMS search for long-lived particles decaying in the muon endcap detectors [86], as analyzed in Ref. [87]. Additional bounds can be derived from \(K\), \(B\), \(J/\psi\) and \(\Upsilon\) meson decays. Here we have included the meson decays \(B^{+}\to K^{+}a(\mu^{+}\mu^{-})\), \(K^{+}\to\pi^{+}a(\gamma\gamma)\) and \(\Upsilon\to\gamma a\), summarized in Ref. [51] based on measurements by the LHC [62], NA62 [63], E949 [64] and BaBar [65] collaborations, and \(B\to Ka(3\pi)\), \(B\to Ka(\eta\pi)\), \(B\to Ka(KK\pi)\) and \(B\to Ka(\phi\phi)\), summarized in Ref. [66] based on measurements performed by the Belle [67] and BaBar [68; 69] collaborations. We have further included a recent analysis by BESIII [70] that places constraints on the \(J/\psi\to\gamma a(\gamma\gamma)\) decay process. We learn that meson decays place the most stringent constraints on large ALP-SM couplings in the \(m_{a}\sim 100\,\text{MeV}-7\,\text{GeV}\) region.
For \(m_{a}\gtrsim 10\,\text{GeV}\), searches for di-jet [74] and di-photon [71; 72; 73; 75; 76; 77; 78; 79; 80; 81] resonances at the LHC place the strongest limits on the parameter space. Here we have recast the data from Ref. [81], which is publicly available in Ref. [82], and the bounds analyzed in Ref. [85], to encompass our ALP-gluon model. We do not show mono-jet and di-jet signatures of lower masses [83] as they are weaker than the bounds we present. We have also cast bounds from GlueX which constrain the mass region \(m_{a}\sim\mathcal{O}(100)\,\text{MeV}\)[84].
As the analyses in category (ii) all rely on visible final states, they are affected by the branching ratio \(\text{BR}(a\to\text{visible})\) and the axion lifetime. For \(m_{a}>2m_{\chi}\), the axion invisible branching ratio becomes dominant, weakening the sensitivity of these searches by a factor of \(\sim\sqrt{1-\text{BR}(a\to\chi\bar{\chi})}\).
Figure 2: **Axion coupling to gluons for \(m_{a}<2m_{\chi}\).**_Left:_ Constraints on the coupling: robust terrestrial bounds [50; 51; 52] (pink), beam dumps [53; 54; 55; 56; 57; 58; 59; 60; 61] (brown), meson decays [62; 63; 64; 65; 66; 67; 68; 69; 70] (turquoise), colliders [71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87] (orange), BBN [88; 89] (purple), astrophysical [90] (dark blue) and new colored particles [91; 92; 93; 94; 95] (gray). EFT constraints are indicated by gray shaded regions. The dashed curves are numerical solutions to the Boltzmann equations giving the correct DM abundance today for the various labeled DM masses. _Right:_ The freeze-in (**dark red**) and freeze-out (**purple**) phases in the theory considered here as one varies the DM mass, with regions excluded by existing constraints (gray) and regions excluded only by the colored particles bound (light gray). The upper limit of both regions stems from \(2m_{\chi}\gtrsim m_{a}\); for the freeze-out region the lower limit arises from \(m_{\chi}=8\pi f_{a}\) which is comparable to the unitarity bound. Dashed curves show the projected sensitivities of future experiments: DUNE-ND (**red**) [96], Belle-II (**turquoise**) [66], LHC track trigger (orange) [38], MATHUSLA (light blue) [96; 97], SHiP (dark blue) [53; 98], NA62-LS3 (magenta) [53; 99], SHADOWS (yellow) [53; 100], DarkQuest (light purple) [53; 101], CODEX-b (**green**) [96; 102], FASER (gray) [96; 103; 104] and KOTO (brown), along with KOTO2 (brown dot-dashed) [105; 106].
\(\sim\sqrt{1-\text{BR}(a\to\chi\bar{\chi})}\). Values of \(\text{BR}(a\to\chi\bar{\chi})\) for some representative DM masses are shown in Fig. 5 in Appendix C.1.1.
The constraints in category (iii) come from cosmology and astrophysical probes, and include bounds from big bang nucleosynthesis (BBN), supernovae (SN), and neutron star heating. Ref. [89] provides bounds on the synthesis of light elements during BBN in the presence of ALPs decaying to photons, for \(m_{a}=10\,\text{MeV}\) and \(100\,\text{MeV}\).
SN bounds are taken from Ref. [90], which considered implications of ALP-nucleon, ALP-pion and ALP-photon interactions on various SN observables. We expect the SN bounds presented here to be complemented at stronger couplings by future dedicated analyses considering the effects of an ALP-nucleon coupling in addition to an ALP-photon coupling in low energy SN [108]. The SN bounds shown in this work can be improved by a dedicated numerical analysis and should be taken as an estimate of the region of exclusion. Note that we do not show the SN1987A bounds presented in [109; 110] as those works did not consider the dominant \(N\,\pi\ \to N\,a\) process.
Ref. [111] considered neutron star kinetic heating due to pseudoscalar-mediated DM interactions. These bounds can be seen in Fig. 3 as a dark blue band at \(f_{a}\sim 1\,\text{TeV}\) for \(m_{\chi}=100\,\text{MeV}\) and \(10\,\text{GeV}\).
Regarding direct detection, we have considered the strongest published spin-dependent bounds by CDMSlite [112], PICO-60 [113] and XENON1T [114] and found that they are unable to exclude relevant regions of parameter space. Spin-independent direct detection constraints can arise from a box diagram via the exchange of two axions (see Ref. [115]). However, the rate for such a process is proportional to \(1/f_{a}^{8}\), and thus the constraints are expected to be further suppressed compared to the spin-dependent bounds we considered. We conclude that current direct detection limits do not play a role in constraining viable parameter space of our theory.

Figure 3: **Axion coupling to gluons at fixed \(m_{\chi}\), including \(m_{a}>2m_{\chi}\).** We show constraints on the coupling for various DM masses: MeV (_upper left_), \(100\,\text{MeV}\) (_upper right_), \(10\,\text{GeV}\) (_lower left_), \(100\,\text{GeV}\) (_lower right_). The colors of the shaded regions indicate the different types of constraints: robust terrestrial bounds [50; 51; 52] (pink), beam dumps [53; 54; 55; 56; 57; 58; 59; 60; 61] (brown), meson decays [62; 63; 64; 65; 66; 67; 68; 69; 70] (turquoise), colliders [71; 72; 73; 74; 75; 76; 77] (orange), BBN [88; 89] (purple), astrophysical [90] (dark blue) and new colored particles [91; 92; 93; 94; 95] (light gray). EFT constraints are indicated by gray shaded regions. The dashed curves indicate the numerical solutions to the Boltzmann equations for freeze-out (purple) and freeze-in (dark red) giving the correct DM abundance today.
Finally, the \(aG\tilde{G}\) coupling may be generated by integrating out heavy quarks from a UV theory. In such a case, one would expect that the heavy quarks appear below the scale \(4\pi f_{a}\). Following Ref. [38], a constraint of \(4\pi f_{a}>2\,\mathrm{TeV}\) can be placed, with \(2\,\mathrm{TeV}\) being approximately the bound on new heavy quarks at the LHC [91, 92, 93, 94, 95]. This model-dependent bound is illustrated by the light shaded gray regions in Figs. 2 and 3. Since it depends on the UV completion of the theory, this constraint should not be considered as stringent as the other bounds we presented above.
## VII Results
Our results for the axion-gluon coupling considered in this work are presented in Figs. 2, 3 and 4. Throughout we use \(c_{\chi}=1\) and \(T_{\mathrm{RH}}=10\,\mathrm{TeV}\).
Fig. 2 considers the visibly decaying axion, where \(m_{a}<2m_{\chi}\). In the left panel of Fig. 2, the colored shaded regions represent the constraints on the visibly decaying ALP. The solid lines delineate the freeze-in and freeze-out phases, while the dashed curves correspond to the numerical solutions to the Boltzmann equations for a fixed value of \(m_{\chi}\). In the right panel of Fig. 2, the gray shaded regions represent the current constraints, while the shaded regions outlined with solid curves indicate the freeze-in and freeze-out phases when including all relevant DM masses, such that the axion decays visibly and our effective field theory description remains valid. Dashed curves show the projections for the reach of future experiments. We learn that for the visibly decaying axion, freeze-out is excluded by current experiments for axion masses \(m_{a}\) between \(10\,\mathrm{MeV}\) and a few hundred MeV. The region between beam dumps and colliders at \(m_{a}\sim 1-50\,\mathrm{GeV}\) remains viable for freeze-out, with decay constants of order \(f_{a}\sim 100\,\mathrm{GeV}\) to \(10\,\mathrm{TeV}\), and will be partially probed by Belle-II [66]. Freeze-in of the visibly decaying axion is currently constrained only by BBN and SN at axion masses \(m_{a}\sim 10-400\,\mathrm{MeV}\), allowing for a broad range of axion masses with decay constant \(f_{a}\gtrsim 100\,\mathrm{TeV}\). The DUNE near detector [96], CODEX-b [102], LHC track trigger [38], SHiP [98], SHADOWS [100], NA62-LS3 [99, 53] and MATHUSLA [97] are expected to probe the freeze-in region further for ALP masses in the \(m_{a}\sim 100\,\mathrm{MeV}-20\,\mathrm{GeV}\) range.
Fig. 3 considers the invisibly decaying axion, where \(m_{a}>2m_{\chi}\). Here we fixed the DM mass \(m_{\chi}\) to several values, presented in the four panels of Fig. 3. For each value of \(m_{\chi}\) we show the current constraints as colored shaded regions and the numerical solutions to the Boltzmann equations as dashed curves. Note that since the DM mass is fixed, the \(m_{\chi}=0.1\,\mathrm{GeV}\) and \(10\,\mathrm{GeV}\) panels contain regions with visible axion decays, while for \(m_{\chi}=100\,\mathrm{GeV}\) the entire panel corresponds to visibly decaying axions. We learn that an invisibly decaying axion can avoid many of the constraints, allowing for a broad range of axion masses and decay constants. In the freeze-out phase, near-resonance axion masses of \(m_{a}\sim 2m_{\chi}\) enable significantly smaller couplings (_i.e._ larger decay constants \(f_{a}\)) than away from resonance. For \(m_{a}\gtrsim 300\) MeV, this allows one to evade existing limits, though a broader DM mass range is possible when the axion decays visibly. (Note that the freeze-out phase in the top left panel sits outside the plotted range and is excluded.) In the freeze-in phase, once invisible decays of the axion \(a\to\chi\bar{\chi}\) become allowed, the coupling needed for freezing in the DM drops significantly compared to only visible decays; this is exemplified by the sharp change in couplings at \(m_{a}=2m_{\chi}\) in the upper right and bottom left panels of Fig. 3.
Fig. 4 summarizes a parameter scan in the \(m_{a}-m_{\chi}\) plane consisting of \(\sim 10^{5}\) points. For freeze-out, we considered \(m_{\chi}>10\,\mathrm{MeV}\) to avoid adding a thermalized relativistic degree of freedom during BBN, and \(m_{\chi}<10^{3}\,\mathrm{TeV}\) since values above this exceed the EFT validity region of \(m_{\chi}<8\pi f_{a}\). For freeze-in, we focused on \(m_{\chi}>10\,\mathrm{keV}\), corresponding to the freeze-in lower mass limit (see Ref. [116]), and on \(m_{\chi}<\mathrm{TeV}\ll T_{\mathrm{RH}}\), in order to avoid solutions where \(m_{\chi}\) is tuned close to \(T_{\mathrm{RH}}\) which would enable arbitrarily large couplings.
Figure 4: **Allowed regions of DM-ALP parameter space.** A parameter scan consisting of \(\sim 10^{5}\) points in the \(m_{a}-m_{\chi}\) plane. For each set of masses we have numerically solved the Boltzmann equations and found a coupling \(f_{a}\) such that the correct DM relic abundance is achieved. The colored regions represent points that successfully solve the Boltzmann equations in the case of freeze-out (purple) and freeze-in (dark red) and are not ruled out by the existing limits as described in Section VI. Freeze-out regions excluded only by the colored particles bound are shaded in light purple.

For each set of masses, we solved the Boltzmann equations numerically to find a coupling \(1/f_{a}\) such that the correct relic abundance of DM is obtained. We then checked against existing constraints; the remaining allowed combinations of DM and axion masses are shown in the colored regions of Fig. 4: freeze-out (purple) and freeze-in (dark red). Freeze-out regions excluded only by the colored particles bound are shaded in light purple. Note that some regions in this plane are able to accommodate the DM relic abundance both via freeze-out, with strong couplings such that \(\chi\) and \(a\) thermalize, and via freeze-in, through smaller couplings where \(\chi\) never thermalizes. The solid black curve corresponds to \(m_{a}=2m_{\chi}\), indicating the boundary between the visibly (above) and invisibly (below) decaying axion.
The allowed regions around axion masses \(\sim 100\,\mathrm{MeV}\) and a few hundred \(\mathrm{MeV}\) are narrow due to the proximity to the \(\pi^{0},\,\eta\) and \(\eta^{\prime}\) resonances, which are difficult to probe. We find that freeze-out is viable for DM masses \(m_{\chi}\gtrsim 30\,\mathrm{MeV}\) along with axion masses above \(\sim 100\,\mathrm{MeV}\), when excluding the bounds on new colored particles, but is cut off around \(m_{\chi}\simeq 100\,\mathrm{GeV}\) when including them. The allowed DM-axion freeze-out mass region is more restricted in the case of invisibly decaying axions compared to the region for visible decays; most of the allowed parameter space exists just below the \(m_{a}=2m_{\chi}\) resonance. The freeze-in scenario is much less constrained than freeze-out due to the smaller couplings involved, which are only partially probed by cosmological and astrophysical observations while remaining currently unprobed by terrestrial experiments.
## VIII Summary
In this work we considered the axion-gluon portal to dark matter in detail, considering different cosmological histories to explain the relic abundance. Studying both freeze-out and freeze-in processes, we have mapped out the cosmologically viable parameter space for DM and axion masses and couplings, along with the existing constraints from terrestrial experiments, cosmological considerations and astrophysical bounds.
Future experiments will be able to probe the visibly decaying axion regions extensively. Belle-II [66] is expected to improve current visible meson decay constraints; future runs of the LHC will statistically improve the current collider reach; and future experiments such as the DUNE near detector [96], CODEX-b [102], MATHUSLA [97], FASER [103, 104], SHiP [98], SHADOWS [100] and NA62-LS3 [53, 99] will extend the reach of beam dumps to a wider range of couplings in addition to masses \(m_{a}\lesssim 10\,\mathrm{GeV}\) (see right panel of Fig. 2). In addition, the proposed LHC displaced track trigger search is expected to probe a novel region of parameter space in the \(m_{a}\sim 1-20\,\mathrm{GeV}\) range [96, 38]. Modification of the low energy supernova bounds [108] to include ALP-nucleon and ALP-pion couplings may also probe the currently open freeze-in region of the visibly decaying axion with \(f_{a}\sim 10^{5}-10^{8}\,\mathrm{GeV}\) at ALP masses \(m_{a}<1\,\mathrm{GeV}\). We conclude that the large coupling region \(f_{a}\sim 100\,\,\mathrm{GeV}-10\,\,\mathrm{TeV}\) at \(m_{a}\sim 3-10\,\,\mathrm{GeV}\), which remains unconstrained by current and future experiments that we considered, is a well-motivated region for future terrestrial searches.
**Acknowledgments.** We thank Martin Bauer, Jan Jerhot, and Kohsaku Tobioka for sharing of constraint data, and Edoardo Vitagliano for useful discussions related to stellar emission. P.F. is supported by the Zuckerman STEM Leadership Program. The work of Y.H. is supported by the Israel Science Foundation (grant No. 1818/22), by the Binational Science Foundation (grant No. 2018140), by the Azrieli Foundation and by an ERC STG grant ('Light-Dark', grant No. 101040019). E.K. is supported by the US-Israeli Binational Science Foundation (grant No. 2020220) and by the Israel Science Foundation (grant No. 1111/17). R.O. acknowledges support from the Israel Science Foundation (grant No. 1818/22). Y.S. is supported by grants from the NSF-BSF (grant No. 2021800), the ISF (grant No. 482/20), the BSF (grant No. 2020300) and by the Azrieli foundation. This project has received funding from the European Research Council (ERC) under the European Union's Horizon Europe research and innovation programme (grant agreement No. 101040019). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. The European Union cannot be held responsible for them.
_Note Added:_ While at the final stages of writing this manuscript, we became aware of Refs. [117, 32] that consider similar scenarios.
## Appendix A Rates
The general form for the Boltzmann equation for the number density of particle \(X\), \(n_{X}\), in a Friedmann-Robertson-Walker background is
\[\frac{\partial n_{X}}{\partial t}+3Hn_{X}=C_{X}\,, \tag{10}\]
where \(C_{X}\) is the collision term, a sum of all the rates of processes that can create or destroy an \(X\) particle. In general, for an \(n\to m\) process
\[i_{1}\,\cdots\,i_{n}\to f_{1}\,\cdots\,f_{m}\,, \tag{11}\]
the rate is given by
\[\gamma=\Delta N_{X}\int d\Pi|\mathcal{M}|^{2}f_{i_{1}}(E_{i_{1}})\ldots f_{i_ {n}}(E_{i_{n}})\,, \tag{12}\]
where \(\Delta N_{X}\) is the number of \(X\) particles created (or destroyed) in the process, \(f_{\ell}(E)\) is the phase space density of a particle \(\ell\), \(|\mathcal{M}|^{2}\) is the matrix element squared and summed over all degrees of freedom. The \(n+m\) body
phase space is given by
\[d\Pi= S\cdot d\Pi_{i_{1}}\cdots d\Pi_{i_{n}}d\Pi_{f_{1}}\cdots d\Pi_{f_{m}}\times\] \[(2\pi)^{4}\delta^{(4)}(\Sigma p_{i}-\Sigma p_{f})\,, \tag{10}\]
where \(S\) is a symmetry factor if there are identical initial or final state particles and
\[d\Pi_{\ell}=\frac{d^{3}\mathbf{p}_{\ell}}{(2\pi)^{3}2E_{\ell}}\,, \tag{11}\]
is the Lorentz invariant phase space for particle \(\ell\) with energy \(E_{\ell}\). Here we have dropped additional contributions to the collision rates from quantum statistics, _i.e._, Pauli blocking and stimulated emission.
Often one is interested in calculating collision terms for particles in thermal equilibrium. Ignoring the quantum statistics (which only have a small effect on the relic abundance calculations), the phase space densities take on the familiar Maxwell-Boltzmann distribution
\[f_{\ell}(E_{\ell})=\frac{n_{\ell}}{n_{\ell}^{\rm eq}}e^{-E_{\ell}/T_{\ell}}\,, \tag{12}\]
where \(n_{\ell}^{\rm eq}=g_{\ell}\int d\Pi_{\ell}e^{-E_{\ell}/T_{\ell}}\). The collision rates can be written in terms of thermally averaged cross sections, which are defined as
\[\gamma_{i_{1}\,\cdots\,i_{n}\to f_{1}\,\cdots\,f_{m}}=n_{i_{1}}\cdots n_{i_{n}}\left\langle\sigma v\right\rangle_{i_{1}\,\cdots\,i_{n}\to f_{1}\,\cdots\,f_{m}}\,, \tag{13}\]
where \(n_{i}\) is the number density of particle \(i\). For \(2\to m\) processes, \(\left\langle\sigma v\right\rangle\) turns out to be the cross-section times the Møller velocity averaged over the phase space densities of the initial particles:
\[\left\langle\sigma v\right\rangle_{2\to m}=\frac{1}{n_{i_{1}}^{\rm eq }n_{i_{2}}^{\rm eq}}\int d\Pi_{1}d\Pi_{2}f_{i_{1}}f_{i_{2}}\] \[\times\sigma\sqrt{(p_{i_{1}}\cdot p_{i_{2}})^{2}-m_{i_{1}}^{2}m_{ i_{2}}^{2}}\,. \tag{14}\]
For processes with more than two initial particles, we define \(\left\langle\sigma v\right\rangle\) via Eq. (13).
For a \(T\) (or \(CP\)) invariant process, the equilibrium rates for a process and its inverse are the same. Thus, the reverse rates are given by detailed balance as
\[\left\langle\sigma v\right\rangle_{f_{1}\,\cdots\,f_{m}\to i_{1}\, \cdots i_{n}}^{\rm eq} \!\!= \frac{n_{i_{1}}^{\rm eq}\cdots n_{i_{n}}^{\rm eq}}{n_{f_{1}}^{\rm eq }\cdots n_{f_{m}}^{\rm eq}}\langle\sigma v\rangle_{i_{1}\,\cdots i_{n}\to f _{1}\,\cdots\,f_{m}}^{\rm eq}. \tag{15}\]
### Production through an intermediate axion
In this appendix we estimate the DM production from the thermal bath, which may be dominated by on-shell and off-shell axions, for example via processes such as \(gq\to q(a^{(*)}\to\chi\bar{\chi})\) or \(gg\to(a^{(*)}\to\chi\bar{\chi})\). Since the matrix elements of these processes factorize into axion production and axion decay, we can use the on-shell decay rates to calculate the off-shell axion production using the procedure described in this subsection. We take the decay rate of the axion from Ref. [41]. The DM production rate from the bath via an intermediate axion, \(X\to Y(a^{*}\to\chi\bar{\chi})\), is given by
\[\gamma_{X\to Y\chi\bar{\chi}}= \int d\Pi_{X}d\Pi_{Y}d\Pi_{\chi_{1}}d\Pi_{\chi_{2}}(2\pi)^{4} \delta^{4}(\Sigma_{p}p)f_{X}\] \[\times\left|\mathcal{M}_{X\to Y\chi\bar{\chi}}\right|^{2}, \tag{16}\]
where \(X\) and \(Y\) represent some initial and final states of particles, respectively; \(d\Pi_{X}=d\Pi_{i_{1}}\cdots d\Pi_{i_{n}}\) and \(d\Pi_{Y}=d\Pi_{f_{1}}\cdots d\Pi_{f_{m}}\) are the products of the Lorentz invariant phase space factors for the initial and final states, and \(f_{X}=f_{i_{1}}(E_{i_{1}})\cdots f_{i_{n}}(E_{i_{n}})\). Factorizing the matrix element gives
\[\gamma_{X\to Y\chi\bar{\chi}}= \int d\Pi_{X}d\Pi_{Y}d\Pi_{\chi_{1}}d\Pi_{\chi_{2}}(2\pi)^{4}\delta^{4}(\Sigma_{p}p)f_{X}\] \[\times\frac{|\mathcal{M}_{X\to Ya^{*}}|^{2}|\mathcal{M}_{a^{*}\to\chi\bar{\chi}}|^{2}}{(m_{a}^{2}-m_{a^{*}}^{2})^{2}+\Gamma_{a}^{2}m_{a}^{2}}\,, \tag{17}\]
where the matrix elements should be evaluated for an off-shell axion with \(m_{a^{*}}=\sqrt{E_{a^{*}}^{2}-\mathbf{p}_{a^{*}}^{2}}\). Next, we can insert an identity integral over the internal axion 4-momentum
\[1= \int\frac{dm_{a^{*}}^{2}}{2\pi}d\Pi_{a^{*}}(2\pi)^{4}\delta^{4}(p_{a^{*}}- p_{X})\,. \tag{18}\]
Plugging this into the rate gives
\[\gamma_{X\to Y\chi\bar{\chi}}= \int\frac{dm_{a^{*}}^{2}}{\pi}\,\frac{\gamma_{X\to Ya^{*}}\,m_{a^{*}}\,\Gamma_{a^{*}\to\chi\bar{\chi}}}{(m_{a}^{2}-m_{a^{*}}^{2})^{2}+\Gamma_{a}^{2}m_{a}^{2}}\,, \tag{19}\]
where we have used the definition of the production rate \(\gamma_{X\to Ya^{*}}\) and decay rate \(\Gamma_{a^{*}\to\chi\bar{\chi}}\). These quantities should be evaluated for an off-shell axion with mass \(m_{a^{*}}\). We can relate the rate \(\gamma_{X\to Ya^{*}}\) to that found in Refs. [41; 45; 46]. Using the fact that the SM bath particles are always taken to be in equilibrium,
\[\gamma_{X\to Ya^{*}}=n_{a^{*}}^{\rm eq}\Gamma_{a^{*}\,\rm SM\to SM}\,. \tag{20}\]
As a sanity check, we take the narrow width approximation for the result in Eq. (19) and find that \(\gamma_{X\to Y\chi\bar{\chi}}\to\gamma_{X\to Ya}{\rm BR}(a\to\chi\bar{\chi})\) as expected.
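The narrow-width limit can also be checked numerically. Below is a minimal Python sketch of this check; the functions `gamma_prod` and `Gamma_dec`, as well as all numerical values, are purely hypothetical stand-ins for \(\gamma_{X\to Ya^{*}}\) and \(\Gamma_{a^{*}\to\chi\bar{\chi}}\), chosen only so the Breit-Wigner structure of Eq. (19) can be integrated.

```python
import numpy as np
from scipy.integrate import quad

m_a, Gamma_a = 1.0, 1e-3   # GeV; hypothetical pole mass and total width
BR_inv = 0.3               # assumed BR(a -> chi chibar)

def gamma_prod(m):         # smooth stand-in for gamma_{X -> Y a*}(m_{a*})
    return 1.0 / (1.0 + m**2)

def Gamma_dec(m):          # smooth stand-in for Gamma_{a* -> chi chibar}(m_{a*})
    return BR_inv * Gamma_a * m / m_a

def integrand(m2):         # integrand of Eq. (19), with the Breit-Wigner weight
    m = np.sqrt(m2)
    bw = 1.0 / ((m_a**2 - m2)**2 + Gamma_a**2 * m_a**2)
    return gamma_prod(m) * m * Gamma_dec(m) * bw / np.pi

lo, hi = (m_a - 50 * Gamma_a)**2, (m_a + 50 * Gamma_a)**2
full = quad(integrand, lo, hi, points=[m_a**2])[0]
nwa = gamma_prod(m_a) * BR_inv   # gamma(X -> Y a) * BR(a -> chi chibar)
print(full / nwa)                # -> 1 up to O(Gamma_a/m_a) and truncation corrections
```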
### \(2\to 2\) rates
Following Ref. [118], the thermally averaged cross section from a \(2\to 2\) process \(i_{1}i_{2}\to f_{1}f_{2}\) is
\[\langle\sigma v\rangle= \frac{T}{n_{i_{1}}^{\rm eq}n_{i_{2}}^{\rm eq}}\int\frac{ds\sqrt{ s}}{512\pi^{5}}K_{1}(\sqrt{s}/T)\lambda^{\frac{1}{2}}(\sqrt{s},m_{i_{1}},m_{i_{2}})\] \[\times \lambda^{\frac{1}{2}}(\sqrt{s},m_{f_{1}},m_{f_{2}})\int\frac{d \Omega_{i_{1},i_{2}}}{4\pi}\frac{d\Omega_{f_{1},f_{2}}}{4\pi}|\mathcal{M}|^{2}\,, \tag{119}\]
where \(d\Omega_{i,j}\) are taken in the \(i,j\) center of mass frame, \(K_{i}\) is the Bessel K function and
\[\lambda(a,b,c)\equiv\left(1-(b+c)^{2}/a^{2}\right)\left(1-(b-c)^{2}/a^{2}\right)\,. \tag{120}\]
A useful limit is the \(T\ll m\) limit, where
\[\frac{(2\pi)^{3}}{T^{2}e^{-2m/T}} \int ds\,K_{1}(\sqrt{s}/T)\left(1-\frac{4m^{2}}{s}\right)^{n}\] \[\xrightarrow{T\to 0}\ \ 16\pi^{\frac{7}{2}}\left(\frac{T}{m} \right)^{n-\frac{1}{2}}\Gamma(n+1)\,. \tag{121}\]
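A quick numerical sanity check of this limit is sketched below, with hypothetical values of \(m\), \(n\) and \(T\); the exponential prefactor of the left-hand side is absorbed into the integrand (via the scaled Bessel function) to avoid floating-point underflow at small \(T\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kve, gamma

m, n = 1.0, 1.5   # hypothetical mass and exponent

def integrand(s, T):
    # kv(1, x) = kve(1, x) * exp(-x); the exp(2m/T) prefactor of the LHS
    # is absorbed here so the integrand stays O(1) even for T << m
    return kve(1, np.sqrt(s) / T) * np.exp(-(np.sqrt(s) - 2 * m) / T) \
        * (1 - 4 * m**2 / s)**n

for T in (0.1, 0.05, 0.02):
    lhs = (2 * np.pi)**3 / T**2 * quad(integrand, 4 * m**2, np.inf, args=(T,))[0]
    rhs = 16 * np.pi**3.5 * (T / m)**(n - 0.5) * gamma(n + 1)
    print(T, lhs / rhs)   # ratio -> 1 as T/m -> 0
```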
### \(\chi\bar{\chi}\to aa\)
The summed and squared matrix element for \(\chi\bar{\chi}\to aa\) is
\[|\mathcal{M}_{\chi\bar{\chi}\to aa}|^{2}= \frac{2c_{\chi}^{4}m_{\chi}^{4}}{f_{a}^{4}}\left[(m_{\chi}^{2}-t) (m_{\chi}^{2}-u)-m_{a}^{4}\right]\] \[\times\left(\frac{1}{t-m_{\chi}^{2}}-\frac{1}{u-m_{\chi}^{2}} \right)^{2}\,. \tag{122}\]
A general closed form expression for the thermally averaged cross section is difficult to find.
We use Eqs. (119) and (122) to obtain
\[\langle\sigma v\rangle_{\chi\bar{\chi}\to aa}= \frac{4c_{\chi}^{4}m_{\chi}^{4}}{512\pi^{5}f_{a}^{4}}\frac{T}{(n _{\chi}^{\rm eq})^{2}}\] \[\times \int ds\sqrt{s}K_{1}(\sqrt{s}/T)\left(\beta-\tan^{-1}\beta\right)\,, \tag{123}\]
where \(\beta\equiv\sqrt{1-4m_{\chi}^{2}/s}\) is the center of mass velocity of the \(\chi\) and \(\bar{\chi}\). In the limit that \(T\ll m_{\chi}\), the velocity is small, \(\beta\ll 1\), and the integral is dominated near \(s=4m_{\chi}^{2}\). The thermally averaged cross section is then
\[\langle\sigma v\rangle_{\chi\bar{\chi}\to aa}=\frac{c_{\chi}^{4}m_{\chi}^{2}}{ 1536\pi^{5}f_{a}^{4}}\frac{T}{(n_{\chi}^{\rm eq})^{2}}\!\int\!\!dsK_{1}(\sqrt {s}/T)\beta^{3}. \tag{124}\]
Using the limit in Eq. (121), we find the thermally averaged cross section to be
\[\langle\sigma v\rangle_{\chi\bar{\chi}\to aa}\simeq\frac{c_{\chi}^{4}m_{\chi}^ {2}}{64\pi f_{a}^{4}}\frac{T}{m_{\chi}}\,. \tag{125}\]
## Appendix B Freeze-in
First, we consider DM production via \(a\) decay, namely \(a\to\chi\bar{\chi}\). Ignoring the annihilation of \(\chi\) particles from inverse decays, the Boltzmann equation becomes
\[\dot{n}_{\chi}+3Hn_{\chi}=2\left\langle\frac{m}{E}\right\rangle_{a}\Gamma_{a \to\chi\bar{\chi}}n_{a}\,, \tag{126}\]
where
\[\left\langle\frac{m}{E}\right\rangle_{a}=\frac{g_{a}}{n_{a}}\int\frac{d^{3}p} {(2\pi)^{3}}\frac{m_{a}}{E_{a}}f_{a}=\frac{K_{1}(m_{a}/T)}{K_{2}(m_{a}/T)}\,, \tag{127}\]
is the thermally averaged time dilation factor, with \(g_{a}=1\) the number of \(a\) degrees of freedom. In the last step we assume Maxwell-Boltzmann statistics.
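Eq. (127) can be verified directly from the Maxwell-Boltzmann average; a minimal sketch with hypothetical values of \(m_{a}\) and \(T\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

m, T = 1.0, 0.5   # hypothetical axion mass and temperature, GeV
E = lambda p: np.hypot(p, m)   # relativistic energy sqrt(p^2 + m^2)

# MB-averaged time-dilation factor <m/E> computed from its definition
num = quad(lambda p: p**2 * (m / E(p)) * np.exp(-E(p) / T), 0, np.inf)[0]
den = quad(lambda p: p**2 * np.exp(-E(p) / T), 0, np.inf)[0]
print(num / den, kn(1, m / T) / kn(2, m / T))   # the two values agree
```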
In terms of the DM and axion yields, where \(Y_{a}=n_{a}/s\) and \(Y_{\chi}=n_{\chi}/s\) with \(s\) the entropy density, Eq. (126) becomes
\[T\,H\,\frac{dY_{\chi}}{dT}=-2\left\langle\frac{m}{E}\right\rangle_{a}\Gamma_{a\to\chi\bar{\chi}}Y_{a}\,. \tag{128}\]
Therefore, the late time solution is given by
\[Y_{\chi}(\infty)=2\int_{0}^{T_{\rm RH}}dT\left\langle\frac{m}{E}\right\rangle_{a}\frac{\Gamma_{a\to\chi\bar{\chi}}Y_{a}}{TH}\,. \tag{129}\]
For the \(a\) bath in equilibrium, that is \(n_{a}=n_{a}^{\rm eq}\), this integral will be dominated in the IR near \(T\sim m_{a}\). Assuming that \(g_{*}\) and \(g_{*s}\) are not changing rapidly near \(T\sim m_{a}\), and that \(m_{a}<T_{\rm RH}\) an approximate solution can be obtained [12]
\[Y_{\chi}(\infty)=\frac{0.66\ g_{a}}{g_{*s}(m_{a})\sqrt{g_{*}(m_{a})}}\frac{m_{ \rm pl}\Gamma_{a}}{m_{a}^{2}}\,. \tag{130}\]
Alternatively, it is possible that the \(a\) abundance has frozen out to a constant value, \(Y_{a}=\text{constant}\), before it decays, in which case
\[Y_{\chi}(\infty)=2Y_{a}{\rm BR}(a\to\chi\bar{\chi})\,. \tag{131}\]
## Appendix C Additional model details
### Axion branching ratios and decay widths
#### c.1.1 \(a\to\chi\bar{\chi}\)
The branching ratio of the axion decay to DM for different values of \(m_{\chi}\) is shown in Fig. 5. The nontrivial features in the \(m_{a}\sim 100\,{\rm MeV}-2\,{\rm GeV}\) range mostly arise from the \(\pi^{0}\), \(\eta\), and \(\eta^{\prime}\) resonances. These features are evident in the freeze-in and freeze-out curves depicted in Figs. 2 and 3. The slight kink in the curves at \(m_{a}=3\,{\rm GeV}\) is due to a transition between two approximations of \(\alpha_{s}\).
#### c.1.2 \(a\to gg\)
At \(m_{a}\gtrsim 2\,\mathrm{GeV}\), where QCD becomes perturbative, the axion decay width to SM particles is given by its decay width to gluons. This rate is given in Ref. [119] to one-loop order
\[\Gamma_{a\to gg}=\frac{\alpha_{s}^{2}m_{a}^{3}}{32\pi^{3}f_{a}^{2}}\bigg{(}1+ \frac{83\alpha_{s}}{4\pi}\bigg{)}\,. \tag{10}\]
#### c.1.3 \(a\to\gamma\gamma\)
The axion-photon coupling is given by the coefficient \(c_{\gamma}\) of the dimension-5 operator given in Eq. (2). Even when the bare axion-photon coupling vanishes at the UV scale \(\Lambda\), two-loop contributions generate \(c_{\gamma}\) at lower scales. At \(m_{a}\ll m_{\pi}\) the leading contribution comes from a chiral rotation of the \(u,d,s\) quarks [40; 42]
\[c_{\gamma}\simeq 1.92\pm 0.04,\quad m_{a}\lesssim m_{\eta^{\prime}}\,. \tag{11}\]
The \(u,d,s\) quarks have additional contributions at axion masses \(m_{a}\lesssim 2.1\,\mathrm{GeV}\) originating from the \(a-P\) mixing and vector-meson photon mixing. At higher masses, \(m_{a}\geq 2.1\,\mathrm{GeV}\) for \(u,d,s\) and \(m_{a}\geq 1.6\,\mathrm{GeV}\) for \(c,b,t\), the quark loops can be treated using perturbative QCD (pQCD). When the bare axion-quark couplings vanish, this results in a contribution of order \(\mathcal{O}\big{(}\alpha_{s}^{2}\log(f_{a})\big{)}\). A full quantitative discussion of the various terms contributing to \(c_{\gamma}\) can be found in Ref. [41], where the extension to masses \(m_{a}\geq 3\,\mathrm{GeV}\) is found by replacing the pQCD contributions with the exact loop form factors found in Ref. [51].
Considering the effective axion photon interaction of Eq. (2), the axion partial width into photons is given by
\[\Gamma_{a\to\gamma\gamma}=\frac{\alpha_{\mathrm{EM}}^{2}m_{a}^{3}}{(8\pi)^{3} f_{a}^{2}}|c_{\gamma}|^{2}\,. \tag{12}\]
### Meson decay rate to axion
#### c.2.1 \(V\to\gamma\,a\)
Ref. [51] provides a calculation of the branching ratio for the decay of quarkonium \(V\), which is composed of quarks \(q\bar{q}\), into a photon and an axion. The calculation takes into account one-loop radiative corrections and finds the expression
\[\frac{\mathrm{BR}(V\to\gamma a)}{\mathrm{BR}(V\to e^{+}e^{-})} \approx\frac{3m_{V}^{2}xQ_{q}^{2}}{8\alpha_{\mathrm{EM}}f_{a}^{2}(3\pi- \alpha_{s})}\] \[\times\bigg{|}c_{qq}\bigg{(}1-\frac{2\alpha_{s}a_{P}(x)}{3\pi} \bigg{)}-\frac{c_{\gamma}\alpha_{s}x}{\pi}\bigg{|}^{2}, \tag{13}\]
where \(c_{qq}\) is the axion-quark coupling at the scale \(m_{q}\), \(Q_{q}\) is the quark's electric charge, \(x=1-m_{a}^{2}/m_{V}^{2}\) and \(a_{P}(x)\) is a dimensionless monotonically increasing function of \(x\), ranging from \(a_{P}(0)=2\) to \(a_{P}(1)\simeq 6.62\); see Ref. [51] for details. In our case, where \(c_{qq}(\Lambda)=0\), \(c_{qq}\) is generated from RG running from \(\Lambda\) to \(m_{q}\). Following Ref. [51], we can write
\[c_{cc}(m_{c})\simeq-0.02\,,\qquad c_{bb}(m_{b})\simeq-0.04\,, \tag{14}\]
which correspond to running from \(\Lambda=4\pi\,\mathrm{TeV}\). This approximation may lead to \(\mathcal{O}(1)\) corrections to the bounds we have presented for \(J/\psi\to\gamma\,a\) and \(\Upsilon\to\gamma\,a\).
#### c.2.2 \(B\to K\,a\)
To recast the \(B\) meson decays we use the result from Ref. [66]
\[\Gamma_{B\to Ka}= \frac{\left|C_{W}\right|^{2}m_{B}^{3}}{64\pi f_{a}^{2}}\left(1- \frac{m_{K}^{2}}{m_{B}^{2}}\right)^{2}\lambda(m_{B},m_{K},m_{a})^{1/2}\] \[\times\left[\frac{0.330}{1-m_{a}^{2}/37.5\,\mathrm{GeV}^{2}} \right]^{2}, \tag{15}\]
where \(m_{B}\,(m_{K})\) is the \(B\)-meson (kaon) mass and \(C_{W}\) is a dimensionless constant multiplying the axion-bottom-strange vertex. A non-zero value of \(C_{W}\) is induced by RG flow of the \(aG\tilde{G}\) coupling from the UV scale \(\Lambda\) to the electroweak scale (taken to be approximately the \(W\)-boson mass \(m_{W}\)). Motivated by the analytical form of \(C_{W}\), which was calculated to two-loop order in Ref. [66], we approximate for \(\Lambda>m_{W}\)

\[C_{W}(\Lambda)=0.2257-0.03428\log\left(\frac{\Lambda}{\text{GeV}}\right)-0.0014\log^{2}\left(\frac{\Lambda}{\text{GeV}}\right)\,, \tag{10}\]

which holds at the \(1\,\%\) level compared with the full result in Fig. 3 of [66].

Figure 5: **Branching ratio for the axion decay into DM.** Values of \(\mathrm{BR}(a\to\chi\bar{\chi})\) for different DM masses \(m_{\chi}\), ranging from \(10\,\mathrm{keV}\) to \(10\,\mathrm{GeV}\), each depicted by a different colored curve.
In cases where \(\Lambda<m_{W}\), we discard the corresponding bound. This is justified as other constraints are typically stronger.
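For orientation, a minimal sketch evaluating the fit of Eq. (10) at the UV scale \(\Lambda=8\pi f_{a}\) used in this work; the value of \(f_{a}\) below is purely illustrative.

```python
import numpy as np

def C_W(Lambda_GeV):
    """Two-loop-motivated fit of Eq. (10) for C_W(Lambda), valid for Lambda > m_W."""
    L = np.log(Lambda_GeV)
    return 0.2257 - 0.03428 * L - 0.0014 * L**2

f_a = 1.0e3                     # GeV; hypothetical decay constant
print(C_W(8 * np.pi * f_a))     # ~ -0.27 for Lambda ~ 25 TeV
```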
### \(a\bar{N}N\) and \(a\bar{p}n\pi^{+}\) interactions
Following Refs. [120; 121; 90; 122], the axion-nucleon and axion-nucleon-pion interactions can be described by the effective Lagrangian
\[\begin{split}\mathcal{L}\supset&\frac{\partial_{ \mu}a}{2m_{N}}\Big{[}g_{ap}\bar{p}\gamma^{\mu}\gamma_{5}p+g_{an}\bar{n}\gamma^ {\mu}\gamma_{5}n+\\ &+\frac{g_{ap}-g_{an}}{\sqrt{2}g_{A}f_{\pi}}\left(i\pi^{+}\bar{p} \gamma^{\mu}n-i\pi^{-}\bar{n}\gamma^{\mu}p\right)\Big{]}\,,\end{split} \tag{11}\]
where \(m_{N}\) is the nucleon mass and \(g_{A}\) is a constant. Ref. [51] calculates the values of these coefficients
\[\begin{split} g_{ap}=&\frac{m_{N}}{2f_{a}}\bigg{(}g _{0}+g_{A}\delta_{I}\frac{m_{\pi_{0}}^{2}}{m_{\pi_{0}}^{2}-m_{a}^{2}+im_{\pi_{ 0}}\Gamma_{\pi_{0}}}\bigg{)},\\ g_{an}=&\frac{m_{N}}{2f_{a}}\bigg{(}g_{0}-g_{A} \delta_{I}\frac{m_{\pi_{0}}^{2}}{m_{\pi_{0}}^{2}-m_{a}^{2}+im_{\pi_{0}}\Gamma_ {\pi_{0}}}\bigg{)},\end{split} \tag{12}\]
where \(\delta_{I}\equiv\frac{m_{d}-m_{u}}{m_{d}+m_{u}}\), \(\Gamma_{\pi_{0}}\) is the \(\pi_{0}\) decay width, and the values \(g_{A}\simeq 1.25\), \(g_{0}\simeq 0.44\) are taken from Ref. [123].
## Appendix D Bounds from terrestrial experiments
### Matching signal probabilities
We use the following simplified procedure to recast terrestrial constraints (meson decays, colliders and beam dumps), see _e.g._[124]. We assume a measurement, which excludes a process \(X\to Y\,(a\to f)\) with \(X,Y,f\) denoting SM states and the on-shell ALP decay \(a\to f\) occurring between times \(\tau_{1}<\tau_{2}\) in the axion rest frame. (\(\tau_{1},\tau_{2}\) are related to geometric lengths \(L\) in the detector by \(\tau=L/(\beta\gamma c)\), where \(\beta,\gamma\) are the Lorentz transformation parameters and \(c\) is the speed of light). Assuming the measurement places a bound on the occurrence of more than \(N\) such events, we find that \(N\) factorizes into
\[N=N_{X}\times p(X\to Y\,a\mid X)\times p_{\text{detect}}\,, \tag{13}\]
where \(N_{X}\) is the total number of \(X\) states produced by the experiment (which is typically independent of the new physics), \(p(X\to Y\,a\mid X)\) is the probability of the process \(X\to Y\,a\) to occur given the initial state is \(X\), and \(p_{\text{detect}}\) is the probability of detecting the final state \(Y\,(a\to f)\) (which depends on \(\tau_{1},\tau_{2}\) and the branching ratio \(\text{BR}(a\to f)\)) in addition to other experimental factors such as geometrical acceptance. Typically, \(p(X\to Y\,a\mid X)\propto f_{a}^{-2}\times\mathcal{O}(\text{polylog}(f_{a}))\) and in the case where \(X\) is a single particle state \(p(X\to Y\,a\mid X)=\text{BR}(X\to Y\,a)\). We recast bounds by comparing the modifications introduced by our model to Eq. (13) with respect to the original analyses from which the bounds are obtained.
In cases where \(f\) is a visible state,
\[p_{\text{detect}}^{\text{visible}}=\text{BR}(a\to f)\Big{(}e^{-\frac{\tau_{1} }{\tau_{a}}}-e^{-\frac{\tau_{2}}{\tau_{a}}}\Big{)}p_{\text{Eff}}(Y\,f)\,, \tag{14}\]
where \(\tau_{a}\) is the axion proper lifetime and \(p_{\text{Eff}}(Y\,f)\) is the detection efficiency of \(Y\,f\) states which is assumed to encapsulate any additional experimental factors affecting the detection of \(f\). Throughout we assume \(p_{\text{Eff}}\) is independent of new physics.
In cases where \(f\) is an invisible final state (_e.g._ appearing as missing energy), \(\tau_{2}=\infty\) and the probability takes the form
\[p_{\text{detect}}^{\text{invisible}}=(1-B)+B\Big{[}1-(1-e^{-\frac{\delta}{B}})p_{\text{Eff}}\Big{]}\,, \tag{15}\]
where \(B=\text{BR}(a\to\text{visible})\), \(\delta=\Gamma_{a\to\text{visible}}\tau_{1}\) (note that \(\Gamma_{a}=\Gamma_{a\to\text{visible}}/B\)) and \(p_{\text{Eff}}\) is the efficiency of detecting the visible decay modes. When recasting searches for invisible final states we place conservative bounds by taking \(p_{\text{Eff}}=1\) which ignores additional contributions from undetected axions decaying visibly within the decay volume.
Note that the addition of an invisible decay mode always results in a higher probability \(p_{\text{detect}}^{\text{invisible}}\). The additional decay mode decreases \(B\) while \(\delta\) and \(p_{\text{Eff}}\) remain constant; thus, it is sufficient to show that \(p(B)\) is monotonically decreasing in \(B\). We find
\[\frac{\partial p}{\partial B}=p_{\text{Eff}}\bigg{[}-1+\bigg{(}1+\frac{\delta}{B}\bigg{)}e^{-\frac{\delta}{B}}\bigg{]}<0\,, \tag{16}\]
where we have used the positivity of \(\delta\), \(\sup_{x\in\mathbb{R}_{+}}\left\{(1+x)e^{-x}\right\}<1\) and \(0<p_{\text{Eff}}\leq 1\).
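The same monotonicity can be confirmed numerically; a short sketch with arbitrary illustrative values of \(\delta\) and \(p_{\rm Eff}\):

```python
import numpy as np

def p_detect_inv(B, delta, p_eff):
    """Detection probability of Eq. (15) for an invisible final state."""
    return (1 - B) + B * (1 - (1 - np.exp(-delta / B)) * p_eff)

B = np.linspace(1e-3, 1.0, 1000)
p = p_detect_inv(B, delta=0.1, p_eff=0.8)
assert np.all(np.diff(p) < 0)   # monotonically decreasing in B, as shown above
```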
### Approximating \(\tau_{1},\tau_{2}\)
An experiment's dimensions allow us to relate \(\tau_{1}\) to \(\tau_{2}\). In particular, if we denote the distance the axion travels to the decay volume as \(z_{\text{DV}}\) and the length of the decay volume where the axion is detected as \(\ell_{\text{DV}}\) we find
\[\frac{\tau_{2}-\tau_{1}}{\tau_{1}}=\frac{\ell_{\text{DV}}}{z_{\text{DV}}}\,. \tag{17}\]
In the simple cases where \(z_{\text{DV}}=0\) we take \(\tau_{1}=0\) and where \(\ell_{\text{DV}}=\infty\) (as is the case for missing energy) we take
\(\tau_{2}=\infty\). The values of \(\ell_{\rm DV},z_{\rm DV}\) we have used for each of the experiments considered in this work are consolidated in Table 1.
We use the following approximations for \(\tau_{1}\) and \(\tau_{2}\) when relevant:
* For a given \(m_{a}\), when the bounds in the original analyses exclude \(f_{a}\) in a certain range, \(f_{a}^{\rm min}<f_{a}<f_{a}^{\rm max}\), we can numerically find \(\tau_{1},\tau_{2}\) by equating Eq. (13) for the upper and lower bounds, \(N\big{|}_{f_{a}^{\rm max},m_{a}}=N\big{|}_{f_{a}^{\rm min},m_{a}}\), and then using Eq. (14) to find the bounds on our model.
* When the lab frame and the rest frame of the particle \(X\) are approximately the same, we can approximate the boost of the axion. We ignore the angular distribution of the final states. In particular, for \(X,Y\) that are single-particle states, the axion's boost and \(\tau_{1},\tau_{2}\) are given by \[\gamma\beta=\frac{m_{X}}{2m_{a}}\lambda^{1/2}(m_{X},m_{Y},m_{a})\,,\] (47) and \[\tau_{1}=\frac{z_{\rm DV}}{c\gamma\beta}\,,\qquad\tau_{2}=\frac{z_{\rm DV}+ \ell_{\rm DV}}{c\gamma\beta}\,,\] (48) where \(\lambda\) is defined in Eq. (120).
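As an illustration of Eqs. (47) and (48), the sketch below computes the boost and the proper-time window for a two-body decay at rest; the masses and decay-volume dimensions are hypothetical placeholders.

```python
import numpy as np

C_LIGHT = 3.0e8   # m/s

def lam(a, b, c):
    """lambda(a, b, c) of Eq. (120): (1 - (b+c)^2/a^2)(1 - (b-c)^2/a^2)."""
    return (1 - (b + c)**2 / a**2) * (1 - (b - c)**2 / a**2)

def tau_window(m_X, m_Y, m_a, z_DV, l_DV):
    """Proper-time window [tau1, tau2] of Eq. (48) for X -> Y a, X at rest."""
    gamma_beta = m_X / (2 * m_a) * np.sqrt(lam(m_X, m_Y, m_a))   # Eq. (47)
    return z_DV / (C_LIGHT * gamma_beta), (z_DV + l_DV) / (C_LIGHT * gamma_beta)

# Hypothetical example: B -> K a with m_a = 1 GeV and a 5 mm prompt decay volume
print(tau_window(5.279, 0.494, 1.0, z_DV=0.0, l_DV=5e-3))   # (0.0, ~6.6e-12 s)
```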
### Meson decays
Meson decay widths and their branching fractions may be modified in the presence of an axion-gluon coupling. Since we have taken all axion-SM couplings except the axion-gluon coupling to vanish at the UV scale \(\Lambda=8\pi f_{a}\), some of these effects are a result of RG running.
The bounds \(B\to K\,(a\to 3\pi)\), \(B\to K\,(a\to\eta\pi\pi)\), \(B\to K(a\to KK\pi)\) and \(B\to K\,(a\to\phi\phi)\) taken from Ref. [66] are corrected for RG running as is described above in Appendix C.2.2. The bound \(B^{+}\to K^{+}\,(a\to\mu^{+}\mu^{-})\) taken from Ref. [51] is not corrected for RG running since it is already calculated by running from the approximately correct scale \(\Lambda=4\pi\,{\rm TeV}\). We have not taken into account the NLO effects of the RG flow in the various \(K\to\pi\,a\) processes. In quarkonia decays, \(V\to\gamma\,a\) RG effects are not accounted for, and the branching ratios are calculated using the approximation described in Appendix C.2.2.
When recasting the BaBar and Belle bounds we have assumed a prompt ALP decay corresponding to \(z_{\rm DV}=0\) and \(\ell_{\rm DV}=5\,{\rm mm}\) as was suggested in Ref. [126]. The BESIII analysis on BR(\(J/\psi\to\gamma\,a\)) in Ref. [70] mentions only that the photon-coupled axion has a negligible decay width. Since any modifications our model introduces are only expected to make the ALP lifetime shorter, we place a conservative bound by ignoring the finite length of the BESIII detector decay volume.
### Beam Dumps
We present constraints from di-photon measurements in proton beam dumps from the NuCal [54; 55] and CHARM [56; 57; 58] collaborations, which are analyzed in Ref. [53]. In addition, we show constraints from the electron beam dumps E137 [59] and E141 [60], analyzed in Ref. [61].
Table 1: Summary of the parameters used to recast the terrestrial experiments. \(z_{\rm DV}\) is the distance the axion must travel to the decay volume, \(\ell_{\rm DV}\) is the length of the decay volume, and the RG-flow column indicates whether RG-flow corrections to the couplings were applied.

| Observable | References | \(z_{\rm DV}\) | \(\ell_{\rm DV}\) | RG flow |
| --- | --- | --- | --- | --- |
| BR(\(B^{+}\to K^{+}a(\mu^{+}\mu^{-})\)) | [51; 62] (LHCb) | 0 | 0.74 m | No |
| BR(\(K^{+}\to\pi^{+}a(\gamma\gamma)\)) | [51; 64] (E949) | 0 | 1.45 m | No |
| BR(\(K^{+}\to\pi^{+}a(\gamma\gamma)\)) | [51; 63] (NA62) | 0 | 140 m | No |
| BR(\(K^{+}\to\pi^{+}a(\nu\bar{\nu})\)) | [50; 51] (NA62) | 140 m | \(\infty\) | No |
| BR(\(B\to Ka(3\pi)\)) | [66; 67] (Belle) | 0 | 5 mm | Yes |
| BR(\(B\to Ka(\eta\pi\pi)\)) | [66; 69] (BaBar) | 0 | 5 mm | Yes |
| BR(\(B\to Ka(KK\pi)\)) | [66; 68] (BaBar) | 0 | 5 mm | Yes |
| BR(\(B\to Ka(\phi\phi)\)) | [66; 69] (BaBar) | 0 | 5 mm | Yes |
| BR(\(B\to Ka(\gamma\gamma)\)) | [66; 69] (BaBar) | 0 | 5 mm | Yes |
| BR(\(J/\psi\to\gamma a(\gamma\gamma)\)) | [70; 82] (BESIII) | 0 | \(\infty\) | No |
| BR(\(\Upsilon\to\gamma a(\text{hadrons})\)) | [51; 65] (BaBar) | 0 | 5 mm | No |
| BR(\(\text{SM}\to\text{SM}\;a(\gamma\gamma)\)) | [53; 54] (NuCal) | 64 m | 23 m | No |
| BR(\(\text{SM}\to\text{SM}\;a(\gamma\gamma)\)) | [53; 56; 57; 58] (CHARM) | 480 m | 35 m | No |
| E137 | [59; 125] (E137) | 179 m | 204 m | No |
| E141 | [60; 125] (E141) | 12.16 cm | 35 m | No |
| LLP in EMD | [86; 87] (CMS) | 4 m | 3 m | No |
| \(pp\to(a\to\gamma\,\gamma)\) | [71; 72; 73; 74; 75; 76; 77; 78; 79; 82; 85] (CMS, ATLAS) | 0 | 1 mm | No |
| \(pp\to(a\to j\,j)\) | [74; 85] (CMS) | 0 | 1 mm | No |
These bounds are valid for the case where there are no invisible decays. For \(2m_{\chi}<m_{a}\), where invisible decays are present, we recast these bounds by numerically finding \(\tau_{1}\) and \(\tau_{2}\) as described in Appendix D.2.
### Colliders
We have used the LHC di-photon and di-jet bounds analyzed in Refs. [81; 85] based on the measurements performed by the CMS [71; 72; 73; 74; 75; 76; 77] and ATLAS [78; 79; 80] collaborations. In both cases, the original analysis assumed a GUT-inspired model, with an ALP coupling to all three SM forces proportional to their coupling constants. We have approximated the di-jet production to be completely governed by the gluon coupling, requiring only a rescaling of their gluon coupling. In di-photon searches, the narrow-width approximation is assumed, where the cross-section factorizes to \(\sigma(p\,p\to a){\rm BR}(a\to\gamma\,\gamma)\). We have approximated \(\sigma(p\,p\to a)\) to depend only on the gluon coupling and recast the bounds by correcting for the significantly smaller \({\rm BR}(a\to\gamma\,\gamma)\) present in our model. In both cases, we considered prompt ALP decays with \(z_{\rm DV}=0\) and \(\ell_{\rm DV}=1\,{\rm mm}\).
Ref. [87] presented additional bounds on detection of long lived particles in the muon detection system of CMS [86]. We approximated the detector dimensions in this case as \(z_{\rm DV}=4\,{\rm m}\), \(\ell_{\rm DV}=3\,{\rm m}\) ignoring any angular information, which may lead to an \(\mathcal{O}(1)\) uncertainty on the bounds.
## Appendix E Astrophysical and cosmological bounds
### Supernovae
We present the bounds from Ref. [90], which considers the effects of axion-nucleon and axion-nucleon-pion couplings, see Eq. (10), and axion-photon couplings, see Eq. (11), on various observables. Among them, we consider SN1987A cooling, ALP energy deposition in the mantle, non-observation of \(\gamma\)-rays from SN1987A, the diffuse SN ALP background (DSNALPB), and the expected \(\gamma\)-ray halo resulting from gravitational trapping of ALPs in Cassiopeia A. For the trapping regime (upper bound) of SN1987A cooling we use the estimate \(g_{ap}\leq 3\times 10^{-9}\). For the rest of the observables, we apply exclusions only for the region displayed in Fig. 4 of [90].
There are two main assumptions used in the analysis that deviate from our model. The first assumption is that \(g_{ap}\gg g_{an}\). This assumption breaks down in our model when \(m_{a}\sim m_{\pi}\) where Eq. (11) dictates \(g_{ap}\sim g_{an}\). Since the majority of the axions are produced from \(N\,\pi\,\to N\,a\), we have recast the bounds on \(g_{ap}\) in Ref. [90] to bounds on \(g_{ap}-g_{an}\) in our model Eq. (12).
The second assumption that requires altering is that the axion decays only to photons. For the EFT considered in this work, the axion decays predominantly to other states at masses \(m_{a}>{\rm min}\,\{3m_{\pi},2m_{\chi}\}\). To account for this we disregarded all bounds except SN1987A cooling for \(m_{a}>{\rm min}\,\{3m_{\pi},2m_{\chi}\}\). We find that the free-streaming regime (lower bound) of the SN1987A cooling bounds remains the same when the ALP decays invisibly while the trapping regime (upper limit) is expected to change significantly when a fraction of the axions decay to DM. As the mean free path of DM in the SN core (that scales as \(\sim f_{a}^{-4}\)) is much larger than the axion's mean free path (that scales as \(\sim f_{a}^{-2}\)), we expect the trapping regime to extend to much larger couplings that are already excluded by terrestrial searches. For simplicity, when \(m_{a}>2m_{\chi}\) we show exclusions for all couplings larger than those of the free-streaming bounds of SN1987A cooling, as can be seen in the upper panels of Fig. 3. A dedicated analysis of low energy SN may add complementary bounds in the large coupling regime [108].
### Neutron Star Heating
The presence of an axion-gluon portal may have consequences that can be observed in stellar dynamics. Ref. [111] has placed bounds on such a model by considering the DM-induced kinetic heating of neutron stars. The bounds are presented only for the mass ratios \(m_{a}/m_{\chi}=1,\,1/10\). As the \(\chi-N\) cross-section is expected to be \(m_{a}\) independent at \(m_{a}\ll m_{\chi}\) we approximate the bounds at \(m_{a}/m_{\chi}<1/10\) to be the same as the bounds at \(m_{a}/m_{\chi}=1/10\). For \(1/10<m_{a}/m_{\chi}<1\) we use a second-order polynomial interpolation to approximate the bounds.
### BBN
The presence of a non-negligible abundance of long-lived axions during BBN may have measurable effects. Bounds considering the effects of an ALP with mass \(m_{a}<100\,{\rm MeV}\), which decays only to photons, have been placed in Refs. [127; 128; 49; 89]. We recast results from Ref. [89], which are agnostic to the production mechanism, to our model where the ALP is produced from other bath particles such as gluons. For such masses, the axion either freezes in at \(T\sim T_{\rm RH}\) to an abundance given by Eq. (13) or freezes out when \(\Gamma_{a\,{\rm SM}\to{\rm SM}}\sim H\) at temperatures \(T\lesssim m_{a}\) which are larger than \(T_{\rm BBN}\simeq 1\,{\rm MeV}\). Recasting the results presented in Ref. [89] is then straightforward, as each pair \((m_{a},f_{a})\) determines the axion lifetime \(\tau_{a}\) and initial abundance \(\left.m_{a}Y_{a}\right|_{T=T_{\rm BBN}}\). |
2310.17863 | Dimensionally Homogeneous Jacobian using Extended Selection Matrix for
Performance Evaluation and Optimization of Parallel Manipulators | This paper proposes a new methodology for deriving a point-based
dimensionally homogeneous Jacobian, intended for performance evaluation and
optimization of parallel manipulators with mixed degrees of freedom. Optimal manipulator design often relies on performance indices obtained from the Jacobian matrix. However, when manipulators exhibit mixed translational and rotational freedoms, the conventional Jacobian's inconsistency of units leads to unbalanced optimization results. Addressing this issue, a point-based dimensionally homogeneous Jacobian has emerged as a prominent solution. However, existing point-based approaches for formulating a dimensionally homogeneous Jacobian are applicable to only a limited variety of parallel manipulators. Moreover, they are complicated and less intuitive. This paper introduces an extended selection matrix that combines component velocities from different points to describe the entire motion of the moving plate. The proposed approach enables us to formulate an intuitive point-based, dimensionally homogeneous Jacobian, which can be applied to a wide variety of constrained parallel manipulators. To prove the validity of the proposed method, a numerical example is provided utilizing a four-degree-of-freedom parallel manipulator. | Hassen Nigatu, Doik Kim | 2023-10-27T02:37:15Z | http://arxiv.org/abs/2310.17863v1 | Dimensionally Homogeneous Jacobian using Extended Selection Matrix for Performance Evaluation and Optimization of Parallel Manipulators
###### Abstract
This paper proposes a new methodology for deriving a point-based dimensionally homogeneous Jacobian, intended for performance evaluation and optimization of parallel manipulators with mixed degrees of freedom. Optimal manipulator design often relies on performance indices obtained from the Jacobian matrix. However, when manipulators exhibit mixed translational and rotational freedoms, the conventional Jacobian's inconsistency of units leads to unbalanced optimization results. Addressing this issue, a point-based dimensionally homogeneous Jacobian has emerged as a prominent solution. However, existing point-based approaches for formulating a dimensionally homogeneous Jacobian are applicable to only a limited variety of parallel manipulators. Moreover, they are complicated and less intuitive. This paper introduces an extended selection matrix that combines component velocities from different points to describe the entire motion of the moving plate. The proposed approach enables us to formulate an intuitive point-based, dimensionally homogeneous Jacobian, which can be applied to a wide variety of constrained parallel manipulators. To prove the validity of the proposed method, a numerical example is provided utilizing a four-degree-of-freedom parallel manipulator.
Dimensionally homogeneous Jacobian, Selection matrix, Inverse Jacobian, Parallel manipulator, Performance indices
Footnote †: This work was supported by Korea Institute of Science and Technology (KIST), under Grant 2E32302.
## 1 Introduction
Performance evaluation and obtaining optimized architectural parameters are vital steps in the design of parallel manipulators (PMs), as these significantly influence the effectiveness and accuracy of a robot's movements. The challenge lies in performing these tasks when the manipulator's degrees of freedom (DoFs) are a combination of rotational and translational types. This is primarily due to the inconsistency in the units or dimensions of the Jacobian, a factor that significantly affects the performance-measuring indices of parallel manipulators [1, 2].
Several approaches have been suggested to address this problem [3, 4], and among them, Jacobian-based methods have been widely used. This popularity can be attributed to their capability to effectively translate the inherent mapping from joint velocities to end-effector velocities, providing a good intuitive framework [1, 5, 6]. There is also a variety of Jacobian-based approaches for homogenizing the units of the Jacobian matrix [4, 7]. Among these approaches, the point-based approach is more intuitive [6, 8]. Despite this advantage, the first point-based dimensionally homogeneous Jacobian (DHJ) formulations comprise dependent motions in their entries, resulting in a condition number with unclear physical meaning and potentially erroneous results [6].
In response to this problem, Pond et al. [6] proposed a method to eliminate the undesired dependent motions from the system. However, the method is quite complicated to comprehend and involves tedious derivative procedures, which leads to a higher computation cost [8]. To overcome this issue, the selection matrix with the shifting property and the conventional Jacobian were used to formulate a point-based dimensionally homogeneous Jacobian matrix [8]. However, this previous paper by the authors focused on a specific scenario where each component's velocity encompasses the desired motion of the moving plate, such as 1T2R PMs with \(TzRxRy\) type of motion.
Considering the aforementioned limitations, this paper formulates an \(f\times f\) point-based DHJ matrix, with \(f\) representing the DoF of the mechanism. This Jacobian matrix maps the platform's nominal linear velocity to the joint rate. Here, nominal linear velocity refers to the velocity obtained by combining component velocities from different points, which can represent the entire motion of the moving plate. This approach integrates the extended selection matrix, the linear velocity of points on the moving plate and the manipulator's conventional Jacobian, resulting in a square DHJ. The dimensional homogeneity of the resulting Jacobian is analytically proven. To validate the correctness of the proposed method, a numerical comparison is carried out using a four-degree-of-freedom parallel manipulator as an example. First, the distribution of the condition number is evaluated across the manipulator's rotational workspace, highlighting the disparity in the condition number values of the conventional and dimensionally homogeneous Jacobians. Then, the units of the geometric parameters are changed from millimeters to meters, and the condition number is reassessed to determine if it is invariant under unit changes.
## 2 Formulation of the Dimensionally Homogeneous Jacobian
The derivation of the DHJ involves the following steps. First, the screw-based constraint-embedded inverse Jacobian of the manipulator is formulated and inverted to get the constraint-compatible forward relation. Then, points that might adequately represent the motion of the moving plate are chosen and related to the Cartesian velocity. Next, the extended selection matrix is derived and applied to the points' linear velocity. This combines components from different points to effectively describe the moving plate's motion, while also eliminating unwanted or dependent components from the equation. The resulting velocity is termed the nominal linear velocity. Finally, the nominal linear velocity of the moving plate and the forward velocity equation are related with an \(f\times f\) dimensionally homogeneous Jacobian matrix.
### Constraint-Embedded Velocity Relation
The screw-based Jacobian of the manipulator can be analytically obtained using the method introduced in [5, 9, 10, 11]. Given the task velocity, \(\dot{\mathbf{x}}\), of the moving plate, the general inverse velocity equation of the parallel manipulator has the following form.
\[\begin{bmatrix}\dot{\mathbf{q}}\\ \mathbf{0}\end{bmatrix}=\begin{bmatrix}\mathbf{G}_{a}^{T}\\ \mathbf{G}_{c}^{T}\end{bmatrix}\dot{\mathbf{x}}=\begin{bmatrix}\mathbf{G}_{av}^{T}&\mathbf{G}_{a\omega}^{T}\\ \mathbf{G}_{cv}^{T}&\mathbf{G}_{c\omega}^{T}\end{bmatrix}\begin{bmatrix}\mathbf{v}\\ \mathbf{\omega}\end{bmatrix} \tag{1}\]
The units of the entries in \(\mathbf{G}\) in Eq. (1) depend on the type of actuators employed in the manipulator. This paper focuses exclusively on scenarios where the manipulator employs only linear or rotational actuators, not considering situations involving a combination of these actuator types.
Inverting Eq. (1) yields a constraint-compatible forward velocity relation as
\[\dot{\mathbf{x}}=\mathbf{J}\dot{\mathbf{q}}=\begin{bmatrix}\mathbf{J}_{a}&\mathbf{J}_{c}\end{bmatrix} \begin{bmatrix}\dot{\mathbf{q}}_{a}\\ \mathbf{0}\end{bmatrix} \tag{2}\]
In Eq. (2), \(\mathbf{J}\in\mathbb{R}^{6\times 6}\) is the inverse of \(\mathbf{G}^{T}\) and its sub-matrix \(\mathbf{J}_{c}\) is related to the constraint. Thus, we can explicitly describe the relation of \(\dot{\mathbf{x}}\) and \(\dot{\mathbf{q}}_{a}\) as
\[\dot{\mathbf{x}}=\mathbf{J}_{a}\dot{\mathbf{q}}_{a},\;\text{where}\;\mathbf{J}_{a}=\begin{bmatrix} \mathbf{J}_{a1}\\ \mathbf{J}_{a2}\end{bmatrix} \tag{3}\]
The Cartesian velocity, \(\dot{\mathbf{x}}\in\mathbb{R}^{6\times 1}\), in Eq. (3) is constraint compatible. When the manipulator employs linear actuators, \(\mathbf{J}_{a1}\) is dimensionless, while \(\mathbf{J}_{a2}\) has a unit of \(\frac{1}{\text{length}}\). Conversely, if the manipulator utilizes rotational actuators, \(\mathbf{J}_{a1}\) has a unit of length and \(\mathbf{J}_{a2}\) is dimensionless. Considering these distinctions, the points' linear velocities and the selection matrix are established to ensure consistency or removal of units in the Jacobian.
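A minimal numerical sketch of Eqs. (1)-(3) is given below; since the actual screw-based entries of \(\mathbf{G}^{T}\) depend on the manipulator's geometry, a random full-rank stand-in is used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
f = 4                               # DoF of the mechanism

G_T = rng.standard_normal((6, 6))   # stand-in for [G_a^T; G_c^T] of Eq. (1)
J = np.linalg.inv(G_T)              # Eq. (2): J = (G^T)^{-1}
J_a, J_c = J[:, :f], J[:, f:]       # actuation and constraint sub-matrices

qdot_a = rng.standard_normal(f)     # actuated joint rates
xdot = J_a @ qdot_a                 # Eq. (3): constraint-compatible twist
print(xdot.shape)                   # (6,)
```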
### Linear Velocity of Points
According to the well-known shifting property [12] in rigid body kinematics, the velocity of any point on the moving plate can be related to the Cartesian velocity of the moving plate as
\[\mathbf{v}_{i}=\mathbf{v}+\mathbf{\omega}\times\mathbf{a}_{i} \tag{4}\]
where \(\mathbf{v}\) and \(\mathbf{\omega}\) denote the linear and angular velocity of the moving plate, while \(\mathbf{a}_{i}\) corresponds to a constant vector extending from the origin of the Cartesian reference frame to the \(i^{th}\) point on the moving plate. Expanding Eq. (4) reveals the motion of the moving plate that each component of \(\mathbf{v}_{i}\) encompasses.
\[\begin{split} v_{ix}&=v_{x}+\omega_{y}a_{iz}-\omega_ {z}a_{iy}\\ v_{iy}&=v_{y}-\omega_{x}a_{iz}+\omega_{z}a_{ix}\\ v_{iz}&=v_{z}+\omega_{x}a_{iy}-\omega_{y}a_{ix} \end{split} \tag{5}\]
By distributing these points on the moving plate in a noncollinear manner, it is possible to satisfy the minimum requirement of points needed to fully represent the motion of the moving plate. Theoretically, the translations of three noncollinear points on the moving plate are sufficient to uniquely identify the motion of the body in terms of translation and rotation, but more points may be required depending on the DoF of the mechanism.
Hence, Eq. (4) can be generalized as
\[\begin{split}\mathbf{v}_{p}=\begin{bmatrix}\mathbf{v}_{1}\\ \vdots\\ \mathbf{v}_{i}\end{bmatrix}&=\begin{bmatrix}\mathbf{I}&-[\mathbf{a}_{1}]_{\times}\\ \vdots&\vdots\\ \mathbf{I}&-[\mathbf{a}_{i}]_{\times}\end{bmatrix}\begin{bmatrix}\mathbf{v}\\ \mathbf{\omega}\end{bmatrix}\\ =\mathbf{V}_{p}\dot{\mathbf{x}}\end{split} \tag{6}\]
where \(\mathbf{V}_{p}\in\mathbb{R}^{3f\times 6}\) maps the moving plate Cartesian velocity to the points' velocities on the moving plate. Vector \(\mathbf{v}_{i}\) in Eq. (6) has three components, and hence from \(\mathbf{v}_{p}\in\mathbb{R}^{3f\times 1}\) we need to determine the components that can appropriately describe the motion of the moving plate via a selection matrix [8] as follows
\[\begin{split}\mathbf{Sv}_{p}&=\mathbf{SV}_{p}\dot{\mathbf{x}},\; \text{where}\;\mathbf{S}\in\mathbb{R}^{f\times 3f}\;\text{is a selection}\\ &\text{matrix that extracts the components from}\;\mathbf{v}_{p}.\\ \mathbf{v}_{ps}&=\mathbf{V}_{ps}\dot{\mathbf{x}}\;\text{where}\;\mathbf{V}_{ ps}\in\mathbb{R}^{f\times 6}\end{split} \tag{7}\]
However, deriving the selection matrix \(\mathbf{S}\) is not always straightforward. This is because only manipulators whose moving plates exhibit \(T_{x}R_{y}R_{z}\), \(T_{y}R_{x}R_{z}\) and \(T_{z}R_{x}R_{y}\) types of motion can be uniquely represented with the component velocities shown in Eq. (5). For a comprehensive understanding of the establishment of selection matrices for these groups of PMs, readers are encouraged to refer to [8]. PMs falling outside of these categories will need to utilize a combination of components from different points, an approach that is covered in this paper.
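Before turning to the general case, a short sketch (with hypothetical plate points and twist) verifies that the stacked map of Eq. (6) reproduces the shifting property of Eq. (4):

```python
import numpy as np

def skew(a):
    """Skew-symmetric matrix [a]_x, so that skew(a) @ w = a x w."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

# Hypothetical plate twist (m/s, rad/s) and noncollinear points a_i (m)
v, w = np.array([0.1, 0.0, 0.2]), np.array([0.0, 0.3, 0.1])
pts = [np.array([0.2, 0.0, 0.0]), np.array([0.0, 0.2, 0.0]),
       np.array([-0.2, 0.0, 0.0])]

V_p = np.vstack([np.hstack([np.eye(3), -skew(a)]) for a in pts])  # Eq. (6)
v_p = V_p @ np.concatenate([v, w])

for i, a in enumerate(pts):  # each 3-block equals v + w x a_i, i.e. Eq. (4)
    assert np.allclose(v_p[3 * i:3 * i + 3], v + np.cross(w, a))
```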
### Dimensionally Homogeneous Jacobian
In this paper, we derive the dimensionally homogeneous Jacobian by representing the motion of the moving plate using linear velocity, ensuring uniform units across its entries. However, it is important to note that the linear velocities used here are not merely the component
velocities of individual points on the moving plate. Instead, they are a combination of components from various points. This approach is used to encompass all desired motions of the moving plate into a representative velocity equation, which we call the nominal velocity.
To derive the dimensionally homogeneous Jacobian, relations, Eq. (3) and Eq. (7) are combined as follows
\[\begin{split}\mathbf{v}_{ps}&=\mathbf{V}_{ps}\mathbf{\dot{x}} \\ &=\mathbf{V}_{ps}\mathbf{J}_{a}\mathbf{\dot{q}}_{a}\\ &=\mathbf{J}_{dh}\mathbf{\dot{q}}_{a}\end{split} \tag{8}\]
In Eq. (8), \(\mathbf{J}_{dh}\in\mathbb{R}^{f\times f}\) is a Jacobian that relates the nominal linear velocity (\(\mathbf{v}_{ps}\)) of the moving plate to the actuated joint rates (\(\mathbf{\dot{q}}_{a}\in\mathbb{R}^{f\times 1}\)). To demonstrate the consistency of units in its entries, we consider the following two generic cases.
_Case 1: PMs with linear actuators._ In this case, \(\mathbf{\dot{q}}_{a}\) has units of \(\frac{\text{length}}{\text{time}}\) while \(\mathbf{S}\) is dimensionless. Referring to Eq. (6), the first block of \(\mathbf{V}_{p}\) is dimensionless, while the second block has a unit of length. Furthermore, in Eq. (2), the block matrix \(\mathbf{J}_{a1}\) is dimensionless and \(\mathbf{J}_{a2}\) has a unit of \(\frac{1}{\text{length}}\). As a result, we conclude that the Jacobian for this particular group of manipulators is dimensionless.
_Case 2: PMs with rotational actuators._ For PMs with rotational actuators, \(\mathbf{\dot{q}}_{a}\) has units of \(\frac{\text{angle}}{\text{time}}\) while the units of \(\mathbf{V}_{p}\) are unchanged. Furthermore, the matrix \(\mathbf{J}_{a1}\) has a unit of length and \(\mathbf{J}_{a2}\) is dimensionless for this group of PMs. Consequently, every entry of the resulting Jacobian \(\mathbf{J}_{dh}\) has a unit of length, which is consistent.
Because entries of \(\mathbf{J}_{dh}\) are either dimensionless or dimensionally homogeneous, its condition number or singular values have physical significance and can be used to measure the dexterity of the manipulator.
The next section demonstrates how to derive it by considering a relevant example: a four DoF (degrees of freedom) \(T_{y}T_{z}R_{x}R_{y}\) type Parallel Manipulator (PM).
## 3 Example
The mechanism depicted in Fig. 1 is a \(T_{y}T_{z}R_{x}R_{y}\) type 4 DoF PM [13] with a PUS joint order in the first and third limbs, and a PRS type joint sequence in the second and fourth limbs. The P joint is parallel to the \(z\)-axis, while the R joint in the PRS limb is parallel to the \(x\)-axis, and the U joint in the PUS limb has axes parallel to the \(x\) and \(y\) axes, respectively. The mechanism is capable of rotating about the \(x\) and \(y\) axes, as well as translating along the \(y\) and \(z\) directions. However, due to the presence of revolute joints in the second and fourth limbs, the mechanism is constrained in terms of \(x\)-axis translation and \(z\)-axis rotations, making it a zero-torsion type PM mechanism. The DoF of the mechanism can also be determined by employing Tsai's DoF formula, which is expressed as follows:
\[\begin{split} F&=\lambda(n-j-1)+\sum_{i=1}^{j}f_{i} \\ 4&=6(10-12-1)+22\end{split} \tag{9}\]
Point \(A_{i}\) at the base is the attachment location of the \(i^{th}\) limb, while \(B_{i}\) is the center of the spherical joints. Point \(C_{i}\) is the center of the universal joints for the first and third limbs, and the center of the revolute joints for limbs 2 and 4. The position vector \(\mathbf{a}_{i}\) extends from the origin of the moving frame to the \(i^{th}\) spherical joint, while \(\mathbf{b}_{i}\) extends from the fixed frame to the point \(A_{i}\). The direction vector \(\mathbf{s}_{j\|}\) is associated with each joint axis.
To appropriately represent the motion of the moving plate, we need four points; for convenience, these points are chosen to be the centers of the spherical joints. Expanding Eq. (4) to the four points located at the centers of the spherical joints on the moving plate, we get a \(12\times 6\) matrix that relates the point linear velocities to the moving plate center velocity (\(\mathbf{\dot{x}}\)) as
\[\begin{bmatrix}v_{1x}\\ v_{1y}\\ v_{1z}\\ \vdots\\ v_{4x}\\ v_{4y}\\ v_{4z}\end{bmatrix}=\begin{bmatrix}1&0&0&0&a_{1z}&-a_{1y}\\ 0&1&0&-a_{1z}&0&a_{1x}\\ 0&0&1&a_{1y}&-a_{1x}&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ 1&0&0&0&a_{4z}&-a_{4y}\\ 0&1&0&-a_{4z}&0&a_{4x}\\ 0&0&1&a_{4y}&-a_{4x}&0\end{bmatrix}\begin{bmatrix}v_{x}\\ v_{y}\\ v_{z}\\ \omega_{x}\\ \omega_{y}\\ \omega_{z}\end{bmatrix} \tag{10}\]
The point linear velocities in Eq. (10) include 12 components, three for each point; many of these motions are dependent, yet we only require four components. Referring to Eq. (5), no single component encompasses the desired motion of the moving plate. Hence, we need to formulate a selection matrix that combines components from different points to obtain a nominal velocity that describes the motion of the moving plate. As the independent motions of the moving plate for this manipulator are \(v_{y},v_{z},\omega_{x}\) and \(\omega_{y}\), combining the \(v_{iy}\) and \(v_{iz}\) components can sufficiently describe the manipulator's motion. However, the combination of components is not unique, and one can freely choose one of the following pairs.
Figure 1: Four DoF PM
\[\begin{split}\text{ Limb 1:}&(v_{1y},v_{2z}),(v_{1y},v_{3z}),(v_{1y},v _{4z})\\ \text{ Limb 2:}&(v_{2y},v_{1z}),(v_{2y},v_{3z}),(v_{2y},v _{4z})\\ \text{ Limb 3:}&(v_{3y},v_{1z}),(v_{3y},v_{2z}),(v _{3y},v_{4z})\\ \text{ Limb 4:}&(v_{4y},v_{1z}),(v_{4y},v_{2z}),(v _{4y},v_{3z})\\ \end{split} \tag{11}\]
For this particular case, we selected the following combination from Eq. (11).
\[\begin{split}\text{ Limb 1:}&(v_{1y},v_{2z})\\ \text{ Limb 2:}&(v_{2y},v_{3z})\\ \text{ Limb 3:}&(v_{3y},v_{4z})\\ \text{ Limb 4:}&(v_{4y},v_{1z})\\ \end{split} \tag{12}\]
By utilizing Eq. (12), we can establish the extended selection matrix as
\[\mathbf{S}=\begin{bmatrix}0&-\dfrac{a_{2x}}{a_{1x}-a_{2x}}&1&0&\dfrac{a_{1x}}{a_{1x}-a_{2x}}&0&0&0&0&0&0&0\\ 0&0&0&0&-\dfrac{a_{3x}}{a_{2x}-a_{3x}}&1&0&\dfrac{a_{2x}}{a_{2x}-a_{3x}}&0&0&0&0\\ 0&0&0&0&0&0&0&-\dfrac{a_{4x}}{a_{3x}-a_{4x}}&1&0&\dfrac{a_{3x}}{a_{3x}-a_{4x}}&0\\ 0&-\dfrac{a_{4x}}{a_{1x}-a_{4x}}&1&0&0&0&0&0&0&0&\dfrac{a_{1x}}{a_{1x}-a_{4x}}&0\end{bmatrix} \tag{13}\]
It is quite important to note that \(\mathbf{S}\) in Eq. (13) is dimensionless, the same as the usual selection matrix in terms of units. However, the usual selection matrices [8] have entries of only 1s and 0s, unlike the extended selection matrix derived in this paper. Then, multiplying Eq. (13) with Eq. (10), the matrix \(\mathbf{V}_{ps}\in\mathbb{R}^{4\times 4}\) relates the nominal velocity (\(\mathbf{v}_{ps}\)) and the independent Cartesian velocity of the moving plate as in Eq. (14).
\[\begin{bmatrix}\bar{v}_{1}\\ \bar{v}_{2}\\ \bar{v}_{3}\\ \bar{v}_{4}\end{bmatrix}=\begin{bmatrix}1&1&a_{1y}-\dfrac{a_{1x}a_{2z}-a_{2x}a_{1z}}{a_{1x}-a_{2x}}&-a_{1x}\\ 1&1&a_{2y}-\dfrac{a_{2x}a_{3z}-a_{3x}a_{2z}}{a_{2x}-a_{3x}}&-a_{2x}\\ 1&1&a_{3y}-\dfrac{a_{3x}a_{4z}-a_{4x}a_{3z}}{a_{3x}-a_{4x}}&-a_{3x}\\ 1&1&a_{1y}-\dfrac{a_{1x}a_{4z}-a_{4x}a_{1z}}{a_{1x}-a_{4x}}&-a_{1x}\end{bmatrix}\begin{bmatrix}v_{y}\\ v_{z}\\ \omega_{x}\\ \omega_{y}\end{bmatrix} \tag{14}\]

\[\text{where,}\quad\begin{bmatrix}\bar{v}_{1}\\ \bar{v}_{2}\\ \bar{v}_{3}\\ \bar{v}_{4}\end{bmatrix}=\begin{bmatrix}\dfrac{(v_{2y}+v_{1z})\,a_{1x}-(v_{1y}+v_{1z})\,a_{2x}}{a_{1x}-a_{2x}}\\ \dfrac{(v_{3y}+v_{2z})\,a_{2x}-(v_{2y}+v_{2z})\,a_{3x}}{a_{2x}-a_{3x}}\\ \dfrac{(v_{4y}+v_{3z})\,a_{3x}-(v_{3y}+v_{3z})\,a_{4x}}{a_{3x}-a_{4x}}\\ \dfrac{(v_{4y}+v_{1z})\,a_{1x}-(v_{1y}+v_{1z})\,a_{4x}}{a_{1x}-a_{4x}}\end{bmatrix}\]
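The structure above can be checked numerically: with generic joint locations \(\mathbf{a}_{i}\) (placeholder values below, requiring only distinct \(a_{ix}\)), the product \(\mathbf{S}\mathbf{V}_{p}\) should have vanishing columns for \(v_{x}\) and \(\omega_{z}\), leaving the \(4\times 4\) matrix of Eq. (14). A minimal sketch, assuming the reconstruction of Eq. (13):

```python
import numpy as np

a = [np.array([0.2, 0.1, 0.0]), np.array([0.1, -0.2, 0.0]),
     np.array([-0.2, -0.1, 0.0]), np.array([-0.1, 0.2, 0.0])]

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def s_row(i, j, n=4):
    """Row combining v_{iy}, v_{iz}, v_{jy} per the pattern of Eq. (13)."""
    r = np.zeros(3 * n)
    den = a[i][0] - a[j][0]
    r[3 * i + 1] = -a[j][0] / den   # coefficient of v_{iy}
    r[3 * i + 2] = 1.0              # coefficient of v_{iz}
    r[3 * j + 1] = a[i][0] / den    # coefficient of v_{jy}
    return r

S = np.vstack([s_row(0, 1), s_row(1, 2), s_row(2, 3), s_row(0, 3)])
Vp = np.vstack([np.hstack([np.eye(3), -skew(ai)]) for ai in a])
M = S @ Vp                        # 4 x 6
assert np.allclose(M[:, 0], 0.0)  # no v_x dependence
assert np.allclose(M[:, 5], 0.0)  # no w_z dependence
Vps = M[:, 1:5]                   # the 4 x 4 matrix of Eq. (14)
```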
The inverse Jacobian of the manipulator is obtained through the analytic screw theory method and is given in Eq. (15). The first four rows of \(\mathbf{G}^{T}\) represent the motion Jacobian, while the last two depict the structural constraints. Hence, \(\mathbf{G}^{T}_{c}\mathbf{\dot{x}}=\mathbf{0}\) is always satisfied.
\[\dot{\mathbf{q}}=\begin{bmatrix}\mathbf{G}^{T}_{a}\\ \mathbf{G}^{T}_{c}\end{bmatrix}\dot{\mathbf{x}}=\begin{bmatrix}\dfrac{\mathbf{n}^{T}_{1}}{\mathbf{n}^{T}_{1}\mathbf{s}_{11\parallel}}&\dfrac{\left(\mathbf{n}_{1}\times\mathbf{a}_{1}\right)^{T}}{\mathbf{n}^{T}_{1}\mathbf{s}_{11\parallel}}\\ \dfrac{\mathbf{l}^{T}_{2}}{\mathbf{l}^{T}_{2}\mathbf{s}_{12\parallel}}&\dfrac{\left(\mathbf{l}_{2}\times\mathbf{a}_{2}\right)^{T}}{\mathbf{l}^{T}_{2}\mathbf{s}_{12\parallel}}\\ \dfrac{\mathbf{n}^{T}_{3}}{\mathbf{n}^{T}_{3}\mathbf{s}_{13\parallel}}&\dfrac{\left(\mathbf{n}_{3}\times\mathbf{a}_{3}\right)^{T}}{\mathbf{n}^{T}_{3}\mathbf{s}_{13\parallel}}\\ \dfrac{\mathbf{l}^{T}_{4}}{\mathbf{l}^{T}_{4}\mathbf{s}_{14\parallel}}&\dfrac{\left(\mathbf{l}_{4}\times\mathbf{a}_{4}\right)^{T}}{\mathbf{l}^{T}_{4}\mathbf{s}_{14\parallel}}\\ \mathbf{s}^{T}_{22\parallel}&\left(\mathbf{s}_{22\parallel}\times\mathbf{a}_{2}\right)^{T}\\ \mathbf{s}^{T}_{24\parallel}&\left(\mathbf{s}_{24\parallel}\times\mathbf{a}_{4}\right)^{T}\end{bmatrix}\begin{bmatrix}\mathbf{v}\\ \mathbf{\omega}\end{bmatrix} \tag{15}\]
where \(\mathbf{G}^{T}_{a}\in\mathbb{R}^{4\times 6}\) and \(\mathbf{G}^{T}_{c}\in\mathbb{R}^{2\times 6}\) and \(\mathbf{n}_{i}=\mathbf{s}_{3i\parallel}\times\mathbf{s}_{2i\parallel}\). \(\mathbf{l}_{i}\) is a vector extending from \(C_{i}\) to \(B_{i}\).
The first term of \(\mathbf{G}^{T}\) is dimensionless while the second term has a unit of length. Hence, the units of the inverse Jacobian of this manipulator are inconsistent and must be made dimensionless or otherwise consistent.
The forward Jacobian \(\mathbf{J}_{a}\in\mathbb{R}^{6\times 4}\) is analytically obtained by inverting \(\mathbf{G}^{T}\) as in Eq. (16).
\[\mathbf{J}_{a}=\begin{bmatrix}\mathbf{G}^{T}_{a}\\ \mathbf{G}^{T}_{c}\end{bmatrix}^{-1}\begin{bmatrix}\mathbf{I}_{4}\\ \mathbf{0}_{2\times 4}\end{bmatrix} \tag{16}\]

which follows from Eq. (15) because the actuated rates satisfy \(\dot{\mathbf{q}}_{a}=\mathbf{G}^{T}_{a}\dot{\mathbf{x}}\) while the constraint rates vanish, \(\mathbf{G}^{T}_{c}\dot{\mathbf{x}}=\mathbf{0}\).
By substituting Eq. (13) and Eq. (10) into Eq. (7), and subsequently replacing \(\dot{\mathbf{x}}\) with \(\mathbf{J}_{a}\dot{\mathbf{q}}_{a}\) in Eq. (7), we derive the \(4\times 4\) dimensionless Jacobian discussed in _Case 1_.
### Numerical Evaluation
In order to verify the correctness of the derived dimensionally homogeneous Jacobian, the distribution of the condition number (\(k\)) of the manipulator over the entire workspace is evaluated using the geometric and motion parameters outlined in Table 1.
It is known that in parallel manipulator design, the condition number (\(k\)) of the Jacobian matrix can be used as a performance measure to evaluate the quality of motion, precision, and stability of the manipulator. The best value of \(k\) is 1, the minimum possible value, indicating that all columns (or rows) of the Jacobian matrix are orthogonal to each other. This implies that the system of equations is well-conditioned and the solution will not be overly sensitive to errors in the data or to small changes in the input; the manipulator can then be interpreted as _isotropic_ [2]. As \(k\) increases beyond 1, the system of equations becomes increasingly ill-conditioned. This means that the solution may be very sensitive to errors in the data or to small changes in the input, and hence the manipulator is approaching a singularity. Conversely, if \(k\) is small enough and remains close to 1, the manipulator can be interpreted as being away from singular configurations.
Accordingly, \(k=cond(\mathbf{G}^{T})\) is first computed, and the result over the rotational workspace is shown in Fig. 2. The simulation results indicate a substantial increase in the condition number, which does not adequately reflect the physical properties of the manipulator. Consequently, \(cond(\mathbf{J}_{dh})\) was determined, as shown in Fig. 3. In the rotational workspace, the value of \(k\) remained low and near 1. This value of \(k\) can properly indicate whether the manipulator is far from or approaching a singular configuration.
Additionally, the sensitivity of \(k\) of both Jacobians to unit changes is evaluated over the rotational workspace. The results show a significant discrepancy in the condition number of the conventional Jacobian when units are changed from millimeters to meters, as depicted in Fig. 4. Compared with Fig. 2, the manipulator would appear better conditioned, even though nothing has changed but the units. However, the value of \(cond(\mathbf{J}_{dh})\), when measured in meters, remained unchanged; that result is not reproduced here because it is identical to the one shown in Fig. 3. Hence, \(cond(\mathbf{J}_{dh})\) is invariant under a change of units. As previously mentioned, the choice of component combinations is not unique. Hence, we have the flexibility to choose various pairs of \(v_{iy}\) and \(v_{iz}\) from the provided candidates. For instance, by selecting \((v_{1y},v_{3z}),(v_{2y},v_{4z}),(v_{3y},v_{1z})\), and \((v_{4y},v_{2z})\), we can derive the following selection matrix.
\[\mathbf{S}=\begin{bmatrix}0&-\dfrac{a_{3x}}{a_{1x}-a_{3x}}&1&0&0&0&0&\dfrac{a_{1x}}{a_{1x}-a_{3x}}&0&0&0&0\\ 0&0&0&0&-\dfrac{a_{4x}}{a_{2x}-a_{4x}}&1&0&0&0&0&\dfrac{a_{2x}}{a_{2x}-a_{4x}}&0\\ 0&\dfrac{a_{3x}}{a_{3x}-a_{1x}}&0&0&0&0&0&-\dfrac{a_{1x}}{a_{3x}-a_{1x}}&1&0&0&0\\ 0&0&0&0&\dfrac{a_{4x}}{a_{4x}-a_{2x}}&0&0&0&0&0&-\dfrac{a_{2x}}{a_{4x}-a_{2x}}&1\end{bmatrix}\]
\begin{table}
\begin{tabular}{|c|c|} \hline Parameters & Value \\ \hline Radius of the moving plate \((r_{a})\) & 200 mm \\ \hline Radius of the base plate \((r_{b})\) & 450 mm \\ \hline Fixed length link \((l)\) & 687 mm \\ \hline \(\theta\) & \(\pm 50^{\circ}\) \\ \hline \(\psi\) & \(\pm 50^{\circ}\) \\ \hline \(y\) & 0 mm \\ \hline \(z\) & 100–200 mm \\ \hline \end{tabular}
\end{table}
Table 1: Structural and pose parameters of the PM.
Then, with this selection matrix, we establish the dimensionally homogeneous Jacobian as shown in Eq. (8), and the condition number distribution over the workspace is evaluated. The simulation shows the same result as that of the dimensionally homogeneous Jacobian obtained using Eq. (13).
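The unit-invariance behavior reported above can be reproduced with a generic numerical sketch; the matrices below are random stand-ins, not the actual Jacobians of this manipulator:

```python
import numpy as np
rng = np.random.default_rng(0)

# Generic stand-in for a mixed-unit inverse Jacobian: the first three
# columns are dimensionless, the last three carry a unit of length (mm).
G_T = rng.normal(size=(4, 6))
to_m = np.diag([1, 1, 1, 1e-3, 1e-3, 1e-3])   # convert mm -> m columns

print(np.linalg.cond(G_T), np.linalg.cond(G_T @ to_m))   # values differ

# A dimensionally homogeneous Jacobian rescales uniformly under a unit
# change, so cond(c * J) == cond(J) and the measure is unit-invariant.
J_dh = rng.normal(size=(4, 4))
assert np.isclose(np.linalg.cond(J_dh), np.linalg.cond(1e-3 * J_dh))
```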
This unit-invariance property of the Jacobian matrix is quite important when using the condition number as a performance measure or when computing the dexterity for parameter optimization of PMs.
## 4 Conclusion
This paper introduces an extended selection matrix to formulate a point-based, dimensionally homogeneous Jacobian for various constrained parallel manipulators. The proposed method allows the derived Jacobian's condition number and singular values to be utilized as performance indices and for optimization, independent of the choice of units.
To validate the proposed approach, the condition number (\(k\)) for both the conventional Jacobian (\(\mathbf{G}\)) and the dimensionally homogeneous Jacobian (\(\mathbf{J}_{dh}\)) across the rotational workspace were compared. Simulation results indicated a large value of \(k\) for \(\mathbf{G}\) and a remarkably stable value of \(k\) for \(\mathbf{J}_{dh}\).
Further, we reassessed the distribution of the \(k\) value for the two Jacobians by changing the units from millimeters to meters. The results confirmed that \(k\) of \(\mathbf{G}\) varied significantly, while \(k\) of \(\mathbf{J}_{dh}\) remained consistent, irrespective of the unit change. This phenomenon proves the dimensional homogeneity of the proposed Jacobian, where both the linear and angular parts exhibit similar value distributions and are not unit-dependent. As a result, our method allows for the correct optimization of the manipulators with mixed DoFs. By employing the proposed approach for different manipulators with mixed DoFs, we can confidently assess and optimize their performance.
## Acknowledgment
This work was supported by Korea Institute of Science and Technology (KIST), under Grant 2E32302.
|
2310.09485 | Applying Bayesian Ridge Regression AI Modeling in Virus Severity
Prediction | Artificial intelligence (AI) is a powerful tool for reshaping healthcare
systems. In healthcare, AI is invaluable for its capacity to manage vast
amounts of data, which can lead to more accurate and speedy diagnoses,
ultimately easing the workload on healthcare professionals. As a result, AI has
proven itself to be a powerful tool across various industries, simplifying complex
tasks and pattern recognition that would otherwise be overwhelming for humans
or traditional computer algorithms. In this paper, we review the strengths and
weaknesses of Bayesian Ridge Regression, an AI model that can be used to bring
cutting edge virus analysis to healthcare professionals around the world. The
model's accuracy assessment revealed promising results, with room for
improvement primarily related to data organization. In addition, the severity
index serves as a valuable tool to gain a broad overview of patient care needs,
aligning with healthcare professionals' preference for broader categorizations. | Jai Pal, Bryan Hong | 2023-10-14T04:17:00Z | http://arxiv.org/abs/2310.09485v3 | # Applying Bayesian Ridge Regression AI Modeling in Virus Severity Prediction
###### Abstract
Artificial intelligence (AI) is a powerful tool for reshaping healthcare systems. In healthcare, AI is invaluable for its capacity to manage vast amounts of data, which can lead to more accurate and speedy diagnoses, ultimately easing the workload on healthcare professionals. As a result, AI has proven itself to be a power tool across various industries, simplifying complex tasks and pattern recognition that would otherwise be overwhelming for humans or traditional computer algorithms. In this paper, we review the strengths and weaknesses of Bayesian Ridge Regression, an AI model that can be used to bring cutting edge virus analysis to healthcare professionals around the world. The model's accuracy assessment revealed promising results, with room for improvement primarily related to data organization. In addition, the severity index serves as a valuable tool to gain a broad overview of patient care needs, aligning with healthcare professionals' preference for broader categorizations.
## 1 Introduction
Artificial intelligence, or AI, has proven itself to be a powerful tool across various industries, simplifying complex tasks and pattern recognition that would otherwise be overwhelming for humans or traditional computer algorithms. Its versatility is evident in its ability to transform operations in many fields, and healthcare is no exception. In healthcare, AI is invaluable for its capacity to manage vast amounts of data, which can lead to more accurate and speedy diagnoses, ultimately easing the workload on healthcare professionals.
The utility of AI spans far and wide, from optimizing supply chains to revolutionizing customer service and financial forecasting. However, when it comes to healthcare, the focus shifts to its incredible potential to handle the immense volumes of medical data we encounter daily.
In the healthcare sector, data-driven decisions are crucial. Precise and timely diagnoses and prognoses are paramount, and AI plays a pivotal role in achieving these goals. It can compile and analyze millions of data points, creating comprehensive models that assist in making medical assessments. This becomes particularly important during critical times, such as the peak of the COVID-19 pandemic. Throughout the pandemic, healthcare workers faced an unprecedented workload, strained resources, and a dire need for rapid and accurate decision-making. In such circumstances, AI modeling became a lifeline. AI tools were employed to analyze patient data, predict disease progression, and optimize the allocation of resources. These applications not only saved time but also helped save lives when healthcare systems were pushed to their limits.
Furthermore, healthcare research and analysis inherently involve vast amounts of data. This encompasses patient records, genetic information, clinical trials, and medical imaging, creating a need for a sophisticated approach. While traditional computer algorithms can handle large datasets, they may struggle to adapt to changing data trends and patterns.
AI systems excel in this regard. They possess the capability to continuously learn and adapt as new data becomes available, making them ideal for the dynamic nature of healthcare research. Whether it's identifying rare genetic mutations linked to diseases or predicting the outcomes of innovative treatments, AI's ability to navigate extensive datasets and discern nuanced patterns is unparalleled.

|
2308.13697 | Material Characteristics Governing In-Plane Phonon-Polariton Thermal
Conductance | The material dependence of phonon-polariton based in-plane thermal
conductance is investigated by examining systems composed of air and several
wurtzite and zinc-blende crystals. Phonon-polariton based thermal conductance
varies by over an order of magnitude ($\sim 0.5-60$ nW/K), which is similar to
the variation observed in the materials corresponding bulk thermal
conductivity. Regardless of material, phonon-polaritons exhibit similar thermal
conductance to that of phonons when layers become ultrathin ($\sim 10$ nm)
suggesting the generality of the effect at these length-scales. A figure of
merit is proposed to explain the large variation of in-plane polariton thermal
conductance that is composed entirely of easily predicted and measured optical
phonon energies and lifetimes. Using this figure of merit, in-plane
phonon-polariton thermal conductance enlarges with increases in: (1) optical
phonon energies, (2) splitting between transverse and longitudinal mode pairs,
and (3) phonon lifetimes. | Jacob D. Minyard, Thomas E. Beechem | 2023-08-25T22:58:04Z | http://arxiv.org/abs/2308.13697v2 | # Material Characteristics Governing In-Plane Phonon-Polariton Thermal Conductance
###### Abstract
The material dependence of phonon-polariton based in-plane thermal conductance is investigated by examining systems composed of air and several wurtzite and zinc-blende crystals. Phonon-polariton based thermal conductance varies by over an order of magnitude (\(\sim 0.5-60\) nW/K), which is similar to the variation observed in the materials corresponding bulk thermal conductivity. Regardless of material, phonon-polaritons exhibit similar thermal conductance to that of phonons when layers become ultrathin (\(\sim 10\) nm) suggesting the generality of the effect at these length-scales. A figure of merit is proposed to explain the large variation of in-plane polariton thermal conductance that is composed entirely of easily predicted and measured optical phonon energies and lifetimes. Using this figure of merit, in-plane phonon-polariton thermal conductance enlarges with increases in: (1) optical phonon energies, (2) splitting between transverse and longitudinal mode pairs, and (3) phonon lifetimes.
## I Introduction
Since the invention of the transistor in 1948, electronic devices have decreased in size and increased in density according to Moore's Law.[1] Between 2003 and 2022, device scaling increased from \(10^{8}\) to \(10^{11}\) transistors per chip.[2] Size scaling implicit in Moore's Law necessarily enhances heat flux and results in significant device heating. Heating, in turn, accelerates aging, increases parasitic power consumption, detrimentally boosts electromigration, and reduces transistor performance.[3] At a systems level, the implications of self-heating are profound. Data centers use 1-1.5% of total worldwide energy production of which nearly a third is dedicated to device cooling.[4; 5; 6]
Cooling modern silicon electronics is challenging owing to the size scales involved. Fin-shaped field effect transistors, finFETs, and gate-all around (GAA) transistors have characteristic features less than 10 nm,[3] while the metal lines connecting the transistors are \(<\)20 nm wide.[7] At these scales, phonons and electrons are significantly less efficient in moving heat. Phonons in Si, for example, have a predicted thermal conductivity of only 10% relative to bulk at 20 nm, while the resistivity of copper lines increases by at least 2x when wire thickness decreases from 100 to 20 nm.[8; 9; 10] These reductions are intrinsic to the heat carriers themselves. They are not the result of extrinsic defects. Augmenting heat transport at nanoscale therefore necessitates considering alternative approaches and even alternative heat carriers.
Polaritons--quasiparticles emerging from the hybridization of photons and material dipoles--are an intriguing possibility for increasing thermal transport in electronic devices. Their heat conductance increases under the same conditions that decrease thermal transport with traditional carriers. Take, for example, heat transport along a surface caused by the propagation of surface-plasmon[11] or surface-phonon[12] polaritons, which has been termed radiation conduction.[13] Radiation conduction is much less sensitive to device geometry than either electron or phonon transport. Under certain circumstances, radiation conduction can even increase with decreasing layer thickness.[14; 12] This occurs because polaritons move along interfaces, creating long mean free paths for energy transport, while phonons and electrons scatter off interfaces. Radiation conduction increases at higher temperatures,[15] while traditional heat transport decreases.[16] Thanks to these advantages, recent experimental work has shown the significance of radiation conduction as a comparable channel for heat transport
relative to phonons and electrons in a variety of materials including: SiC,[12; 17] SiO\({}_{2}\),[18] SiN,[14] Ti[11] and hBN.[19]
Despite its potential, the material characteristics that enhance radiation conduction remain relatively unexplored. In response, we examine here the link between optical phonon characteristics and the thermal conductance of radiation conduction stemming from the propagation of surface-phonon polaritons. Simply put, we seek to understand the phonon characteristics that lead to "big" radiation conduction. This is accomplished by examining the simplest geometry of two semi-infinite planes composed of air on one side and a polar semiconductor with a dielectric function described by its transverse- and longitudinal-optical (TOLO) phonon energies and lifetimes. By surveying radiation conduction for many different materials, clear relationships between optical phonon properties and polaritonic radiation conduction are deduced. Large optical phonon energies and lifetimes accompanied by sizable splitting between the transverse and longitudinal modes are associated with increases in phonon-polariton driven radiation conduction.
## II Materials and Methods
This study quantifies radiation conduction for a number of materials using kinetic theory. All calculations are performed at 300 K unless otherwise noted. Results for gallium arsenide (GaAs), gallium nitride (GaN), and indium antimonide (InSb) are highlighted, since they span a representative range of optical phonon energies and radiation conduction. The foundation of kinetic theory is the Boltzmann Transport Equation under the single-mode relaxation-time approximation.[12] Mathematically, the thermal conductance of the surface phonon-polariton, \(\kappa_{SPhP}\), is quantified by integrating over all branches, \(s\), and real parts of the wavevectors, \(\beta_{r}\), of the polariton dispersion as given by:
\[\kappa_{SPhP}=\frac{1}{4\pi}\sum_{s}\int_{0}^{\beta_{r,max}}\beta_{r}\hbar \omega v\Lambda\frac{df_{o}}{dT}d\beta_{r} \tag{1}\]
where \(\hbar\) is modified Planck's constant, \(\omega\) the polariton energy, \(v\) its velocity, \(\Lambda\) mean free path, and \(f_{o}\) the Bose-Einstein distribution function. Polaritons decay evanescently away from the interface on which they primarily exist. Therefore, it is difficult to define a thickness--and thus an area --through which the heat moves. An areal property, thermal conductivity is therefore ill posed when considering radiation conduction in the same way it is
when considering transport through 2D-solids.[20] Consequently, Eq. 1 has been derived using a two-dimensional density of states. As such, \(k_{SPhP}\) quantifies conductance rather than a conductivity and has units of \(\left[\frac{W}{K}\right]\) instead of the typical conductivity \(\left[\frac{W}{m\cdot K}\right]\). The conductance approach is consistent with established means of comparing the thermal properties of ultrathin solids.[20]
The polariton dispersion is determined by the boundary conditions of Maxwell's equations and is therefore dependent upon the optical properties of the materials on either side of the interface. For the polar material, the optical properties are described by a dielectric function defined by the energies and lifetimes of the transverse- (TO) and longitudinal optical (LO) phonons via:
\[\epsilon(\omega)=\epsilon_{\infty}\left(1+\sum_{i}^{n}\frac{\omega_{LO,i}^{2} -\omega_{TO,i}^{2}}{\omega_{TO,i}^{2}-\omega^{2}-i\omega\Gamma_{i}}\right) \tag{2}\]
where \(\epsilon_{\infty}\) is the high-frequency permittivity of the solid, \(\omega_{TO(LO)}\) is the frequency of the transverse-optical (longitudinal-optical) phonons and \(\Gamma\) is the mean phonon lifetime. Phonon energies and lifetimes were taken from the literature.[21; 22] To compare intrinsic material responses, phonon lifetimes from first-principles calculations were employed.[21] Values for these parameters are tabulated in the Supplemental Material.
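For reference, Eq. (2) can be evaluated directly. The sketch below uses approximate GaAs phonon parameters (in cm\({}^{-1}\)) rather than the tabulated values of the Supplemental Material:

```python
import numpy as np

def epsilon(w, eps_inf, w_TO, w_LO, gamma):
    """TOLO dielectric function of Eq. (2) at a single frequency w; the
    phonon parameters may be arrays with one entry per oscillator."""
    w_TO, w_LO, gamma = map(np.atleast_1d, (w_TO, w_LO, gamma))
    terms = (w_LO**2 - w_TO**2) / (w_TO**2 - w**2 - 1j * w * gamma)
    return eps_inf * (1.0 + terms.sum())

# Approximate GaAs values in cm^-1 (eps_inf ~ 10.9, TO ~ 268, LO ~ 292,
# linewidth ~ 2.5); inside the reststrahlen band Re(eps) is negative.
print(epsilon(280.0, 10.9, 268.0, 292.0, 2.5))
```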
Our model consists of two semi-infinite planes made up of air on the upper plane and the polar dielectric having a dielectric permittivity of the form of Eq. 2 as the lower plane. This arrangement is shown schematically in the upper panel of Figure 1. For this simple arrangement, the polariton dispersion can be analytically determined and is given by:
\[\beta=k_{0}\sqrt{\frac{\epsilon_{1}\epsilon_{2}}{\epsilon_{1}+\epsilon_{2}}} \tag{3}\]
where \(k_{0}=\omega/c\) is the wavevector of the incident light in vacuum and \(\epsilon_{1,2}\) are the permittivities of the materials in the model. A slightly more involved relation can also be found for birefringent materials and is provided in the Supplemental Material.
As the dielectric function is a complex quantity, the resulting wavevectors are likewise complex. The real-part of the wavevector (\(\beta_{r}\)) provides the in-plane momentum for the species and is therefore utilized in Eq. 1 defining energy transport. Only modes where \(\beta_{r}>k_{0}\) are considered in the analysis corresponding to the frequency range of \(\omega_{TO}\leq\omega\leq\omega_{LO}\). Beyond this range, Brewster modes exist above the light-line but are not localized to the interface (_i.e._, not true surface modes). They have a finite wavevector pointing orthogonal
to the temperature gradient and are therefore assumed to be of secondary importance to radiation conduction.[23; 24; 25]
The imaginary portion of the wavevector (\(\beta_{i}\)) is related to the propagation length of the polariton through:
\[\Lambda=\frac{1}{\beta_{i}} \tag{4}\]
It is therefore a reasonable surrogate for the mean free path and is used as such. As the dispersion of Eq. 3 is derived assuming an infinite lateral plane making up the interface, \(\Lambda\) does not take into account any finite size effects that may impact the scattering of the phonon-polariton or any other extrinsic effect. Eq. 4 differs by a factor of two relative to that commonly employed in similar treatments of radiation conduction;[26] it is used here owing to its correlation with experimental results.[27]
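Putting Eqs. (1)-(4) together, a rough numerical sketch of the conductance integral is given below, reusing the `epsilon` helper sketched above. After the change of variables \(v\,d\beta_{r}=d\omega\), the quadrature runs over frequency; SI units throughout, and the GaAs parameters are the same approximate values as before:

```python
import numpy as np
from scipy.constants import c, hbar, k as kB

def kappa_SPhP(eps_fn, w_TO, w_LO, T=300.0, N=4000):
    """Eq. (1) for a single air/dielectric interface with the dispersion of
    Eq. (3) and mean free path of Eq. (4); the substitution v*dbeta = dw
    removes the group velocity from the integrand. Returns W/K."""
    w = np.linspace(w_TO * 1.0001, w_LO * 0.9999, N)
    eps2 = np.array([eps_fn(wi) for wi in w])
    beta = (w / c) * np.sqrt(eps2 / (1.0 + eps2))    # Eq. (3) with eps1 = 1
    keep = beta.real > w / c                         # bound surface modes only
    w, br, bi = w[keep], beta.real[keep], beta.imag[keep]
    lam = 1.0 / bi                                   # Eq. (4)
    x = hbar * w / (kB * T)
    df0_dT = (x / T) * np.exp(x) / np.expm1(x) ** 2  # d(Bose-Einstein)/dT
    return np.trapz(br * hbar * w * lam * df0_dT, w) / (4.0 * np.pi)

cm = 2.0 * np.pi * c * 100.0                         # rad/s per cm^-1
eps_GaAs = lambda w: epsilon(w, 10.9, 268.0 * cm, 292.0 * cm, 2.5 * cm)
print(kappa_SPhP(eps_GaAs, 268.0 * cm, 292.0 * cm))  # on the order of nW/K
```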
## III Results and Discussion
Figure 1 presents the resulting phonon-polariton dispersion for GaN, GaAs, and InSb. These materials are highlighted from among the more than twenty analyzed because their vibrational characteristics span much of the range observed for polar crystalline solids. For each material, the phonon-polariton dispersion branches off of the so-called light line (diagonal line in Figure 1). Having a slope that is comparable to the light-line, phonon-polaritons are characterized by extremely high velocity (on the order of \(10^{7}\ \frac{m}{s}\)), but much smaller momentum, than the phonons from which they derive. The dispersion is also bounded by the energies of the LO and TO phonons. Therefore, materials with large optical phonon splitting have more phase space to create polaritons and thus a greater total population.
Radiation conduction from phonon-polaritons necessarily depends upon the characteristics of the LO and TO phonons via Eq. 2. Traditional thermal conductivity is dependent upon all phonons existing within the solid. Recognizing this overlap, there is significant correlation between polariton conductance and bulk thermal conductivity as is apparent upon inspection of Figure 2. Increasing polaritonic conductance is correlated with higher thermal conductivity. However, the correlation between polaritonic conductance and thermal conductivity is not complete, since phonon thermal conductivity is determined by the entirety of the Brillouin zone, whereas polaritonic conductance is driven by only \(\Gamma\)-point optical phonons. Examining the characteristics of \(\Gamma\)-point phonons, therefore, is a means of
understanding the characteristics of phonon-driven radiation conduction.
The remainder of this paper quantifies the relative amount of heat moved by radiation conduction and then seeks to understand the underlying phonon characteristics by which radiation conduction can be maximized, using three case studies (InSb, GaAs, and GaN) with small, medium, and large phonon and polariton conductances respectively (see Figure 2). To quantify the relation between phonon and polariton conductances, the size-dependent phonon conductance was calculated using previously reported mean free path spectra and thermal conductivity accumulation functions for InSb, GaAs, and GaN.[28; 29; 30] The size-dependent phonon thermal conductance \(\sigma_{ph}\) was calculated via:[20]
\[\sigma_{ph}=\kappa(t)t \tag{5}\]
where \(\kappa(t)\) is the phonon thermal conductivity at a given thickness \(t\), deduced by multiplying
the bulk thermal conductivity \(\kappa_{bulk}\) by its accumulation function. The results are depicted in Figure 3 where phonon conductance is compared to polariton conductance as a function of slab thickness for in-plane transport. It should be noted that the polaritonic dispersion--and thus the polaritonic conductance--will be changed as the film approaches sub-100 nm length scales owing to interaction of the fields between the surfaces.[26] The effect is ignored here, however, to allow for material comparisons apart from geometric effects.
Figure 1: (a) Model schematic. Near-field radiation occurs perpendicular to polariton propagation whereas radiation conduction occurs parallel to polariton propagation. The system is composed of two infinite planes consisting of air (top) and a polar semiconductor (bottom). The phonon-polariton is localized to the interface as shown here by the simulated electric-field contours for the mode. (b) Phonon-polariton dispersion for GaN, GaAs, and InSb. The horizontal lines show their respective LO and TO phonons; LO phonons reside at higher energies.
As Figure 3(a) shows, phonon conductance and polariton-based conductance are comparable at thicknesses of 10 nm regardless of the large differences in bulk thermal conductivity between all materials. This occurs because phonon thermal conductivity reduces with thickness at roughly comparable rates regardless of its bulk value. This is indicated in Figure 3(b), which plots previously-reported thickness-dependent thermal conductivities measured at room temperature for several materials that are normalized relative to their value at a thickness of 1 \(\mu\)m. Although the examined materials exhibit bulk thermal conductivities that vary by a factor exceeding 100, the rate of reduction in thermal conductivity is similar for the entire set of materials. Along with the correlation observed in Figure 2, this trend explains why phonon-polariton based radiation conduction becomes comparable
to that of traditional phonon conduction at similar length scales for very different materials. It should therefore be presumed that radiation conduction mediated by phonon-polaritons is intrinsically capable of playing a significant role in heat transport for polar materials with lengths approaching 10 nm.
Figure 3: (a) Phonon conductance versus thickness for InSb, GaAs, and GaN. Shaded regions bound those thicknesses in which the radiation conduction contribution from phonon polaritons spans 1 to 100% that of the phonon value. (b) Measured thickness dependent thermal conductivity at room temperature normalized relative to its value at a thickness of 1 \(\mu\)m. Experimental data is drawn from Si\({}^{31}\), GaN\({}^{32}\), InGaAs\({}^{33}\), AlGaN\({}^{34}\), SiGe.\({}^{35}\)
Figure 2: Correlation between phonon thermal conductivity and phonon-polariton driven radiation conduction at 300 K for polar semiconductors with dielectric functions described under the TOLO-formalism of Eq. 2. Phonon thermal conductivity values are taken from Togo _et al._[21]. Materials are individually labeled in a complementary version of this figure provided within the Supplemental Material.
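The thickness at which the two channels cross can be estimated with a toy model; the mean free path, bulk conductivity, and polariton conductance below are placeholders chosen only to reproduce the orders of magnitude in Figs. 2 and 3, not the MFP spectra of Refs. [28; 29; 30]:

```python
import numpy as np

kappa_bulk = 50.0          # W/(m K), GaAs-like bulk value
mfp = 500e-9               # effective phonon mean free path (placeholder)
sigma_polariton = 5e-9     # W/K, order of magnitude read off Fig. 2

t = np.logspace(-9, -5, 400)             # slab thickness, m
kappa_t = kappa_bulk * t / (t + mfp)     # toy accumulation model
sigma_ph = kappa_t * t                   # Eq. (5)
crossover = t[np.argmin(np.abs(sigma_ph - sigma_polariton))]
print(f"channels comparable near t ~ {crossover:.1e} m")
```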
Phonon-polaritons move appreciable amounts of energy for three reasons. First, phonon-polaritons couple to optical phonons and thus have larger energies than the acoustic phonons that dominate standard phonon conduction. Second, their photonic character permits group velocities closer to the speed of light, around ten times the speed of phonons, which move at the speed of sound. This is indicated by the dispersion curves in Figure 1, wherein the polaritonic dispersion lies close to the light line, implying a velocity close to that of light as can be seen explicitly in Figure 4(a), where the phonon-polariton velocity for InSb, GaAs, and GaN is plotted. The group velocities of the phonon-polaritons in InSb, GaAs, and GaN exceed \(10^{7}\) m/s throughout their dispersion. This is four orders of magnitude greater than acoustic phonon speeds, which typically are on the order of \(10^{3}\) m/s. Finally, phonon-polaritons move exceptionally long distances before scattering. Figure 4b shows that the vast majority of radiation conduction is mediated by phonon-polaritons with propagation lengths greater than 1 mm, consistent with previous reports and far longer than the mean free path of phonons.[16; 26]
The dielectric function of Eq. 2 ultimately defines the dispersion and propagation length of the phonon-polaritons and depends only upon phonon energies and lifetimes. This suggests that phonon characteristics can be used to define a figure of merit (\(FoM\)) to compare materials in their capacity for radiation conduction. We define the \(FoM\) for phonon-polariton driven radiation conduction in Eq. 6:
\[FoM=\frac{\omega_{LO}-\omega_{TO}}{\Gamma} \tag{6}\]
Figure 5 plots the predicted conductance for all materials versus this value.
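A quick evaluation of Eq. (6) illustrates the trend; the TO/LO energies below are rounded literature values and the linewidths are rough placeholders rather than the first-principles lifetimes of Ref. [21]:

```python
# FoM of Eq. (6) for three illustrative materials (units: cm^-1).
materials = {          # (w_TO, w_LO, Gamma); Gamma values are placeholders
    "InSb": (180.0, 191.0, 3.0),
    "GaAs": (268.0, 292.0, 2.5),
    "GaN":  (533.0, 735.0, 5.0),
}
for name, (w_TO, w_LO, Gamma) in materials.items():
    print(f"{name}: FoM = {(w_LO - w_TO) / Gamma:.1f}")
# Larger TO-LO splitting and smaller Gamma give a larger FoM, matching the
# ordering InSb < GaAs < GaN of the conductances in Figs. 2 and 5.
```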
The proposed figure of merit strongly links the magnitude of polariton conductance with \(\Gamma\)-point phonon energies and lifetimes. The strong correlation can be understood by examining the \(FoM\) in light of the characteristics of radiation conduction. The numerator of Eq. 6 (\(\omega_{LO}-\omega_{TO}\)) describes the energy of the phonon-polaritons and the phase-space available for their creation based on the difference between the corresponding TO- and LO-modes. A larger difference implies "more" phonon-polaritons of higher energy. The denominator
(\(\Gamma\)) is a quantification of loss induced by the phonons and thus is a measure of polariton propagation. A higher value of \(\Gamma\) corresponds to smaller propagation and thus less efficient transport. The figure of merit does not take into account the scaling of polariton population that increases as the thermal energy (208 cm\({}^{-1}\) at 300 K) approaches the TO and LO-phonon energies. Even with this omission, the simple ratio explains over 80% of the variance between materials.
(\(\Gamma\)) is a quantification of loss induced by the phonons and thus is a measure of polariton propagation. A higher value of \(\Gamma\) corresponds to smaller propagation and thus less efficient transport. The figure of merit does not take into account the scaling of polariton population that increases as the thermal energy (208 cm\({}^{-1}\) at 300 K) approaches the TO and LO-phonon energies. Even with this omission, the simple ratio explains over 80% of the variance between materials.
As the phonon lifetime plays a central role in determining the magnitude of radiation conduction. The temperature dependence of phonon lifetime will, therefore, dictate the temperature dependence of radiation conduction. Due to anharmonicity, the phonon linewidth increases with temperature in a manner that impacts radiation conduction.[39; 40; 41; 42] To account for this fact while removing any extrinsic causes, the temperature-dependent polariton conductance for GaAs was calculated using the first principles estimations for phonon lifetime reported in Yang _et al.[40]_ Temperature-dependent changes in phonon energies were not accounted for since they are comparatively small relative to the variation in linewidth. Figure 6 plots the resulting temperature-dependent phonon-polariton radiation conduction for GaAs.
When considering a temperature dependent phonon lifetime, a maximum is observed in the polariton conductance near 150 K. This maximum is distinct from the Umklapp hump of GaAs that occurs near 20 K of phonon conduction[43] since the radiation conduction is dictated by the balance in phonon-polariton population and scattering rather than phonon population and scattering. The decrease in radiation conduction with temperature above room temperature is contrary to previous reports.[13; 44] There are two possibilities to account
Figure 5: Phonon-polariton radiation conductance versus Figure of Merit (FoM) (see Eq. 6) for each of the examined materials. Over 80% of the plot’s variance is described through the FoM, which derives from the characteristic features of energy, speed, and propagation of the phonon-polariton. Materials are individually labeled in a complementary version of this figure provided within the Supplemental Material.
for the discrepancy. Extrinsic polariton scattering from defects or size effects could dominate the intrinsic temperature-dependent changes in phonon lifetime. It is also possible that the discrepancy indicates a limitation in the ability of kinetic theory to fully describe radiation conduction, suggesting that approaches emphasizing fluctuational electrodynamics may be more relevant.[11; 13; 45] Regardless of causation, there remains much to explore regarding the effects of temperature and other extrinsic factors on radiation conduction.
## IV Conclusions
In-plane phonon-polariton thermal conductance--termed radiation conduction--takes on a value that is comparable to phonon conduction for ultrathin solids (\(\sim 10\) nm) regardless of the material's bulk thermal conductivity. This is because phonon-polaritons have comparatively high energy combined with extreme velocities and long intrinsic mean-free paths. These characteristics derive from the dispersion of the phonon-polariton that itself is primarily determined by the \(\Gamma-\)point optical phonon energies and lifetimes. A figure of merit (\(FoM=\frac{(\omega_{LO}-\omega_{TO})}{\Gamma}\)) was established leveraging these optical phonon characteristics that was highly correlative with the over 10x variation in radiation conduction observed for the 20 different wurtzite and zinc-blende materials examined here. This figure of merit highlights
Figure 6: Temperature dependence of phonon-polariton radiation conduction in GaAs when considering both (solid line) temperature dependent and (dash) constant phonon linewidth. Radiation conduction varies with temperature as does the phonon linewidth.
that radiation conduction increases for high-energy phonons having long lifetimes and can be used identify promising materials to cultivate radiation conduction.
## V Supplementary Material
Optical properties and thermal conductivity values utilized within the analysis presented here are provided in the Supplementary Material, along with the expression describing the surface-phonon polariton dispersion for air atop a birefringent material. Figures 2 and 5 are reproduced with each material labelled individually.
## VI Data Availability
Code used to produce the figures in this manuscript and underlying data is available upon reasonable request to the corresponding author.
|
2306.04647 | Compressed Sensing: A Discrete Optimization Approach | We study the Compressed Sensing (CS) problem, which is the problem of finding
the most sparse vector that satisfies a set of linear measurements up to some
numerical tolerance. We introduce an $\ell_2$ regularized formulation of CS
which we reformulate as a mixed integer second order cone program. We derive a
second order cone relaxation of this problem and show that under mild
conditions on the regularization parameter, the resulting relaxation is
equivalent to the well studied basis pursuit denoising problem. We present a
semidefinite relaxation that strengthens the second order cone relaxation and
develop a custom branch-and-bound algorithm that leverages our second order
cone relaxation to solve small-scale instances of CS to certifiable optimality.
When compared against solutions produced by three state of the art benchmark
methods on synthetic data, our numerical results show that our approach
produces solutions that are on average $6.22\%$ more sparse. When compared only
against the experiment-wise best performing benchmark method on synthetic data,
our approach produces solutions that are on average $3.10\%$ more sparse. On
real world ECG data, for a given $\ell_2$ reconstruction error our approach
produces solutions that are on average $9.95\%$ more sparse than benchmark
methods ($3.88\%$ more sparse if only compared against the best performing
benchmark), while for a given sparsity level our approach produces solutions
that have on average $10.77\%$ lower reconstruction error than benchmark
methods ($1.42\%$ lower error if only compared against the best performing
benchmark). When used as a component of a multi-label classification algorithm,
our approach achieves greater classification accuracy than benchmark compressed
sensing methods. This improved accuracy comes at the cost of an increase in
computation time by several orders of magnitude. | Dimitris Bertsimas, Nicholas A. G. Johnson | 2023-06-05T01:29:24Z | http://arxiv.org/abs/2306.04647v3 | # Compressed Sensing: A Discrete Optimization Approach
###### Abstract
We study the Compressed Sensing (CS) problem, which is the problem of finding the most sparse vector that satisfies a set of linear measurements up to some numerical tolerance. CS is a central problem in Statistics, Operations Research and Machine Learning which arises in applications such as signal processing, data compression and image reconstruction. We introduce an \(\boldsymbol{\ell_{2}}\) regularized formulation of CS which we reformulate as a mixed integer second order cone program. We derive a second order cone relaxation of this problem and show that under mild conditions on the regularization parameter, the resulting relaxation is equivalent to the well studied basis pursuit denoising problem. We present a semidefinite relaxation that strengthens the second order cone relaxation and develop a custom branch-and-bound algorithm that leverages our second order cone relaxation to solve instances of CS to certifiable optimality. Our numerical results show that our approach produces solutions that are on average \(\boldsymbol{6.22\%}\) more sparse than solutions returned by state of the art benchmark methods on synthetic data in minutes. On real world ECG data, for a given \(\boldsymbol{\ell_{2}}\) reconstruction error our approach produces solutions that are on average \(\boldsymbol{9.95\%}\) more sparse than benchmark methods, while for a given sparsity level our approach produces solutions that have on average \(\boldsymbol{10.77\%}\) lower reconstruction error than benchmark methods in minutes.
**Keywords:** Sparsity; Sparse Approximation, Compressed Sensing; Convex Relaxation; Branch-and-bound
## 1 Introduction
The _Compressed Sensing_ (CS) problem seeks to find a most sparse vector \(\mathbf{x}\in\mathbb{R}^{n}\) that is consistent with a set of \(m\) linear equalities. CS is a fundamental problem in Statistics, Operations Research and Machine Learning which arises in numerous applications such as medical resonance imaging [1], holography [2], climate monitoring [3], natural resource mining [4] and electrocardiogram signal acquisition [5] among many others. Formally, given a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) and a vector \(\mathbf{b}\in\mathbb{R}^{m}\), CS is given by [6]:
\[\min_{\mathbf{x}\in\mathbb{R}^{n}}\|\mathbf{x}\|_{0}\text{ s.t. }\mathbf{Ax}=\mathbf{b}. \tag{1}\]
In the presence of noisy measurements, it is necessary to relax the equality constraint in (1), leading to the following formulation for \(\epsilon>0\):
\[\min_{\mathbf{x}\in\mathbb{R}^{n}}\|\mathbf{x}\|_{0}\text{ s.t. }\|\mathbf{Ax}-\mathbf{b}\|_{2}^{2} \leq\epsilon. \tag{2}\]
This problem is sometimes referred to as sparse approximation in the literature [7] and trivially reduces to (1) for \(\epsilon=0\). CS allows signals to be reconstructed surprisingly well after sampling at a rate far below the Nyquist sampling rate by leveraging the inherent sparsity of most signals, either in the signal's latent space or in an appropriately defined transform space. For example, natural images tend to have a sparse representation in the wavelet domain, speech can be represented using a small number of coefficients in the Fourier transform domain and medical images can be represented sparsely in the Radon transform domain [8].
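To make the combinatorial nature of (2) concrete, the following brute-force sketch enumerates supports in order of increasing cardinality; it is exact but only viable for very small \(n\):

```python
import itertools
import numpy as np

def l0_min_bruteforce(A, b, eps):
    """Exhaustive solution of Problem (2): return a vector of minimum
    cardinality whose least-squares fit on its support meets the tolerance.
    Enumerates all C(n, k) supports for k = 0, 1, ..., so use tiny n only."""
    m, n = A.shape
    for k in range(n + 1):
        for support in itertools.combinations(range(n), k):
            cols = list(support)
            x = np.zeros(n)
            if cols:
                sol, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
                x[cols] = sol
            if np.sum((A @ x - b) ** 2) <= eps:
                return x
    return None  # no support satisfies the tolerance
```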
In Section 2, we will see that the vast majority of existing approaches to CS either rely on \(\ell_{1}\) based convex approximations to (2) or are greedy heuristics whereas the use of integer optimization techniques has gone relatively unexplored. In this work, we formulate CS as:
\[\min_{\mathbf{x}\in\mathbb{R}^{n}}\|\mathbf{x}\|_{0}+\frac{1}{\gamma}\|\mathbf{x}\|_{2}^{ 2}\text{ s.t. }\|\mathbf{Ax}-\mathbf{b}\|_{2}^{2}\leq\epsilon, \tag{3}\]
where \(\gamma>0\) is a regularization parameter that in practice can either take a default value (e.g. \(\sqrt{n}\)) or be cross-validated by minimizing a validation metric [see, e.g., 9] to obtain strong out-of-sample performance [10]. A defining characteristic of the approach we present in this work is that we leverage techniques from integer optimization to exploit the inherent discreteness of formulation (3) rather than relying on more commonly studied approximate methods. Note that Problem (3) is a special case of the formulation given by:
\[\min_{\mathbf{x}\in\mathbb{R}^{n}}\|\mathbf{x}\|_{0}+\frac{1}{\gamma}\|\mathbf{W}\mathbf{x}\|_ {2}^{2}\text{ s.t. }\|\mathbf{Ax}-\mathbf{b}\|_{2}^{2}\leq\epsilon. \tag{4}\]
where \(\mathbf{W}\in\mathbb{R}^{n\times n}\) is a diagonal matrix with nonnegative diagonal entries that should be interpreted as coordinate weights on the vector \(\mathbf{x}\). Indeed, (4) reduces to (3) when we take \(\mathbf{W}=\mathbf{I}\).
In this work, we develop strong convex relaxations to (3) and leverage our relaxations to develop a custom branch-and-bound algorithm that can solve (3) to certifiable optimality. We show that compared to state of the art benchmark methods, our branch-and-bound algorithm produces solutions that are on average \(6.22\%\) more sparse on synthetic data and on average \(9.95\%\) more sparse on real world ECG data at the expense of increased computation time. Thus, for applications where runtime is not of critical importance, leveraging integer optimization can yield sparser solutions to CS than existing benchmarks.
### Contributions and Structure
In this paper, we approach CS using mixed integer second order cone optimization. We derive a second order cone relaxation of this problem and show that under mild conditions on the regularization parameter, the resulting relaxation is equivalent to the well studied basis pursuit denoising problem. We present a semidefinite relaxation that strengthens the second order cone relaxation and develop a custom branch-and-bound algorithm that leverages our second order cone relaxation to solve instances of CS to certifiable optimality. Our numerical results show that our approach produces solutions that are on average \(6.22\%\) more sparse than solutions returned by state of the art benchmark methods on synthetic data in minutes. On real world ECG data, for a given \(\ell_{2}\) reconstruction error our approach produces solutions that are on average \(9.95\%\) more sparse than benchmark methods, while for a given sparsity level our approach produces solutions that have on average \(10.77\%\) lower reconstruction error than benchmark methods in minutes.
The rest of the paper is structured as follows. In Section 2, we review existing formulations and solution methods of the CS problem. In Section 3, we study how our regularized formulation of CS (3) connects to the commonly used formulation (2). We reformulate (3) exactly as a mixed integer second order cone problem in Section 4 and present a second order cone relaxation in Section 4.1 and a stronger but more computationally expensive semidefinite cone relaxation in Section 4.2. We show that our second order cone relaxation is equivalent to the Basis Pursuit Denoising problem under mild conditions offering a new interpretation of this well studied method as a convex relaxation of our mixed integer second order cone reformulation of (3). We leverage our second order cone relaxation to develop a custom branch-and-bound algorithm in Section 5 that can solve instances of (3) to certifiable optimality. In Section 6, we investigate the performance of our branch-and-bound algorithm against state of the art benchmark methods on synthetic and real world data.
_Notation:_
We let nonbold face characters such as \(b\) denote scalars, lowercase bold faced characters such as \(\mathbf{x}\) denote vectors, uppercase bold faced characters such as \(\mathbf{X}\) denote matrices, and calligraphic uppercase characters such as \(\mathcal{Z}\) denote sets. We let \([n]\) denote the set of running indices \(\{1,\ldots,n\}\). We let \(\mathbf{e}\) denote a vector of all 1's, \(\mathbf{0}_{n}\) denote an n-dimensional vector of all 0's, and \(\mathbf{I}\) denote the identity matrix. We let \(\mathcal{S}^{n}\) denote the cone of \(n\times n\) symmetric matrices and \(\mathcal{S}^{n}_{+}\) denote the cone of \(n\times n\) positive semidefinite matrices.
## 2 Literature Review
In this section, we review several key approaches from the literature that have been employed to solve the CS problem. As an exhaustive literature review is outside of the scope of this paper, we focus our review on a handful of well studied approaches which will be used as benchmarks in this work. For a more detailed CS literature review, we refer the reader to [7].
The majority of existing approaches to the CS problem are heuristic in nature and generally can be classified as either convex approximations or greedy methods as we will see in this section. For these methods, associated performance guarantees require making strong statistical assumptions on the underlying problem data. Integer optimization has been given little attention in the CS literature despite its powerful modelling capabilities. [11] and [12] explore formulating Problem (2) as a mixed integer linear program for the case when \(\epsilon=0\). However this approach relies on using the big-\(M\) method which requires estimating reasonable values for \(M\) and cannot immediately generalize to the setting where \(\epsilon>0\).
### Basis Pursuit Denoising
A common class of CS methods rely on solving convex approximations of (2) rather than attempting to solve (2) directly. A popular approach is to use the \(\ell_{1}\) norm as a convex surrogate for the \(\ell_{0}\) norm [6, 13, 14, 15, 16]. This approximation is typically motivated by the observation that the unit \(\ell_{1}\) ball given by \(\mathcal{B}_{\ell_{1}}=\{\mathbf{x}\in\mathbb{R}^{n}:\|\mathbf{x}\|_{1}\leq 1\}\) is the convex hull of the nonconvex set \(\mathcal{X}=\{\mathbf{x}\in\mathbb{R}^{n}:\|\mathbf{x}\|_{0}\leq 1,\|\mathbf{x}\|_{ \infty}\leq 1\}\). Replacing the \(\ell_{0}\) norm by the \(\ell_{1}\) norm in (2), we obtain:
\[\min_{\mathbf{x}\in\mathbb{R}^{n}}\|\mathbf{x}\|_{1}\text{ s.t. }\|\mathbf{A}\mathbf{x}-\mathbf{b} \|_{2}^{2}\leq\epsilon. \tag{5}\]
Problem (5) is referred to as Basis Pursuit Denoising and is a quadratically constrained convex optimization problem which can be solved efficiently using one of several off-the-shelf optimization packages. Basis Pursuit Denoising produces an approximate solution to Problem (2) by either directly returning the solution of (5) or by post-processing the solution of (5) to further sparsify the result. One such post-processing technique is a greedy rounding mechanism where columns of the matrix \(\mathbf{A}\) are iteratively selected in order of decreasing magnitude of the entries of the optimal solution of (5) until the selected column set of \(\mathbf{A}\) is sufficiently large to produce a feasible solution to (2). Basis Pursuit Denoising is very closely related to the Lasso problem which is given by:
\[\min_{\mathbf{x}\in\mathbb{R}^{n}}\quad\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}+\lambda\| \mathbf{x}\|_{1}, \tag{6}\]
where \(\lambda>0\) is a tunable hyperparameter. Lasso is a statistical estimator commonly used for sparse regression since, empirically, the optimal solution of Problem (6) tends to be sparse [17]. More recently, strong connections between Lasso and robust optimization have been established [18]. Basis Pursuit Denoising and Lasso are equivalent in that Lasso is obtained by relaxing the hard constraint in (5) and instead introducing a penalty term in the objective function. It is straightforward to show that for given
input data \(\mathbf{A},\mathbf{b}\) and \(\epsilon\) in (5), there exists a value \(\lambda^{\star}>0\) such that there exists a solution \(\mathbf{x}^{\star}\) that is both optimal for (5) and (6) when the tunable parameter takes value \(\lambda=\lambda^{\star}\).
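Both (5) and (6) are straightforward to express in a convex modelling language. The sketch below uses cvxpy with its default conic solver; the package choice is an illustrative assumption, not one made in this paper.

```python
import cvxpy as cp

def basis_pursuit_denoising(A, b, eps):
    """Solve Problem (5): min ||x||_1  s.t.  ||Ax - b||_2^2 <= eps."""
    x = cp.Variable(A.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm1(x)),
                      [cp.sum_squares(A @ x - b) <= eps])
    prob.solve()
    return x.value

def lasso(A, b, lam):
    """Solve Problem (6): min ||Ax - b||_2^2 + lam * ||x||_1."""
    x = cp.Variable(A.shape[1])
    prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b) + lam * cp.norm1(x)))
    prob.solve()
    return x.value
```

Sweeping `lam` in `lasso` traces out the same solution path that varying \(\epsilon\) induces in `basis_pursuit_denoising`, which is the equivalence described above.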
Note that by taking \(\epsilon=0\), Problem (5) reduces to the well studied Basis Pursuit problem where the equality constraint \(\mathbf{A}\mathbf{x}=\mathbf{b}\) is enforced. A large body of work studies conditions under which the optimal solution of the Basis Pursuit problem is also an optimal solution of (1). For example, see [19], [20], [21], and [22]. One of the most well studied conditions under which this equivalence holds is when the input matrix \(\mathbf{A}\) satisfies the Restricted Isometry Property (RIP). Formally, a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) is said to satisfy RIP of order \(s\) and parameter \(\delta_{s}\in(0,1)\) if for every vector \(\mathbf{x}\in\mathbb{R}^{n}\) such that \(\|\mathbf{x}\|_{0}\leq s\), we have
\[(1-\delta_{s})\|\mathbf{x}\|_{2}^{2}\leq\|\mathbf{A}\mathbf{x}\|_{2}^{2}\leq(1+\delta_{s} )\|\mathbf{x}\|_{2}^{2}.\]
It has been established that if \(\mathbf{A}\) satisfies RIP of order \(2s\) and parameter \(\delta_{2s}<1/3\), then the optimal solution of the Basis Pursuit problem is also an optimal solution of (1), where \(s\) denotes the cardinality of this optimal solution [23]. While it has been shown that certain random matrices satisfy this desired RIP property with high probability [24, 25], RIP in general is not tractable to verify on arbitrary real-world data.
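Although RIP cannot be verified efficiently in general, for toy instances the constant \(\delta_{s}\) can be computed exactly by enumerating all \(\binom{n}{s}\) supports. The sketch below does exactly this and is intended purely as a sanity check, since its cost is exponential in \(n\).

```python
from itertools import combinations
import numpy as np

def rip_constant(A, s):
    """Exact delta_s: largest deviation of the squared singular values of
    any m x s column submatrix of A from 1. Exponential in n."""
    n = A.shape[1]
    delta = 0.0
    for S in combinations(range(n), s):
        sv = np.linalg.svd(A[:, list(S)], compute_uv=False)
        delta = max(delta, abs(sv[0] ** 2 - 1.0), abs(1.0 - sv[-1] ** 2))
    return delta
```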
### Iterative Reweighted L1
Iterative Reweighted \(\ell_{1}\) minimization is an iterative method that can generate an approximate solution to (2) by solving a sequence of convex optimization problems that are very closely related to the Basis Pursuit Denoising problem given by (5) [26, 27, 28]. This approach falls in the class of convex approximation based methods for solving CS. The approach considers the weighted \(\ell_{1}\) minimization problem given by:
\[\begin{split}\min_{\mathbf{x}\in\mathbb{R}^{n}}&\|\mathbf{W} \mathbf{x}\|_{1}\\ \text{s.t.}&\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\leq \epsilon,\end{split} \tag{7}\]
where \(\mathbf{W}\in\mathbb{R}^{n\times n}\) is a diagonal matrix with nonnegative diagonal entries. Each diagonal entry \(W_{ii}=w_{i}\) of \(\mathbf{W}\) can be interpreted as a weighting of the \(i^{th}\) coordinate of the vector \(\mathbf{x}\). Interpreting the \(\ell_{1}\) norm as a convex surrogate for the \(\ell_{0}\) norm, Problem (7) can be viewed as a relaxation of the nonconvex problem given by
\[\begin{split}\min_{\mathbf{x}\in\mathbb{R}^{n}}&\|\mathbf{W} \mathbf{x}\|_{0}\\ \text{s.t.}&\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\leq \epsilon.\end{split} \tag{8}\]
It is trivial to verify that when \(\mathbf{W}=\alpha\mathbf{I}\), where \(\alpha>0\) and \(\mathbf{I}\) is the \(n\)-by-\(n\) identity matrix, (8) and (7) reduce exactly to (2) and (5) respectively. Assuming the weights never vanish, the nonconvex Problems (2) and (8) have the same optimal solution, yet their convex relaxations (5) and (7) will generally have very different solutions. In this regard, the weights can be regarded as parameters that if chosen correctly can
produce a better solution than (5). Iterative Reweighted \(\ell_{1}\) minimization proceeds as follows [26] (a sketch in code follows the steps):
1. Initialize the iteration count \(t\gets 0\) and the weights \(w_{i}^{(0)}\gets 1\).
2. Solve (7) with \(\mathbf{W}=\mathbf{W}^{(t)}\). Let \(\mathbf{x}^{(t)}\) denote the optimal solution.
3. Update the weights as \(w_{i}^{(t+1)}\leftarrow\frac{1}{|x_{i}^{(t)}|+\delta}\) where \(\delta>0\) is a fixed parameter for numerical stability.
4. Terminate if \(t\) reaches a maximum number of iterations or if the iterates \(\mathbf{x}^{(t)}\) have converged. Otherwise, increment \(t\) and return to Step 2.
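A minimal sketch of steps 1-4 above, again using cvxpy; the iteration cap, the value of \(\delta\), and the convergence tolerance are illustrative choices.

```python
import cvxpy as cp
import numpy as np

def iterative_reweighted_l1(A, b, eps, delta=1e-3, max_iter=20, tol=1e-6):
    n = A.shape[1]
    w = np.ones(n)                                   # step 1: w_i^(0) = 1
    x_prev = np.zeros(n)
    for _ in range(max_iter):
        x = cp.Variable(n)
        prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))),  # step 2: solve (7)
                          [cp.sum_squares(A @ x - b) <= eps])
        prob.solve()
        x_t = x.value
        w = 1.0 / (np.abs(x_t) + delta)              # step 3: reweight
        if np.linalg.norm(x_t - x_prev) <= tol:      # step 4: convergence test
            break
        x_prev = x_t
    return x_t
```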
[26] show empirically that in many settings the solution returned by Iterative Reweighted \(\ell_{1}\) minimization outperforms the solution returned by Basis Pursuit Denoising, recovering the true underlying signal while requiring fewer measurements to be taken. We note that this approach is an instance of a broader class of sparsifying iterative reweighted methods [29, 30, 31].
### Orthogonal Matching Pursuit
Orthogonal Matching Pursuit (OMP) is a canonical greedy algorithm for obtaining heuristic solutions to (2) [32, 33]. Solving Problem (2) can be interpreted as determining the minimum number of columns from the input matrix \(\mathbf{A}\) that must be selected such that the residual of the projection of the input vector \(\mathbf{b}\) onto the span of the selected columns has \(\ell_{2}\) norm at most \(\sqrt{\epsilon}\). The OMP algorithm first selects the column of \(\mathbf{A}\) that is most collinear with \(\mathbf{b}\) and subsequently, at each iteration, adds the column of \(\mathbf{A}\) that is most collinear with the residual of the projection of \(\mathbf{b}\) onto the subspace spanned by the selected columns, until the norm of this residual is at most \(\sqrt{\epsilon}\). Concretely, OMP proceeds as follows (see the sketch after these steps), where for an arbitrary collection of indices \(\mathcal{I}_{t}\subseteq[n]\), we let \(\mathbf{A}(\mathcal{I}_{t})\in\mathbb{R}^{m\times|\mathcal{I}_{t}|}\) denote the matrix obtained by stacking the \(|\mathcal{I}_{t}|\) columns of \(\mathbf{A}\) corresponding to the indices in the set \(\mathcal{I}_{t}\):
1. Initialize the iteration count \(t\gets 0\), the residual \(\mathbf{r}_{0}\leftarrow\mathbf{b}\) and the index set \(\mathcal{I}_{0}\leftarrow\emptyset\).
2. Select the column that is most collinear with the residual \(i_{t}\leftarrow\operatorname*{arg\,max}_{i\in[n]\setminus\mathcal{I}_{t}}| \mathbf{a}_{i}^{T}\mathbf{r}_{t}|\) and update the index set \(\mathcal{I}_{t+1}\leftarrow\mathcal{I}_{t}\cup i_{t}\).
3. Compute the projection of \(\mathbf{b}\) onto the current set of columns \[\mathbf{x}_{t+1}\leftarrow\big{[}\mathbf{A}(\mathcal{I}_{t+1})^{T}\mathbf{A}(\mathcal{I}_ {t+1})\big{]}^{\dagger}\mathbf{A}(\mathcal{I}_{t+1})^{T}\mathbf{b},\] and update the residual \(\mathbf{r}_{t+1}\leftarrow\mathbf{b}-\mathbf{A}(\mathcal{I}_{t+1})\mathbf{x}_{t+1}\).
4. Terminate if \(\|\mathbf{r}_{t+1}\|_{2}^{2}\leq\epsilon\), otherwise increment \(t\) and return to Step 2.
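A direct numpy transcription of steps 1-4 above; the pseudoinverse in step 3 handles rank-deficient column sets, exactly as in the projection formula.

```python
import numpy as np

def omp(A, b, eps):
    n = A.shape[1]
    r = b.copy()                                    # step 1: r_0 = b
    support = []
    coeffs = np.zeros(0)
    while r @ r > eps:
        scores = np.abs(A.T @ r)
        scores[support] = -np.inf                   # exclude selected columns
        support.append(int(np.argmax(scores)))      # step 2: most collinear column
        As = A[:, support]
        coeffs = np.linalg.pinv(As.T @ As) @ (As.T @ b)   # step 3: projection
        r = b - As @ coeffs                         # residual update
    x = np.zeros(n)
    x[support] = coeffs                             # step 4 terminated the loop
    return x
```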
Conditions under which the solution returned by OMP is the optimal solution of (2) (either with high probability or with certainty) have been studied extensively [34, 35, 22]. Unfortunately, these conditions suffer from the same limitation as RIP in that in general they are not tractable to verify on real world data. A closely related method to OMP is Subspace Pursuit (SP) which is another greedy algorithm for obtaining a heuristic solution to (2) in the \(\epsilon=0\) setting but has the additional requirement that a target sparsity value \(K\) must be specified in advance [36]. SP is initialized by selecting
the \(K\) columns of \(\mathbf{A}\) that are most collinear with the vector \(\mathbf{b}\). At each iteration, SP first computes the residual of the projection of \(\mathbf{b}\) onto the current column set and then greedily updates up to \(K\) elements of the column set, repeating this process until doing so no longer decreases the norm of the residual.
## 3 Formulation Properties
In this section, we rigorously investigate connections between formulations (3) and (2) for the CS problem in the noisy setting. The only difference between formulations (2) and (3) is the inclusion of an \(\ell_{2}\) regularization term in the objective function in (3). We will see in Section 4 that the presence of this regularization term facilitates useful reformulations. Moreover, in the case of regression, [18] show that augmenting the ordinary least squares objective function with an \(\ell_{2}\) regularization penalty produces regression vectors that are robust against data perturbations, which suggests the presence of such a regularization term may confer a similar benefit in (3). A natural question to ask is: under what conditions do problems (2) and (3) have the same solution? We answer this question in Theorem 1.
**Theorem 1**.: _There exists a finite value \(\gamma_{0}<\infty\) such that for all \(\bar{\gamma}\geq\gamma_{0}\), there exists a vector \(\mathbf{x}^{\star}\) such that \(\mathbf{x}^{\star}\) is an optimal solution of (2) and also an optimal solution of (3) where we set \(\gamma=\bar{\gamma}\). Letting \(\tilde{\mathbf{x}}\) denote a minimum norm solution to (2), we can take \(\gamma_{0}=\|\tilde{\mathbf{x}}\|_{2}^{2}\) and \(\mathbf{x}^{\star}=\tilde{\mathbf{x}}\)._
Phrased simply, Theorem 1 establishes that there exists a finite value \(\gamma_{0}\) such that if the regularization parameter \(\gamma\) in problem (3) is at least as large as \(\gamma_{0}\), then there is a vector \(\mathbf{x}^{\star}\) that is optimal to both problems (2) and (3). We note that this finite value \(\gamma_{0}\) depends on the input data \(\mathbf{A},\mathbf{b}\) and \(\epsilon\).
Proof.: Consider any matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\), vector \(\mathbf{b}\in\mathbb{R}^{m}\) and scalar \(\epsilon>0\). Let \(\Omega\) denote the set of optimal solutions to (2) and let \(\mathcal{X}\) denote the feasible set of (2) and (3). We have \(\mathcal{X}=\{\mathbf{x}:\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\leq\epsilon\}\) and \(\Omega\subseteq\mathcal{X}\). Let \(\tilde{\mathbf{x}}\in\arg\min_{\mathbf{x}\in\Omega}\|\mathbf{x}\|_{2}^{2}\) and let \(\gamma_{0}=\|\tilde{\mathbf{x}}\|_{2}^{2}\). Since \(\tilde{\mathbf{x}}\in\Omega\), \(\tilde{\mathbf{x}}\) is an optimal solution to (2). It remains to show that \(\tilde{\mathbf{x}}\) is optimal to (3) for all \(\gamma\geq\gamma_{0}\).
Fix any \(\gamma\geq\gamma_{0}\). To show that \(\tilde{\mathbf{x}}\) is an optimal solution of (3), we will show that for all \(\bar{\mathbf{x}}\in\mathcal{X}\), we have
\[\|\tilde{\mathbf{x}}\|_{0}+\frac{1}{\gamma}\|\tilde{\mathbf{x}}\|_{2}^{2}\leq\|\bar{\mathbf{x}}\|_{0}+\frac{1}{\gamma}\|\bar{\mathbf{x}}\|_{2}^{2}.\]
Fix an arbitrary \(\bar{\mathbf{x}}\in\mathcal{X}\). Either \(\bar{\mathbf{x}}\in\mathcal{X}\setminus\Omega\) or \(\bar{\mathbf{x}}\in\Omega\). Suppose \(\bar{\mathbf{x}}\in\mathcal{X}\setminus\Omega\). The definition of \(\Omega\) and the fact that \(\tilde{\mathbf{x}}\in\Omega\) implies
\[\|\tilde{\mathbf{x}}\|_{0}<\|\bar{\mathbf{x}}\|_{0}\implies\|\tilde{\mathbf{x}}\|_{0}+1 \leq\|\bar{\mathbf{x}}\|_{0}.\]
Next, note that since \(\gamma\geq\gamma_{0}=\|\tilde{\mathbf{x}}\|_{2}^{2}\), we have
\[\|\tilde{\mathbf{x}}\|_{0}+\frac{1}{\gamma}\|\tilde{\mathbf{x}}\|_{2}^{2}\leq\|\tilde {\mathbf{x}}\|_{0}+1\leq\|\bar{\mathbf{x}}\|_{0}\leq\|\bar{\mathbf{x}}\|_{0}+\frac{1}{ \gamma}\|\bar{\mathbf{x}}\|_{2}^{2}.\]
Suppose instead that \(\bar{\mathbf{x}}\in\Omega\). The definitions of \(\Omega\) and \(\tilde{\mathbf{x}}\) imply \(\|\bar{\mathbf{x}}\|_{0}=\|\tilde{\mathbf{x}}\|_{0}\) and \(\|\tilde{\mathbf{x}}\|_{2}^{2}\leq\|\bar{\mathbf{x}}\|_{2}^{2}\). It then follows immediately that \(\|\tilde{\mathbf{x}}\|_{0}+\frac{1}{\gamma}\|\tilde{\mathbf{x}}\|_{2}^{2}\leq\|\bar{\mathbf{x}}\|_{0}+\frac{1}{\gamma}\|\bar{\mathbf{x}}\|_{2}^{2}\). Thus, \(\tilde{\mathbf{x}}\) is optimal to (3). This completes the proof.
Though Theorem 1 is useful in establishing conditions for the equivalence of problems (2) and (3), it is important to note that computing the value of \(\gamma_{0}\) specified in the theorem requires solving (2), which is difficult in general. Suppose we are solving problem (3) with some regularization parameter \(\gamma\) in the regime where \(0<\gamma<\gamma_{0}\). A natural question to ask is: how well does the solution of (3) approximate the solution of (2)? We answer this question in Theorem 2.
**Theorem 2.**_Let \(\tilde{\mathbf{x}}\) and \(\gamma_{0}\) be as defined in Theorem 1, and let \(\mathcal{X}\) denote the feasible set of (2) and (3). Specifically, \(\tilde{\mathbf{x}}\) denotes a minimum norm solution to (2), \(\gamma_{0}=\|\tilde{\mathbf{x}}\|_{2}^{2}\) and \(\mathcal{X}=\{\mathbf{x}:\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\leq\epsilon\}\). Let \(\lambda_{\epsilon}>0\) be a value such that_
\[\operatorname*{arg\,min}_{\mathbf{x}\in\mathcal{X}}\|\mathbf{x}\|_{2}^{2}= \operatorname*{arg\,min}_{\mathbf{x}}\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}+\lambda_{ \epsilon}\|\mathbf{x}\|_{2}^{2}.\]
_Fix any value \(\gamma\) with \(0<\gamma<\gamma_{0}\). Suppose \(\bar{\mathbf{x}}\) is an optimal solution to (3). Then we have_
\[\|\tilde{\mathbf{x}}\|_{0}\leq\|\bar{\mathbf{x}}\|_{0}\leq\|\tilde{\mathbf{x}}\|_{0}+\frac{1}{\gamma}\bigg{(}\|\tilde{\mathbf{x}}\|_{2}^{2}-\Big{\|}\Big{(}\lambda_{\epsilon}\mathbf{I}+\mathbf{A}^{T}\mathbf{A}\Big{)}^{-1}\mathbf{A}^{T}\mathbf{b}\Big{\|}_{2}^{2}\bigg{)}.\]
Proof.: Fix any value \(\gamma\) with \(0<\gamma<\gamma_{0}\) and consider any optimal solution \(\bar{\mathbf{x}}\) to (3). The inequality \(\|\tilde{\mathbf{x}}\|_{0}\leq\|\bar{\mathbf{x}}\|_{0}\) follows immediately from the optimality of \(\tilde{\mathbf{x}}\) in (2). By the optimality of \(\bar{\mathbf{x}}\), we must have
\[\|\bar{\mathbf{x}}\|_{0}+\frac{1}{\gamma}\|\bar{\mathbf{x}}\|_{2}^{2}\leq\|\tilde{\bm {x}}\|_{0}+\frac{1}{\gamma}\|\tilde{\mathbf{x}}\|_{2}^{2}\implies\|\bar{\mathbf{x}}\|_{ 0}\leq\|\tilde{\mathbf{x}}\|_{0}+\frac{1}{\gamma}(\|\tilde{\mathbf{x}}\|_{2}^{2}-\| \bar{\mathbf{x}}\|_{2}^{2}).\]
Thus, to establish the result we need only derive an upper bound for the term \((\|\tilde{\mathbf{x}}\|_{2}^{2}-\|\bar{\mathbf{x}}\|_{2}^{2})\), or equivalently to derive a lower bound for the term \(\|\bar{\mathbf{x}}\|_{2}^{2}\). Since \(\bar{\mathbf{x}}\in\mathcal{X}\), such a lower bound can be obtained by solving the optimization problem given by
\[\begin{split}\min_{\mathbf{x}\in\mathbb{R}^{n}}&\|\mathbf{x} \|_{2}^{2}\\ \text{s.t.}&\mathbf{x}\in\mathcal{X}=\{\mathbf{x}:\|\mathbf{A}\mathbf{ x}-\mathbf{b}\|_{2}^{2}\leq\epsilon\}.\end{split} \tag{9}\]
This optimization problem has the same optimal solution as the ridge regression problem given by
\[\min_{\mathbf{x}\in\mathbb{R}^{n}}\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}+\lambda_{ \epsilon}\|\mathbf{x}\|_{2}^{2}. \tag{10}\]
for some value \(\lambda_{\epsilon}>0\). To see this, we form the Lagrangian for (9) \(L(\mathbf{x},\mu)=\|\mathbf{x}\|_{2}^{2}+\mu(\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}-\epsilon)\) and observe that the KKT conditions for \((\mathbf{x},\mu)\in\mathbb{R}^{n}\times\mathbb{R}\) are given by
1. \(\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\leq\epsilon\);
2. \(\mu\geq 0\);
3. \(\mu(\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}-\epsilon)=0\implies\mu=0\) or \(\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}=\epsilon\);
4. \(\nabla_{\mathbf{x}}L(\mathbf{x},\mu)=\mathbf{0}\implies\mathbf{x}=(\frac{1}{\mu}\mathbf{I}+\mathbf{A}^ {T}\mathbf{A})^{-1}\mathbf{A}^{T}\mathbf{b}\) if \(\mu\neq 0\) and \(\mathbf{x}=0\) if \(\mu=0\).
We note that if \(\mathbf{0}\in\mathcal{X}\), then \(\mathbf{0}\) is trivially an optimal solution to (9) with optimal value given by \(0\). This corresponds to the degenerate case. In the nondegenerate case, we have \(\mathbf{0}\notin\mathcal{X}\). This condition, coupled with the first and fourth KKT conditions implies that at optimality, we have \(\mu\neq 0\) and \(\mathbf{x}=(\frac{1}{\mu}\mathbf{I}+\mathbf{A}^{T}\mathbf{A})^{-1}\mathbf{A}^{T}\mathbf{b}\). Next, we note that the unconstrained quadratic optimization problem given by (10) has an optimal solution \(\mathbf{x}^{\star}\) given by \(\mathbf{x}^{\star}=(\lambda_{\epsilon}\mathbf{I}+\mathbf{A}^{T}\mathbf{A})^{-1}\mathbf{A}^{T}\mathbf{b}\). Finally, we observe that the two preceding expressions are the same when \(\lambda_{\epsilon}=\frac{1}{\mu}>0\). Thus, we have
\[\|\bar{\mathbf{x}}\|_{2}^{2}\geq\min_{\mathbf{x}\in\mathcal{X}}\|\mathbf{x}\|_{2}^{2}=\Big{\|}\Big{(}\lambda_{\epsilon}\mathbf{I}+\mathbf{A}^{T}\mathbf{A}\Big{)}^{-1}\mathbf{A}^{T}\mathbf{b}\Big{\|}_{2}^{2},\]
which implies that
\[\|\bar{\mathbf{x}}\|_{0}\leq\|\tilde{\mathbf{x}}\|_{0}+\frac{1}{\gamma}\bigg{(}\|\tilde{\mathbf{x}}\|_{2}^{2}-\Big{\|}\Big{(}\lambda_{\epsilon}\mathbf{I}+\mathbf{A}^{T}\mathbf{A}\Big{)}^{-1}\mathbf{A}^{T}\mathbf{b}\Big{\|}_{2}^{2}\bigg{)}.\]
This completes the proof.
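The KKT analysis above also yields a practical recipe for evaluating the lower bound \(\min_{\mathbf{x}\in\mathcal{X}}\|\mathbf{x}\|_{2}^{2}\): the residual \(\|\mathbf{A}\mathbf{x}(\lambda)-\mathbf{b}\|_{2}^{2}\) of the ridge solution \(\mathbf{x}(\lambda)=(\lambda\mathbf{I}+\mathbf{A}^{T}\mathbf{A})^{-1}\mathbf{A}^{T}\mathbf{b}\) is increasing in \(\lambda\), so \(\lambda_{\epsilon}\) can be located by bisection. The sketch below assumes the nondegenerate case; the bracket `[lo, hi]` and iteration count are illustrative choices.

```python
import numpy as np

def min_norm_point(A, b, eps, lo=1e-10, hi=1e10, iters=200):
    """Minimum-norm point of X = {x : ||Ax - b||^2 <= eps} via a ridge path."""
    n = A.shape[1]
    G, c = A.T @ A, A.T @ b

    def x_of(lam):
        return np.linalg.solve(lam * np.eye(n) + G, c)

    def resid(lam):
        return float(np.sum((A @ x_of(lam) - b) ** 2))

    assert resid(lo) < eps < float(b @ b), "nondegenerate case required"
    for _ in range(iters):
        mid = np.sqrt(lo * hi)            # bisect on the order of magnitude
        lo, hi = (mid, hi) if resid(mid) < eps else (lo, mid)
    return x_of(lo)                        # ~ (lambda_eps I + A^T A)^{-1} A^T b
```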
**Remark 1**.: _Though the statement of Theorem 2 is made for any fixed \(\gamma\) satisfying \(0<\gamma<\gamma_{0}\) with \(\gamma_{0}\) given by Theorem 1, we note that the proof of Theorem 2 in fact generalizes to any \(\gamma>0\). This implies that the result of Theorem 1 holds for any \(\gamma_{0}^{\prime}\) satisfying \(\gamma_{0}^{\prime}>\bigg{(}\|\tilde{\mathbf{x}}\|_{2}^{2}-\Big{\|}\Big{(}\lambda_{\epsilon}\mathbf{I}+\mathbf{A}^{T}\mathbf{A}\Big{)}^{-1}\mathbf{A}^{T}\mathbf{b}\Big{\|}_{2}^{2}\bigg{)}\). This threshold is no larger than the one established by Theorem 1, but it has the drawback of depending on the value \(\lambda_{\epsilon}\), which in general cannot be computed easily._
Theorem 2 provides a worst case guarantee on the sparsity of the solution of (3) when the regularization parameter \(\gamma\) satisfies \(0<\gamma<\gamma_{0}\).
## 4 An Exact Reformulation and Convex Relaxations
In this section, we reformulate (4) as a mixed integer second order cone optimization problem. We then employ the perspective relaxation [37] to construct a second order cone relaxation for (4) and demonstrate that under certain conditions on the regularization parameter \(\gamma\), the resulting relaxation is equivalent to the Weighted Basis Pursuit Denoising problem given by (7). As a special case, we obtain a convex relaxation for (3) and demonstrate that it is equivalent to (5) under the same conditions on \(\gamma\). Finally, we present a family of semidefinite relaxations to (4) using techniques from polynomial optimization.
To model the sparsity of the vector \(\mathbf{x}\) in (4), we introduce binary variables \(\mathbf{z}\in\{0,1\}^{n}\) and require that \(x_{i}=z_{i}x_{i}\). This gives the following reformulation of (4):
\[\begin{split}\min_{\mathbf{z},\mathbf{x}\in\mathbb{R}^{n}}& \sum_{i=1}^{n}z_{i}+\frac{1}{\gamma}\sum_{i=1}^{n}w_{i}^{2}x_{i}^{2}\\ \text{s.t.}&\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\leq \epsilon,\ x_{i}=z_{i}x_{i}\ \forall\ i,\ z_{i}\in\{0,1\}\ \forall\ i.\end{split} \tag{11}\]
The constraints \(x_{i}=z_{i}x_{i}\) in (11) are nonconvex in the decision variables \((\mathbf{x},\mathbf{z})\). To deal with these constraints, we make use of the perspective reformulation [37]. Specifically,
we introduce non-negative variables \(\mathbf{\theta}\in\mathbb{R}_{+}^{n}\) where \(\theta_{i}\) models \(x_{i}^{2}\) and introduce the constraints \(\theta_{i}z_{i}\geq x_{i}^{2}\), which are second order cone representable. Thus, if \(z_{i}=0\), we will have \(x_{i}=0\). This results in the following reformulation of (11):
\[\begin{split}\min_{\mathbf{z},\mathbf{x},\mathbf{\theta}\in\mathbb{R}^{n}}& \sum_{i=1}^{n}z_{i}+\frac{1}{\gamma}\sum_{i=1}^{n}w_{i}^{2}\theta_{i}\\ \text{s.t.}&\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\leq \epsilon,\ x_{i}^{2}\leq z_{i}\theta_{i}\ \forall\ i,\\ & z_{i}\in\{0,1\}\ \forall\ i,\ \theta_{i}\geq 0\ \forall\ i.\end{split} \tag{12}\]
**Theorem 3**.: _The mixed integer second order cone problem given by (12) is an exact reformulation of (4)._
Proof.: We show that given a feasible solution to (4), we can construct a feasible solution to (12) that achieves the same objective value and vice versa.
Consider an arbitrary solution \(\bar{\mathbf{x}}\) to (4). Let \(\bar{\mathbf{z}}\in\mathbb{R}^{n}\) be the binary vector obtained by setting \(\bar{z}_{i}=\mathbb{1}\left\{\bar{x}_{i}\neq 0\right\}\) and let \(\bar{\mathbf{\theta}}\in\mathbb{R}^{n}\) be the vector obtained by setting \(\bar{\theta}_{i}=\bar{x}_{i}^{2}\). We have \(\|\mathbf{A}\bar{\mathbf{x}}-\mathbf{b}\|_{2}^{2}\leq\epsilon\), \(\bar{z}_{i}\bar{\theta}_{i}=\mathbb{1}\left\{\bar{x}_{i}\neq 0\right\}\cdot \bar{x}_{i}^{2}=\bar{x}_{i}^{2}\), \(\bar{\mathbf{z}}\in\{0,1\}^{n}\) and \(\bar{\theta}_{i}\geq 0\) so the solution \((\bar{\mathbf{x}},\bar{\mathbf{z}},\bar{\mathbf{\theta}})\) is feasible to (12). Lastly, notice that we have
\[\sum_{i=1}^{n}\bar{z}_{i}+\frac{1}{\gamma}\sum_{i=1}^{n}w_{i}^{2}\bar{\theta}_ {i}=\sum_{i=1}^{n}\mathbb{1}\left\{\bar{x}_{i}\neq 0\right\}+\frac{1}{ \gamma}\sum_{i=1}^{n}w_{i}^{2}\bar{x}_{i}^{2}=\|\bar{\mathbf{x}}\|_{0}+\frac{1}{ \gamma}\|\mathbf{W}\bar{\mathbf{x}}\|_{2}^{2}.\]
Thus, the solution \((\bar{\mathbf{x}},\bar{\mathbf{z}},\bar{\mathbf{\theta}})\) is a feasible solution to (12) that achieves the same objective value as \(\bar{\mathbf{x}}\) does in (4).
Consider now an arbitrary solution \((\bar{\mathbf{x}},\bar{\mathbf{z}},\bar{\mathbf{\theta}})\) to (12). Since we have \(\|\mathbf{A}\bar{\mathbf{x}}-\mathbf{b}\|_{2}^{2}\leq\epsilon\), \(\bar{\mathbf{x}}\) is feasible to (4). Next, we note that the constraints \(x_{i}^{2}\leq z_{i}\theta_{i}\) and \(z_{i}\in\{0,1\}\) imply that \(\bar{z}_{i}\geq\mathbb{1}\left\{\bar{x}_{i}\neq 0\right\}\) and \(\bar{\theta}_{i}\geq\bar{x}_{i}^{2}\). Finally, we observe that
\[\|\bar{\mathbf{x}}\|_{0}+\frac{1}{\gamma}\|\mathbf{W}\bar{\mathbf{x}}\|_{2}^{2}=\sum_{i=1}^ {n}\mathbb{1}\left\{\bar{x}_{i}\neq 0\right\}+\frac{1}{\gamma}\sum_{i=1}^{n}w_{i}^{2} \bar{x}_{i}^{2}\leq\sum_{i=1}^{n}\bar{z}_{i}+\frac{1}{\gamma}\sum_{i=1}^{n}w_{i }^{2}\bar{\theta}_{i}.\]
Thus, the solution \(\bar{\mathbf{x}}\) is a feasible solution to (4) that achieves an objective value equal to or less than the objective value that \((\bar{\mathbf{x}},\bar{\mathbf{z}},\bar{\mathbf{\theta}})\) achieves in (12). This completes the proof.
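Formulation (12) maps directly onto a convex modelling language: the perspective constraint \(x_{i}^{2}\leq z_{i}\theta_{i}\) with \(\theta_{i}\geq 0\) is exactly the DCP constraint `quad_over_lin(x[i], theta[i]) <= z[i]`. The sketch below assumes cvxpy together with a mixed integer conic solver (e.g. Gurobi or MOSEK); the `relax` flag drops integrality, which recovers the relaxation studied in the next subsection.

```python
import cvxpy as cp
import numpy as np

def perspective_formulation(A, b, eps, gamma, w=None, relax=False):
    n = A.shape[1]
    w2 = np.ones(n) if w is None else np.asarray(w) ** 2
    x = cp.Variable(n)
    theta = cp.Variable(n, nonneg=True)
    z = cp.Variable(n) if relax else cp.Variable(n, boolean=True)
    cons = [cp.sum_squares(A @ x - b) <= eps]
    if relax:
        cons += [z >= 0, z <= 1]           # z in conv({0,1}^n)
    # x_i^2 <= z_i * theta_i as a rotated second order cone constraint
    cons += [cp.quad_over_lin(x[i], theta[i]) <= z[i] for i in range(n)]
    obj = cp.sum(z) + cp.sum(cp.multiply(w2, theta)) / gamma
    prob = cp.Problem(cp.Minimize(obj), cons)
    prob.solve()  # for relax=False, a MI conic solver is needed, e.g. solver=cp.GUROBI
    return x.value, z.value, prob.value
```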
### A Second Order Cone Relaxation
Problem (12) is a reformulation of Problem (4) where the problem's nonconvexity is entirely captured by the binary variables \(\mathbf{z}\). We now obtain a convex relaxation of (4) by solving (12) with \(\mathbf{z}\in\text{conv}(\{0,1\}^{n})=[0,1]^{n}\). This gives the following convex optimization problem:
\[\begin{array}{ll}\min_{\mathbf{z},\mathbf{x},\mathbf{\theta}\in\mathbb{R}^{n}}& \sum_{i=1}^{n}z_{i}+\frac{1}{\gamma}\sum_{i=1}^{n}w_{i}^{2} \theta_{i}\\ \text{s.t.}&\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\leq\epsilon,\ x_{i}^{2} \leq z_{i}\theta_{i}\ \forall\ i,\\ &0\leq z_{i}\leq 1\ \forall\ i,\ \theta_{i}\geq 0\ \forall\ i.\end{array} \tag{13}\]
A natural question to ask is how problem (13) compares to the Weighted Basis Pursuit Denoising problem given by (7), a common convex approximation for CS in the noisy setting. Surprisingly, under mild conditions on the regularization parameter \(\gamma\), it can be shown that solving (13) is exactly equivalent to solving (7). This implies that though Basis Pursuit Denoising is typically motivated as a convex approximation to CS in the presence of noise, it can alternatively be understood as the natural convex relaxation of the mixed integer second order cone problem given by (12) for appropriately chosen values of \(\gamma\). We formalize this statement in Theorem 4.
**Theorem 4**.: _There exists a finite value \(\gamma_{0}<\infty\) such that for all \(\bar{\gamma}\geq\gamma_{0}\), any vector \(\mathbf{x}^{\star}\) that is an optimal solution of (7) is also an optimal solution of (13) where we set \(\gamma=\bar{\gamma}\). Let \(\mathcal{X}=\{\mathbf{x}:\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\leq\epsilon\}\), the feasible set of (7). We can take \(\gamma_{0}=\max_{\mathbf{x}\in\mathcal{X}}\|\mathbf{W}\mathbf{x}\|_{\infty}^{2}\)._
Proof.: Rewrite (13) as the two stage optimization problem given by (14).
\[\begin{array}{ll}\min_{\mathbf{x}\in\mathcal{X}}&\min_{\mathbf{z},\mathbf{\theta}\in \mathbb{R}^{n}}&\sum_{i=1}^{n}z_{i}+\frac{1}{\gamma}\sum_{i=1}^{n}w_{i}^{2} \theta_{i}\\ &\text{s.t.}&x_{i}^{2}\leq z_{i}\theta_{i}\ \forall\ i,\ 0\leq z_{i}\leq 1\ \forall\ i,\ \theta_{i}\geq 0\ \forall\ i.\end{array} \tag{14}\]
Let \(\gamma_{0}=\max_{\mathbf{x}\in\mathcal{X}}\|\mathbf{W}\mathbf{x}\|_{\infty}^{2}\). To establish the result, we will show that for any \(\mathbf{x}\in\mathcal{X}\) the optimal value of the inner minimization problem in (14) is a scalar multiple of the \(\ell_{1}\) norm of \(\mathbf{W}\mathbf{x}\) provided that \(\gamma\geq\gamma_{0}\).
Fix \(\gamma\geq\gamma_{0}\) and consider any \(\bar{\mathbf{x}}\in\mathcal{X}\). We make three observations that allow us to reformulate the inner minimization problem in (14):
1. The objective function of the inner minimization problem is separable.
2. For any \(i\) such that \(\bar{x}_{i}=0\), it is optimal to set \(z_{i}=\theta_{i}=0\) which results in no contribution to the objective function.
3. For any \(i\) such that \(\bar{x}_{i}\neq 0\), we must have \(z_{i}>0\) and it is optimal to take \(\theta_{i}=\frac{\bar{x}_{i}^{2}}{z_{i}}\).
We can therefore equivalently express the inner minimization problem of (14) as:
\[\begin{array}{ll}\min_{\mathbf{z}\in\mathbb{R}^{n}}&\sum_{i:\bar{x}_{i}\neq 0}\left[z_{i}+\frac{w_{i}^{2}}{\gamma}\cdot\frac{\bar{x}_{i}^{2}}{z_{i}}\right]\\ \text{s.t.}&0<z_{i}\leq 1\ \forall\ i.\end{array} \tag{15}\]
Let \(f_{i}(z)=z+\frac{w_{i}^{2}}{\gamma}\cdot\frac{\bar{x}_{i}^{2}}{z}\). We want to minimize the function \(f_{i}(z)\) over the interval \((0,1]\) for all \(i\) such that \(\bar{x}_{i}\neq 0\). Fix an arbitrary \(i\) satisfying \(\bar{x}_{i}\neq 0\). We have \(\frac{d}{dz}f_{i}(z)=1-\frac{w_{i}^{2}}{\gamma}\cdot\frac{\bar{x}_{i}^{2}}{z^{2}}\) and \(\frac{d}{dz}f_{i}(z^{*})=0\iff z^{*}=\pm\frac{w_{i}}{\sqrt{\gamma}}|\bar{x}_{i}|\). The condition \(\gamma\geq\gamma_{0}=\max_{\mathbf{x}\in\mathcal{X}}\|\mathbf{W}\mathbf{x}\|_{\infty}^{2}\) and the fact that \(\bar{\mathbf{x}}\in\mathcal{X}\) imply that \(1\geq\frac{w_{i}^{2}\bar{x}_{i}^{2}}{\gamma}\) for all \(i\). Thus, we have \(0<\frac{w_{i}}{\sqrt{\gamma}}|\bar{x}_{i}|\leq 1\). Let
\(\bar{z}=\frac{w_{i}}{\sqrt{\gamma}}|\bar{x}_{i}|\). Noting that \(\lim_{z\to 0^{+}}f_{i}(z)=\infty\), the minimum of \(f_{i}(z)\) over the interval \((0,1]\) must occur either at \(1\) or \(\bar{z}\). We have
\[\left(\frac{w_{i}}{\sqrt{\gamma}}|\bar{x}_{i}|-1\right)^{2}\geq 0\implies \frac{w_{i}^{2}\bar{x}_{i}^{2}}{\gamma}-\frac{2w_{i}}{\sqrt{\gamma}}|\bar{x}_ {i}|+1\geq 0\implies f_{i}(1)=\frac{w_{i}^{2}\bar{x}_{i}^{2}}{\gamma}+1 \geq\frac{2w_{i}}{\sqrt{\gamma}}|\bar{x}_{i}|=f_{i}(\bar{z}).\]
Therefore, the minimum of \(f_{i}(z)\) on \((0,1]\) occurs at \(\bar{z}=\frac{w_{i}}{\sqrt{\gamma}}|\bar{x}_{i}|\) and is equal to \(f_{i}(\bar{z})=\frac{2w_{i}}{\sqrt{\gamma}}|\bar{x}_{i}|\). This allows us to conclude that the optimal value of the inner minimization problem in (14) is given by:
\[\sum_{i:\bar{x}_{i}\neq 0}\frac{2w_{i}}{\sqrt{\gamma}}|\bar{x}_{i}|=\sum_{i=1}^{n}\frac{2w_{i}}{\sqrt{\gamma}}|\bar{x}_{i}|=\frac{2}{\sqrt{\gamma}}\|\mathbf{W}\bar{\mathbf{x}}\|_{1}.\]
We have shown that for fixed \(\mathbf{x}\in\mathcal{X}\), the optimal value of the inner minimization problem of (14) is a scalar multiple of the \(\ell_{1}\) norm of \(\mathbf{W}\mathbf{x}\). We can rewrite (14) as
\[\min_{\mathbf{x}\in\mathcal{X}}\frac{2}{\sqrt{\gamma}}\|\mathbf{W}\mathbf{x}\|_{1}, \tag{16}\]
which has the same set of optimal solutions as (7) because this set is invariant under scaling of the objective function. This completes the proof.
**Remark 2**.: _Note that by taking \(\mathbf{W}=\mathbf{I}\), it immediately follows from Theorem 4 that any vector \(\mathbf{x}^{\star}\) that is an optimal solution of (5) is also an optimal solution of (13) when we set \(\gamma\geq\gamma_{0}=\max_{\mathbf{x}\in\mathcal{X}}\|\mathbf{x}\|_{\infty}^{2}\)._
Convex relaxations of nonconvex optimization problems are helpful for two reasons. Firstly, a convex relaxation provides a lower (upper) bound on a minimization (maximization) problem which, given a feasible solution to the nonconvex optimization problem, provides a certificate of worst case suboptimality. Secondly, convex relaxations can often be used as building blocks in the construction of global optimization algorithms or heuristics for nonconvex optimization problems. Strong convex relaxations are desirable because they produce tighter bounds on the optimal value of the problem of interest (stronger certificates of worst case suboptimality) and generally lead to more performant global optimization algorithms and heuristics. Let \(\mathcal{X}_{1}=\{(\mathbf{z},\mathbf{x},\mathbf{\theta})\in\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n}:\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\leq\epsilon,x_{i}^{2}\leq z_{i}\theta_{i}\;\forall\;i,\theta_{i}\geq 0\;\forall\;i\}\) and \(\mathcal{X}_{2}=\{(\mathbf{z},\mathbf{x},\mathbf{\theta})\in\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n}:\mathbf{z}\in\{0,1\}^{n}\}\). We can equivalently write (12) as:
\[\min_{(\mathbf{z},\mathbf{x},\mathbf{\theta})\in\mathcal{X}_{1}\cap\mathcal{X}_{2}}\sum_{i =1}^{n}z_{i}+\frac{1}{\gamma}\sum_{i=1}^{n}w_{i}^{2}\theta_{i}.\]
The strongest possible convex relaxation to (12) would be obtained by minimizing the objective function in (12) subject to the constraint that \((\mathbf{z},\mathbf{x},\mathbf{\theta})\in\mathrm{conv}(\mathcal{X}_{1}\cap\mathcal{X}_{2})\). Since the objective function is linear in the decision variables, solving over \(\mathrm{conv}(\mathcal{X}_{1}\cap\mathcal{X}_{2})\) would produce an optimal solution to (12) since the objective would be minimized at an extreme point of \(\mathrm{conv}(\mathcal{X}_{1}\cap\mathcal{X}_{2})\) which by definition must be an element of \(\mathcal{X}_{1}\cap\mathcal{X}_{2}\).
Unfortunately, in general it is hard to represent \(\mathrm{conv}(\mathcal{X}_{1}\cap\mathcal{X}_{2})\) explicitly. The relaxation given by (13) consists of minimizing the objective function of (12) subject to the constraint that \((\mathbf{z},\mathbf{x},\mathbf{\theta})\in(\mathrm{conv}(\mathcal{X}_{1})\cap\mathrm{conv}( \mathcal{X}_{2}))=\mathcal{X}_{1}\cap\mathrm{conv}(\mathcal{X}_{2})\supseteq \mathrm{conv}(\mathcal{X}_{1}\cap\mathcal{X}_{2})\).
Stronger convex relaxations of (12) can be obtained by introducing additional valid inequalities into (12) and then relaxing the integrality constraint on \(\mathbf{z}\). For example, suppose we know a value \(M\geq\gamma_{0}=\max_{\mathbf{x}\in\mathcal{X}}\|\mathbf{W}\mathbf{x}\|_{\infty}^{2}\). We can use this value to introduce Big-M constraints similar in flavour to the formulation proposed by [11]. Under this assumption, it follows immediately that any feasible solution to (12) satisfies \(-Mz_{i}\leq w_{i}x_{i}\leq Mz_{i}\ \forall\ i\). Thus, we can obtain another convex relaxation of (12) by minimizing its objective function subject to the constraint \((\mathbf{z},\mathbf{x},\mathbf{\theta})\in\hat{\mathcal{X}}_{1}\cap\mathrm{conv}(\mathcal{X}_{2})\supseteq\mathrm{conv}(\mathcal{X}_{1}\cap\mathcal{X}_{2})\) where we define \(\hat{\mathcal{X}}_{1}=\mathcal{X}_{1}\cap\{(\mathbf{z},\mathbf{x},\mathbf{\theta})\in\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n}:-Mz_{i}\leq w_{i}x_{i}\leq Mz_{i}\ \forall\ i\}\). Explicitly, with knowledge of such a value \(M\) we can solve
\[\min_{\mathbf{z},\mathbf{x},\mathbf{\theta}\in\mathbb{R}^{n}} \sum_{i=1}^{n}z_{i}+\frac{1}{\gamma}\sum_{i=1}^{n}w_{i}^{2}\theta _{i} \tag{17}\] \[\mathrm{s.t.} \|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\leq\epsilon,\ x_{i}^{2}\leq z_{i }\theta_{i}\ \forall\ i,\] \[-Mz_{i}\leq w_{i}x_{i}\leq Mz_{i}\ \forall\ i,\ 0\leq z_{i}\leq 1 \ \forall\ i,\ \theta_{i}\geq 0\ \forall\ i.\]
**Remark 3**.: _Given any input data \(\mathbf{A},\mathbf{b},\epsilon\), if \(M\) satisfies \(M\geq\gamma_{0}=\max_{\mathbf{x}\in\mathcal{X}}\|\mathbf{W}\mathbf{x}\|_{\infty}^{2}\), then the optimal value of (17) is no less than the optimal value of (13). This follows immediately by noting that under the condition on \(M\), the feasible set of (17) is contained in the feasible set of (13)._
The mixed integer second order cone reformulation and convex relaxation introduced in this section lead to two approaches for solving (4) to certifiable optimality. On the one hand, solvers like Gurobi contain direct support for solving mixed integer second order cone problems, so problem (4) can be solved directly. On the other hand, it is possible to develop a custom branch-and-bound routine that leverages a modification of (7) to compute lower bounds. We illustrate this in Section 5. This custom, problem-specific approach outperforms Gurobi because (5) is a more tractable problem than (13), due in part to the presence of fewer second order cone constraints, which decreases the computational time spent computing lower bounds.
### A Positive Semidefinite Cone Relaxation
In this section, we formulate (4) as a polynomial optimization problem and present a semidefinite relaxation using the sum of squares (SOS) hierarchy [38]. We show that this semidefinite relaxation is tighter than the second order cone relaxation presented previously.
Let \(f(\mathbf{z},\mathbf{x})=\sum_{i=1}^{n}z_{i}+\frac{1}{\gamma}\sum_{i=1}^{n}w_{i}^{2}x_{i}^{2}\) denote the objective function of (12) written in the original variables, substituting \(\theta_{i}=x_{i}^{2}\). Notice that the constraint \(\mathbf{z}\in\{0,1\}^{n}\) in (12) is equivalent to the constraint \(\mathbf{z}\circ\mathbf{z}=\mathbf{z}\) (where \(\circ\) denotes the element wise product). With this observation, we can express the feasible set of (12) as the semialgebraic set given by:
\[\Omega=\{(\mathbf{z},\mathbf{x})\in\mathbb{R}^{n}\times\mathbb{R}^{n}:\epsilon-\|\mathbf{ A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\geq 0,x_{i}z_{i}-x_{i}=0\ \forall\ i,z_{i}^{2}-z_{i}=0\ \forall\ i\}.\]
Thus, we can equivalently write (12) as \(\min_{(\mathbf{z},\mathbf{x})\in\Omega}f(\mathbf{z},\mathbf{x})\). It is not difficult to see that the preceding optimization problem has the same optimal value as the problem given by
\[\max_{\lambda\in\mathbb{R}}\quad\lambda\text{ s.t. }f(\mathbf{z},\mathbf{x})-\lambda \geq 0\ \forall\ (\mathbf{z},\mathbf{x})\in\Omega. \tag{18}\]
Problem (18) is a polynomial optimization problem that has the same optimal value as (4).
We can obtain tractable lower bounds for (18) by leveraging techniques from sum of squares (SOS) optimization [39, 40]. A polynomial \(g\in\mathbb{R}[x]\) is said to be sum of squares (SOS) if for some \(K\in\mathbb{N}\) there exists polynomials \(\{g_{k}\}_{k=1}^{K}\subset\mathbb{R}[x]\) such that \(g=\sum_{k=1}^{K}g_{k}^{2}\). We denote the set of all SOS polynomials as \(\Sigma^{2}[x]\). Moreover, we denote the set of polynomials of degree at most \(d\) as \(\mathbb{R}_{d}[x]\subset\mathbb{R}[x]\) and we denote the set of SOS polynomials of degree at most \(2d\) as \(\Sigma^{2}_{d}[x]\subset\Sigma^{2}[x]\). It is trivial to see that any polynomial that is SOS is globally non-negative. More generally, SOS polynomials can be utilized to model polynomial non-negativity over arbitrary semialgebraic sets. The quadratic module associated with the semialgebraic set \(\Omega\) is defined as:
\[\begin{split} QM(\Omega)=&\ \bigg{\{}s_{0}(\mathbf{z},\mathbf{x})+s_{1}(\mathbf{z},\mathbf{x})( \epsilon-\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2})+\sum_{i=1}^{n}t_{i}(\mathbf{z},\mathbf{x})(x _{i}z_{i}-x_{i})\\ &+\sum_{i=1}^{n}r_{i}(\mathbf{z},\mathbf{x})(z_{i}^{2}-z_{i}):s_{0},s_{1 }\in\Sigma^{2}[\mathbf{z},\mathbf{x}],t_{i},r_{i}\in\mathbb{R}[\mathbf{z},\mathbf{x}]\ \forall\ i\bigg{\}}.\end{split} \tag{19}\]
It is straightforward to see that if a function \(h(\mathbf{z},\mathbf{x})\) is an element of \(QM(\Omega)\), then \(h(\mathbf{z},\mathbf{x})\) is non-negative on \(\Omega\) (since for points in \(\Omega\), \(h(\mathbf{z},\mathbf{x})\) takes the form of the sum of two SOS polynomials). Thus, membership in \(QM(\Omega)\) is a sufficient condition for non-negativity on \(\Omega\). We further define the restriction of \(QM(\Omega)\) to polynomials of degree at most \(2d\) as:
\[\begin{split} QM_{d}(\Omega)&=\ \bigg{\{}s_{0}(\mathbf{z},\mathbf{x})+s_{1}(\mathbf{z},\mathbf{x})(\epsilon-\|\mathbf{A}\mathbf{x}-\mathbf{ b}\|_{2}^{2})+\sum_{i=1}^{n}t_{i}(\mathbf{z},\mathbf{x})(x_{i}z_{i}-x_{i})\\ &+\sum_{i=1}^{n}r_{i}(\mathbf{z},\mathbf{x})(z_{i}^{2}-z_{i}):s_{0}\in \Sigma^{2}_{d}[\mathbf{z},\mathbf{x}],s_{1}\in\Sigma^{2}_{d-1}[\mathbf{z},\mathbf{x}],t_{i},r _{i}\in\mathbb{R}_{2d-2}[\mathbf{z},\mathbf{x}]\ \forall\ i\bigg{\}}.\end{split} \tag{20}\]
It is immediate that \(QM_{d}(\Omega)\subset QM(\Omega)\) and membership in \(QM_{d}(\Omega)\) provides a certificate of non-negativity on \(\Omega\). Importantly, given an arbitrary polynomial \(h(\mathbf{z},\mathbf{x})\) it is possible to verify membership in \(QM_{d}(\Omega)\) by checking feasibility of a semidefinite program. Thus, for any \(d\in\mathbb{N}\), we obtain a semidefinite relaxation of (4) by solving:
\[\max_{\lambda\in\mathbb{R}}\quad\lambda\text{ s.t. }f(\mathbf{z},\mathbf{x})-\lambda \in QM_{d}(\Omega). \tag{21}\]
Since \(QM_{d}(\Omega)\subset QM_{d+1}(\Omega)\), (21) produces an increasingly strong lower bound with increasing values of \(d\). A natural question to ask is how the relaxation given by (21) compares to that given by (13). We answer this question in Theorem 5.
**Theorem 5**.: _For every \(d\geq 1\), the optimal value of (21) is no less than the optimal value of (13)._
Proof.: Without loss of generality, we take \(\mathbf{W}=\mathbf{I}\). We prove the result for \(\gamma\geq\gamma_{0}=\max_{\mathbf{x}\in\mathcal{X}}\|\mathbf{x}\|_{\infty}^{2}\) though the result extends naturally to the case of arbitrary \(\gamma\). Fix any \(\epsilon>0\), \(\mathbf{A}\in\mathbb{R}^{m\times n}\) and \(\mathbf{b}\in\mathbb{R}^{m}\). By Theorem 4, (13) has the same optimal value as (7). Consider the dual of (7) which for \(\mathbf{W}=\mathbf{I}\) is given by
\[\max_{\mathbf{\nu}\in\mathbb{R}^{m}}\ \ \mathbf{b}^{T}\mathbf{\nu}-\sqrt{\epsilon}\|\mathbf{\nu} \|_{2}\ \text{s.t.}\ |\mathbf{\nu}^{T}A_{i}|\leq\frac{2}{\sqrt{\gamma}}\ \ \forall\ i. \tag{22}\]
Strong duality holds between (22) and (7) since \(\mathbf{\nu}=0\) is always a strictly feasible point in (22) [41]. Fix \(d=1\). We will show that for any feasible solution to (22), we can construct a feasible solution to (21) that achieves the same objective value. Let \(\bar{\mathbf{\nu}}\in\mathbb{R}^{m}\) denote an arbitrary feasible solution to (22). Define \(\bar{r}_{i}(\mathbf{z},\mathbf{x})=-1,\bar{t}_{i}(\mathbf{z},\mathbf{x})=A_{i}^{T}\bar{\mathbf{\nu}}\) for all \(i\), \(\bar{s}_{1}(\mathbf{z},\mathbf{x})=\tau\) and define \(\bar{s}_{0}(\mathbf{z},\mathbf{x})=\text{monomial}(\mathbf{z},\mathbf{x},1)^{T}\bar{\mathbf{S}}\,\text{monomial}(\mathbf{z},\mathbf{x},1)\) where \(\text{monomial}(\mathbf{z},\mathbf{x},1)\in\mathbb{R}[\mathbf{z},\mathbf{x}]^{2n+1}\) is the vector of monomials in \(\mathbb{R}[\mathbf{z},\mathbf{x}]\) of degree at most \(1\) and \(\bar{\mathbf{S}}\in\mathbb{R}^{(2n+1)\times(2n+1)}\) is given by
\[\bar{\mathbf{S}}=\begin{bmatrix}\frac{1}{\gamma}\mathbf{I}_{n}+\tau\mathbf{A}^{T}\mathbf{A}& \text{diag}(\frac{-\mathbf{A}^{T}\bar{\mathbf{\nu}}}{2})&\mathbf{A}^{T}(\frac{1}{2}\bar{\bm {\nu}}-\tau\mathbf{b})\\ \hline\text{diag}(\frac{-\mathbf{A}^{T}\bar{\mathbf{\nu}}}{2})&\mathbf{I}_{n}&\mathbf{0}_{n}\\ \hline(\frac{1}{2}\bar{\mathbf{\nu}}^{T}-\tau\mathbf{b}^{T})\mathbf{A}&\mathbf{0}_{n}^{T}& \tau(\mathbf{b}^{T}\mathbf{b}-\epsilon)-\bar{\mathbf{\nu}}^{T}\mathbf{b}+\sqrt{\epsilon}\| \bar{\mathbf{\nu}}\|_{2}\end{bmatrix}.\]
Clearly, we have \(\bar{t}_{i},\bar{r}_{i}\in\mathbb{R}_{0}[\mathbf{z},\mathbf{x}]\) for all \(i\) and \(\bar{s}_{1}\in\Sigma_{0}^{2}[\mathbf{z},\mathbf{x}]\) provided \(\tau\geq 0\). We claim that \(\bar{s}_{0}\in\Sigma_{1}^{2}[\mathbf{z},\mathbf{x}]\) for an appropriately chosen value of \(\tau\geq 0\). To see this, note that by the generalized Schur complement lemma (see Boyd et al. 1994, Equation 2.41), \(\bar{\mathbf{S}}\succeq 0\) if and only if \(\begin{pmatrix}\mathbf{I}_{n}&\mathbf{0}_{n}\\ \mathbf{0}_{n}^{T}&\sigma\end{pmatrix}\succeq 0\) and \(\frac{1}{\gamma}\mathbf{I}_{n}+\tau\mathbf{A}^{T}\mathbf{A}-\text{diag}(\frac{-\mathbf{A}^{T} \bar{\mathbf{\nu}}}{2})^{2}-\frac{1}{4\sigma}\mathbf{A}^{T}(\frac{1}{2}\bar{\mathbf{\nu}} -\tau\mathbf{b})(\frac{1}{2}\bar{\mathbf{\nu}}^{T}-\tau\mathbf{b}^{T})\mathbf{A}\succeq 0\) where we let \(\sigma=\tau(\mathbf{b}^{T}\mathbf{b}-\epsilon)-\bar{\mathbf{\nu}}^{T}\mathbf{b}+\sqrt{\epsilon} \|\bar{\mathbf{\nu}}\|_{2}\). The first condition is satisfied provided that \(\sigma\geq 0\iff\tau\geq\frac{\bar{\mathbf{\nu}}^{T}\mathbf{b}-\sqrt{\epsilon}\|\bar{\mathbf{ \nu}}\|_{2}}{(\mathbf{b}^{T}\mathbf{b}-\epsilon)}>0\). Moreover, it is possible to choose \(\tau\geq\frac{\bar{\mathbf{\nu}}^{T}\mathbf{b}-\sqrt{\epsilon}\|\bar{\mathbf{\nu}}\|_{2}}{ (\mathbf{b}^{T}\mathbf{b}-\epsilon)}\) such that the matrix \(-\frac{1}{4\sigma}\mathbf{A}^{T}(\frac{1}{2}\bar{\mathbf{\nu}}-\tau\mathbf{b})(\frac{1}{2} \bar{\mathbf{\nu}}^{T}-\tau\mathbf{b}^{T})\mathbf{A}\) is positive semidefinite, while the matrix \(\frac{1}{\gamma}\mathbf{I}_{n}+\tau\mathbf{A}^{T}\mathbf{A}-\text{diag}(\frac{-\mathbf{A}^{T} \bar{\mathbf{\nu}}}{2})^{2}\) is positive semidefinite for any \(\tau\geq 0\) as long as \(|\bar{\mathbf{\nu}}^{T}A_{i}|\leq\frac{2}{\sqrt{\gamma}}\) for all \(i\) which is guaranteed by the feasibility of \(\bar{\mathbf{\nu}}\) in (22). Thus, we have \(\bar{\mathbf{S}}\succeq 0\implies\bar{s}_{0}\in\Sigma_{1}^{2}[\mathbf{z},\mathbf{x}]\). Finally, we note that
\[\bar{s}_{0}+\bar{s}_{1}(\epsilon-\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2})+\sum_{i=1}^{n} \bar{t}_{i}(x_{i}z_{i}-x_{i})+\sum_{i=1}^{n}\bar{r}_{i}(z_{i}^{2}-z_{i})=f(\mathbf{ z},\mathbf{x})-\mathbf{b}^{T}\bar{\mathbf{\nu}}+\sqrt{\epsilon}\|\bar{\mathbf{\nu}}\|_{2}\]
We have shown that given an arbitrary feasible solution to (22), we can construct a solution that is feasible to (21) that achieves the same objective value. Note that this construction holds for any \(d\geq 1\). Thus, for any \(d\in\mathbb{N}\) the optimal value of (21) is at least as high as the optimal value of (13).
We have shown that for any value of \(d\), (21) produces a lower bound on the optimal value of (4) at least as strong as the bound given by (13). Unfortunately, (21) suffers from scalability challenges as it requires solving a positive semidefinite program with PSD constraints on matrices with dimension \(\binom{2n+d}{d}\times\binom{2n+d}{d}\). We further discuss the scalability of (21) in Section 6. Note that since (21) is a maximization problem, any feasible solution (in particular a nearly optimal one) still consists of a valid lower bound on the optimal value of (4).
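To make this growth concrete, the side length \(\binom{2n+d}{d}\) of the monomial basis in \((\mathbf{z},\mathbf{x})\) can be tabulated directly; even \(d=2\) is already prohibitive for modest \(n\).

```python
from math import comb

for n in (10, 25, 50):
    # PSD matrix side length at levels d = 1 and d = 2 of relaxation (21)
    print(n, comb(2 * n + 1, 1), comb(2 * n + 2, 2))
```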
## 5 Branch-and-Bound
In this section, we propose a branch-and-bound algorithm in the sense of [42, 43] that computes certifiably optimal solutions to Problem (3) by solving the mixed integer second order cone reformulation given by (12). We state explicitly our subproblem strategy in Section 5.1, before stating our overall algorithmic approach in Section 5.2.
### Subproblems
Henceforth, for simplicity we will assume the weights \(w_{i}\) take value \(1\) for all \(i\). What follows generalizes immediately to the setting where this assumption does not hold. Notice that (12) can be equivalently written as the two stage optimization problem given by \(\min_{\mathbf{z}\in\{0,1\}^{n}}h(\mathbf{z})\) where we define \(h(\mathbf{z})\) as:
\[h(\mathbf{z})=\min_{\mathbf{x},\mathbf{\theta}\in\mathbb{R}^{n}} \sum_{i=1}^{n}z_{i}+\frac{1}{\gamma}\sum_{i=1}^{n}\theta_{i}\] (23) s.t. \[\|\mathbf{Ax}-\mathbf{b}\|_{2}^{2}\leq\epsilon,\ x_{i}^{2}\leq z_{i} \theta_{i}\ \forall\ i,\ \theta_{i}\geq 0\ \forall i.\]
Note that in general, there exist binary vectors \(\bar{\mathbf{z}}\in\{0,1\}^{n}\) such that the optimization problem in (23) is infeasible. For any such \(\bar{\mathbf{z}}\), we define \(h(\bar{\mathbf{z}})=\infty\). We construct an enumeration tree that branches on the entries of the binary vector \(\mathbf{z}\) which models the support of \(\mathbf{x}\). A (partial or complete) sparsity pattern is associated with each node in the tree and is defined by disjoint collections \(\mathcal{I}_{0},\mathcal{I}_{1}\subseteq[n]\). For indices \(i\in\mathcal{I}_{0}\), we constrain \(z_{i}=0\) and for indices \(j\in\mathcal{I}_{1}\), we constrain \(z_{j}=1\). We say that \(\mathcal{I}_{0}\) and \(\mathcal{I}_{1}\) define a complete sparsity pattern if \(|\mathcal{I}_{0}|+|\mathcal{I}_{1}|=n\), otherwise we say that \(\mathcal{I}_{0}\) and \(\mathcal{I}_{1}\) define a partial sparsity pattern. A node in the tree is said to be terminal if its associated sparsity pattern is complete.
Each node in the enumeration tree has an associated subproblem, defined by the collections \(\mathcal{I}_{0}\) and \(\mathcal{I}_{1}\), which is given by:
\[\min_{\mathbf{z}\in\{0,1\}^{n}}\ \ \ h(\mathbf{z}),\ \ \ \text{s.t.}\ \ \ \ z_{i}=0\ \forall\,i\in\mathcal{I}_{0},z_{j}=1\ \forall\,j\in\mathcal{I}_{1}. \tag{24}\]
Note that if \(\mathcal{I}_{0}=\mathcal{I}_{1}=\emptyset\), (24) is equivalent to (12) (under the assumption that \(w_{i}=1\) for all \(i\)).
#### 5.1.1 Subproblem Lower Bound
Let \(\mathcal{I}=\mathcal{I}_{0}\cup\mathcal{I}_{1}\). We obtain a lower bound for (24) by relaxing the binary variables that are not fixed (\(z_{i}\) such that \(i\notin\mathcal{I}\)) to take values within the interval \([0,1]\). The resulting lower bound is given by
\[\begin{split}\min_{\mathbf{z},\mathbf{x},\mathbf{\theta}\in\mathbb{R}^{n}}& \quad\sum_{i=1}^{n}z_{i}+\frac{1}{\gamma}\sum_{i=1}^{n}\theta_{i}\\ \text{s.t.}&\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\leq\epsilon, \ x_{i}^{2}\leq z_{i}\theta_{i}\ \forall\ i,\ 0\leq z_{i}\leq 1\ \forall\ i\notin\mathcal{I},\\ & z_{i}=0\ \forall\ i\in\mathcal{I}_{0},\ z_{i}=1\ \forall\ i\in\mathcal{I}_{1},\ \theta_{i}\geq 0\ \forall\ i.\end{split} \tag{25}\]
Notice that for an arbitrary set \(\bar{\mathcal{I}}_{0}\subseteq[n]\), problems (24) and (25) are infeasible if and only if the set \(\{\mathbf{x}:\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\leq\epsilon,x_{i}=0\ \forall\ i\in\bar{\mathcal{I}}_{0}\}\) is empty. Moreover, it follows immediately that if (24) and (25) are infeasible for \(\bar{\mathcal{I}}_{0}\), then they are also infeasible for any set \(\hat{\mathcal{I}}_{0}\subseteq[n]\) satisfying \(\bar{\mathcal{I}}_{0}\subseteq\hat{\mathcal{I}}_{0}\). We use this observation in Section 5.2 to generate feasibility cuts whenever an infeasible subproblem is encountered in the branch-and-bound tree. Using a similar argument as in the proof of Theorem 4, it can be shown that when \(\gamma\geq\gamma_{0}=\max_{\mathbf{x}\in\mathcal{X}}\|\mathbf{x}\|_{\infty}^{2}\), (25) is equivalent to the convex optimization problem given by (26):
\[\begin{split}\min_{\mathbf{x}\in\mathbb{R}^{n}}&\quad| \mathcal{I}_{1}|+\frac{1}{\gamma}\sum_{i\in\mathcal{I}_{1}}x_{i}^{2}+\frac{2}{ \sqrt{\gamma}}\sum_{i\notin\mathcal{I}}|x_{i}|\\ \text{s.t.}&\|\mathbf{Ax}-\mathbf{b}\|_{2}^{2}\leq\epsilon, \ x_{i}=0\ \forall\ i\in\mathcal{I}_{0},\end{split} \tag{26}\]
where if \(\mathbf{x}^{\star}\) is optimal to (26), then \((\mathbf{z}^{\star},\mathbf{x}^{\star},\mathbf{\theta}^{\star})\) is optimal to (25) taking \(z_{i}^{\star}=\frac{|x_{i}^{\star}|}{\sqrt{\gamma}}\) and \(\theta_{i}^{\star}=x_{i}^{\star 2}\). Problem (26) is a second order cone problem that admits the following dual:
\[\begin{split}\max_{\mathbf{\nu}\in\mathbb{R}^{m}}&\quad| \mathcal{I}_{1}|+\mathbf{b}^{T}\mathbf{\nu}-\sqrt{\epsilon}\|\mathbf{\nu}\|_{2}-\frac{ \gamma}{4}\mathbf{\nu}^{T}\sum_{i\in\mathcal{I}_{1}}(A_{i}A_{i}^{T})\mathbf{\nu}\ \text{s.t.}\ |\mathbf{\nu}^{T}A_{i}|\leq \frac{2}{\sqrt{\gamma}}\ \forall\ i\notin\mathcal{I}.\end{split} \tag{27}\]
Strong duality holds between (26) and (27) since \(\mathbf{\nu}=0\) is always a strictly feasible point in (27) for any collections \(\mathcal{I}_{0},\mathcal{I}_{1}\)[41]. In our branch-and-bound implementation described in Section 5.2, we compute lower bounds by solving (26) using Gurobi. We note that depending on the solver employed, it may be beneficial to compute lower bounds using (27) in place of (26).
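A sketch of the node relaxation (26) in cvxpy follows; an infeasible node is reported with lower bound \(+\infty\), which is the signal used to add feasibility cuts in Section 5.2. The solver default and status-string handling are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

def node_lower_bound(A, b, eps, gamma, I0, I1):
    n = A.shape[1]
    I0, I1 = list(I0), list(I1)
    free = [i for i in range(n) if i not in set(I0) | set(I1)]
    x = cp.Variable(n)
    obj = cp.Constant(float(len(I1)))
    if I1:
        obj = obj + cp.sum_squares(x[I1]) / gamma
    if free:
        obj = obj + (2.0 / np.sqrt(gamma)) * cp.norm1(x[free])
    cons = [cp.sum_squares(A @ x - b) <= eps]
    if I0:
        cons.append(x[I0] == 0)
    prob = cp.Problem(cp.Minimize(obj), cons)
    prob.solve()
    if prob.status in ("infeasible", "infeasible_inaccurate"):
        return np.inf, None                # triggers a feasibility cut
    return prob.value, x.value             # node lower bound, relaxed solution
```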
#### 5.1.2 Subproblem Upper Bound
Recall that solving Problem (2) can be interpreted as determining the minimum number of columns from the input matrix \(\mathbf{A}\) that must be selected such that the residual of the projection of the input vector \(\mathbf{b}\) onto the span of the selected columns has \(\ell_{2}\) norm equal to at most \(\sqrt{\epsilon}\). The same interpretation holds for Problem (3) under the assumption that the \(\ell_{2}\) regularization term in the objective is negligible.
Consider an arbitrary node in the branch-and-bound algorithm and let \(\mathbf{x}^{\star}\) denote an optimal solution to (26). To obtain an upper bound for (24), we define an ordering
on the columns of \(\mathbf{A}\) and iteratively select columns from this ordering from largest to smallest until the \(\ell_{2}\) norm of the residual of the projection of \(\mathbf{b}\) onto the selected columns is less than \(\sqrt{\epsilon}\). The ordering of the columns of \(\mathbf{A}\) corresponds to sorting the entries of \(\mathbf{x}^{\star}\) in decreasing absolute value. Specifically, we have \(A_{i}\succeq A_{j}\iff|x_{i}^{\star}|\geq|x_{j}^{\star}|\). Algorithm 1 outlines this approach. For an arbitrary collection of indices \(\mathcal{I}_{t}\subseteq[n]\), we let \(\mathbf{A}(\mathcal{I}_{t})\in\mathbb{R}^{m\times|\mathcal{I}_{t}|}\) denote the matrix obtained by stacking the \(|\mathcal{I}_{t}|\) columns of \(\mathbf{A}\) corresponding to the indices in the set \(\mathcal{I}_{t}\). Specifically, if \(i_{k}\) denotes the \(k^{th}\) entry of \(\mathcal{I}_{t}\), then the \(k^{th}\) column of \(\mathbf{A}(\mathcal{I}_{t})\) is \(A_{i_{k}}\). Let \(\mathbf{x}^{ub}\) denote the output of Algorithm 1. The objective value achieved by \(\mathbf{x}^{ub}\) in (3) is the upper bound.
```
0:\(\mathbf{A}\in\mathbb{R}^{m\times n},\mathbf{b}\in\mathbb{R}^{m},\epsilon>0\). An optimal solution \(\mathbf{x}^{\star}\) of (26).
0:\(\bar{\mathbf{x}}\) is feasible to (3).
1:\(\mathcal{I}_{0}\leftarrow\emptyset\);
2:\(\mathbf{r}_{0}\leftarrow\mathbf{b}\);
3:\(t\gets 0\);
4:\(\delta_{0}\leftarrow\|\mathbf{r}_{0}\|_{2}^{2}\)
5:while\(\delta_{t}>\epsilon\)do
6:\(i_{t}\leftarrow\arg\max_{i\in[n]\setminus\mathcal{I}_{t}}|x_{i}^{\star}|\);
7:\(\mathcal{I}_{t+1}\leftarrow\mathcal{I}_{t}\cup i_{t}\);
8:\(\mathbf{x}_{t+1}\leftarrow\left[\mathbf{A}(\mathcal{I}_{t+1})^{T}\mathbf{A}(\mathcal{I}_ {t+1})\right]^{\dagger}\mathbf{A}(\mathcal{I}_{t+1})^{T}\mathbf{b}\);
9:\(\mathbf{r}_{t+1}\leftarrow\mathbf{b}-\mathbf{A}(\mathcal{I}_{t+1})\mathbf{x}_{t+1}\);
10:\(\delta_{t+1}\leftarrow\|\mathbf{r}_{t+1}\|_{2}^{2}\);
11:\(t\gets t+1\);
12:endwhile
13: Define \(\bar{\mathbf{x}}\in\mathbb{R}^{n}\) as \(\bar{x}(i_{k})=x_{t}(k)\) for \(i_{k}\in\mathcal{I}_{t}\) and \(\bar{x}(i_{k})=0\) otherwise;
14: Return \(\bar{\mathbf{x}}\).
```
**Algorithm 1** Branch-and-Bound Upper Bound
The computational bottleneck of Algorithm 1 is computing the matrix inverse of \(\mathbf{A}(\mathcal{I}_{t})^{T}\mathbf{A}(\mathcal{I}_{t})\in\mathbb{R}^{|\mathcal{I}_ {t}|\times|\mathcal{I}_{t}|}\) at each iteration. Doing so explicitly at each iteration \(t\) would require \(O(|\mathcal{I}_{t}|^{3})\) operations. Letting \(k^{\star}=\|\mathbf{x}^{ub}\|_{0}\) where \(\mathbf{x}^{ub}\) is the output of Algorithm 1, the total cost of executing these matrix inversions is
\[\sum_{t=1}^{k^{\star}}|\mathcal{I}_{t}|^{3}=\sum_{t=1}^{k^{\star}}t^{3}=\left[ \frac{k^{\star}(k^{\star}+1)}{2}\right]^{2}=O(k^{\star 4})\]
However, it is possible to accelerate the computation of these matrix inverses by leveraging the fact that \(\mathbf{A}(\mathcal{I}_{t})\) and \(\mathbf{A}(\mathcal{I}_{t+1})\) differ only by the addition of one column and leveraging block matrix inversion which states that for matrices \(\mathbf{C}\in\mathbb{R}^{n_{1}\times n_{1}},\mathbf{D}\in\mathbb{R}^{n_{2}\times n_{2}}\) and \(\mathbf{U},\mathbf{V}\in\mathbb{R}^{n_{1}\times n_{2}}\), we have:
\[\left[\begin{array}{c|c}\mathbf{C}&\mathbf{U}\\ \hline\mathbf{V}^{T}&\mathbf{D}\end{array}\right]^{\dagger}=\left[\begin{array}{c| c}\mathbf{C}^{\dagger}+\mathbf{C}^{\dagger}\mathbf{U}(\mathbf{D}-\mathbf{V}^{T}\mathbf{C}^{\dagger}\mathbf{U})^{-1} \mathbf{V}^{T}\mathbf{C}^{\dagger}&-\mathbf{C}^{\dagger}\mathbf{U}(\mathbf{D}-\mathbf{V}^{T}\mathbf{C}^{ \dagger}\mathbf{U})^{-1}\\ -(\mathbf{D}-\mathbf{V}^{T}\mathbf{C}^{\dagger}\mathbf{U})^{-1}\mathbf{V}^{T}\mathbf{C}^{\dagger}&(\bm {D}-\mathbf{V}^{T}\mathbf{C}^{\dagger}\mathbf{U})^{-1}\end{array}\right]\]
where it is assumed that the matrix \((\mathbf{D}-\mathbf{V}^{T}\mathbf{C}^{\dagger}\mathbf{U})\) is invertible [44]. Letting \(n_{1}=|\mathcal{I}_{t}|,n_{2}=1,\mathbf{C}=\mathbf{A}(\mathcal{I}_{t})^{T}\mathbf{A}( \mathcal{I}_{t}),\mathbf{U}=\mathbf{V}=\mathbf{A}(\mathcal{I}_{t})^{T}a_{i_{t}}\), and \(\mathbf{D}=a_{i_{t}}^{T}a_{i_{t}}\), we can compute the matrix inverse of \(\mathbf{A}(\mathcal{I}_{t+1})^{T}\mathbf{A}(\mathcal{I}_{t+1})\) using \(O(|\mathcal{I}_{t}|^{2}+m|\mathcal{I}_{t}|)\) operations. With this implementation, the total cost of executing matrix inversions in Algorithm 1 becomes
\[\sum_{t=1}^{k^{\star}}|\mathcal{I}_{t}|^{2}+m|\mathcal{I}_{t}|=\sum_{t=1}^{k^{ \star}}t^{2}+m\sum_{t=1}^{k^{\star}}t=\frac{k^{\star}(k^{\star}+1)(2k^{\star}+ 1)}{6}+\frac{mk^{\star}(k^{\star}+1)}{2}=O(k^{\star 3}+mk^{\star 2})\]
which is a significant improvement over the naive \(O(k^{\star 4})\) approach.
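A numpy sketch of this bordering update: given the current inverse `Ginv` \(=(\mathbf{A}(\mathcal{I}_{t})^{T}\mathbf{A}(\mathcal{I}_{t}))^{-1}\) and a new column `a`, it returns the extended inverse without re-factorizing. The Schur complement is assumed to be nonzero, as in the statement above.

```python
import numpy as np

def extend_gram_inverse(As, Ginv, a):
    """Inverse of [[As^T As, As^T a], [a^T As, a^T a]] from Ginv = (As^T As)^{-1}."""
    u = As.T @ a                           # U = V = As^T a
    gu = Ginv @ u
    s = float(a @ a - u @ gu)              # Schur complement D - V^T C^{-1} U
    top_left = Ginv + np.outer(gu, gu) / s
    top_right = -gu[:, None] / s
    return np.block([[top_left, top_right],
                     [top_right.T, np.array([[1.0 / s]])]])
```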
### Branch-and-Bound Algorithm
Having stated how we can compute upper and lower bounds to (23) at each node in the enumeration tree, we are now ready to present the branch-and-bound algorithm in its entirety. Algorithm 2 describes our approach which is based on the implementation by [45]. Though branching rules and node selection rules for branch-and-bound algorithms form a rich literature [46], we follow the design of [45] and employ the most fractional branching rule and least lower bound node selection rule.
Explicitly, for an arbitrary non-terminal node \(p\), let \(\mathbf{z}^{\star}\) be the optimal vector \(\mathbf{z}\) of the node relaxation given by (25). We branch on entry \(i^{\star}=\arg\min_{i\notin\mathcal{I}_{0}\cup\mathcal{I}_{1}}|z_{i}-0.5|\). When selecting a node to investigate, we select the node whose lower bound is equal to the global lower bound. If multiple such nodes exist, we choose arbitrarily from the collection of nodes satisfying this condition. Suppose that a given node produces a subproblem (26) that is infeasible where we let \(\mathcal{I}_{0}\) correspond to the zero index set of this node. Note that this implies that all child nodes of this node will also produce infeasible subproblems. Accordingly, to prune this region of the parameter space entirely, we introduce the feasibility cut \(\sum_{i\in\mathcal{I}_{0}}z_{i}\geq 1\). Let \(f(\mathbf{x})=\|\mathbf{x}\|_{0}+\frac{1}{\gamma}\|\mathbf{x}\|_{2}^{2}\), the objective function of (3) and let \(g(\mathcal{I}_{0},\mathcal{I}_{1})\) denote the optimal value of (26) for any collections \(\mathcal{I}_{0},\mathcal{I}_{1}\subseteq[n],\mathcal{I}_{0}\cap\mathcal{I}_{1}= \emptyset\). The final objective value returned by Algorithm 2 is given by \(\min_{i}f(\mathbf{x}_{i})\) where \(\{\mathbf{x}_{i}\}_{i}\) denotes the collection of feasible solutions produced by Algorithm 1 at any point during the execution of Algorithm 2. The output lower bound of Algorithm 2 is given by \(\min_{(\mathcal{I}_{0},\mathcal{I}_{1})\in\mathcal{N}}g(\mathcal{I}_{0}, \mathcal{I}_{1})\) where \(\mathcal{N}\) denotes the set of non-discarded nodes upon the termination of Algorithm 2.
```
0:\(\mathbf{A}\in\mathbb{R}^{m\times n},\mathbf{b}\in\mathbb{R}^{m},\epsilon,\gamma\in\mathbb{ R}^{+}.\) Tolerance parameter \(\delta\geq 0.\)
0:\(\mathbf{\bar{x}}\) solves (3) within the optimality tolerance \(\delta\).
1:if\(\|(\mathbf{I}-\mathbf{A}\big{[}\mathbf{A}^{T}\mathbf{A}\big{]}^{\dagger}\mathbf{A}^{T})\mathbf{b}\|_{2}^{2}>\epsilon\)then
2: Return \(\emptyset\);
3:endif
4:if\(\|\mathbf{b}\|_{2}^{2}\leq\epsilon\)then
5: Return 0;
6:endif
7:\(p_{0}\leftarrow(\mathcal{I}_{0},\mathcal{I}_{1})=(\emptyset,\emptyset)\);
8:\(\mathcal{N}\leftarrow\{p_{0}\}\);
9:\(lb\leftarrow\) optimal value of (26);
10:\(\mathbf{\bar{x}}\leftarrow\) solution returned by Algorithm 1;
11:\(ub\gets f(\mathbf{\bar{x}})\);
12:while\(\frac{ub-lb}{ub}>\delta\)do
13: select \((\mathcal{I}_{0},\mathcal{I}_{1})\in\mathcal{N}\) according to the node selection rule;
14: select an index \(i\notin\mathcal{I}_{0}\cup\mathcal{I}_{1}\) according to the branching rule;
15:for\(k=0,1\)do
16:\(l\leftarrow(k+1)\) mod 2;
17: newnode \(\leftarrow\Big{(}\big{(}\mathcal{I}_{k}\cup i\big{)},\mathcal{I}_{l}\Big{)}\);
18:if newnode violates an existing feasibility cut then
19: continue;
20:endif
21:if newnode is infeasible then
22: Add the feasibility cut \(\sum_{i\in\mathcal{I}_{0}}z_{i}\geq 1\);
23:endif
24:\(\mathit{lower}\leftarrow\) lowerBound(newnode);
25:\(\mathit{upper}\leftarrow\) upperBound(newnode) with feasible point \(\mathbf{x}^{\star}\);
26:if\(\mathit{upper}<ub\)then
27:\(ub\leftarrow\)\(\mathit{upper}\);
28:\(\mathbf{\bar{x}}\leftarrow\mathbf{x}^{\star}\);
29: remove any node in \(\mathcal{N}\) with \(\mathit{lower}\geq ub\);
30:endif
31:if\(\mathit{lower}<ub\)then
32: add newnode to \(\mathcal{N}\);
33:endif
34:endfor
35: remove \((\mathcal{I}_{0},\mathcal{I}_{1})\) from \(\mathcal{N}\);
36: update \(lb\) to be the lowest value of \(\mathit{lower}\) over \(\mathcal{N}\);
37:endwhile
38: Return \(\mathbf{\bar{x}}\), \(lb\).
```
**Algorithm 2** Optimal Compressed Sensing
The techniques described in this section reduce the runtime of Algorithm 2 at the expense of sacrificing the universal optimality guarantee, by drawing on techniques from the high dimensional sparse machine learning literature [47] and the deep learning literature [48].
#### 5.2.1 Backbone Optimization
Note that the total number of terminal nodes in the branch-and-bound tree is at most \(\sum_{k=0}^{n}\binom{n}{k}=2^{n}\) in the worst case, so the total number of nodes can be upper bounded by \(2^{n+1}-1\). Since the runtime of Algorithm 2 (and the feasible space) is proportional to the number of nodes explored, which grows exponentially in \(n\), reducing \(n\) leads to reduced runtime. Observe that if we knew in advance that the support of the optimal solution to (3) was contained within a set of cardinality less than \(n\), then we could run Algorithm 2 on the corresponding reduced feature set, which would improve the runtime of Algorithm 2 while preserving its optimality guarantee. Formally, let \(\mathbf{x}^{\star}\) denote an optimal solution to (3). If we know a priori that \(\operatorname{support}(\mathbf{x}^{\star})\subseteq\mathcal{I}\subset[n]\), then we can pass \(\mathbf{A}(\mathcal{I})\) to Algorithm 2 in place of \(\mathbf{A}\) without discarding \(\mathbf{x}^{\star}\) from the feasible set. The speed up can be quite significant when \(|\mathcal{I}|\ll n\).
Knowing with certainty that \(\operatorname{support}(\mathbf{x}^{\star})\subseteq\mathcal{I}\subset[n]\) a priori is too strong an assumption, however a more reasonable assumption is knowing a priori that with high probability there exists a good solution \(\bar{\mathbf{x}}\) with \(\operatorname{support}(\bar{\mathbf{x}})\subseteq\mathcal{I}\subset[n]\). In this setting, we can still pass \(\mathbf{A}(\mathcal{I})\) to Algorithm 2 and benefit from an improved runtime at the expense of sacrificing optimality guarantees. In this setting, the columns of \(\mathbf{A}(\mathcal{I})\) can be interpreted as a backbone for (3) [47]. In practice, \(\mathcal{I}\) can be taken to be the set of features selected by some heuristic method. In Section 6, we take \(\mathcal{I}=\{i:|\bar{x}_{i}|\geq 10^{-6}\}\) where \(\bar{\mathbf{x}}\) is an optimal solution to (5).
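A minimal sketch of this backbone restriction, assuming the backbone is read off a heuristic solution at the \(10^{-6}\) threshold used in Section 6 (function names are ours):

```
import numpy as np

def backbone_restrict(A, x_heuristic, tol=1e-6):
    # I = {i : |x_i| >= tol}; return A(I) and the index map back to [n].
    support = np.flatnonzero(np.abs(x_heuristic) >= tol)
    return A[:, support], support

def embed(x_reduced, support, n):
    # Lift a solution of the reduced problem back into R^n.
    x_full = np.zeros(n)
    x_full[support] = x_reduced
    return x_full
```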
#### 5.2.2 Early Stopping
A common property of branch-and-bound algorithms is that the algorithm quickly arrives at an optimal (or near-optimal) solution early during the optimization procedure and spends the majority of its execution time improving the lower bound to obtain a certificate of optimality. Accordingly, this motivates halting Algorithm 2 before it terminates and taking its upper bound at the time of termination to be its output. Doing so is likely to still yield a high quality solution while reducing the Algorithm's runtime. In Section 6, we place an explicit time limit on Algorithm 2 and return the current upper bound if the Algorithm has not already terminated before reaching the time limit. Note that this approach shares strong connections with early stopping in the training of neural networks [48]. A well studied property of over-parameterized neural networks is that as the optimization procedure progresses, the error on the training data continues to decrease though the validation error plateaus and sometimes even increases. Given that the validation error is the metric of greater import, a common network training technique is to stop the optimization procedure after the validation error has not decreased for a prespecified number of iterations. To illustrate the connection in the case of Algorithm 2, the upper bound loosely plays the role of the validation error while the lower bound loosely plays the role of the training error. Note that the neural network literature suggests an alternate approach to early stopping Algorithm 2 (instead of an explicit time limit) by terminating the algorithm after
the upper bound has remained unchanged after visiting some prespecified number of nodes in the enumeration tree.
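A hypothetical stopping test combining the two variants (an explicit wall-clock limit, or a patience counter on the incumbent upper bound) might look as follows; neither the names nor the thresholds come from the paper:

```
import time

def should_stop(start_time, time_limit_s, nodes_since_improvement, patience):
    # Stop when the time budget is exhausted, or when the upper bound has
    # not improved over the last `patience` explored nodes.
    out_of_time = time.time() - start_time > time_limit_s
    stalled = nodes_since_improvement > patience
    return out_of_time or stalled
```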
## 6 Computational Results
We evaluate the performance of our branch-and-bound algorithm (Algorithm 2, with \(\gamma=\sqrt{n}\)), our second order cone lower bound (13) (with \(\gamma=\sqrt{n}\)) and our semidefinite lower bound (21) (with \(\gamma=\sqrt{n}\) and \(d=1\)) implemented in Julia 1.5.2 using the JuMP.jl package version 0.21.7, using Gurobi version 9.0.3 to solve all second order cone optimization (sub)problems and using Mosek version 9.3 to solve all semidefinite optimization problems. We compare our methods against Basis Pursuit Denoising (BPD) given by (5), Iterative Reweighted \(\ell_{1}\) Minimization (IRWL1) described in Section 2.2 and Orthogonal Matching Pursuit (OMP) described in Section 2.3. We perform experiments on both synthetic data and real world data. We conduct our experiments on MIT's Supercloud Cluster [49], which hosts Intel Xeon Platinum 8260 processors. To bridge the gap between theory and practice, we have made our code freely available on GitHub at github.com/NicholasJohnson2020/DiscreteCompressedSensing.jl.
### Synthetic Data Experiments
To evaluate the performance of Algorithm 2, BPD, IRWL1 and OMP on synthetic data, we consider the sparsity of the solution returned by each method, its accuracy (ACC), true positive rate (TPR) and true negative rate (TNR). Let \(\mathbf{x}^{true}\in\mathbb{R}^{n}\) denote the ground truth and consider an arbitrary vector \(\hat{\mathbf{x}}\in\mathbb{R}^{n}\). Let \(\mathcal{I}^{true}=\{i:|x_{i}^{true}|>10^{-4}\}\), \(\hat{\mathcal{I}}=\{i:|\hat{x}_{i}|>10^{-4}\}\). The sparsity of \(\hat{\mathbf{x}}\) is given by \(|\hat{\mathcal{I}}|\). We define the accuracy of \(\hat{\mathbf{x}}\) as
\[ACC(\hat{\mathbf{x}})=\frac{\sum_{i\in\mathcal{I}^{true}}\mathbb{1}\{|\hat{x}_{i} |>10^{-4}\}+\sum_{i\notin\mathcal{I}^{true}}\mathbb{1}\{|\hat{x}_{i}|\leq 10^{-4} \}}{n}.\]
Similarly, we define the true positive rate of \(\hat{\mathbf{x}}\) as
\[TPR(\hat{\mathbf{x}})=\frac{\sum_{i\in\mathcal{I}^{true}}\mathbb{1}\{|\hat{x}_{i} |>10^{-4}\}}{|\hat{\mathcal{I}}|},\]
and we define the true negative rate of \(\hat{\mathbf{x}}\) as
\[TNR(\hat{\mathbf{x}})=\frac{\sum_{i\notin\mathcal{I}^{true}}\mathbb{1}\{|\hat{x}_ {i}|\leq 10^{-4}\}}{n-|\hat{\mathcal{I}}|}.\]
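For concreteness, these four quantities can be computed as below (a sketch with our own function name; it assumes \(\hat{\mathcal{I}}\) is neither empty nor all of \([n]\), so the TPR and TNR denominators are nonzero):

```
import numpy as np

def recovery_metrics(x_hat, x_true, tol=1e-4):
    # Support indicators at the 1e-4 threshold used in the definitions above.
    I_true = np.abs(x_true) > tol
    I_hat = np.abs(x_hat) > tol
    n = x_true.size
    sparsity = int(np.sum(I_hat))
    acc = (np.sum(I_hat & I_true) + np.sum(~I_hat & ~I_true)) / n
    tpr = np.sum(I_hat & I_true) / sparsity          # denominator |I_hat|
    tnr = np.sum(~I_hat & ~I_true) / (n - sparsity)  # denominator n - |I_hat|
    return sparsity, acc, tpr, tnr
```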
To evaluate the performance of (13) and (21), we consider the strength of the lower bound and execution time of each method. We seek to answer the following questions:
1. How does the performance of Algorithm 2 compare to state-of-the-art methods such as BPD, IRWL1 and OMP on synthetic data?
2. How is the performance of Algorithm 2 affected by the number of features \(n\), the underlying sparsity \(k\) of the ground truth, and the tolerance parameter \(\epsilon\)?
3. How does the strength of the lower bound produced by (21) compare to that produced by (13)?
#### 6.1.1 Synthetic Data Generation
To generate synthetic data \(\mathbf{x}\in\mathbb{R}^{n},\mathbf{A}\in\mathbb{R}^{m\times n}\) and \(\mathbf{b}\in\mathbb{R}^{m}\), we first select a random subset of indices \(\mathcal{I}^{true}\subset[n]\) that has cardinality \(k\) (\(|\mathcal{I}^{true}|=k\)) and sample \(x_{i}\sim N(0,\frac{\sigma^{2}}{n})\) for \(i\in\mathcal{I}^{true}\) (for \(i\notin\mathcal{I}^{true}\), we fix \(x_{i}=0\)). Next, we sample \(A_{ij}\sim N(0,\frac{\sigma^{2}}{n})\) where \(\sigma>0\) is a parameter that controls the signal to noise ratio. We fix \(\sigma=10\) and \(m=100\) throughout all experiments unless stated otherwise. Next, we set \(\mathbf{b}=\mathbf{A}\mathbf{x}+\mathbf{n}\) where \(n_{j}\sim N(0,\sigma^{2})\). Finally, we set \(\epsilon=\alpha\|\mathbf{b}\|_{2}^{2}\), where \(\alpha\in[0,1]\) is a parameter that can be thought of as controlling the proportion of observations that are allowed to go unexplained by a solution to (3).
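The generation protocol can be summarized by the following sketch (our function; the seed handling is an assumption):

```
import numpy as np

def generate_instance(n, m=100, k=10, sigma=10.0, alpha=0.2, seed=None):
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)            # I_true, |I_true| = k
    x[support] = rng.normal(0.0, sigma / np.sqrt(n), size=k)  # Var sigma^2 / n
    A = rng.normal(0.0, sigma / np.sqrt(n), size=(m, n))
    b = A @ x + rng.normal(0.0, sigma, size=m)                # noise Var sigma^2
    eps = alpha * np.linalg.norm(b) ** 2
    return x, A, b, eps
```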
#### 6.1.2 Sensitivity to \(\mathbf{n}\)
We present a comparison of Algorithm 2 with BPD, IRWL1 and OMP as we vary the number of features \(n\). In these experiments, we fixed \(k=10\), and \(\alpha=0.2\) across all trials. We varied \(n\in\{100,200,300,400,500,600,700,800\}\) and we performed 100 trials for each value of \(n\). We give Algorithm 2 a cutoff time of 10 minutes. For IRWL1, we terminate the algorithm after the \(50^{th}\) iteration or after two subsequent iterates are equal up to numerical tolerance. Formally, letting \(\bar{\mathbf{x}}_{t}\) denote the iterate after iteration \(t\) of IRWL1, we terminate the algorithm if either \(t>50\) or if \(\|\bar{\mathbf{x}}_{t}-\bar{\mathbf{x}}_{t-1}\|_{2}\leq 10^{-6}\). Additionally, we further sparsify the solutions returned by BPD (respectively IRWL1) by performing a greedy rounding following the procedure defined by Algorithm 1 where we pass the solution returned by BPD (respectively IRWL1) as input to the algorithm in place of an optimal solution to (26).
We report the sparsity, accuracy (ACC), true positive rate (TPR) and true negative rate (TNR) for each method in Figure 1. We additionally report the sparsity, accuracy and execution time for each method in Tables A1, A2, A3 of Appendix A. The performance metric of greatest interest is the sparsity. Our main findings from this set of experiments are:
1. Algorithm 2 systematically produces sparser solutions than OMP, IRWL1 and BPD. This trend holds in all but one trial (see Table A1). Algorithm 2 on average produces solutions that are \(2.71\%\) more sparse than OMP, \(16.62\%\) more sparse than BPD and \(6.04\%\) more sparse than IRWL1. BPD is the poorest performing method in terms of sparsity of the fitted solutions. We remind the reader that sparsity is computed only after a greedy rounding of the BPD (respectively IRWL1) solution. The sparsity of the BPD (respectively IRWL1) solution prior to rounding is much greater. Indeed, before further sparsifying the BPD (respectively IRWL1) solution, the solution returned by Algorithm 2 is on average \(66.33\%\) (respectively \(6.21\%\)) more sparse than the BPD (respectively IRWL1) solution. The sparsity of solutions returned by all methods increases as the number of features \(n\) increases.
2. Algorithm 2 marginally outperforms the benchmark methods on accuracy with the exception of the first two parameter configurations (\(n=100\) and \(n=200\), see Table A2). The accuracy of all methods tends to trend upwards with increasing \(n\).
3. The TPR and TNR of all methods are roughly comparable across these experiments. The TPR of all methods decreases while the TNR increases as the number of features \(n\) is increased.
#### 6.1.3 Sensitivity to \(k\)
We present a comparison of Algorithm 2 with BPD, IRWL1 and OMP as we vary \(k\) the sparsity of the underlying ground truth signal. In these experiments, we fixed \(n=200\) and \(\alpha=0.2\) across all trials. We varied \(k\in\{10,15,20,25,30,35,40,45,50,55\}\) and we performed 100 trials for each value of \(k\). We give Algorithm 2 a cutoff time of 10 minutes.
We report the sparsity, accuracy (ACC), true positive rate (TPR) and true negative rate (TNR) for each method in Figure 2. We additionally report the sparsity, accuracy
Figure 1: Sparsity (top left), accuracy (top right), true positive rate (bottom left) and true negative rate (bottom right) versus \(n\) with \(k=10\), and \(\alpha=0.2\). Averaged over 100 trials for each parameter configuration.
and execution time for each method in Tables A4, A5 and A6 of Appendix A. Our main findings from this set of experiments are:
1. Consistent with the results in the previous section, Algorithm 2 systematically produces sparser solutions than OMP, IRWL1 and BPD. This trend holds across trials (see Table A4). Algorithm 2 on average produces solutions that are \(4.78\%\) more sparse than OMP, \(10.73\%\) more sparse than BPD and \(4.20\%\) more sparse than IRWL1. Before further sparsifying the BPD (respectively IRWL1) solution, the solution returned by Algorithm 2 is on average \(62.97\%\) (respectively \(4.29\%\)) more sparse than the BPD (respectively IRWL1) solution. BPD is again the poorest performing method in terms of sparsity of the fitted solutions. IRWL1 and OMP produce comparably sparse solutions. The sparsity of solutions returned by all methods initially decreases and then subsequently increases as the sparsity level \(k\) of the ground truth signal increases.
2. Algorithm 2 is competitive with OMP and IRWL1 on accuracy and slightly outperforms BPD on accuracy for larger values of \(k\). The accuracy of all methods trends downwards with increasing \(k\).
3. The TPR and TNR of Algorithm 2, OMP, and IRWL1 are comparable across these experiments. The TPR and TNR of BPD is competitive with the other methods for small values of \(k\), but slightly deteriorates for larger values of \(k\).
#### 6.1.4 Sensitivity to \(\epsilon\)
We present a comparison of Algorithm 2 with BPD, IRWL1 and OMP as we vary \(\alpha\) which controls the value of the parameter \(\epsilon\). Recall we have \(\epsilon=\alpha\|\mathbf{b}\|_{2}^{2}\), so \(\alpha\) can loosely be interpreted as the fraction of the measurements \(\mathbf{b}\) that can be unexplained by the returned solution to (3). In these experiments, we fixed \(n=200\) and \(k=10\) across all trials. We varied \(\alpha\in\{0.05,0.1,0.15,\ldots,0.9\}\) and we performed \(100\) trials for each value of \(\alpha\). We give Algorithm 2 a cutoff time of \(10\) minutes.
We report the sparsity, accuracy (ACC), true positive rate (TPR) and true negative rate (TNR) for each method in Figure 3, and we report the sparsity, accuracy and execution time for each method in Tables A7, A8 and A9 of Appendix A. Consistent with previous experiments, Algorithm 2 outperforms the benchmark methods in terms of sparsity of the returned solution while having comparable performance on accuracy, TPR and TNR. Here, Algorithm 2 on average produces solutions that are \(2.40\%\) more sparse than OMP, \(5.92\%\) more sparse than BPD and \(2.54\%\) more sparse than IRWL1. Before further sparsifying the BPD (respectively IRWL1) solution, the solution returned by Algorithm 2 is on average \(59.23\%\) (respectively \(2.62\%\)) more sparse than the BPD (respectively IRWL1) solution.
#### 6.1.5 Lower Bound Performance
In Section 4, we reformulated (3) exactly as a mixed integer second order cone problem and illustrated multiple approaches to obtain lower bounds on the optimal value of the reformulation. In this Section, we compare the strength of the second order cone relaxation given by (13) and the semidefinite cone relaxation given by (21). We fixed \(k=10\) and we varied \(\alpha\in\{0.05,0.1,0.15,\ldots,0.9\}\). We report results for
\((n,m)=(25,100)\) in Figure 4 and \((n,m)=(50,25)\) in Figure 5. We performed \(100\) trials for each value of \(\alpha\). Letting \(lb^{SOC}\) denote the optimal value of (13) and \(lb^{SOS}\) denote the optimal value of (21), we define the SOS lower bound improvement to be \(\frac{lb^{SOS}-lb^{SOC}}{lb^{SOC}}\).
Consistent with Theorem 5, Problem (21) produces a stronger lower bound than Problem (13) at the expense of being more computationally intensive to compute due to the presence of positive semidefinite constraints. On average, the bound produced by (21) is \(8.92\%\) greater than the bound produced by (13). These results suggest that if Problem (21) can be solved to optimality or near optimality efficiently at scale, it could potentially be used to accelerate Algorithm 2 by producing stronger lower bounds than the current approach, thereby allowing for a more aggressive pruning of the feasible space. Off the shelf interior point methods suffer from scalability challenges for semidefinite optimization problems.
### Real World Data
We seek to answer the following question: how does the performance of Algorithm 2 compare to state-of-the-art methods such as BPD, IRWL1 and OMP on real world data? To evaluate the performance of Algorithm 2, BPD, IRWL1 and OMP on real world data, we consider the problem of compressed sensing for electrocardiogram
Figure 2: Sparsity (top left), accuracy (top right), true positive rate (bottom left) and true negative rate (bottom right) versus \(k\) with \(N=200\) and \(\alpha=0.2\). Averaged over \(100\) trials for each parameter configuration.
(ECG) acquisition [5]. We obtain real ECG recording samples from the MIT-BIH Arrhythmia Database ([https://www.physionet.org/content/mitdb/1.0.0/](https://www.physionet.org/content/mitdb/1.0.0/)) and consider the performance of the methods in terms of the sparsity of the returned signal and the reconstruction error between the returned signal and the true signal.
#### 6.2.1 ECG Experiment Setup
We employ the same 100 ECG recordings sampled at 360 Hz from the MIT-BIH Arrhythmia Database that are used in [5]. These recordings collectively originate from 10 distinct patients (each contributing 10 independent recordings) and the recording length of an individual record is 1024. In keeping with [5], we use 30 ECG recordings as a training set to fit an overcomplete dictionary \(\mathbf{D}\) via the K-SVD method [50]. We fit a dictionary with 2000 atoms, meaning that \(\mathbf{D}\in\mathbb{R}^{1024\times 2000}\) and \(\mathbf{X}^{train}\approx\mathbf{D}\mathbf{\Theta}\) where \(\mathbf{X}^{train}\in\mathbb{R}^{1024\times 30}\) is a matrix whose columns are the training ECG signals and \(\mathbf{\Theta}\in\mathbb{R}^{2000\times 30}\) is a sparse matrix. Each column of \(\mathbf{\Theta}\) should be thought of as a (sparse) representation of the corresponding column of \(\mathbf{X}^{train}\) in the dictionary given by \(\mathbf{D}\) (\(\|\mathbf{\Theta}\|_{0}\ll\|\mathbf{X}^{train}\|_{0}\)). We employ the Bernoulli sensing matrix \(\mathbf{B}\in\mathbb{R}^{40\times 1024}\) considered by [5]. Given an ECG signal \(\mathbf{x}^{test}\in\mathbb{R}^{1024}\), we consider the perturbed observations \(\mathbf{s}=\mathbf{B}(\mathbf{x}^{test}+\mathbf{\eta})\) where \(\mathbf{\eta}\in\mathbb{R}^{1024}\) is a vector of mean 0 normal perturbations with
Figure 3: Sparsity (top left), accuracy (top right), true positive rate (bottom left) and true negative rate (bottom right) versus \(\alpha\) with \(n=200\) and \(k=10\). Averaged over 100 trials for each parameter configuration.
variance \(\left(\frac{\|\mathbf{x}^{test}\|_{1}}{4\cdot 1024}\right)^{2}\mathbf{I}.\) Figure 6 illustrates the ECG signal and perturbed ECG signal for record \(31\) of the dataset. With these preliminaries, we consider the reconstruction problem given by
\[\begin{split}\min_{\mathbf{\theta}\in\mathbb{R}^{2000}}& \quad\|\mathbf{\theta}\|_{0}+\frac{1}{\gamma}\|\mathbf{\theta}\|_{2}^{2}\\ \text{s.t.}&\quad\|\mathbf{BD}\mathbf{\theta}-\mathbf{s}\|_{2}^{ 2}\leq\epsilon.\end{split} \tag{28}\]
where we set \(\epsilon=1.05\cdot\|\mathbf{s}-\mathbf{B}\mathbf{x}^{test}\|_{2}^{2}.\) Note that (28) is equivalent to (3) where \((\mathbf{\theta},\mathbf{BD},\mathbf{s})\) play the role of \((\mathbf{x},\mathbf{A},\mathbf{b})\) and we have \((n,m)=(2000,40).\) Letting \(\hat{\mathbf{\theta}}\) denote a feasible solution to (28) returned by one of the solution methods, we employ \(10^{-4}\) as the numerical threshold to compute the sparsity \(\|\hat{\mathbf{\theta}}\|_{0}\) of \(\hat{\mathbf{\theta}}\) and we define the \(\ell_{q}\) reconstruction error of \(\hat{\mathbf{\theta}}\) as \(\frac{\|\mathbf{D}\hat{\mathbf{\theta}}-\mathbf{x}^{test}\|_{q}}{\|\mathbf{x}^{test}\|_{q}}\) for \(q\in\{1,2\}.\)
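Under this setup, the tolerance and the error metric can be computed as in the sketch below (assuming the \(\ell_{q}\) error is the plain ratio of norms, as written above; function names are ours):

```
import numpy as np

def ecg_epsilon(B, x_test, s):
    # eps = 1.05 * ||s - B x_test||_2^2, as in (28)
    return 1.05 * np.linalg.norm(s - B @ x_test) ** 2

def lq_error(D, theta_hat, x_test, q):
    # l_q reconstruction error, q in {1, 2}
    return (np.linalg.norm(D @ theta_hat - x_test, ord=q)
            / np.linalg.norm(x_test, ord=q))
```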
Figure 4: Problem (3) lower bound (left) produced by Problem (13) (SOC) and Problem (21) (SOS) with \(d=1.\) Percent improvement of the Problem (3) lower bound produced by (21) compared to that produced by (13) (right). \(n=25,m=100\) and \(k=10.\)
Figure 5: Problem (3) lower bound (left) produced by Problem (13) (SOC) and Problem (21) (SOS) with \(d=1.\) Percent improvement of the Problem (3) lower bound produced by (21) compared to that produced by (13) (right). \(n=50,m=25\) and \(k=10.\)
#### 6.2.2 ECG Computational Results
We present a comparison of BPD, IRWL1, OMP and Algorithm 2 as we vary the regularization parameter \(\gamma\) in (28). We considered values of \(\gamma\) in the set
\[\Gamma=\{(8a+0.01)f(n):a\in[14],f(n)\in\{\sqrt{n},n,n^{2}\}\},\]
and we evaluate performance on the 70 ECG recordings that are not part of the training set used to fit the overcomplete dictionary \(\boldsymbol{D}\). We give Algorithm 2 a cutoff time of 5 minutes. As in the synthetic experiments, we terminate IRWL1 after the \(50^{th}\) iteration or after two subsequent iterates are equal up to numerical tolerance. Moreover, we sparsify the solutions returned by BPD and IRWL1 using the procedure given by Algorithm 1 as done in the synthetic experiments.
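For reference, the grid \(\Gamma\) can be enumerated as follows, assuming \([14]\) denotes \(\{1,\ldots,14\}\):

```
import numpy as np

n = 2000  # dictionary size in the ECG experiments
scales = (np.sqrt(n), float(n), float(n) ** 2)
Gamma = sorted({(8 * a + 0.01) * f for a in range(1, 15) for f in scales})
assert len(Gamma) == 42  # 14 values of a times 3 scale functions
```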
Figure 7 illustrates the average \(\ell_{1}\) error (left) and average \(\ell_{2}\) error (right) versus the average sparsity of solutions returned by each method. Each red dot corresponds to the performance of Algorithm 2 for a fixed value of \(\gamma\in\Gamma\). Given that more sparse solutions and solutions with lesser \(\ell_{1}\) (respectively \(\ell_{2}\)) error are desirable, Figure 7 demonstrates that as we vary \(\gamma\), the solutions returned by Algorithm 2 trace out an efficient frontier that dominates the solutions returned by BPD, IRWL1 and OMP. Indeed, for all benchmark methods (BPD, IRWL1 and OMP), there is a value of \(\gamma\) such that Algorithm 2 finds solutions that achieve lower sparsity and lower reconstruction error than the solution returned by the benchmark method. For the same \(\ell_{2}\) reconstruction error, Algorithm 2 can produce solutions that are on average 3.88% more sparse than IRWL1, 6.29% more sparse than BPD and 19.70% more sparse than OMP. For the same sparsity level, Algorithm 2 can produce solutions that have on average 1.42% lower \(\ell_{2}\) error than IRWL1, 2.66% lower \(\ell_{2}\) error than BPD and 28.23% lower \(\ell_{2}\) error than OMP. Thus, Algorithm 2 outperforms BPD, IRWL1 and OMP on this real world dataset.
### Summary of Findings
We now summarize our findings from our numerical experiments. In Sections 6.1.2-6.1.4, we see that across all experiments using synthetic data, Algorithm 2 produces
Figure 6: Ground truth ECG signal (left) and perturbed signal (right) for ECG record 31.
solutions that are on average \(6.22\%\) more sparse than the solutions returned by state of the art benchmark methods after they are further sparsified by greedy rounding. If we omit greedy rounding, Algorithm 2 produces solutions that are on average \(17.17\%\) more sparse in our synthetic experiments. In Section 6.1.5, we find that the bound produced by (21) is on average \(8.92\%\) greater than the bound produced by (13). Finally, in Section 6.2, we see that for a given level of \(\ell_{2}\) reconstruction error, Algorithm 2 produces solutions that are on average \(9.95\%\) more sparse than the solutions returned by state of the art benchmark methods after they are further sparsified by greedy rounding on the real world dataset we experiment with. Furthermore, for a given sparsity level, Algorithm 2 produces solutions that have on average \(10.77\%\) lower \(\ell_{2}\) reconstruction error than benchmark methods.
## 7 Conclusion
In this paper, we introduced an \(\ell_{2}\) regularized formulation (3) for CS which admits a natural reformulation as a mixed integer second order cone program (12). We presented a second order cone relaxation (13) and a stronger but more expensive semidefinite cone relaxation (21) to (12). We presented Algorithm 2, a custom branch-and-bound algorithm that can compute globally optimal solutions for (3). We find that our approach produces solutions that are on average \(6.22\%\) more sparse on synthetic data and \(9.95\%\) more sparse on real world ECG data when compared to state of the art benchmark approaches. Further work might focus on strengthening our convex relaxations by deriving additional valid inequalities for (12) or increasing the scalability of our branch-and-bound method. Algorithm 2 currently uses our second order cone relaxation to compute lower bounds. If fast problem specific solution methods could be derived for our positive semidefinite cone relaxation, employing the latter for lower bounds in Algorithm 2 could potentially lead to important scalability gains.
Figure 7: \(\ell_{1}\) reconstruction error (left) and \(\ell_{2}\) reconstruction error (right) versus \(\ell_{0}\) norm (Sparsity) for ECG reconstructions obtained using OMP, BPD, IRWL1 and Algorithm 2 for varying values of \(\gamma\). \(n=2000\) and \(m=40\).
## Declarations
_Funding:_
The authors did not receive support from any organization for the submitted work.
_Conflict of interest/Competing interests:_
The authors have no relevant financial or non-financial interests to disclose.
_Ethics approval:_
Not applicable.
_Consent to participate:_
Not applicable.
_Consent for publication:_
Not applicable.
_Availability of data and materials:_
We obtained the ECG recording samples employed in Section 6.2 from the MIT-BIH Arrhythmia Database ([https://www.physionet.org/content/mitdb/1.0.0/](https://www.physionet.org/content/mitdb/1.0.0/)).
_Code availability:_
To bridge the gap between theory and practice, we have made our code freely available on GitHub at github.com/NicholasJohnson2020/DiscreteCompressedSensing.jl
_Authors' contributions:_
Both authors contributed to the algorithmic ideation and design. Algorithm implementation, data collection, simulation and data analysis was performed by Nicholas Johnson. The first draft of the manuscript was written by Nicholas Johnson and both authors commented and edited subsequent versions of the manuscript. Both authors read and approved the final manuscript.
\begin{table}
\begin{tabular}{c||c c c c} \hline \hline \multicolumn{5}{c}{Accuracy} \\ \hline N & Algorithm 2 & OMP & IRWL1 & BPD \\ \hline
100 & 0.944 & **0.947** & 0.946 & 0.945 \\
200 & **0.949** & 0.948 & 0.948 & 0.944 \\
300 & **0.944** & 0.942 & 0.943 & 0.938 \\
400 & **0.948** & 0.945 & 0.946 & 0.941 \\
500 & **0.955** & 0.954 & 0.954 & 0.949 \\
600 & **0.960** & 0.959 & 0.960 & 0.955 \\
700 & **0.965** & 0.964 & 0.963 & 0.960 \\
800 & **0.969** & 0.969 & 0.967 & 0.964 \\ \hline \hline \end{tabular}
\end{table}
Table A2: Comparison of the accuracy of solutions returned by Algorithm 2, OMP, IRWL1 and BPD for different values of \(n\). Averaged over 100 trials for each parameter configuration.
\begin{table}
\begin{tabular}{c||c c c c} \hline \hline & \multicolumn{4}{c}{Accuracy} \\ \hline K & Algorithm 2 & OMP & IRWL1 & BPD \\ \hline
10 & **0.948** & **0.948** & 0.947 & 0.943 \\
15 & **0.945** & 0.944 & **0.945** & 0.942 \\
20 & 0.935 & 0.934 & **0.936** & 0.933 \\
25 & 0.915 & 0.911 & **0.917** & 0.915 \\
30 & 0.893 & 0.887 & **0.896** & 0.895 \\
35 & 0.872 & 0.866 & **0.875** & 0.873 \\
40 & 0.851 & 0.840 & **0.852** & 0.850 \\
45 & 0.825 & 0.815 & 0.827 & **0.827** \\
50 & 0.801 & 0.791 & 0.803 & **0.804** \\
55 & 0.772 & 0.764 & 0.777 & **0.780** \\ \hline \hline \end{tabular}
\end{table}
Table A5: Comparison of the accuracy of solutions returned by Algorithm 2, OMP, IRWL1 and BPD for different values of \(k\). Averaged over 100 trials for each parameter configuration.
\begin{table}
\begin{tabular}{c||c c c c} \hline \hline & \multicolumn{4}{c}{Execution Time (milliseconds)} \\ \hline \(N\) & Algorithm 2 & OMP & IRWL1 & BPD \\ \hline
100 & 2048.646 & **5.717** & 463.636 & 146.111 \\
200 & 334804.020 & **13.212** & 1109.263 & 234.263 \\
300 & 574501.859 & **25.141** & 1630.212 & 297.849 \\
400 & 601792.939 & **42.919** & 2181.636 & 351.717 \\
500 & 601424.020 & **72.535** & 2435.141 & 405.131 \\
600 & 601451.838 & **110.364** & 3118.465 & 433.626 \\
700 & 601572.848 & **166.525** & 3674.980 & 504.626 \\
800 & 601716.929 & **231.980** & 3865.788 & 540.859 \\ \hline \hline \end{tabular}
\end{table}
Table A3: Comparison of the execution time of solutions returned by Algorithm 2, OMP, IRWL1 and BPD for different values of \(n\). Averaged over 100 trials for each parameter configuration.
\begin{table}
\begin{tabular}{c||c c c c} \hline \hline \multicolumn{5}{c}{Sparsity Level} \\ \hline \(\alpha\) & Algorithm 2 & OMP & IRWL1 & BPD \\ \hline
[MISSING_PAGE_POST]
\hline \hline \end{tabular}
\end{table}
Table A7: Comparison of the sparsity of solutions returned by Algorithm 2, OMP, IRWL1 and BPD for different values of \(\alpha\). Averaged over 100 trials for each parameter configuration.
\begin{table}
\begin{tabular}{c||c c c c} \hline \hline \multicolumn{5}{c}{Execution Time (milliseconds)} \\ \hline \(K\) & Algorithm 2 & OMP & IRWL1 & BPD \\ \hline
10 & 305993.475 & **13.182** & 1270.000 & 341.454 \\
15 & 199128.374 & **12.556** & 1144.818 & 284.071 \\
20 & 119282.667 & **12.646** & 1080.535 & 278.596 \\
25 & 139224.525 & **13.263** & 1081.202 & 327.151 \\
30 & 171844.485 & **12.909** & 1169.798 & 314.192 \\
35 & 193257.535 & **12.798** & 1163.121 & 361.485 \\
40 & 231721.737 & **13.404** & 1151.455 & 277.647 \\
45 & 314269.394 & **13.495** & 1142.374 & 308.919 \\
50 & 351790.071 & **13.727** & 1219.707 & 315.081 \\
55 & 412429.717 & **14.010** & 1260.899 & 289.616 \\ \hline \hline \end{tabular}
\end{table}
Table A6: Comparison of the execution time of solutions returned by Algorithm 2, OMP, IRWL1 and BPD for different values of \(k\). Averaged over 100 trials for each parameter configuration.
\begin{table}
\begin{tabular}{c||c c c c} \hline \hline & \multicolumn{4}{c}{Execution Time (milliseconds)} \\ \hline \(\alpha\) & Algorithm 2 & OMP & IRWL1 & BPD \\ \hline
[MISSING_PAGE_POST]
\hline \hline \end{tabular}
\end{table}
Table A9: Comparison of the execution time of solutions returned by Algorithm 2, OMP, IRWL1 and BPD for different values of \(\alpha\). Averaged over 100 trials for each parameter configuration.
\begin{table}
\begin{tabular}{c||c c c c} \hline \hline & \multicolumn{4}{c}{Accuracy} \\ \hline \(\alpha\) & Algorithm 2 & OMP & IRWL1 & BPD \\ \hline
[MISSING_PAGE_POST]
\hline \hline \end{tabular}
\end{table}
Table A8: Comparison of the accuracy of solutions returned by Algorithm 2, OMP, IRWL1 and BPD for different values of \(\alpha\). Averaged over 100 trials for each parameter configuration.
2308.03484 | A Systematic Study of Associations between Supernova Remnants and
Molecular Clouds | We universally search for evidence of kinematic and spatial correlation of
supernova remnant (SNR) and molecular cloud (MC) associations for nearly all
SNRs in the coverage of the MWISP CO survey, i.e. 149 SNRs, 170 SNR candidates,
and 18 pure pulsar wind nebulae (PWNe) in 1 deg < l < 230 deg and -5.5 deg < b
< 5.5 deg. Based on high-quality and unbiased 12CO/13CO/C18O (J = 1--0) survey
data, we apply automatic algorithms to identify broad lines and spatial
correlations for molecular gas in each SNR region. The 91% of SNR-MC
associations detected previously are identified in this paper by CO line
emission. Overall, there could be as high as 80% of SNRs associated with MCs.
The proportion of SNRs associated with MCs is high within the Galactic
longitude less than ~50 deg. Kinematic distances of all SNRs that are
associated with MCs are estimated based on systemic velocities of associated
MCs. The radius of SNRs associated with MCs follows a lognormal distribution,
which peaks at ~8.1 pc. The progenitor initial mass of these SNRs follows a
power-law distribution with an index of ~-2.3 that is consistent with the
Salpeter index of -2.35. We find that SNR-MC associations are mainly
distributed in a thin disk along the Galactic plane, while a small amount
distributed in a thick disk. With the height of these SNRs from the Galactic
plane below ~45 pc, the distribution of the average radius relative to the
height of them is roughly flat, and the average radius increases with the
height when above ~45 pc. | Xin Zhou, Yang Su, Ji Yang, Xuepeng Chen, Yan Sun, Zhibo Jiang, Min Wang, Hongchi Wang, Shaobo Zhang, Ye Xu, Qingzeng Yan, Lixia Yuan, Zhiwei Chen, Yiping Ao, Yuehui Ma | 2023-08-07T11:27:09Z | http://arxiv.org/abs/2308.03484v2 | # A Systematic Study of Associations between Supernova Remnants and Molecular Clouds
###### Abstract
We universally search for evidence of kinematic and spatial correlation of supernova remnant (SNR) and molecular cloud (MC) associations for nearly all SNRs in the coverage of the MWISP CO survey, i.e. 149 SNRs, 170 SNR candidates, and 18 pure pulsar wind nebulae (PWNe) in \(1^{\circ}<l<230^{\circ}\) and \(-5.^{\circ}5<b<5.^{\circ}5\). Based on high-quality and unbiased \({}^{12}\)CO/\({}^{13}\)CO/C\({}^{18}\)O (J = 1-0) survey data, we apply automatic algorithms to identify broad lines and spatial correlations for molecular gas in each SNR region. Of the SNR-MC associations detected previously, 91% are identified in this paper by CO line emission. Overall, as many as 80% of SNRs could be associated with MCs. The proportion of SNRs associated with MCs is high at Galactic longitudes less than \(\sim 50^{\circ}\). Kinematic distances of all SNRs that are associated with MCs are estimated based on the systemic velocities of the associated MCs. The radius of SNRs associated with MCs follows a lognormal distribution, which peaks at \(\sim\)8.1 pc. The progenitor initial mass of these SNRs follows a power-law distribution with an index of \(\sim\)\(-2.3\) that is consistent with the Salpeter index of \(-2.35\). We find that SNR-MC associations are mainly distributed in a thin disk along the Galactic plane, with a small fraction distributed in a thick disk. For heights of these SNRs from the Galactic plane below \(\sim\)45 pc, the distribution of the average radius relative to height is roughly flat, while the average radius increases with height above \(\sim\)45 pc.
ISM (847) -- Supernova remnants (1667) -- Molecular clouds (1072) -- Galaxy structure (622)
## 1 Introduction
Supernova remnants (SNRs) release large amounts of momentum, energy, and heavy elements into the interstellar medium (ISM), which modify the physical and chemical properties of the ISM and trigger star formation. Such stellar feedback, affecting the next generation of star formation, primarily characterizes the nonlinear evolution of the ISM. SNRs are also prime candidates for Galactic cosmic ray (CR) sources. Some SNRs have bright \(\gamma\)-ray emission, which may originate from CR particles impacting on the surrounding dense medium (e.g., Aharonian et al., 2008; Giuliani et al., 2011; Fukui et al., 2021). These sources provide valuable opportunities to study CR acceleration in SNRs. Many SNRs originate from core-collapse supernova explosions of massive stars. Because their progenitor massive stars formed in molecular clouds (MCs) and had relatively short lifetimes, these SNRs are expected to be associated with their parent MCs. In addition, type Ia supernovae may also occur near dense molecular gases (e.g., Lee et al., 2004; Zhou et al., 2016; Chen et al., 2017). It is not clear how many SNRs are associated with MCs. Supposing half of SNRs originated from core-collapse supernovae (Reynoso & Mangum, 2001), more than half of SNRs may be associated with MCs.
Among the different lines of evidence of SNR-MC interaction, i.e. shifted and broadened CO line emission (e.g., Denoyer, 1979a; Seta et al., 1998; Kilpatrick et al., 2016, etc.), OH maser line emission at 1720 MHz (e.g., Goss & Robinson, 1968; Frail et al., 1994, 1996; Green et al., 1997; Wardle & Yusef-Zadeh, 2002; Yusef-Zadeh et al., 2003; Hewitt & Yusef-Zadeh, 2009, etc.), shock excited molecular line emission (e.g., Wootten, 1981, etc.), enhanced IR emission (e.g., Arendt, 1989; Saken et al., 1992; Reach et al., 2005, 2006; Hewitt et al., 2009, etc.), etc., molecular line emission is widely used in searching for associated MCs, since it originates from associated molecular gas and provides its local standard-of-rest (LSR) velocity information. To date, about 80 Galactic SNRs have been confirmed or suggested to be associated with MCs, out of about 300 Galactic SNRs (Gaensler et al., 2008; Jiang et al., 2010; Tian et al., 2010; Eger et al., 2011; Jeong et al., 2012; Frail et al., 2013; Chen et al., 2014; Fukuda et al., 2014; Su et al., 2014b; Zhou et al., 2014; Zhu et al., 2014; Zhang et al., 2015; Voisin et al., 2016; Zhou et al., 2016,c; de Wilt et al., 2017; Lau et al., 2017; Liu et al., 2017; Su et al., 2017b; Liu et al., 2018; Maxted et al., 2018; Su et al., 2018; Ma et al., 2019; Yu et al., 2019; Green, 2019; Ranasinghe & Leahy, 2022, and references therein). Most of these Galactic SNR-MC associations have been confirmed or suggested through CO or OH maser line emission. In particular, Kilpatrick et al. (2016) performed an effective systematic search for broad \({}^{12}\)CO (J=2-1) line regions toward 50 SNRs, and confirmed the detection in 19 SNRs including 9 newly identified ones. Sofue et al. (2021) also provided some supplementary morphological search results of CO line emission toward 63 Galactic SNRs in the 10\({}^{\circ}\leq l\leq 50^{\circ}\), \(|b|\leq 1^{\circ}\) region. Only a small percentage of SNRs are confirmed to be associated with MCs with clear evidence, e.g., uncontaminated broad CO line emission or OH 1720 MHz maser emission. Many SNRs are located in crowded regions with multiple MCs overlapping each other in the line-of-sight, hence, molecular lines toward these SNRs overlap each other. It is difficult to distinguish potentially disturbed molecular gases in these SNRs. In addition, physical conditions for the formation of OH 1720 MHz masers are strict, i.e. in regions with moderate temperatures and densities (\(T\sim 50\)-125 K, \(n_{\rm H_{2}}\sim 10^{5}\) cm\({}^{-3}\)) behind slow C-type shocks (Lockett et al., 1999; Hewitt & Yusef-Zadeh, 2009), and many SNRs do not have such physical conditions. The large number of potential but unconfirmed SNR-MC associations prohibits general statistical studies of MC environments around SNRs.
Intending to investigate SNR-MC associations, we present in this work a general study of CO line emission toward most SNRs in the northern sky, based on the Milky Way Imaging Scroll Painting (MWISP) unbiased survey of three CO isotope lines using the 13.7-meter millimeter wavelength telescope at Qinghai station. The survey data have appropriate spatial resolution, high sensitivity, and large coverage. Both kinematic evidence and spatial correlations are examined, and further statistical analyses of SNRs associated with MCs are performed.
## 2 Observations
SNRs in the coverage of the Milky Way Imaging Scroll Painting (MWISP1) project, i.e. nearly all SNRs in 1\({}^{\circ}<l<230^{\circ}\) and \(-5.^{\circ}5<b<5.^{\circ}5\), are investigated. The MWISP project is an unbiased survey of \({}^{12}\)CO/\({}^{13}\)CO/C\({}^{18}\)O (J = 1-0) emission lines (see Su et al., 2019; Sun et al., 2020, and references therein, for details), using the Purple Mountain Observatory Delingha (PMODLH) 13.7 m millimeter wavelength telescope (Zuo et al., 2011). The three CO lines were simultaneously observed by a \(3\times 3\) multibeam sideband separating superconducting receiver (Shan et al., 2012). The region is mapped via on-the-fly (OTF) observation mode, with a half-power beamwidth (HPBW) of \(\sim\)51\({}^{\prime\prime}\). Spectral resolutions of the three CO lines are \(\sim\)0.16 km s\({}^{-1}\) for \({}^{12}\)CO (J = 1-0), and \(\sim\)0.17 km s\({}^{-1}\) for \({}^{13}\)CO and C\({}^{18}\)O (J = 1-0). The typical rms noise level is about 0.5 K for \({}^{12}\)CO (J = 1-0); and about 0.3 K for \({}^{13}\)CO and C\({}^{18}\)O (J = 1-0), corresponding to their spectral resolutions. All data were processed using dedicated pipelines by MWISP working group and the GILDAS/CLASS package2. A linear fit was performed in the baseline subtraction. The OTF raw data was meshed with a grid spacing of 30\({}^{\prime\prime}\), and was corrected for beam efficiency using T\({}_{mb}\)=T\({}_{A}^{\star}\)/\(\eta_{mb}\). The data of the full extent of each SNR was extracted, covering an area of 2 to 4 times the size of the remnant. Moreover, for pulsar wind nebulae (PWNe), areas of 16 to 32 times their sizes are covered. SNRs G82.2+5.3 and G93.3+6.9 are not fully covered, nevertheless, more than half of their extents are covered.
Footnote 1: [http://www.radioast.nsdc.cn/mwisp.php](http://www.radioast.nsdc.cn/mwisp.php)
Radio continuum emission data at 200 MHz were obtained from the Galactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) survey3(Wayth et al., 2015; Hurley-Walker et al., 2017, 2019b). 1.4 GHz radio continuum emission data were obtained from the NRAO VLA Sky Survey (NVSS; Condon et al., 1998) and The HI/OH/Recombination line survey (THOR; Beuther et al., 2016; Wang et al., 2020). 4850 MHz radio continuum data were also obtained from the Green Bank 6-cm/Parkes-MIT-NRAO (GB6) survey (Condon et al., 1991, 1993, 1994). 20 cm radio continuum data are from the Multi-Array Galactic Plane Imaging Survey (MAGPIS4; Helfand et al., 2006). The information on SNRs in the Green (2019) catalog5 and the catalog of high-energy SNRs (SNRcat6; Ferrand & Safi-Harb, 2012) was used. Based on the radio continuum emission, we define circular regions to designate individual SNRs by visual inspection, mostly along their outermost bright radio continuum shell.
Footnote 2: [http://www.iram.fr/IRAMFR/GILDAS](http://www.iram.fr/IRAMFR/GILDAS)
Footnote 3: [https://www.mwatelescope.org/gleam](https://www.mwatelescope.org/gleam)
Footnote 4: [http://third.ucllnl.org/gps](http://third.ucllnl.org/gps)
Footnote 5: [http://www.mrao.cam.ac.uk/surveys/snrs/snrs.info.html](http://www.mrao.cam.ac.uk/surveys/snrs/snrs.info.html)
Footnote 6: [http://www.physics.umanitoba.ca/snr/SNRcat](http://www.physics.umanitoba.ca/snr/SNRcat)
## 3 Methods
Molecular gases impacted by an SNR shock will be disturbed in two respects, i.e. heating and macro-turbulence. Molecular gas behind a transmitted shock in a molecular clump will be accelerated and heated, which is usually detected as a broadened line wing in CO line profiles. In addition, molecular gas engulfed by an SNR will become cold as the SNR gets old, but still maintains the injected momentum. Hence, relative motion between different parts of molecular gas in the SNR, i.e. macro-turbulence, leads to the corresponding CO line profile being broader than that from surrounding quiescent molecular gas. The macro-turbulence in molecular gas injected by the SNR could be very common, especially in old SNRs where most of the dense shocked molecular gases become cold. Overall, these effects would result in broadened CO lines from shocked molecular gases. Unlike CO lines from molecular outflows in star-forming regions, these broadened CO lines usually present as extra peaks beside narrow CO lines from surrounding quiescent molecular gases. Such kinematic evidence in SNRs is very useful to confirm SNR-MC associations. It is noteworthy that such broadened CO lines might not be present in some young SNRs that are associated with MCs, because nearby CO molecules can be dissociated by UV radiation from progenitor massive stars or by the strong shock heating of these young SNRs, and, in general, the kinetic energy of the SNR's shock is not sufficiently converted into a molecular cloud (e.g., Inoue et al., 2012; Zhou et al., 2016; Celli et al., 2019; Sano et al., 2020, and references therein).
An SNR will also reshape nearby MCs, which can destroy less dense molecular gas and leave a cavity, or can accumulate molecular gas and form a shell (e.g., Seta et al., 1998; Inutsuka et al., 2015). Spatial correlation is commonly used as evidence of SNR-MC association. Many known SNR-MC associations are proposed based on their spatial correlations (see Table 2 in Jiang et al., 2010). In some cases, spatially correlated MCs originate from wind-blown bubbles of the SNR's progenitor, especially for very young SNRs or PWNe, for which kinematic evidence barely exists (Chevalier, 1999; Chen et al., 2013, and references therein). Note that the spatial correlation evidence is not as robust as the kinematic evidence, since overlapping of multiple MCs across the Galactic plane through the line of sight would complicate the spatial distribution of molecular gases.
Our goal is to search universally for evidence of kinematic and spatial correlation of SNR-MC associations. Thereafter, we also need to determine the associated spectral components, by comparing the broad-line and spatial-correlation search results, comparing them with the radio continuum emission of SNRs, and eliminating contamination from overlapping energetic objects. Finally, we can determine the systemic velocity of the MCs associated with SNRs. In this paper, the velocity of the intensity peak of the associated spectral component, which represents the local standard of rest (LSR) velocity of the main part of the molecular gas, is considered as the systemic velocity of the MCs associated with SNRs. Details of the search method are presented in Sections 3.1 and 3.2.
### Broad Line Identification
Kinematic evidence of SNR-MC associations would present as broadened lines, and the goal is to search for these broadened lines in the \({}^{12}\)CO (J=1-0) data. We first smooth all \({}^{12}\)CO (J=1-0) spectra by binning every three adjacent channels, to obtain a high signal-to-noise ratio in the broad line search. The spectral resolution after smoothing is 0.48 km s\({}^{-1}\), which is about half of the width of the narrowest \({}^{12}\)CO (J=1-0) line. Then, we decompose these spectra into individual spectral components, for which the minimum intensity of the bottoms is 1\(\sigma\), the minimum peak-to-valley intensity difference is 3\(\sigma\), and the minimum peak intensity is 5\(\sigma\). Broad line identifications are performed based on these spectral components. We identify broad lines first, and then find lines in the SNR region that are broader than those in the background, if any exist. Broad lines are identified by the following criteria (a schematic implementation is sketched after the list).
* The full width at half maximum (FWHM) of the spectral component be larger than 6 km s\({}^{-1}\).
* The kurtosis of the component be larger than 1.1 times that of the best-fitting Gaussian function with the same FWHM in the same velocity range.
* The square rooted variance of channels with intensity greater than the best-fitting Gaussian function be smaller than or equal to the square rooted variance of all channels of the whole component.
* All required values be over three times their \(\sigma\) errors.
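The following Python sketch illustrates the first three checks for a single decomposed component (velocity axis `v` in km s\({}^{-1}\), brightness temperature `T` in K). It is a simplified reading of the criteria: the best-fitting Gaussian is approximated by matching the component's peak and FWHM rather than by a least-squares fit, intensity-weighted moments are our assumption for how the kurtosis and variance are computed, and the \(3\sigma\) error requirement is omitted.

```
import numpy as np

def profile_moments(v, T):
    # Intensity-weighted mean, variance, and kurtosis of a line profile,
    # treating T(v) >= 0 as an (unnormalised) distribution over velocity.
    w = np.clip(T, 0.0, None)
    mu = np.sum(w * v) / np.sum(w)
    var = np.sum(w * (v - mu) ** 2) / np.sum(w)
    kurt = np.sum(w * (v - mu) ** 4) / (np.sum(w) * var ** 2) if var > 0 else np.nan
    return mu, var, kurt

def is_broad_component(v, T, fwhm_min=6.0):
    half = T.max() / 2.0
    inside = v[T >= half]
    fwhm = inside.max() - inside.min()            # criterion 1: FWHM > threshold
    if fwhm <= fwhm_min:
        return False
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    model = T.max() * np.exp(-0.5 * ((v - v[np.argmax(T)]) / sigma) ** 2)
    _, var_all, kurt_obs = profile_moments(v, T)
    _, _, kurt_gauss = profile_moments(v, model)
    if kurt_obs <= 1.1 * kurt_gauss:              # criterion 2: excess kurtosis
        return False
    excess = T > model                            # criterion 3: excess channels
    if not np.any(excess):
        return False
    _, var_excess, _ = profile_moments(v[excess], T[excess])
    return np.sqrt(var_excess) <= np.sqrt(var_all)
```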
As indicated by Larson's line width-size relationship, the typical FWHM of \({}^{12}\)CO (J=1-0) lines of normal, parsec scale, quiescent MCs is below \(\sim\)6 km s\({}^{-1}\) (Heyer and Dame, 2015). The \({}^{12}\)CO (J=1-0) line of most shocked molecular gases has FWHM greater than 6 km s\({}^{-1}\) (e.g., SNR HB3; Zhou et al., 2016). The FWHM of the \({}^{12}\)CO (J=1-0) line of some small shocked molecular clumps can be lower than 6 km s\({}^{-1}\); however, such clumps only occupy a small part of the shocked molecular gas in the SNR in most cases. Therefore, we adopted 6 km s\({}^{-1}\) as a lower limit of the FWHM of broad lines. Note that, for SNRs adjacent to MCs of much
larger size, a larger FWHM threshold will be applied to identify broad lines associated with SNRs in the following step, which depends on the FWHM of background spectral components. The information on the width of background lines can also help us to eliminate some line overlapping effects.
We use the kurtosis value to characterize the deviation from a Gaussian profile, where a larger value indicates a distribution with extra wings. Together with the variance of channels with intensity greater than the best-fitting Gaussian function, we can eliminate some line overlapping effects. As shown in Figure 1, components comprising a strong narrow line plus a weak broad wing are mostly selected, and those comprising a strong broad line plus a weak narrow line are eliminated. The CO emission of an MC shocked by an SNR usually presents as a strong narrow line plus a broadened line wing, where the narrow line originates from the quiescent matrix MC and the broad wing from shocked molecular gas. Narrow lines of some small molecular clumps in SNRs could be weaker than the broad lines, or there may be no narrow line left at all. However, in most cases, such small molecular clumps contribute only a small portion of the shocked molecular gas compared to that from the large matrix MC (e.g., Zhou et al., 2016), and the velocities of these broad lines are usually significantly shifted from those of the corresponding narrow lines, so they are easily distinguished as separate spectral components. Commonly, overlapping CO lines from different MCs in the line-of-sight do not have such features. For instance, in some star-forming regions, the central dense molecular gases are the most disturbed, and the broad CO lines there are stronger. These criteria are strict, which may eliminate some individual broad lines, but they provide more accurate results for SNRs in complicated backgrounds.
In the following step, we further select lines within the SNR region that are broader than those in the background. The SNR region is taken to be 1.1 times the size indicated by its radio continuum emission. These broader lines are considered as primary kinematic evidence of the SNR-MC association. The final FWHM threshold for selecting broad lines originating from SNRs satisfies the condition that their FWHM is larger than or equal to the average FWHM of background broad lines. In practice, some SNRs still have many broad lines selected at different velocities. In this case, we add an additional condition to select broad lines for reference. The additional condition requires that the final FWHM threshold be larger than the FWHM of all broad lines in at least one of several equally divided sub-regions in the background, which have areas comparable to that of the SNR region. Note that, for SNRs with simple backgrounds, this additional condition does not change the broad line identification results. This additional condition is hereafter referred to as the clean subbackground region condition. Thereby, we can better identify broad lines originating from SNRs in backgrounds where CO lines are broad, e.g., near the Galactic center.
As noted above, full broad line identification criteria are strict, which provide better accuracy but less completeness. Full broad line identification criteria plus the clean subbackground region condition are even more strict. As a complement, a second broad line identification procedure is performed, applying only partial criteria, i.e. FWHM and error criteria, and the final FWHM threshold being larger than or equal to the average FWHM of background broad lines. Broad lines identified only by partial criteria are considered as broad line candidates.
Finally, identified broad lines in an SNR are divided into different components, according to whether their intensity peaks lie in each other's velocity ranges. The systemic velocity of a broad line group is determined as the velocity of the overall intensity peak of the corresponding \({}^{13}\)CO (J=1-0) emission. If there is no significant \({}^{13}\)CO (J=1-0) emission, the systemic velocity is taken as the velocity of the overall intensity peak of the \({}^{12}\)CO (J=1-0) emission. It is common for broad lines to be adjacent to the associated narrow lines, since the SNR shock is slow when it is transmitted into a dense cloud. Broad lines are also usually strong near the observed boundary of shocked MCs, where the column density of shocked molecular gas is high, and the LSR velocities in the line-of-sight direction of these broad lines are low. The systemic velocity obtained based on broad lines would represent the LSR velocity of most of the quiescent molecular gas in most cases. Nevertheless, some broad lines can be shifted to velocities far away from the associated narrow lines, and are identified as separate broad line components. Therefore, further examination is needed to settle the systemic velocity of associated MCs, e.g., the spatial correlation examination that is introduced in Section 3.2.
### Spatial Correlation Coefficient
To examine the spatial correlation between SNRs and MCs, we calculate spatial correlation coefficients (abbreviated as SCCs below) between a series of circular rings and channel maps of \({}^{12}\)CO (J=1-0) emission. Positions and sizes of SNRs, which are used to derive the circular rings, are mostly determined by the radio continuum emission of SNRs. For preparation, the \({}^{12}\)CO (J=1-0) data is moment masked at first (see Dame, 2011, for reference), to eliminate noise that affects the result for weak emission. The \({}^{12}\)CO (J=1-0) channel maps are also linearly interpolated to an \(80\times 80\) pixel resolution and normalized to a maximum intensity of 1. The \(80\times 80\) pixel resolution simplifies the calculation, and enables
Figure 1: Test results of broad line identification. The spectrum comprising two Gaussian components is used for test, with standard deviations of \(\sigma_{1}\) and \(\sigma_{2}\) and the distance between their peaks of \(\Delta x_{0}\). The FWHM of the first Gaussian component is set to 3, i.e. \(\sigma_{1}\) is 1.27. Results are similar for other values of \(\sigma_{1}\). Black contours show the distribution of the kurtosis ratio between the spectrum and a best-fitting Gaussian function. Red contours show the distribution of the square rooted variance ratio between the part of the spectrum larger than the best-fitting Gaussian function and the whole spectrum. Panels in the left column show cases that the broad Gaussian component is weaker than the narrow component, with peak ratios of 0.2, 0.5, and 0.8 from top to bottom, respectively. The right column is opposite, with the ratios of peaks between the broad and narrow components of 5, 2, and 1.2 from top to bottom, respectively. Gray scale regions indicate where two Gaussian components can be resolved. Blue dashed lines denote regions selected by full broad line identification criteria.
the smallest circular ring to still be resolved. The minimum inner radius of the circular rings, corresponding to a quarter of the SNR radius, is at least 4 pixels, and the maximum inner radius is 20 pixels. The increment of the inner radius across the series of circular rings is 1 pixel. The thicknesses of the circular rings run from 2 to 30 pixels at intervals of 2 pixels. These parameter choices prove adequate for the following calculations. After these preparations, we calculate Spearman rank correlation coefficients between the \({}^{12}\)CO channel maps and the ring template images, \(\rho=\frac{\sum_{i}(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sqrt{\sum_{i}(x_{i}-\bar{x})^{2}\sum_{i}(y_{i}-\bar{y})^{2}}}\), where \(x_{i}\) and \(y_{i}\) are the rank-transformed pixel values of the channel map and the ring template, respectively. This SCC is large for molecular gas with a spatial distribution similar to a circular ring. Since the large number of pixels outside the SNR has a strong impact on the SCC, we only consider pixels inside the ring template or within twice the SNR radius. In the following step, we select the largest coefficient for each channel map and make a coefficient versus velocity channel plot. In the plot, velocity channels with coefficients over the 3\(\sigma\) level usually congregate into different groups that correspond to velocity components in the CO spectra. Groups of coefficients with peaks over the 5\(\sigma\) confidence level, or over the 3\(\sigma\) confidence level and larger than 0.5, are considered candidates for correlated components. If there is no coefficient over 5\(\sigma\) or larger than 0.5, the maximum coefficient over 3\(\sigma\) is given for reference. In addition, SCCs that consider only pixels inside the ring template or within the SNR radius are also calculated, which better probe thin molecular shell structures around an SNR. Note that the SCC can be underestimated in some cases, e.g., for correlations with partial shells or other irregular shapes; nevertheless, such correlations are still noticeable in the coefficient versus velocity plot in most cases.
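To make the procedure concrete, the sketch below computes the largest SCC between one channel map and the grid of ring templates described above, using SciPy's Spearman implementation. The helper names and the exact masking construction are illustrative assumptions, not the survey pipeline itself.

```python
import numpy as np
from scipy.stats import spearmanr

def ring_template(shape, center, r_in, thickness):
    """Binary ring image: 1 inside the ring, 0 elsewhere."""
    yy, xx = np.indices(shape)
    r = np.hypot(xx - center[0], yy - center[1])
    return ((r >= r_in) & (r < r_in + thickness)).astype(float)

def max_scc(channel_map, center, snr_radius_pix):
    """Largest Spearman coefficient between one CO channel map and the
    ring-template grid; only pixels inside the ring or within twice the
    SNR radius are considered, as in the text."""
    yy, xx = np.indices(channel_map.shape)
    r = np.hypot(xx - center[0], yy - center[1])
    best = -1.0
    r_in_min = max(4, int(round(0.25 * snr_radius_pix)))  # at least 4 pixels
    for r_in in range(r_in_min, 21):        # inner radius, 1-pixel steps
        for th in range(2, 31, 2):          # thickness, 2-pixel steps
            ring = ring_template(channel_map.shape, center, r_in, th)
            mask = (ring > 0) | (r < 2 * snr_radius_pix)
            rho, _ = spearmanr(channel_map[mask], ring[mask])
            if np.isfinite(rho):
                best = max(best, rho)
    return best
```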
### Demonstration
We first apply our search method to several known SNR-MC associations for demonstration, which also helps us optimize the parameters of the algorithm. Among these known SNR-MC associations, SNR IC 443, SNR HB 3, and SNR G16.0\(-\)0.5 are representative of SNRs with clear, simple, and complicated CO emission backgrounds, respectively.
#### 3.3.1 Ic 443
The association between SNR IC 443 and MCs has been well established in previous works. The variety of molecular lines studied for this remnant is the most complete, providing a good understanding of the physics of the interaction between the SNR shock and the molecular gas (e.g., van Dishoeck et al., 1993). There are nine shocked molecular clumps in IC 443 that exhibit broad \({}^{12}\)CO (J=1-0) line emission (Denoyer, 1979a; Huang et al., 1986; Dickman et al., 1992; Lee et al., 2012a). These clumps are indicated in Figure 2. Eight of them are labeled A through H, following the nomenclature of Dickman et al. (1992), and the remaining one, adjacent to clump C, is labeled C\({}^{\prime}\); it was named SC 05 in the study of Lee et al. (2012a). Note that we apply a circular region to denote the remnant in our analysis, defined following the northern bright radio continuum shell of the remnant; there is also radio continuum emission in the south outside this region. As shown in Figure 2, we identify broad lines in five clumps by the full criteria and in eight clumps by the partial criteria. The broad \({}^{12}\)CO (J=1-0) line emission in clump A is too weak, with \(T_{A}^{*}\sim 0.3\) K (Denoyer, 1979a), which is below our detection limit; hence, we find no broad line there. The broad line emission in clump H is also weak, and we only identify broad lines there by the partial criteria. In clumps B and C, the broad lines are far blue-shifted and separated from the associated narrow line component, so they are only identified by the partial identification criteria. We evaluate the accuracy of the broad line identification by \((n_{p,SNR}-n_{p,outside})/(n_{p,SNR}+n_{p,outside})\), where \(n_{p,SNR}\) and \(n_{p,outside}\) are the numbers of broad line points per unit area inside and outside the SNR region, respectively. For IC 443, the broad line identifications using either the full or the partial criteria have an accuracy of 1. This benefits from the clean MC background toward this SNR in the outer Galactic region. We also evaluate the completeness of the broad line identification by the number ratio between identified broad lines and all lines with FWHM over 6 km s\({}^{-1}\). For IC 443, the completeness of the broad line identifications using the full and partial criteria is \(\sim\)0.20 and 1, respectively. Some broad lines separated from the associated narrow component, or stronger than the narrow component, are eliminated by the full criteria. Since there is no contamination from background line emission, the identification by the partial criteria is the most effective.
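The two quality metrics used above are simple ratios; the sketch below encodes them with hypothetical argument names. As a worked example, for IC 443 all identified broad line points lie inside the remnant region, so \(n_{p,outside}=0\) and the accuracy evaluates to 1.

```python
def identification_accuracy(n_in, area_in, n_out, area_out):
    """(n_p,SNR - n_p,outside) / (n_p,SNR + n_p,outside), where n_p is
    the number of broad line points per unit area inside/outside the
    SNR region."""
    p_in, p_out = n_in / area_in, n_out / area_out
    return (p_in - p_out) / (p_in + p_out)

def identification_completeness(n_identified, n_wide):
    """Number ratio between identified broad lines and all lines with
    FWHM over 6 km/s."""
    return n_identified / n_wide
```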
For SNR IC 443, the SCC has a maximum value of 0.24 at \(-2.1\) km s\({}^{-1}\), which is above the 3\(\sigma\) but below the 5\(\sigma\) level. The velocity of the maximum SCC is consistent with that of the molecular gas associated with the remnant. As seen in Figure 2, the associated molecular gas presents some partial shell structures surrounding the remnant. These partial shell structures result in large SCCs; however, the bright central CO emission makes the enhancement of the SCC less significant. These associated partial shells, which are possibly swept up by the stellar wind of the remnant's progenitor, were studied by Su et al. (2014a).
Both the kinematic evidence and the spatial correlation result indicate that SNR IC 443 is associated with the \(-3.0\) km s\({}^{-1}\) velocity component. Based on a full distance probability density
Figure 2: Integrated intensity and index-velocity maps of \({}^{12}\)CO (J=1–0) emission toward SNR IC 443. Both intensity maps in the left column are integrated over the velocity range of \(-15.9\) to \(-1.6\) km s\({}^{-1}\), which covers most of the broad lines. The red dashed circle denotes the remnant region, defined by visual inspection according to the bright radio continuum shell of the remnant. Regions A, B, C, D, E, F, G, and H are adopted from Denoyer (1979a) and Dickman et al. (1992), where shocked molecular gas was detected via \({}^{12}\)CO (J=1–0) emission. Region C\({}^{\prime}\) is an additional shocked molecular clump, adjacent to region C. Locations of broad lines identified by the full (upper) and partial (lower) criteria are denoted by small red stars. The minimum value of \({}^{12}\)CO emission intensity shown in the colorbar is at the 5\(\sigma\) confidence level. The beam is represented by a black circle in the lower left corner; this beam designation is used in all intensity maps in the paper. Index-velocity maps in the right column contain all significant \({}^{12}\)CO (J=1–0) emission lines in the 1.1 times enlarged remnant region, with intensity normalized and logarithmically scaled for better visibility. The index indicates the arrangement of the spectra, sorted by the distance of each point from the SNR center; some distances are shown on the right axis. The center of each region adopted from previous works in the intensity maps is also shown in the index-velocity maps. Broad lines identified by the full (upper) and partial (lower) criteria are in red, and the remaining lines with FWHM over 6 km s\({}^{-1}\) are in blue.
function\({}^{7}\) (Reid et al., 2016, 2019), the kinematic distance of the \(-3.0\) km s\({}^{-1}\) component is estimated as 2.10\(\pm\)0.03 kpc, which is located in the Perseus spiral arm. Note that Yu et al. (2019) also measured the distance of the associated MC by dust extinction estimation as 1729\({}^{+116}_{-94}\) pc.
Footnote 7: [http://bessel.vlbi-astrometry.org/node/378](http://bessel.vlbi-astrometry.org/node/378)
#### 3.3.2 Hb 3
SNR HB 3 was initially suggested to be associated with MCs based on their spatial correlation (Landecker et al., 1987; Routledge et al., 1991), and this was further supported by broadened CO lines and H\({}_{2}\) 2.12 \(\mu\)m emission detected in later studies (Zhou et al., 2016; Rho et al., 2021). Note that broadened \({}^{12}\)CO (J=2-1) line emission toward this remnant was detected by Kilpatrick et al. (2016), but was suggested to be associated with the H ii region W3 (OH) rather than with HB 3. SNR HB 3 is associated with the nearby H ii region/MC complex W3 (e.g., Zhou et al., 2016); hence, there is broad line emission in the background. The line overlapping effect toward this SNR, in the direction of the outer Galaxy, is not significant. All shocked molecular clumps found by the detailed examination of Zhou et al. (2016) are also identified here by the partial criteria, i.e., in regions a1, a2, b1, b2, c, and d (see Figure 3). Many broad lines outside the SNR region are also identified, all of which are located in the star-forming region W3 (see Figure 3) and are possibly associated with H ii regions within it. The accuracy and completeness of the broad line identification using the partial criteria are 0.98 and 0.61, respectively. Since the broad line identification is barely affected by the line overlapping effect for this SNR, the accuracy using the partial criteria is high. Fewer broad lines are identified in the remnant using the full criteria, and the corresponding accuracy is evaluated as 0.94. The number ratio between broad lines identified inside and outside the remnant using the full criteria is slightly lower than that using the partial criteria. The completeness of the broad line identification using the strict full criteria is as low as 0.11.
In the SCC plot in the upper left panel of Figure 4, which considers pixels inside the ring templates or within twice the SNR radius, a component with a peak above the 5\(\sigma\) confidence level indicates a correlation between SNR HB 3 and molecular gas at \(\sim\)\(-42.2\) km s\({}^{-1}\). The SCC result considering pixels inside the ring templates or within the SNR radius also supports this correlation (see the panels in the bottom row of Figure 4). There is a molecular shell at \(\sim\)\(-42.2\) km s\({}^{-1}\) that is spatially well correlated with the bright radio continuum shell of the remnant (also seen in Zhou et al., 2016). Since the molecular shell occupies only about a quarter of the ring surrounding the remnant (see the right panel of Figure 4), the peak of the SCC is only 0.22, which is an underestimate.
The kinematic evidence and the spatial correlation result indicate that SNR HB 3 is associated with the \(-44.0\) km s\({}^{-1}\) velocity component. Based on the full distance probability density function, the kinematic distance of the \(-44.0\) km s\({}^{-1}\) component is estimated as 1.96\(\pm\)0.04 kpc, which is located in the Perseus spiral arm.
#### 3.3.3 G16.0-0.5
Beaumont et al. (2011) applied the Support Vector Machine (SVM) machine learning algorithm to classify broadened \({}^{12}\)CO (J=3-2) lines in SNR G16.0\(-\)0.5, and found shocked shell-like molecular gas in a wide velocity range from \(-\)5 to +90 km s\({}^{-1}\). Near-infrared H\({}_{2}\) lines at +51 km s\({}^{-1}\) were also detected toward this SNR (Lee et al., 2020). SNR G16.0\(-\)0.5 is located in the direction of the inner Galaxy, where the line overlapping effect is serious, and strict criteria are needed to identify broad lines against such a complicated background. Here, applying the full criteria, we identify three broad line components in the SNR, at velocities of +43.7, +67.9, and +87.5 km s\({}^{-1}\). The identification accuracy for the +67.9 km s\({}^{-1}\) component is the highest, at 0.83, and the accuracies of the other two components are around 0.3. The total completeness is only \(\sim\)0.08. The high accuracy and low completeness indicate an efficient suppression of the line overlapping effect. The +43.7 and +67.9 km s\({}^{-1}\) broad line components are also identified when using the full criteria plus the clean subbackground region condition. Accordingly, the identification accuracies of the two components improve to 0.59 and 0.92, respectively, while the overall completeness drops to 0.03. In contrast, broad line identification using the partial criteria appears greatly affected by the line overlapping effect, with broad line candidates identified at many velocities. Nevertheless, many broad line candidates at +67.9 km s\({}^{-1}\) are identified using the partial criteria, and most of them are distributed within the remnant (see Figure 5). There are also many broad line candidates identified at +43.7 km s\({}^{-1}\) by the partial criteria, which are widely distributed inside and outside the remnant. As shown in Figure 5, the shocked molecular shell detected by \({}^{12}\)CO (J=3-2) emission in the remnant (Beaumont et al., 2011) is mainly composed of gas of the +67.9 km s\({}^{-1}\) velocity component, which also follows the peak of the remnant's radio continuum shell well. Some molecular gas of the +43.7 km s\({}^{-1}\) velocity component contributes to the shocked molecular shell outside the remnant.
The peak of the SCC for SNR G16.0\(-\)0.5, considering pixels inside the ring templates or within twice the SNR radius, is at \(\sim\)+63.3 km s\({}^{-1}\) and above 0.5 (see the upper left panel of Figure 6). In addition, the \(\sim\)+63.3 km s\({}^{-1}\) component
Figure 3: Same as Figure 2, but for SNR HB 3. Both integrated intensity maps in the left column are in the velocity range of \(-57.3\) to \(-26.8\) km s\({}^{-1}\). Regions a1, a2, b1, b2, c, and d are adopted from Zhou et al. (2016c), which indicate positions of shocked molecular clumps. All broad lines outside the SNR region are located in the star-forming region W3.
Figure 4: Maximum spatial correlation coefficients (SCCs) at each velocity channel and \({}^{12}\)CO (J=1–0) intensity maps at the velocity channel of the SCC peak for SNR HB 3. Dashed and dotted lines in the panels in the left column denote the 5\(\sigma\) and 3\(\sigma\) levels, respectively. Black circles in the panels in the right column indicate the annular template applied in the calculation to obtain the maximum SCC. The red dashed circle is the same as in Figure 3. Upper panels show SCC results considering pixels inside the ring template or within twice the SNR radius, which have a significant peak at \(\sim\)\(-\)42.2 km s\({}^{-1}\). For comparison, the peak of the SCC at \(\sim\)\(-\)41.6 km s\({}^{-1}\) in the lower panels, considering pixels inside the ring template or within the SNR radius, is not above 5\(\sigma\); however, it distinguishes the molecular shell structure around the SNR more precisely.
Figure 5: Same as Figure 2, but for SNR G16.0\(-\)0.5. The two integrated intensity maps of \({}^{12}\)CO (J=1–0) emission in the upper row are in the velocity ranges of +35 to +50 km s\({}^{-1}\) (left) and +60 to +80 km s\({}^{-1}\) (right). Locations of broad lines at velocities of +43.7 and +67.9 km s\({}^{-1}\) are denoted by small stars on the corresponding left and right integrated intensity maps, respectively. Broad lines identified by the partial criteria are in pink, those by the full criteria are in red, and those by the full criteria plus the clean subbackground region condition are in blue. Cyan contours in both integrated intensity maps are at half the maximum column density of shocked molecular gas obtained by Beaumont et al. (2011). The GLEAM 200 MHz radio continuum emission map, in power-law scale, is superimposed on the upper left corner of the left integrated intensity map, overlaid with the same red dashed circle as in the integrated intensity maps. In the index-velocity maps in the bottom row, broad lines identified by the full criteria plus the clean subbackground region condition (left), the full criteria (middle), and the partial criteria (right) are in red.
Figure 6: Same as Figure 4, but for SNR G16.0\(-\)0.5. The red dashed circle in the right panel is the same as in Figure 5. The peak of the SCC at \(\sim\)+63.3 km s\({}^{-1}\) in the upper panels is above 0.5; nevertheless, the peak of the SCC in the lower panels at \(\sim\)+37.6 km s\({}^{-1}\) is not as significant.
in the SCC versus velocity plot is wide. There is shell-like molecular gas at \(\sim\)+63.3 km s\({}^{-1}\) spatially correlated with the southeastern radio continuum shell of the remnant, which supports the association between the remnant and the +67.9 km s\({}^{-1}\) broad line component. Nevertheless, when considering pixels inside the ring templates or within the SNR radius, the SCC has a nonsignificant peak at \(\sim\)+37.6 km s\({}^{-1}\). Molecular gas at \(\sim\)+37.6 km s\({}^{-1}\), which belongs to the +43.7 km s\({}^{-1}\) broad line component, is distributed around the southeastern and southwestern boundaries.
There is not much \({}^{12}\)CO (J=3-2) emission detected around +67.9 km s\({}^{-1}\), as shown by Beaumont et al. (2011); \({}^{12}\)CO (J=3-2) line emission is efficient at tracing hot shocked molecular gas but not cold quiescent molecular gas. Based on the spatial correlation result, as well as the broad CO lines identified with high accuracy, the SNR is probably associated with the +67.9 km s\({}^{-1}\) component. However, we cannot totally rule out the possibility of an association between the SNR and the +43.7 km s\({}^{-1}\) component. Note that a further study of the expanding gas motion suggests that the +43.7 km s\({}^{-1}\) component is associated with another object, which supports the association between SNR G16.0\(-\)0.5 and the +67.9 km s\({}^{-1}\) component (Zhou et al., 2023, in preparation). Based on the full distance probability density function, the kinematic distance of the +67.9 km s\({}^{-1}\) velocity component is estimated as 3.9\(\pm\)0.3 kpc, which is located in the Norma spiral arm. Kinematic distances of the +43.7 km s\({}^{-1}\) component are estimated as 3.2\(\pm\)0.3 and 4.0\(\pm\)0.3 kpc; the +43.7 km s\({}^{-1}\) velocity component might also be located in the Norma spiral arm, as indicated by the full distance probability density function.
## 4 Results
We universally search for kinematic and spatial correlation evidence of SNR-MC association for 149 SNRs, 170 SNR candidates (SNRCs), and 18 pure PWNe in our coverage. We first apply the full criteria to identify broad lines associated with SNRs, and then apply the partial criteria to those with no broad lines found by the full criteria. Broad lines identified by the partial criteria are considered candidates. In some SNRs, more than one broad line component is found at different velocities. Multiple broad line components found in one SNR might all belong to it, originating from molecular gas pushed toward and away from us at the near and far sides of the remnant, respectively. However, such multiple components might also originate from contamination by the line overlapping effect or by overlaid energetic objects. In particular, in directions perpendicular to the circular motion of the solar system barycenter around the Galactic center, i.e., around l\(\sim\)0\({}^{\circ}\) and l\(\sim\)180\({}^{\circ}\), MCs in different spiral arms have systemic velocities close to each other, and the line overlapping effect is severe. Some lines are intrinsically indistinguishable in observations, which affects the broad line identification results. For SNRs with many broad lines identified at different velocities even by the full criteria, we use the broad lines identified by the full criteria plus the clean subbackground region condition as a reference. Our search method greatly suppresses the line overlapping effect. In addition, further examination is performed to settle the systemic velocity of the MCs associated with SNRs. Spatial correlations are also examined to support the kinematic evidence, or are provided as independent evidence for SNR-MC association candidates. For SNRs and SNRCs within 10\({}^{\circ}\) of l\(\sim\)0\({}^{\circ}\) or within 5\({}^{\circ}\) of l\(\sim\)180\({}^{\circ}\), where broad line identifications are affected by serious line overlapping effects, we search for spatial correlation evidence first and then choose those supported by identified broad line candidates. For the 18 pure PWNe, to identify candidates of associated MCs, we also search for spatial correlation evidence over a large extent first, and then further examine related broad lines identified by the full criteria.
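The overall decision flow for a single object can be summarized schematically. The sketch below is a deliberately simplified reading of the procedure just described (the real analysis adds per-object examination, H ii region checks, and the clean subbackground region condition), and all names and the exact combination logic are assumptions for illustration.

```python
def search_one_object(broad_full, broad_partial, scc_sigma, scc_value,
                      near_l0_or_l180=False):
    """Simplified per-object decision flow (illustrative only)."""
    # Spatial correlation evidence, using the Section 3.2 thresholds.
    spatial = scc_sigma >= 5 or (scc_sigma >= 3 and scc_value > 0.5)
    if near_l0_or_l180:
        # Severe line overlapping: look for spatial evidence first,
        # then require a supporting broad line (candidate) detection.
        return "associated" if spatial and (broad_full or broad_partial) else "none"
    if broad_full:        # kinematic evidence by the full criteria
        return "associated"
    if broad_partial:     # broad line candidates only
        return "possibly associated"
    return "spatial candidate" if spatial else "none"
```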
Table 1: SNR-MC association results for known SNRs (short version; the full version is available in electronic form). Columns include the SNR name (with other name), Galactic longitude and latitude, coverage, type, the \(V_{\rm LSR}\) of broad lines, the \(V_{\rm LSR}\) of spatially correlated molecular gas (with the corresponding spatial correlation coefficient in brackets), the adopted systemic velocity, and the distance.
Table 3: SNR-MC associations detected in previous works. Columns: SNR (other name); OH 1720 MHz maser \(V_{\rm LSR}\) (km s\({}^{-1}\)); CO broad line (BL) \(V_{\rm LSR}\) (km s\({}^{-1}\)); CO spatial correlation (SC) \(V_{\rm LSR}\) (km s\({}^{-1}\)); other evidence with its \(V_{\rm LSR}\) (km s\({}^{-1}\)); and references, keyed to the numbered list continuing below from entry (46)
Ranasinghe & Leahy (2018a); (47) Frail et al. (1996); (48) Liu et al. (2017); (49) Su et al. (2009); (50) Leahy & Tian (2008a); (51) Reach & Rho (1999); (52) Ranasinghe & Leahy (2017); (53) Reynolds & Moffett (1993); (54) Wilner et al. (1998); (55) Gustdorf et al. (2014); (56) Koralesky et al. (1998); (57) Zhou & Chen (2011); (58) Zhou et al. (2016a); (59) Kuriki et al. (2018); (60) Green & Dewdney (1992); (61) Giacani et al. (2009); (62) Stanimirovic et al. (2003); (63) McEwen (2016); (64) Hoffman et al. (2005); (65) Anderl et al. (2014); (66) Sashida et al. (2013); (67) Seta et al. (2004); (68) Seta et al. (1988b); (69) Wootten (1977); (70) Yamada et al. (2017); (71) Yoshilie et al. (2013); (72) Caswell et al. (1975b); (73) Zhu et al. (2013); (74) Paron & Giacani (2010); (75) Su et al. (2011); (76) Lee et al. (2009); (77) de Ona Wilhelmi et al. (2020); (78) Su et al. (2018); (79) Huang et al. (1983); (80) Liu et al. (2020); (81) Yang et al. (2006); (82) Duvidovich et al. (2020); (83) Jiang et al. (2010); (84) Safi-Harb et al. (2005); (85) Chen et al. (2014); (86) Zhu et al. (2014); (87) Sano et al. (2021); (88) Zhou et al. (2022); (89) Zhou et al. (2020b); (90) Supan et al. (2022); (91) Brogan et al. (2000); (92) Koo & Moon (1997); (93) Tian & Leahy (2013); (94) Giacani et al. (1998); (95) Lee et al. (2012b); (96) Koo et al. (2008); (97) Leahy et al. (2008); (98) Junkes et al. (1992); (99) Zhou et al. (2020a); (100) Kothes et al. (2018); (101) Xu & Wang (2012); (102) Wallace et al. (1997); (103) Matheson et al. (2016); (104) Koo et al. (1993); (105) Jeong et al. (2012); (106) Huang & Thaddeus (1986); (107) Cho et al. (1994); (108) Kothes et al. (2003); (109) Liu et al. (2018); (110) Cong (1977); (111) Pollock (1985); (112) Higgs et al. (1983); (113) Feldt & Green (1993); (114) Koo et al. (2001); (115) Byun et al. (2006); (116) Dobashi et al. (2019); (117) Tatematsu et al. (1990b); (118) Uyaniker et al. (2002); (119) Jeong et al. (2013); (120) Kothes et al. (2001); (121) Sasaki et al. (2006); (122) Tatematsu et al. (1987); (123) Tatematsu et al. (1990a); (124) Kothes et al. (2002); (125) Ma et al. (2019); (126) Klupatrick et al. (2014); (127) Zhou et al. (2018); (128) Reynoso & Goss (2002); (129) Keohane et al. (1996); (130) Willis & Dickel (1971); (131) Cai et al. (2009); (132) Lee et al. (2004); (133) Zhou et al. (2016b); (134) Chen et al. (2017b); (135) Reynoso et al. (1999); (136) Zhou et al. (2014); (137) Green & Gull (1982); (138) Roberts et al. (1993); (139) Kothes (2013); (140) Zhou et al. (2016c); (141) Routledge et al. (1991); (142) Landecker et al. (1987); (143) Rho et al. (2021); (144) Kothes et al. (2014); (145) Leahy & Tian (2007); (146) Arias et al. (2019); (147) Landecker et al. (1989); (148) Chen et al. (2017a); (149) Denoyer (1979a); (150) Denoyer (1979b); (151) Hewitt et al. (2006); (152) Lee et al. (2012a); (153) White et al. (1987); (154) Zhang et al. (2010); (155) Turner et al. (1992); (156) van Dishoeck et al. (1993); (157) Su et al. (2014a); (158) Cornett et al. (1977); (159) Ambrocio-Cruz et al. (2017); (160) Zhang et al. (2013); (161) Rosado et al. (2007); (162) Burton et al. (1988); (163) Dell'Ova et al. (2020); (164) Hezareh et al. (2013); (165) Xu et al. (2011); (166) Su et al. (2017b); (167) Aharonian et al. (2007); (168) Oliver et al. (1996); (169) Gao et al. (2020); (170) Castelletti et al. (2017); (171) Su et al. (2017a); (172) Ranasinghe & Leahy (2022).
The results for SNRs and SNRCs are listed in Tables 1 and 2, respectively. We present only short versions of Tables 1 and 2 here; the full versions are available in electronic form. The systemic velocities of the MCs associated with SNRs and SNRCs are settled, and the corresponding kinematic distances are estimated by the full distance probability density function (Reid et al. 2016, 2019). Note that CO (J=1-0) emission is advantageous in determining the systemic velocity of an associated MC, since it provides sufficient information about the quiescent molecular gas. The systemic velocity is a directly observed quantity, while the kinematic distance is model dependent; to avoid bias between different models, we uniformly apply the full distance probability density function to estimate kinematic distances here. We also considered H i absorption results available in previous works for some SNRs, to discriminate between near and far kinematic distances. For comparison, we also list the results of SNR-MC associations studied in previous works in Table 3 (see also Table 1 in Ranasinghe & Leahy 2022, for a summary of known SNR distances). About 60% of known SNRs were found to be associated with MCs in previous works; however, new SNR-MC associations have still been gradually discovered in recent years. For SNRCs and PWNe, the study of the MCs around them is still very limited: only about 4% and 11%, respectively, were found to be associated with MCs in previous works. It is worth noting that in some SNRs different evidence gives different velocity results. On the other hand, the interaction details of some SNR-MC associations have been very well studied, with the shocked molecular gas investigated through multiband observations, e.g., W28, W44, IC 443, etc. (see Table 3 for details).
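For orientation, the geometry underlying a kinematic distance can be sketched with a flat rotation curve, although the paper itself relies on the Reid et al. (2016, 2019) full distance probability density function, which additionally folds in spiral-arm and parallax information. The rotation-curve constants below are the Reid et al. (2019) values, and the function name is an assumption for illustration.

```python
import numpy as np

R0, THETA0 = 8.15, 236.0  # kpc, km/s (Reid et al. 2019)

def kinematic_distances(l_deg, b_deg, v_lsr):
    """Near/far kinematic distances (kpc) for a flat rotation curve;
    a rough illustration only. For the outer Galaxy (|l| > 90 deg)
    only the far root is physical."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    slcb = np.sin(l) * np.cos(b)
    # Galactocentric radius from V_LSR = THETA0 * (R0/R - 1) * sin(l)cos(b)
    R = R0 * THETA0 * slcb / (THETA0 * slcb + v_lsr)
    disc = R**2 - (R0 * np.sin(l))**2
    if disc < 0:
        return None  # velocity beyond the tangent point at this longitude
    root = np.sqrt(disc)
    d_near = (R0 * np.cos(l) - root) / np.cos(b)
    d_far = (R0 * np.cos(l) + root) / np.cos(b)
    return d_near, d_far
```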
We present details of the search results for SNRs and SNRCs in Sections 4.1 and 4.2, respectively. In Section 4.3, we discuss the search results for pure PWNe. Corresponding figures for individual objects are available online (Zhou et al. 2023).
### SNRs
The SNR-MC association results are summarized in Table 1. In order to better confirm the association evidence, further investigations, e.g., of spatial correlations with partial radio continuum shells or of the locations of broad lines, are performed for some objects, especially those with more than one broad line component identified. We also make efforts to eliminate contamination from other overlapping energetic sources, e.g., H ii regions. All considered H ii regions are taken from the WISE catalog of Galactic H ii regions (Anderson et al. 2014), which is one of the most complete catalogs of H ii regions in the Galaxy. The systemic velocities of H ii regions overlapping SNRs are listed in Table 1. \({}^{12}\)CO or \({}^{13}\)CO velocities of the H ii regions are used for better comparison with our data; if these do not exist, velocities from other tracers are used. As noted by Anderson et al. (2014), the velocities of most H ii regions from different tracers are consistent: the mean absolute velocity difference between molecular and ionized gas emission is \(\sim\)4 km s\({}^{-1}\).
Among the 149 SNRs studied in this paper, 57 are found to be associated with MCs, and 70 are considered to be possibly associated with MCs. 40 of the 57 SNR-MC associations and 43 of the 70 possible SNR-MC associations were studied in previous works (see Table 3). There are also 7 SNRs with no associated MC found here that were suggested to be associated with surrounding clouds in previous works, mostly based on spatial correlations with H i gas. A few SNR-MC association results in this paper differ from those in previous works, including 7 SNR-MC associations and 4 possible SNR-MC associations, i.e., G9.9-0.8, G11.1+0.1, G15.4+0.1, G16.0-0.5, G18.9-1.1, G24.7+0.6, G41.5+0.4, G21.5-0.9, G29.7-0.3, G84.2-0.8, and G93.7-0.2. We discuss individual SNRs in Appendix A.
### SNR Candidates
We list all results for SNRC-MC associations in Table 2. Most SNRCs in our coverage have been discovered only recently (Kassim, 1988; Gorham, 1990; Gray, 1994; Trushkin, 2001; Brogan et al., 2006; Helfand et al., 2006; Gerbrandt et al., 2014; Anderson et al., 2017; Hurley-Walker et al., 2019; Gao et al., 2020; Dokara et al., 2021; Ranasinghe et al., 2021) and are not well studied by multiband observations. An approach similar to that used for SNR-MC associations is applied to investigate the associations between SNRCs and MCs. All types of SNRCs are examined in the same way; in particular, no enlargement of the coverage is applied for plerionic SNRCs.
170 SNRCs are examined, and 50 of them are suggested to be associated with MCs. There are also 91 SNRCs considered to be possibly associated with MCs. Of all the SNRCs, only six were studied in previous works; five of these are considered possible SNRC-MC associations here and one a fixed SNRC-MC association. The result for one of them differs from that in previous works, i.e., G28.56+0.00. We discuss individual SNRCs in Appendix B.
Table 4: PWN-MC association candidate results. Columns: PWN, Galactic longitude, Galactic latitude, coverage, spatially correlated \(V_{\rm LSR}\) (with coefficient), broad line (BL) detection, systemic velocity \(V_{\rm sys}\), and distance.
### PWNe
We examine the MCs around 18 pure PWNe here, to search for candidates of associated MCs. Our results for PWN-MC association candidates are presented in Table 4, and the kinematic distances of the PWN-MC association candidates are estimated. Some of these PWNe have distance estimates from previous works, which are comparable to the distance results here; note that most distance measurements in previous works are for the associated pulsars. Only two PWNe have had their surrounding CO emission studied in previous works, i.e., G63.7+1.1 and CTB 87, and possible associations with MCs are found for both here. There are another two PWNe with surrounding H i emission studied in previous works, i.e., 3C 58 and G141.2+5.0; however, no associated MCs are found for them here.
We discuss individual PWNe in Appendix C.
## 5 Discussion
### Proportion of SNRs associated with MCs
The ratios of SNRs associated with MCs along the Galactic longitude are shown in Figure 7, including fixed and all SNR-MC associations among known and all SNRs, respectively. All SNRs here include both known and candidate SNRs. The ratio of previously detected SNR-MC associations among known SNRs is also shown for reference. All ratios have similar distributions, which are large within a Galactic longitude of \(\sim\)50\({}^{\circ}\), consistent with the distribution of the ratio of previously detected SNR-MC associations. If all previously detected SNR-MC associations are real, the accuracy of the fixed SNR-MC associations suggested in this paper would be at least 70%, and at least 60% for the possible SNR-MC associations. Among the previously detected SNR-MC associations, 91% are identified here by \({}^{12}\)CO (J=1-0) line emission. The \({}^{12}\)CO (J=1-0) line emission of SNR-MC associations can be
Figure 7: Ratios of SNRs associated with MCs along the Galactic longitude. Each histogram is adaptively binned such that each bin contains sixteen of the corresponding SNRs. Black and blue histograms show the ratios of all and of fixed SNR-MC associations, respectively. Solid lines are for known SNRs, and dotted lines for all known and candidate SNRs. The red dashed line shows the ratio of previously detected SNR-MC associations among known SNRs.
contaminated by background emission; however, it is still a good tracer of these associations. Note that about 41% of the previously detected SNR-MC associations are suggested as fixed SNR-MC associations here, and about 50% are considered possible ones.
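The adaptive binning used in Figure 7 — each longitude bin holding a fixed number of SNRs — can be written compactly. The sketch below assumes NumPy arrays of Galactic longitudes and boolean association flags; the names are hypothetical.

```python
import numpy as np

def adaptive_ratio(lon, assoc, per_bin=16):
    """Ratio of SNRs associated with MCs in adaptive longitude bins,
    each containing `per_bin` SNRs (cf. Figure 7)."""
    order = np.argsort(lon)
    lon, assoc = lon[order], assoc[order]
    edges, ratios = [lon[0]], []
    for i in range(0, len(lon) - per_bin + 1, per_bin):
        chunk = slice(i, i + per_bin)
        edges.append(lon[chunk][-1])          # right edge of this bin
        ratios.append(assoc[chunk].mean())    # associated fraction
    return np.array(edges), np.array(ratios)
```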
Figure 8 shows the numbers of different types of SNRs and of those associated with MCs. Except for SNRs of certain composite type, all types of SNRs contain about 30% fixed SNR-MC associations. Considering that only about half of the previously detected SNR-MC associations are identified as fixed SNR-MC associations, probably more than 60% of SNRs are associated with MCs. As indicated by the number of all fixed and possible SNR-MC associations, the percentage of SNRs associated with MCs could be as high as about 80%. Among the different types of SNRs, composite-type SNRs have the highest proportion associated with MCs. This is reasonable, since composite SNRs originate from core-collapse supernovae and are interacting with the surrounding medium. Filled-center type SNRs also originate from core-collapse supernovae; however, they have no radio continuum shell indicating effective interaction with the surrounding medium. As indicated by the number of all fixed and possible SNR-MC associations for composite-type SNRs, close to 90% of core-collapse SNRs could have interacted with MCs during their lifetimes.
### Distribution of SNRs associated with MCs
We estimate the radii of SNRs that are associated with MCs, based on their angular sizes and kinematic distances. Angular sizes are estimated from the radio continuum emission of the SNRs, or from associated molecular shell-like structures for those with no significant radio continuum emission detected. Kinematic distances of SNRs are estimated from the systemic velocities of their associated MCs by applying the full distance probability density function\({}^{8}\) (Reid et al., 2016, 2019). We also estimate the progenitor initial masses of some SNRs based on the linear relationship between progenitor mass and the size of the main-sequence interstellar bubble in a molecular environment, which is reflected by the size of SNRs associated with MCs (see details in Chen et al., 2013). Since this relationship is only valid for stars in the mass range of 8 to 25-30 \(M_{\odot}\), those with estimated progenitor masses outside this range are excluded. The number of SNRs with small-mass progenitors far exceeds the number of SNRs with
Figure 8: Type distributions of SNRs associated with MCs and all SNRs. For each pair of bars, the left one is for SNRs of certain type, and the right one for all SNRs of certain and possible type. The histogram of possible SNR-MC associations (dark gray) is stacked on top of that of fixed SNR-MC associations (light gray), and they are contained in that of all SNRs (black).
large-mass progenitors. SNRs misidentified at large distances have a large impact on the mass distribution, even though their number is comparable to that of SNRs misidentified at small distances. Therefore, when estimating progenitor masses, we use only the smallest distance of all possible distances for each possible SNR-MC association.
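Inverting the bubble-size relation for the progenitor mass is a one-line computation. In the sketch below, the slope and intercept of the linear relation are assumptions meant to stand in for the Chen et al. (2013) calibration and should be checked against that paper before use; the function name is hypothetical, and the mass validity window follows the text.

```python
def progenitor_mass(radius_pc, slope=1.22, intercept=-9.16):
    """Progenitor initial mass (Msun) from the SNR radius, inverting a
    linear bubble-size--mass relation R = slope * M + intercept (pc).
    The coefficients here are illustrative assumptions."""
    m = (radius_pc - intercept) / slope
    # The relation is only valid for roughly 8-30 Msun (see text).
    return m if 8.0 <= m <= 30.0 else None
```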
The distributions of the radii and progenitor initial masses of the SNRs associated with MCs are shown in Figure 9, fitted by lognormal and power-law distribution functions, respectively. Fitting of the mass distributions starts at the peak. For both the radius and progenitor initial mass distributions, the fitting parameters for fixed SNR-MC associations (excluding those within 10\({}^{\circ}\) of l\(\sim\)0\({}^{\circ}\) and 5\({}^{\circ}\) of l\(\sim\)180\({}^{\circ}\)) are consistent with those for all fixed and possible SNR-MC associations. This indicates that the possible SNR-MC associations do not introduce any significant systematic deviations. The radius distribution of SNRs associated with MCs peaks at \(\sim\)8.1 pc. The progenitor initial mass distribution has an index of \(\sim\)\(-\)2.3, which is consistent with the Salpeter index of \(-\)2.35 (Salpeter 1955; Bastian et al. 2010, etc.).
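The two fits in Figure 9 can be reproduced schematically with standard least-squares tools. A minimal sketch, assuming the histograms are already available as bin centers and counts (`r_bins`, `n_r`, `m_bins`, `n_m` are hypothetical arrays):

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal(r, a, mu, sigma):
    """Lognormal profile for the SNR radius histogram."""
    return a / r * np.exp(-(np.log(r) - mu) ** 2 / (2.0 * sigma ** 2))

def powerlaw(m, a, index):
    """Power law for the progenitor initial mass histogram."""
    return a * m ** index

# Illustrative usage with hypothetical binned data; the mass fit starts
# at the distribution peak, as in the text.
# popt_r, _ = curve_fit(lognormal, r_bins, n_r, p0=[50.0, np.log(8.1), 0.5])
# ipk = np.argmax(n_m)
# popt_m, _ = curve_fit(powerlaw, m_bins[ipk:], n_m[ipk:], p0=[1e3, -2.3])
```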
The height from the Galactic plane for SNR-MC associations is estimated according to their kinematic distances. Because of the Sun's vertical displacement from the Galactic midplane, there is a small angle between the Galactic plane and the \(b=0^{\circ}\) plane. The position of the Galactic midplane is derived by least-squares fitting to the fixed SNR-MC associations in known SNRs, excluding those within 10\({}^{\circ}\) of l\(\sim\)0\({}^{\circ}\) and 5\({}^{\circ}\) of l\(\sim\)180\({}^{\circ}\). Thereby, the height of the Sun above the Galactic midplane is estimated as \(\sim\)15.7 pc. We use the same Galactic plane position to calculate the heights of all SNR-MC associations. The distributions of the height from the Galactic plane for SNR-MC associations are shown in Figure 10. The height distribution of all SNR-MC associations has two components: a major narrow component and a minor broad component. Some possible SNR-MC associations with uncertain distances may contribute to the broad component; however, the broad component persists even when we use only the minimum distance among all possible distances for each possible SNR-MC association. The number of fixed SNR-MC associations is limited; their height distribution may also have a weak broad component, but it is not significant, and the distribution can be fitted well by a single Gaussian function. For the height distribution of all SNR-MC associations, the thicknesses (i.e., FWHMs) of the two components are estimated as \(65\pm 6\) and \(182\pm 64\) pc, respectively. The thickness (FWHM) of the height distribution of the fixed SNR-MC associations is estimated as \(90\pm 9\) pc, which is consistent with that of the thin CO disk revealed by Su et al. (2019); however, it may simply be an intermediate value between the thicknesses of the two components of the distribution of all SNR-MC associations. SNR-MC associations may trace MCs with active star formation. The thin and thick disks of MCs associated with SNRs found here may be inner layers of the thin and thick CO
Figure 9: Radius (_left_) and progenitor initial mass (_right_) distributions for SNRs associated with MCs. Fixed SNR-MC associations in known SNRs are in blue except for those within 10\({}^{\circ}\) of l\(\sim\)0\({}^{\circ}\) and within 5\({}^{\circ}\) of l\(\sim\)180\({}^{\circ}\). All fixed and possible SNR-MC associations in all known and candidate SNRs are in black. The numbers of fixed SNR-MC associations are multiplied by 4 for better visibility. Radii are mostly estimated based on SNRs’ bright radio continuum shells. The progenitor initial mass is estimated based on its linear relationship with the size of the main-sequence interstellar bubble in a molecular environment reflected by the size of SNRs associated with MCs (see details in Chen et al. 2013). Lognormal fitting results of radius distributions and power-law fitting results of progenitor initial mass distributions are shown as dotted lines. SNR radius distributions peak at \(9.7^{+2.8}_{-2.2}\) and 8.1\(\pm\)0.5 pc, for the fixed and all SNR-MC associations, respectively. Progenitor initial mass distributions have indices of \(-2.6\pm 0.6\) and \(-2.3\pm 0.1\), for the fixed and all SNR-MC associations, respectively.
disks revealed by Su et al. (2019), respectively. The ratio of the peaks of the thin and thick disks of SNR-MC associations is about 6, which is larger than that of the CO disks, i.e., about 2. This indicates that star formation may be more efficient in the thin disk.
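Once a midplane solution is in hand, the height calculation itself is elementary. A minimal sketch, applying only the Sun's vertical offset (the paper additionally fits the midplane orientation to the fixed SNR-MC associations):

```python
import numpy as np

Z_SUN = 15.7  # pc; the Sun's height above the Galactic midplane (this work)

def height_above_midplane(d_kpc, b_deg):
    """Height (pc) above the Galactic midplane for an object at
    distance d_kpc and Galactic latitude b_deg, correcting for the
    Sun's vertical displacement only."""
    return 1.0e3 * d_kpc * np.sin(np.radians(b_deg)) + Z_SUN
```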
Figure 11 shows the correlation between the radius and the height from the Galactic plane for SNRs associated with MCs. Fixed SNR-MC associations, selected from known SNRs excluding those within 10\({}^{\circ}\) of l\(\sim\)0\({}^{\circ}\) and 5\({}^{\circ}\) of l\(\sim\)180\({}^{\circ}\), and all SNR-MC associations are considered separately. In the interval where the height is less than \(\sim\)45 pc, the average radius distribution is roughly flat. When the height is greater than \(\sim\)45 pc, the average radius for both fixed and all SNR-MC associations increases with height. The turning point of the relation between radius and height, i.e., at \(\sim\)45 pc, is consistent with the thickness of the thin CO disk revealed by Su et al. (2019). The radius of an SNR is related to its age, its ambient particle density, and the energy of its supernova explosion. The ages and supernova explosion energies of SNRs are probably not correlated with their heights from the Galactic midplane; different ages and explosion energies would simply scatter the radii in Figure 11. The relationship between the average radius and the height may therefore be caused by a height-dependent density distribution of the molecular environments around SNRs. Although some SNRs associated with MCs are interacting with cavities or shell-like molecular structures that may be relics of wind-blown bubbles created by their progenitor stars, the sizes of these molecular structures are still confined by the ambient particle density. As the ambient particle density becomes larger, the radius of SNRs becomes smaller. Therefore, such a relation between the average radius and the height probably indicates that the overall density of MCs with active star formation does not vary much within the thin CO disk, while beyond the thin CO disk it decreases with increasing height from the Galactic plane. Further investigation of the MCs themselves is needed to draw a definite conclusion.
## 6 Summary
The MWISP project is an unbiased \({}^{12}\)CO/\({}^{13}\)CO/C\({}^{18}\)O (J = 1-0) survey of the Galactic plane in the northern sky. We universally search for kinematic and spatial correlation evidence of SNR-MC associations for nearly all SNRs in the coverage of the MWISP CO survey, i.e., 149 SNRs, 170 SNRCs, and 18 pure PWNe in 1\({}^{\circ}<l<230^{\circ}\) and \(-5.^{\circ}5<b<5.^{\circ}5\). Based on the high-quality CO data obtained from the MWISP survey, we apply automatic algorithms to identify broad lines and spatial correlations for the molecular gas in each SNR region. The search method is demonstrated to be efficient. Among the 149 SNRs studied in this paper, 57 are found to be associated with MCs, and 70 are considered to be possibly associated with MCs. We find 50 SNRCs
Figure 10: Distributions of heights from the Galactic plane for SNR-MC associations. Fixed SNR-MC associations in known SNRs except for those within 10\({}^{\circ}\) of l\(\sim\)0\({}^{\circ}\) and 5\({}^{\circ}\) of l\(\sim\)180\({}^{\circ}\) are shown in the left panel. All fixed and possible SNR-MC associations in all known and candidate SNRs are shown in the right panel. The position of the Galactic midplane is derived from the fixed SNR-MC associations by the least squares fitting. The height of the Sun above the Galactic midplane is estimated as \(\sim\)15.7 pc. The same position of the Galactic midplane is also used to calculate heights of all SNR-MC associations. Black dotted lines indicate fitting results by a Gaussian function in the left panel and by two Gaussian functions in the right panel (separated components shown by red dotted lines). Note that all distances estimated for possible SNR-MC associations are used. However, if we use only the smallest distance for each possible SNR-MC association, the height distribution still has two components.
Figure 11: Radius vs. height from the Galactic plane for fixed (red pluses) and all (blue small stars) SNR-MC associations. Fixed SNR-MC associations are selected from known SNRs, excluding those within 10\({}^{\circ}\) of l\(\sim\)0\({}^{\circ}\) and 5\({}^{\circ}\) of l\(\sim\)180\({}^{\circ}\). All SNR-MC associations, including fixed and possible ones, are selected from all known and candidate SNRs. The position of the Galactic plane is the same as that applied in Figure 10. The two histograms show the average radius of the corresponding SNR-MC associations, binned such that each bin contains at least ten sources and spans at least 5 pc in height.
to be associated with MCs, and 91 SNRCs to be possibly associated with MCs. We also find candidates of associated MCs for 14 pure PWNe. Assuming all SNR-MC associations detected in previous works are real, the accuracy of the fixed SNR-MC associations suggested in this paper would be at least 70%, and at least 60% for the possible SNR-MC associations. 91% of the previously detected SNR-MC associations are identified in this paper by CO line emission, which indicates that CO line emission is efficient for detecting SNR-MC associations. Nine previously detected SNR-cloud associations (about 9%) are not identified here; these are mostly based on spatial correlations with H i gas. We find that the proportion of SNRs associated with MCs is high within a Galactic longitude of \(\sim\)50\({}^{\circ}\), with a distribution consistent with that of previously detected SNR-MC associations. Overall, as many as \(\sim\)80% of SNRs could be associated with MCs.
Based on the systemic velocities of the associated MCs, the kinematic distances of the SNRs are estimated. Accordingly, the distributions of their radii, progenitor initial masses, and heights from the Galactic plane are studied. We find that the radius and progenitor initial mass distributions of SNRs associated with MCs follow lognormal and power-law distributions, respectively. The radius distribution peaks at \(\sim\)8.1 pc, and the progenitor initial mass distribution has an index of \(\sim\)\(-\)2.3, consistent with the Salpeter index of \(-\)2.35. SNR-MC associations are mainly distributed in a thin disk, with a fainter thick disk, along the Galactic plane. For the fixed SNR-MC associations alone (excluding those within 10\({}^{\circ}\) of l\(\sim\)0\({}^{\circ}\) and 5\({}^{\circ}\) of l\(\sim\)180\({}^{\circ}\)), the thick disk is hard to distinguish owing to their limited number, and the overall thickness (FWHM) is estimated as 90\(\pm\)9 pc. For both fixed and possible SNR-MC associations, the thicknesses (FWHMs) of the thin and thick disks are estimated as 65\(\pm\)6 and 182\(\pm\)64 pc, respectively. The thin and thick disks found here may be inner layers of the thin and thick CO disks revealed by Su et al. (2019), respectively. The ratio of the peaks of the thin and thick disks of SNR-MC associations is about 5, which is larger than that of the CO disks, i.e., about 2. This indicates that star formation may be more efficient in the thin disk. The Sun's vertical displacement from the Galactic midplane is estimated as \(\sim\)15.7 pc. The radii of SNRs associated with MCs show some correlation with their heights from the Galactic plane, with a turning point at a height of \(\sim\)45 pc. When the height is below \(\sim\)45 pc, the average radius distribution is roughly flat, and the radii of individual SNRs are highly scattered. When the height is above \(\sim\)45 pc, the average radius increases with height. This indicates that the overall density of MCs with active star formation may not vary much within the thin CO disk, while it decreases with increasing height from the Galactic plane beyond the thin CO disk.
We thank the anonymous referee for providing very helpful comments that improved the paper and its conclusions. This research made use of the data from the Milky Way Imaging Scroll Painting (MWISP) project, which is a multiline survey in \({}^{12}\)CO/\({}^{13}\)CO/C\({}^{18}\)O along the northern Galactic plane with the PMO 13.7 m telescope. We are grateful to all the members of the MWISP working group, particularly the staff members at the PMO 13.7 m telescope, for their long-term support. MWISP was sponsored by the National Key R&D Program of China with grant 2017YFA0402701 and the CAS Key Research Program of Frontier Sciences with grant QYZDJ-SSW-SLH047. Y.Su and J.Y. are supported by the National Natural Science Foundation of China through grants 12173090 and 12041305, respectively. Y.Sun acknowledges support by the Youth Innovation Promotion Association, CAS (Y2022085), and the "Light of West China" Program (No. xbgz-zdys-202212).
## Data Availability
Figures of individual objects are available online in [https://www.scidb.cn/en](https://www.scidb.cn/en), at [https://doi.org/10.57760/sciencedb.08076](https://doi.org/10.57760/sciencedb.08076) (Zhou et al. 2023).
|
2307.05664 | TESS Stellar Rotation up to 80 days in the Southern Continuous Viewing
Zone | The TESS mission delivers time-series photometry for millions of stars across
the sky, offering a probe into stellar astrophysics, including rotation, on a
population scale. However, light curve systematics related to the satellite's
13.7-day orbit have prevented stellar rotation searches for periods longer than
13 days, putting the majority of stars beyond reach. Machine learning methods
have the ability to identify systematics and recover robust signals, enabling
us to recover rotation periods up to 35 days for GK dwarfs and 80 days for M
dwarfs. We present a catalog of 7245 rotation periods for cool dwarfs in the
Southern Continuous Viewing Zone, estimated using convolutional neural
networks. We find evidence for structure in the period distribution consistent
with prior Kepler and K2 results, including a gap in 10--20-day cool star
periods thought to arise from a change in stellar spin-down or activity. Using
a combination of spectroscopic and gyrochronologic constraints, we fit stellar
evolution models to estimate masses and ages for stars with rotation periods.
We find strong correlations between the detectability of rotation in TESS and
the effective temperature, age, and metallicity of the stars. Finally, we
investigate the relationships between rotation and newly obtained spot filling
fractions estimated from APOGEE spectra. Field star spot filling fractions are
elevated in the same temperature and period regime where open clusters'
magnetic braking stalls, lending support to an internal shear mechanism that
can produce both phenomena. | Zachary R. Claytor, Jennifer L. van Saders, Lyra Cao, Marc H. Pinsonneault, Johanna Teske, Rachael L. Beaton | 2023-07-11T18:00:00Z | http://arxiv.org/abs/2307.05664v2 | # TESS Stellar Rotation up to 80 days in the Southern Continuous Viewing Zone
###### Abstract
The TESS mission delivers time-series photometry for millions of stars across the sky, offering a probe into stellar astrophysics, including rotation, on a population scale. However, light curve systematics related to the satellite's 13.7-day orbit have prevented stellar rotation searches for periods longer than 13 days, putting the majority of stars beyond reach. Machine learning methods have the ability to identify systematics and recover robust signals, enabling us to recover rotation periods up to 30 days for FGK dwarfs and 80 days for M dwarfs. We present a catalog of 7,971 rotation periods for cool dwarfs in the Southern Continuous Viewing Zone, estimated using convolutional neural networks. We find evidence for structure in the period distribution consistent with prior _Kepler_ and K2 results, including a gap in 10-20-day cool star periods thought to arise from a change in stellar spin-down or activity. Using a combination of spectroscopic and gyrochronologic constraints, we fit stellar evolution models to estimate masses and ages for stars with rotation periods. We find strong correlations between the detectability of rotation in TESS and the effective temperature, age, and metallicity of the stars. Finally, we investigate the relationships between rotation and newly obtained spot filling fractions estimated from APOGEE spectra. Field star spot filling fractions are elevated in the same temperature and period regime where open clusters' magnetic braking stalls, lending support to an internal shear mechanism that can produce both phenomena.
Footnote †: journal: AAS Journals
Zachary R. Claytor (ORCID: 0000-0002-8070-7885)
## 1 Introduction
Rotation, activity, and magnetism are all deeply connected to the structure and evolution of stars. In stars similar to and less massive than the Sun, rotation and convection power magnetism, which influences stellar winds and causes flares. Magnetized winds create torque on stars, causing them to spin down over time (Weber and Davis, 1967); this allows us to infer stellar ages from rotation periods using gyrochronology (Skumanich, 1972; Barnes, 2003). Stellar magnetism is the source of space weather, which directly affects life on Earth as well as the habitability of planets around other stars. Because of the inextricable links to rotation, a complete picture of stellar activity and magnetism demands a grasp of rotation across all types of stars.
The _Kepler_ mission (Borucki et al., 2010) enabled rotation period estimates for more than 50,000 stars in a single 100-square-degree patch of sky (McQuillan et al., 2014; Santos et al., 2019, 2021), revolutionizing our understanding of stellar rotation. _Kepler_'s rotation periods enabled precise age estimates for field stars (e.g., Claytor et al., 2020; Lu et al., 2021) and investigations of changing stellar activity with time (Mathur et al., 2023). The mission also revealed departures from expected rotational behavior, such as a gap in the period distribution of cool stars (McQuillan et al., 2014) and a halt of magnetic braking in middle-aged Solar-like stars (Angus et al., 2015; van Saders et al., 2016; David et al.,
2022). Stellar evolution and population synthesis models failed to predict these behaviors (van Saders et al., 2019), highlighting the need for updated theory as well as more period measurements.
As successful as _Kepler_ was in measuring rotation periods, the survey design imposed tight limitations on the kinds of stars that could be studied in rotation. The spacecraft's goal of finding Earth-like planets around Sun-like stars resulted in a complex selection function that biased observed stellar samples in comparison to the underlying population (e.g., Borucki et al., 2010; Wolniewicz et al., 2021). For example, the choice to avoid young stellar populations biases the _Kepler_ sample toward less active, slowly rotating stars intrinsically more difficult to detect in rotation. Any _Kepler_ study preserves these biases, and the selection function is difficult to correct for. Furthermore, the small observing footprint means that any new rotational physics inferred from _Kepler_ must be tested against other samples across the sky. The solution is an untargeted, all-sky survey.
The _Transiting Exoplanet Survey Satellite_ (TESS; Ricker et al., 2015) stares at millions of stars in its search for transiting planets, surveying the entire sky in 27-day sectors. In addition to short-cadence light curves for pre-selected targets, TESS delivers full-frame images (FFIs), enabling high-precision photometry for any source brighter than \(\sim\)15th magnitude. Importantly, TESS does not rely only on postage stamps for selected targets as _Kepler_ did; the FFIs permit investigators to design their own surveys. While primarily a planet-finding mission, the mission's short cadence and long temporal baseline also make it suitable for studying stellar variability due to oscillations, pulsations, and rotational spot modulation. While studies of stellar oscillations and pulsations have achieved some success (e.g., Silva Aguirre et al., 2020; Mackereth et al., 2021; Chontos et al., 2021; Hon et al., 2021, 2022; Stello et al., 2022), systematics related to TESS's observing strategy and data processing have slowed the quest for rotation periods (Oelkers and Stassun, 2018; Canto Martins et al., 2020; Avallone et al., 2022; Kounkel et al., 2022). It is worth noting that the _Kepler_ mission faced similar challenges; the seminal stellar rotation paper (McQuillan et al., 2014) was published 5 years after the satellite was launched.
TESS's unique 2:1 resonance orbit of the Earth-Moon system subjects the detectors to earthshine and moonlight on the timescale of the orbit, 13.7 days (Vanderspek et al., 2018). The earthshine itself has time-varying signals within it, such as a 1-day modulation from the rotation of the Earth (Luger et al., 2019). Besides earthshine, TESS encounters systematics related to angular momentum dumps, detector heating, data downlinks, and more, all on timescales that interfere with astrophysical signals. The telescope's large field of view (24\({}^{\circ}\) by 96\({}^{\circ}\) total) makes the background spatially non-uniform as well. Because of these effects, throughout a sector the TESS detectors encounter systematics on different pixels at different times and with varying intensity, making them difficult to correct.
Attempts to remove or correct spurious instrumental signals may also attenuate astrophysical signals, particularly those on the timescales of the telescope's orbital period (13.7 days) and longer. Rapid rotators, which also have larger spot modulation amplitudes, are affected less, and conventional rotation searches with TESS have been largely successful at measuring periods shorter than 13 days (see, for example, Canto Martins et al., 2020 with 131 periods, Avallone et al., 2022 with 169, Holcomb et al., 2022 with 13,504, and Kounkel et al., 2022 with \(\sim\)100,000). However, the same searches have struggled to recover longer periods, instead catching periods associated with the TESS systematics. So far, only Lu et al. (2020) have claimed to recover long periods in TESS, but they relied heavily on priors from the observed _Kepler_ period distribution.
The efforts to correct TESS systematics have yielded broadly useful public pipelines and tools like _eleanor_ (Feinstein et al., 2019), TESS-SIP (Hedges et al., 2020), _Unpopular_ (Hattori et al., 2022), and T'DA (Handberg et al., 2021; Lund et al., 2021). While each pipeline makes different but well-motivated choices to handle the systematics, each decision runs the risk of accidentally removing stellar signals (Hattori et al., 2022; Kounkel et al., 2022). Rather than trying to remove the systematics at the risk of removing astrophysical signals, we adopt deep machine learning methods that see the periodicity alongside the noise and disentangle the two.
Deep learning methods are now widely used in stellar astrophysics. Breton et al. (2021) used random forests to classify and detect rotation signals in _Kepler_ light curves, while Lu et al. (2020) used random forests to draw connections between stellar parameters to estimate TESS rotation periods. Feinstein et al. (2020) employed convolutional neural networks (CNNs) to identify stellar flares in light curves, while Hon et al. (2021) applied similar techniques to detect oscillations in red giant stars. CNNs are particularly powerful when working with images or image-like data, which are ubiquitous in astronomy. CNNs can be trained to identify images of many different classes despite contaminating features, making them particularly attractive for our problem with TESS systematics.
In a pilot study of 21 objects, Claytor et al. (2022) demonstrated that long periods can be recovered from TESS data using CNNs trained on simulated data. In this work we apply the Claytor et al. (2022) approach to a greatly expanded sample to infer rotation periods with uncertainties for cool, main-sequence stars in the TESS Southern Continuous Viewing Zone (SCVZ). We employ new training sets tailored to specific period ranges and the specific light curves in which we search for periods. We present the periods, their associated uncertainties, and model-inferred stellar parameters in the first catalog probing long stellar rotation periods with TESS.
The paper is outlined as follows. In Section 2 we describe our data and sample selection. In Section 3 we outline our deep learning framework, including the training sets and model architectures. Section 4 details our method to fit stellar evolutionary models to stars with reliable rotation periods. In Section 5 we present the rotation periods and analyze the TESS SCVZ period distribution, comparing and contrasting with _Kepler_ and K2. In Section 6 we explore the detectability of periods as a function of temperature, metallicity, age, and convection zone properties to understand the effects of detection limits on the period distribution. In Section 7 we use new spot filling fraction measurements from infrared spectroscopy to examine the effects of spottedness on the detectability of rotation, and we finally conclude in Section 8.
## 2 Data and Sample Selection
For the period search, we targeted stars cool enough to maintain surface convection zones and dynamos capable of producing surface spots. The steps of our full sample selection are outlined in Table 1. We excluded evolved red giants, of which the large majority are slow rotators (Ceillier et al., 2017). The 1-2% that rotate rapidly are typically the products of binary star interactions (Carlberg et al., 2011), and not reliable age tracers. We selected relatively bright, cool dwarf and subgiant stars in the TESS SCVZ, a 450 square degree field centered around the southern ecliptic pole. TESS observed the SCVZ continuously for 350 days in its first year, taking FFIs every 30 minutes. The long baseline ensures sufficient coverage for the most slowly-rotating stars we might hope to detect. For example, an M-dwarf rotating once every 100 days will complete 3.5 rotations under observation in the CVZs. In the same interval, an old K-dwarf rotating at 45 days will rotate nearly 8 times, and a G-dwarf at 30 days will rotate more than 10.
We selected stars from the TESS Input Catalog (TIC, Stassun et al., 2019; Paegert et al., 2021) with effective temperature \(\leq 10,000\) K, TESS magnitude \(\leq 15\), and ecliptic latitude \(\leq-78^{\circ}\) to target the SCVZ. There are 398,977 such stars in the TIC, but requiring public photometry narrowed the sample considerably. 38,215 targets had public FFI photometry from the TESS Science Processing Operations Center (TESS-SPOC, Jenkins et al., 2016; Caldwell et al., 2020). We also used FFI data products from the TESS Asteroseismic Science Operations Center (TASOC, Handberg et al., 2021; Lund et al., 2021), but we selected only the 29,609 targets with TIC \(T_{\rm eff}<5,000\) K to prioritize the most likely rotation detections. We motivate the choice to use both TESS-SPOC and TASOC products in Section 2.3, and we detail each pipeline's target selections in Sections 2.3.1 and 2.3.2.
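As an illustration, the cuts defining samples A1 and A2 (Table 1) amount to simple queries over a local extract of the TIC. The sketch below assumes a hypothetical CSV dump with columns named eclat, Tmag, and Teff; the counts quoted above additionally require public TESS-SPOC or TASOC photometry.

```python
import pandas as pd

# Hypothetical local extract of TIC v8.2; the file name and column names
# are illustrative assumptions, not the actual TIC schema.
tic = pd.read_csv("tic_scvz_extract.csv")

# Sample A1: TESS-SPOC dwarfs (Table 1)
a1 = tic.query("eclat <= -78 and Tmag <= 15 and Teff <= 10000")

# Sample A2: TASOC dwarfs, restricted to the coolest targets
a2 = tic.query("eclat <= -78 and Tmag <= 15 and Teff < 5000")
```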
### APOGEE Spectroscopy
While the TESS Input Catalog has metallicities and surface gravities for all the stars in our sample, the sources are a heterogeneous combination of photometry and spectroscopy, observations and models. Furthermore, the TIC has no information on detailed abundances, which are useful when investigating changing Galactic chemistry with time, and which are important to the connection between rotation and magnetism (e.g., Claytor et al., 2020). We therefore supplement TESS photometric rotation periods with spectroscopic parameters from the Apache Point Observatory Galactic Evolution Experiment (Majewski et al., 2017, APOGEE).
APOGEE collects high-resolution (\(R\sim 22,500\)), near-infrared (1.51-1.70 \(\mu\)m) spectra and provides calibrated, model-dependent estimates of effective temperature, surface gravity, metallicity, and detailed abundances for hundreds of thousands of stars across the entire sky. The TESS/APOGEE survey within APOGEE-2S (Section 5.8 of Santana et al., 2021) targeted 38,000 stars in the TESS SCVZ with 2MASS color and magnitude ranges \(7<H<11\) and \(J-K>0.3\), and about 9,000 other SCVZ stars were observed for other programs. We crossmatched the TIC SCVZ cool dwarfs with APOGEE Data Release 17 (Abdurro'uf et al., 2022) to obtain spectroscopic parameters for 47,142 stars. Of those, 16,545 have TESS-SPOC data products, and 3,156 have data products in our TASOC subsample. These combine to yield 17,796 unique targets with APOGEE spectroscopy and either TESS-SPOC or TASOC photometry.
We adopted calibrated effective temperatures, metallicities, and \(\alpha\)-element abundances estimated by the APOGEE Stellar Parameters and Abundances Pipeline (ASPCAP, Garcia Perez et al., 2016). Comparisons
between APOGEE stellar parameters and high-fidelity measurements have demonstrated the ASPCAP-derived uncertainties to be underestimated for giants (Serenelli et al., 2017) and dwarfs (Birky et al., 2020; Sarmento et al., 2021) alike. Pinsonneault et al. (in prep.) find temperature errors of 30 K in giants, larger for dwarfs, and scatter in clusters of 0.05 dex in metallicity and 0.03 dex in \(\alpha\) enhancement. We therefore set minimum uncertainty floors of 50 K for \(T_{\rm eff}\), 0.05 dex for [M/H], and 0.03 dex for [\(\alpha\)/M]. While these likely still underestimate the error in the ASPCAP measurements, they were large enough for our fitting routines to find self-consistent models that successfully predicted other stellar parameters, e.g., luminosity or surface gravity.
### Gaia
We supplemented our sample with data from _Gaia_ DR3 (Gaia Collaboration et al., 2022), including \(G\), \(G_{BP}\), and \(G_{RP}\) magnitudes, parallaxes, and Renormalized Unit Weight Error (RUWE). _Gaia_ data were available for all targets. Computing the absolute magnitude \(M_{G}\) from \(G\) and parallax, we use a photometric excess and RUWE to identify and remove likely binaries before population analysis.
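For reference, the absolute magnitude used in this cut follows directly from the apparent \(G\) magnitude and the parallax. The minimal sketch below ignores extinction, which is a simplifying assumption.

```python
import numpy as np

def absolute_g_mag(g_mag, parallax_mas):
    """Absolute G magnitude from apparent G and parallax (mas), no extinction."""
    return g_mag + 5.0 * np.log10(parallax_mas) - 10.0

# Example: G = 12.0 mag at 10 mas (100 pc) gives M_G = 7.0
print(absolute_g_mag(12.0, 10.0))
```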
### Photometry
There are several publicly available light curve sets, pipelines, and tools designed and optimized for TESS data. We review some of the most widely used in Appendix A. After trying several systematics removal pipelines and data products, we found that all pipelines were too aggressive and removed stellar signal. Instead, we used the apertures from two public pipelines and performed our own minimal corrections. Due to data availability and lightweight data products, we determined the apertures from the TESS-SPOC (Jenkins et al., 2016; Caldwell et al., 2020) and TASOC (Handberg et al., 2021; Lund et al., 2021) to be the best available for a rotation search at the time of writing.
TESS-SPOC provides data products for fewer stars over a longer baseline, while TASOC provides products for a larger sample, but over a shorter baseline in TESS year 1. The two pipelines feature different target and aperture selections, providing two slightly overlapping stellar samples so that we can maximize the number of rotation periods while testing for robustness of periods against the pipelines' different apertures. We summarize the pipelines' key differences in the next two sections; we then describe our custom photometry using the pipeline apertures in Section 2.3.3. Both pipelines' target pixel file (TPF) and aperture data are publicly available on MAST1.
Footnote 1: TESS-SPOC data are available at 10.17909/t9-wp21-8s54, while TASOC data are available at 10.17909/t9-4smn-dx89
#### 2.3.1 Tess-Spoc
The SPOC pipeline (Jenkins et al., 2016) was initially used to calibrate the TESS FFIs and generate TPFs and light curves for all two-minute cadence targets. Caldwell et al. (2020) more recently used the SPOC pipeline to
\begin{table}
\begin{tabular}{c c c c} \hline \hline Designation & Description & Criteria & \# targets \\ \hline A1 & TESS-SPOC Dwarfs & \(\texttt{eclat}\leq-78^{\circ}\) \& \(\texttt{Tmag}\leq 15\) \& \(\texttt{Teff}\leq 10,000\) K & 38,215 \\ A2 & TASOC Dwarfs & \(\texttt{eclat}\leq-78^{\circ}\) \& \(\texttt{Tmag}\leq 15\) \& \(\texttt{Teff}\leq 5,000\) K & 29,609 \\ \hline B1 & APOGEE-TESS-SPOC & A1 \& APOGEE DR17 & 16,545 \\ B2 & APOGEE-TASOC & A2 \& APOGEE DR17 & 3,156 \\ \hline C1 & Rotators & Either A1 or A2 \& reliable period & 7,971 \\ C2 & Non-Rotators & Either A1 or A2 \& no reliable period & 53,141 \\ \hline D & APOGEE Rotators & Either B1 or B2 \& C1 & 2,654 \\ \hline Gold & Binary-Cleaned Rotators & D \& \(\texttt{STAR\_BAD}=\texttt{SNR\_BAD}=0\) \& \(\texttt{RUWE}<1.2\) \& \(\texttt{contratio}<0.1\) & 1,227 \\ \hline Platinum & Single Cool Dwarfs & Gold \& \(M_{G}>4.4\) \& \(G_{BP}-G_{RP}>1\) \& \(|\Delta M_{G}|<0.4\) & 566 \\ \hline \end{tabular} Note. – Our sample selections using TIC version 8.2 (Stassun et al., 2019; Paegert et al., 2021), APOGEE DR17 (Abdurro'uf et al., 2022), and _Gaia_ DR3 (Gaia Collaboration et al., 2022). Selection criteria are given as table column names where convenient and include ecliptic latitude (TIC eclat), TESS magnitude (TIC Tmag), effective temperature (TIC Teff), ASPCAP (García Pérez et al., 2016) spectral fit flags STAR_BAD and SNR_BAD, absolute \(G\)-magnitude (_Gaia_ \(M_{G}\)), color index (_Gaia_ \(G_{BP}-G_{RP}\)), and _Gaia_ photometric excess above the main sequence (\(|\Delta M_{G}|\)). The lettered samples are described in Section 2, while the Gold and Platinum science samples are detailed in Sections 5.5 and 7, respectively.
\end{table}
Table 1: Sample Selection
create TPFs and light curves for FFI targets, providing the TESS-SPOC light curves on MAST.
Caldwell et al. (2020) selected a maximum of ten thousand targets per CCD from the TIC for a maximum of 40,000 stars in the SCVZ. For each CCD, the selection order was (1) all two-minute cadence targets; (2) potentially high-value planet host candidates with \(H\) magnitude \(\leq 10\) or distance \(\leq 100\) pc, flux contamination \(\leq 50\%\), and TESS magnitude \(Tmag\leq 16\); (3) field star targets brighter than \(Tmag\leq 13.5\), log surface gravity \(\geq 3.5\) (CGS units), and flux contamination \(\leq 20\%\). The depth \(Tmag\leq 13.5\) was chosen to ensure sufficient signal-to-noise. We estimated the 6-hour CDPP of our custom TESS-SPOC light curves to be about 4,000 ppm at \(Tmag=13.5\). At this faint limit, a \(5\sigma\) detection should vary at the 2% level. About 0.3% of _Kepler_ rotators varied at this level (Santos et al., 2019, 2021).
TESS-SPOC computed photometric apertures using the same module as was used for _Kepler_. Briefly, the module uses a synthetic FFI produced from the input catalog and the real pixel response function to compute the optimal aperture for each target. Caldwell et al. (2020) detail the full FFI target selection, Jenkins et al. (2016) describe the SPOC pipeline, and Smith et al. (2016) outline the aperture selection. The TESS-SPOC pipeline has produced TPFs, which include target apertures, for all sectors in year 1. We queried all TPFs available for our sample, yielding time-series images and photometric apertures for 38,215 targets.
#### 2.3.2 Tasoc
TASOC has performed photometry for all stars brighter than TESS magnitude \(\leq 15\) for use in asteroseismology (Handberg et al., 2021; Lund et al., 2021). To date, only sectors 1-6 from the first year have been processed, yielding time-series FFI photometry with a 160-day baseline and 30-minute cadence. While fewer sectors of data are available from TASOC, limiting us to shorter rotation periods than TESS-SPOC, TASOC's fainter magnitude limit and lack of number cap (i.e., TESS-SPOC processed not more than 10,000 stars per CCD, but TASOC has no such limit) complements the TESS-SPOC data. To compute light curves, we downloaded the TASOC apertures and applied them to cutouts from the calibrated FFIs.
The TASOC pipeline computed apertures for all TIC targets brighter than \(Tmag\leq 15\). Aperture selection is fully described by Handberg et al. (2021), but uses the clustering algorithm DBSCAN (Ester et al., 1996) to find clusters of pixels associated with TIC targets. The watershed image segmentation routine from scikit-image(van der Walt et al., 2014) is then used to segment apertures containing more than one target. In general, the apertures created by the TASOC pipeline are larger than those created by TESS-SPOC, resulting in light curves with higher photometric precision. We estimated our custom TASOC light curves to have 6-hour CDPP of 3,000 ppm at \(Tmag=13.5\). A \(5\sigma\) detection at this magnitude will vary at the 1.5% level. In _Kepler_, about 0.8% of rotating stars varied at this level (Santos et al., 2019, 2021).
TASOC data products are also available on MAST. To obtain the likeliest targets for detecting rotation, we queried data for TIC dwarf stars cooler than 5,000 K, yielding FFI cutouts for 29,609 targets spanning the first 6 sectors of the TESS mission.
#### 2.3.3 Custom Light Curves and Wavelet Transform
For both datasets, we began with the publicly available TPF cutouts from calibrated FFIs. The FFI calibrations include traditional bias, dark, and flat field corrections, cosmic ray removal, corrections for variations in pixel sensitivity, and removal of smear signals resulting from the cameras' lack of shutters (Jenkins et al., 2016). After FFI calibration, both the TESS-SPOC and TASOC pipelines perform background subtraction and systematics correction to produce light curves; we opt not to use this next level of data correction, as these corrections can have the unintended consequence of removing or attenuating the stellar signals. To mitigate the removal of stellar rotation signals, we performed custom photometry using the apertures supplied by the pipelines. For each available TPF, we computed light curves as follows:
1. reject cadences with bad quality flags, which are usually associated with cosmic rays, data downlinks, or angular momentum dumps
2. compute a raw light curve using simple aperture photometry, adding all pixels within the aperture
3. remove the first three principal components of the time series using lightkurve.RegressionCorrector
4. reject \(5\sigma\) outliers from the light curve.
Although neural networks can perform regression in spite of systematics to some extent, _some_ systematics removal is necessary. We sought to perform as little systematics correction as possible in order to preserve the underlying stellar signals. Removing the first three principal components corrected the largest TESS systematics--Earthshine and angular momentum dumps--while leaving smaller systematics and stellar signals mostly intact. To determine the optimal number
\(n_{\rm pca}\) of principal components to remove, we removed 1, 2, 3, 4, and 5 components from a set of 10 randomly selected light curves. We then visually inspected the resulting light curves to determine for what value of \(n_{\rm pca}\) the largest systematics were removed. Meanwhile, removing \(5\sigma\) outliers cleaned the light curves from systematic jumps and stellar flares. Next, we median-divided the light curves for each target and stitched them together, linearly interpolating to fill any gaps. Finally, we computed Morlet wavelet transforms following Claytor et al. (2022) and binned them to \(64\times 64\) pixels to be used as input to the convolutional neural network.
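A minimal lightkurve sketch of the steps above is given below. The example target, the use of out-of-aperture pixels as PCA regressors, and the use of PyWavelets for the Morlet transform are our assumptions for illustration; the actual implementation may differ.

```python
import numpy as np
import pywt  # PyWavelets, used here as a stand-in Morlet transform
import lightkurve as lk
from lightkurve.correctors import DesignMatrix, RegressionCorrector

# Hypothetical SCVZ target; any TESS-SPOC TPF would do.
tpf = lk.search_targetpixelfile(
    "TIC 141186075", author="TESS-SPOC", sector=1
).download(quality_bitmask="hard")        # 1. drop flagged cadences

raw = tpf.to_lightcurve(aperture_mask=tpf.pipeline_mask)  # 2. simple aperture photometry

# 3. regress out the first three principal components of the out-of-aperture pixels
regressors = tpf.flux[:, ~tpf.pipeline_mask]
dm = DesignMatrix(regressors, name="pixels").pca(3).append_constant()
lc = RegressionCorrector(raw).correct(dm)

lc = lc.remove_outliers(sigma=5).normalize()  # 4. clip 5-sigma outliers; median-divide
cdpp = lc.estimate_cdpp(transit_duration=12)  # 6-hr CDPP at 30-minute cadence

# Morlet wavelet power, binned to a 64x64 image for the CNN
flux = lc.flux.value - 1.0
scales = np.geomspace(2, len(flux) / 4, 64)
power = np.abs(pywt.cwt(flux, scales, "morl")[0]) ** 2
n = power.shape[1] // 64
image = power[:, : n * 64].reshape(64, 64, n).mean(axis=2)
```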
#### 2.3.4 Variability Amplitudes
We computed the photometric variability amplitudes \(R_{\rm per}\) and \(S_{\rm ph}\) for all our stars with estimated periods. Like McQuillan et al. (2014), to compute \(R_{\rm per}\) we measured the interval between the 5th and 95th percentile of normalized flux in each period bin, then took the median of those values. We computed \(S_{\rm ph}\) as in Mathur et al. (2023), by partitioning the light curve into segments of duration \(5P_{\rm rot}\), then taking the standard deviation of the light curve flux over each segment. This creates a time series of standard deviations; \(S_{\rm ph}\) is taken to be the median value. For different analyses we use either \(R_{\rm per}\) or \(S_{\rm ph}\), but in theory the two metrics are related by \(S_{\rm ph}\approx 0.35R_{\rm per}\) (see footnote 2). We verified this relation in our measurements, so for this work we consider the two metrics to be interchangeable.
Footnote 2: This approximation holds true for perfect sinusoids observed for an integer number \(N\) of cycles or in the limit of large \(N\). However, our measured \(S_{\rm ph}\) and \(R_{\rm per}\) follow this relation remarkably well.
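Both amplitude metrics can be computed with a few lines of NumPy; the sketch below is a minimal rendering of the definitions above, with the bin-edge construction being our assumption.

```python
import numpy as np

def measure_rper(time, flux, prot):
    """Median over period-length bins of the 5th-95th percentile flux range."""
    edges = np.arange(time.min(), time.max() + prot, prot)
    ranges = [np.ptp(np.percentile(flux[(time >= lo) & (time < hi)], [5, 95]))
              for lo, hi in zip(edges[:-1], edges[1:])
              if ((time >= lo) & (time < hi)).sum() > 1]
    return np.median(ranges)

def measure_sph(time, flux, prot):
    """Median of standard deviations over segments of duration 5 * Prot."""
    seg = 5.0 * prot
    edges = np.arange(time.min(), time.max() + seg, seg)
    stds = [np.std(flux[(time >= lo) & (time < hi)])
            for lo, hi in zip(edges[:-1], edges[1:])
            if ((time >= lo) & (time < hi)).sum() > 1]
    return np.median(stds)
```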
We emphasize that due to sector-to-sector stitching and detector sensitivity changes during momentum dumps, variability amplitudes for periods longer than about 13 days and especially 27 days will inevitably be suppressed. To attempt to account for this, we ran a series of noiseless, sinusoidal light curve simulations through a renormalization and stitching algorithm and compared the measured amplitudes to the true input amplitudes. For perfect sinusoids, we found that the amplitude suppression factor decays exponentially with period past 27 days. However, applying a correction to our measured amplitudes did not affect our results; to avoid artificially injecting period-amplitude biases we leave our reported amplitudes uncorrected.
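The suppression experiment can be reproduced qualitatively with a toy model: renormalize a noiseless sinusoid sector by sector and compare the measured 5th-95th percentile range before and after. The sector length and amplitude below are illustrative assumptions.

```python
import numpy as np

def stitching_suppression(prot, t_total=351.0, t_sector=27.4, amp=0.01, n=20000):
    """Toy estimate of how per-sector median renormalization suppresses
    the measured 5th-95th percentile range of a pure sinusoid."""
    t = np.linspace(0.0, t_total, n)
    flux = 1.0 + amp * np.sin(2.0 * np.pi * t / prot)
    stitched = flux.copy()
    sector = (t // t_sector).astype(int)
    for s in np.unique(sector):
        stitched[sector == s] /= np.median(stitched[sector == s])
    raw = np.ptp(np.percentile(flux, [5, 95]))
    return np.ptp(np.percentile(stitched, [5, 95])) / raw

# Short periods survive stitching; long periods are strongly suppressed.
print(stitching_suppression(10.0), stitching_suppression(60.0))
```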
Finally, we also measured the 6-hour combined differential photometric precision (CDPP, Christiansen et al., 2012), which quantifies the photometric noise, for each of our light curves. Since the CDPP is measured on timescales shorter than the typical TESS systematics, it should be unaffected by momentum dumps and sector-to-sector stitching.
## 3 Deep Learning Framework
To infer rotation periods from TESS light curves, we applied the method of Claytor et al. (2022) with a few modifications. Namely, we generated new training sets tailored to both the TESS-SPOC and TASOC samples (mostly to represent the different light curve lengths), and we optimized different neural networks for each data set.
### Training Set
In Claytor et al. (2022) we trained a convolutional neural network on a set of synthetic light curves made from physically realistic spot evolution simulations, combined with real TESS noise from SCVZ galaxy light curves. Inactive galaxies do not vary on timescales of a year or shorter, and thus they form a robust standard sample that can be used to characterize systematics. Other quiescent objects can serve the same role, such as hot stars, which we employ here.
Two weaknesses of our previous approach were that (1) we were not successful in recovering periods of less than 10 days from our held-out test set, and (2) the neural network overfit within a few (of order 10) iterations over the training set. The first weakness was due to the choice of a loss function that enabled the network to estimate period uncertainty. In the presence of uncertainty, inferred periods are biased toward the mean of the distribution and away from the upper and lower limits. The effect is most pronounced for the top and bottom 10% of the training set period range, affecting the ranges from 0-18 days and 162-180 days. Since the ability to estimate the period uncertainty is a key strength of our approach, we worked around this problem by using multiple training sets with different period ranges.
We created four separate simulated training sets using butterpy(Claytor et al., 2021; Claytor et al., 2022) with periods ranging from 0.1 day to 30, 60, 90, and 180 days. Having a shorter upper limit such as 30 days allows us to more successfully probe the short-period range--here only 0-3 days and 27-30 days are severely affected--while having multiple training sets with increasing upper limits gives us multiple period estimates that we can mutually compare for extra tests of reliability. The distributions of all simulation input parameters besides period were the same as in Claytor et al. (2022) (the simulations for the 180 day upper limit _are_ the same as in the previous work), and the same simulations were used for both the TESS-SPOC and TASOC training sets. The only other difference was the source of the
light curves combined with the simulations to emulate instrumental noise and systematics. We note that using multiple training realizations yields multiple period estimates for the same star; we discuss the breaking of ties in Section 5.
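Schematically, each training example pairs a simulated rotation signal with a real quiescent light curve. The sketch below is a toy stand-in that injects a sinusoid in place of a butterpy spot simulation; the period and amplitude distributions and the multiplicative injection scheme are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_example(time, noise_flux, pmax=30.0):
    """Toy stand-in for a spot-evolution simulation: a log-uniform-period
    sinusoid injected multiplicatively into a real quiescent light curve."""
    prot = 10.0 ** rng.uniform(np.log10(0.1), np.log10(pmax))
    amp = 10.0 ** rng.uniform(-4.0, -1.5)  # fractional amplitude, an assumption
    modulation = 1.0 + amp * np.sin(2.0 * np.pi * time / prot)
    return noise_flux * modulation, prot
```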
The second shortcoming was simply due to the small number (\(\sim 2,000\)) of galaxy light curve examples. If there are too few examples of noise in the training set, the neural network learns the noise quickly and overfits the data. Since there are many more bright stars than galaxies in TESS, we addressed this by combining our simulations with light curves of stars in temperature ranges that should be comparatively quiescent to emulate TESS noise. McQuillan et al. (2014) detected periods in _Kepler_ stars hotter than the Sun half as often as in cooler stars. Given TESS's slightly worse photometric precision and redder pass band than _Kepler_, we expect TESS stars hotter than the Sun to be even harder to detect in rotation. This makes stars in the temperature range above \(\sim 5,800\) K ideal for use as quiescent light curve sources. At first we queried TPFs and computed light curves for TASOC stars in the range 5,800 K \(\leq T_{\rm eff}\leq\) 6,000 K. We kept light curves with at least 4 sectors to allow for gaps in the data while ensuring that there were data for more than half the time baseline. This yielded a set of 23,332 TASOC noise templates, an order of magnitude more than the number of galaxy sources used in the previous exercise. The same range of temperatures in TESS-SPOC, requiring that light curves have at least 7 sectors to cover more than half of the time baseline, has only 6,000 targets, so a larger temperature range was required. We used the range 6,000 K \(\leq T_{\rm eff}\leq\) 8,000 K, which contained 17,637 sources. Table 2 details the noise light curve samples that make up the TESS-SPOC and TASOC training sets.
We note that the temperature range for the TESS-SPOC noise light curves overlaps with the \(\delta\) Scuti instability strip (e.g., Murphy et al., 2019) and with the \(\gamma\) Doradus strip (e.g., Balona et al., 2011; Bradley et al., 2015). Of our TESS-SPOC noise targets, 1,724 (\(\sim\)10%) fall within the \(\delta\) Scuti strip, and depending on the criterion used, as few as 30% (Balona et al., 2011) and as many as two-thirds (Bradley et al., 2015) are within the \(\gamma\) Dor strip. Because \(\delta\) Scuti stars pulsate with periods on the order of hours and \(\gamma\) Dor less than about 3 days, we do not expect significant contamination from pulsation in our training light curves. The TASOC noise sample does not overlap with either instability strip. The presence of contaminants in the training set should make the CNN more robust against contamination (i.e., misidentifying a pulsator as a rotator), but thoroughly testing this is beyond the scope of this work.
### Convolutional Neural Network
We began with the same CNN as in Claytor et al. (2022), which uses the Adam optimizer (Kingma & Ba, 2014) and negative log-Laplacian loss, enabling the estimation of uncertainty along with the rotation period. The loss function has the form
\[\mathcal{L}=\ln{(2b)}+\frac{|P_{\rm true}-P_{\rm pred}|}{b}, \tag{1}\]
where \(b\), the median absolute deviation, is taken to represent the uncertainty.
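A possible Keras implementation of Eq. 1 is sketched below, assuming the network emits two outputs per star, the period and \(\log b\); this parameterization is our assumption rather than a statement of the published code.

```python
import tensorflow as tf

def laplace_nll(y_true, y_pred):
    """Negative log-Laplacian loss (Eq. 1); y_pred holds (period, log b)."""
    p_pred = y_pred[:, 0]
    b = tf.exp(y_pred[:, 1])  # exponentiate to keep the scale parameter positive
    return tf.reduce_mean(tf.math.log(2.0 * b) + tf.abs(y_true[:, 0] - p_pred) / b)
```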
We experimented with different architectures to optimize different networks to the TESS-SPOC and TASOC training sets. The original architecture had three convolution layers with (A) 8, 16, and 32 kernels, respectively, but we also tried (B) 16, 32, and 64 kernels; (C) 32, 64, and 128; and (D) 64, 128, and 256. More kernels or filters per layer allow the network to learn more features if they are present in the data, but they may also cause the network to overfit the data faster. We trained each architecture individually on each training set and chose the architecture with the best overall recovery on a held-out test sample. For the TESS-SPOC set, architecture C performed best overall, but architecture A was optimal for the TASOC set. We discuss the details of architecture optimization in Appendix B.
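A minimal sketch of architecture C (and, by swapping the filter counts, architecture A) is given below; the kernel size, pooling, and dense head are our assumptions, as only the number of kernels per convolution layer is specified above. It would pair with the Laplace loss sketched previously.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(filters=(32, 64, 128), input_shape=(64, 64, 1)):
    """Three-convolution-layer CNN; filter counts follow architectures A-D."""
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for f in filters:
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)
    outputs = layers.Dense(2)(x)  # (period, log b), matching the loss sketch above
    return keras.Model(inputs, outputs)

model_c = build_cnn()                       # architecture C (TESS-SPOC)
model_a = build_cnn(filters=(8, 16, 32))    # architecture A (TASOC)
```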
## 4 Rotational Modeling
With newly obtained TESS rotation periods, we will be able to look for trends of rotation detectability and variability across fundamental stellar parameters. Stars spin down and become less active as they age (Skumanich, 1972), so we expect both detectability and variability to decrease with age. We also know activity to vary with Rossby number, the ratio of rotation period to the convective overturn timescale (e.g., Noyes et al., 1984; Wright et al., 2011). To validate these relationships and look for potential departures from expected behavior in TESS, we will need to derive ages and Rossby numbers for our sample. We employ the stellar evolution
\begin{table}
\begin{tabular}{l|c|c} \hline \hline & TESS-SPOC & TASOC \\ \hline Sectors included & 1–13 & 1–6 \\ Minimum number of sectors & 7 & 4 \\ Time baseline (days) & 350 & 160 \\ \(T_{\rm eff}\) range of sources (K) & 6,000–8,000 & 5,800–6,000 \\ Number of sources & 17,637 & 23,332 \\ \hline \end{tabular}
\end{table}
Table 2: TESS-SPOC and TASOC Noise Light Curves for Training Set
and rotational modeling using kiauhoku (Claytor et al., 2020, 2020) to infer ages, masses, convective timescales, and Rossby numbers for our stars with rotation periods and APOGEE spectroscopy.
The stellar evolution tracks were generated using the non-rotating version of the Yale Rotating Stellar Evolution Code (YREC, Demarque et al., 2008), then global stellar properties were used to calculate angular momentum evolution following the magnetic braking law of van Saders et al. (2016). The models are fully described by Claytor et al. (2020), but we list the input physics and solar calibration here in Table 3. The angular momentum evolution includes weakened magnetic braking beginning about halfway through the main sequence (van Saders et al., 2016), but does not include core-envelope decoupling (e.g., Spada & Lanzafame, 2020) or the apparent stalling of spin-down that appears to occur in young, cool stars (Curtis et al., 2019).
Using the Markov-chain Monte Carlo (MCMC) tools in kiauhoku, we interpolated and fit stellar evolution models to observational data. For the MCMC we used a \(\chi^{2}\) log-likelihood of the form
\[\mathcal{L}_{\chi^{2}}=-\frac{1}{2}\sum_{i}\frac{\left(x_{i}-x_{i}^{\prime} \right)^{2}}{\sigma_{x_{i}}^{2}}, \tag{2}\]
where \(x_{i}\) and \(\sigma_{x_{i}}\) are the observational input parameters and uncertainties, respectively, \(x_{i}^{\prime}\) is the computed value from the model, and \(i\) iterates over the input parameters. The observables used in this computation were the CNN-inferred rotation periods, APOGEE calibrated temperatures, metallicities ([M/H]) and \(\alpha\)-element abundances ([\(\alpha\)/M]). All MCMC input data are provided with uncertainties in Table 4.
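Eq. 2 amounts to a standard Gaussian log-likelihood over the four observables; a minimal sketch:

```python
import numpy as np

def log_likelihood(model_vals, obs_vals, obs_errs):
    """Gaussian chi-square log-likelihood (Eq. 2) over the observables
    (P_rot, T_eff, [M/H], [alpha/M])."""
    r = (np.asarray(obs_vals) - np.asarray(model_vals)) / np.asarray(obs_errs)
    return -0.5 * np.sum(r ** 2)
```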
## 5 Rotation Periods of TESS Stars
We estimated periods for TESS-SPOC targets with at least 7 sectors and TASOC targets with at least 4 sectors, the same minimum numbers of sectors as for the training light curves. To determine reliability we followed the method of Claytor et al. (2022) and used a cut in fractional uncertainty to denote reliability. We do not treat the estimated \(\sigma_{P}\) (\(=b\) in Eq. 1) as a formal uncertainty. Rather, the quantity \(\sigma_{P}/P\) serves as a metric of relative credibility of period estimates. \(\sigma_{P}/P\leq 35\%\) translated to \(\sim\)10% median percent error in the training set recovery, so we adopt this uncertainty cut as our baseline for reliability. Since there are four neural networks, each with its own period range, we obtained four sets of period candidates for both the TESS-SPOC and TASOC data sets. If two or more neural networks yielded estimates passing our reliability cut for the same star, we averaged the estimates and added their standard deviation in quadrature to the uncertainty. If the newly combined fractional uncertainty was larger than 35%, we discarded the star.
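The tie-breaking logic described above can be summarized in a short function; treating the quadrature sum as acting on the mean of the per-network uncertainties is our assumption.

```python
import numpy as np

def combine_estimates(periods, sigmas, cut=0.35):
    """Combine per-network period estimates, keeping only those passing the
    fractional-uncertainty cut; discard the star if the combination fails it."""
    periods, sigmas = np.asarray(periods, float), np.asarray(sigmas, float)
    ok = sigmas / periods <= cut
    if not ok.any():
        return None
    p = periods[ok].mean()
    # between-network scatter added in quadrature to the (mean) uncertainty
    sigma = np.hypot(sigmas[ok].mean(), periods[ok].std())
    return (p, sigma) if sigma / p <= cut else None

# Example: the third network's estimate fails the cut and is excluded.
print(combine_estimates([24.0, 26.0, 70.0], [5.0, 6.0, 40.0]))
```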
We obtained 4,853 TESS-SPOC stars with reliable periods and 3,545 reliable TASOC periods. These combine for a total of 7,971 unique targets, 427 of which overlap between the two samples. We discuss the overlap sample in Section 5.3. The rotation periods up to 80 days, their photometric amplitudes, selected spectroscopic parameters, and associated flags are presented in Table 4. We also list the stellar parameters for the 53,141 rotationally non-detected stars in Table 5. For stars with periods from both TESS-SPOC and TASOC data, we favored the TESS-SPOC period in the final table due to the light curves having twice the duration as the TASOC light curves. We note that while the CNN estimated periods longer than 80 days that passed the uncertainty cut, this regime is highly contaminated by obviously spurious detections. We leave the vetting of periods longer than 80 days to future work, and for now we consider only shorter periods to be reliable. Figure 1 shows a small selection of light curves for which we obtained periods. The periods are plotted against TIC effective temperature in Figure 2 to illustrate, for the first time, the distribution of main sequence stellar rotation periods longer than 13 days in TESS.
\begin{table}
\begin{tabular}{l c} \hline \hline \multicolumn{1}{c}{ Parameter} & Value/Source \\ \hline Atmosphere & Castelli \& Kurucz (2004) \\ Convective overshoot & False \\ Diffusion & True \\ Equation of state & OPAL (Rogers \& Nayfonov, 2002) \\ High-temperature opacities & OP (Mendoza et al., 2007) \\ Low-temperature opacities & Ferguson et al. (2005) \\ Mixing length \(\alpha\) & 1.86 \\ Mixture and solar \(Z/X\) & Grevesse \& Sauval (1998) \\ Nuclear reaction rates & Adelberger et al. (2011) \\ Solar \(X\) & 0.7089 \\ Solar \(Y\) & 0.2728 \\ Solar \(Z\) & 0.0183 \\ \(\Delta Y/\Delta Z\) & 1.4 \\ Surface \((Z/X)_{\odot}\) & 0.02289 \\ \hline Angular momentum evolution & van Saders et al. (2016) \\ Initial rotation period & 8.134 d \\ Critical Rossby number & 2.16 \\ Critical \(\omega\) for saturation & 3.394\(\times 10^{-5}\) s\({}^{-1}\) \\ \(f_{k}\) & 6.575 \\ Disk coupling timescale & 0.281 Myr \\ \hline \end{tabular}
\end{table}
Table 3: Input Physics to Stellar Evolution Models
Figure 1: A selection of light curves for which we successfully estimated rotation periods using the neural network. _Top_: an M dwarf with strong, coherent spot modulation. _Middle_: a slowly-rotating M dwarf with several missing sectors. _Bottom_: a rapidly rotating G dwarf.
Figure 2: Period–temperature distribution for the TESS-SPOC (left) and TASOC (right) samples. Estimating periods using CNNs, we recover the short-period slope and intermediate-period gap seen in other rotating populations (e.g., McQuillan et al., 2014). Detection biases create a sharp edge at 27 days, above which the period uncertainties are larger due to the sector-to-sector stitching necessary for long-baseline TESS light curves. The vertical edge at 6,000 K occurs at the lower temperature edge of the training set. The drop in period detections for hotter stars may be due to conflicts with the training set or a true drop in amplitudes. Toward \(P_{\rm rot}=90\) days, contamination increases because less certain period estimates are biased toward the median period of the training set. The detections at \(P_{\rm rot}>30\) d and \(T_{\rm eff}>5,000\) K are likely to be spurious, but most of the M dwarf periods up to 80 days appear to be real.
### Features of the Period Distribution: Biases
Since the TESS-SPOC sample spans a wider range in temperature than our TASOC sample, we will focus our main discussion of the period distribution on the TESS-SPOC sample. In this period distribution there are two apparent edges worth noting. First, there is a temperature edge at 6,000 K. The underlying sample distribution has no such edge, so it must be produced by the period search. 6,000 K is the lower bound of the noise source sample used for the TESS-SPOC training set, so above this temperature there is some overlap between the training light curves and the "real" data. It is possible that inclusion in the training set as a noise template (multiple instances with varying injected simulated rotation signals) confuses the neural network and causes it to assign a large uncertainty to these targets. Another possibility is that spot modulation amplitudes drop above 6,000 K, where the convective envelope disappears and stars become less active. This drop in amplitude is seen in the _Kepler_ stars of Fig. 3. The drop in detections above 6,000 K is likely a combination of these effects.
The other edge is in rotation period and occurs at roughly 27 days. While slow rotators tend to be less active than fast rotators at fixed temperature, the spot modulation amplitudes at which we expect to lose detections vary in period across temperature. In other words, a period detection edge produced by astrophysical variability should not be flat. Rather, the 27-day detection edge is likely related to the 27-day sector length in TESS. Without a reliable absolute flux calibration in each sector, stitching sector light curves together can destroy coherent signals longer than 27 days in period. While we include sector-to-sector stitching in all our training sets, the 27-day edge suggests that the training sets do
\begin{table}
\begin{tabular}{l l} \hline \hline \multicolumn{1}{c}{ Label} & \multicolumn{1}{c}{Description} \\ \hline TIC & TESS Input Catalog ID \\ prot & CNN-inferred rotation period \\ e\_prot & rotation period uncertainty \\ prov & period provenance: TESS-SPOC or TASOC \\ rper & photometric activity range \(R_{per}\) \\ sph & photometric activity index \(S_{ph}\) \\ cdpp & combined differential photometric precision \\ Tmag & TESS magnitude \\ contratio & TIC flux contamination ratio \\ instability & instability strip flag \\ parallax & \(Gaia\) DR3 parallax \\ ruwe & \(Gaia\) DR3 renormalized unit weight error \\ phot\_g\_mean\_mag & \(Gaia\) DR3 apparent \(G\) magnitude \\ bp\_rp & \(Gaia\) DR3 \(G_{BP}-G_{RP}\) color index \\ teff & APOGEE DR17 effective temperature \\ teff\_err & temperature uncertainty \\ m\_h & APOGEE DR17 metallicity [M/H] \\ m\_h\_err & metallicity uncertainty \\ alpha\_m & APOGEE DR17 \(\alpha\) enhancement [\(\alpha\)/M] \\ alpha\_m\_err & \(\alpha\) enhancement uncertainty \\ snr\_bad & APOGEE DR17 spectral signal-to-noise flag \\ fspot & spot filling fraction \\ age & MCMC gyrochronological age \\ e\_age+ & 1\(\sigma\) age upper credible limit \\ e\_age- & 1\(\sigma\) age lower credible limit \\ mass & MCMC-inferred stellar mass \\ e\_mass+ & 1\(\sigma\) mass upper credible limit \\ e\_mass- & 1\(\sigma\) mass lower credible limit \\ rad & MCMC-inferred stellar radius \\ e\_rad+ & 1\(\sigma\) radius upper credible limit \\ e\_rad- & 1\(\sigma\) radius lower credible limit \\ Ro & MCMC-inferred Rossby number \\ fcconv & MCMC convergence flag \\ \hline \end{tabular} Note. – The “snr_bad” flag represents the APOGEE spectral signal-to-noise flag and is set for only 21 stars. The “instability” flag marks stars whose TIC temperatures and luminosities place them in the instability strip. It is set to 1 for 98 stars within the \(\delta\) Scuti instability strip characterized by Murphy et al. (2019), 2 for 105 stars in the \(\gamma\) Doradus strip of Balona et al. (2011), and 3 for 243 stars in the \(\gamma\) Dor strip of Bradley et al. (2015). This table is available in its entirety in machine-readable format.
\end{table}
Table 4: Properties of 7,971 Rotationally Detected TESS SCVZ Stars, MCMC Input & Fit Parameters
\begin{table}
\begin{tabular}{l l} \hline \hline \multicolumn{1}{c}{ Label} & \multicolumn{1}{c}{Description} \\ \hline TIC & TESS Input Catalog ID \\ prov & period provenance: TESS-SPOC or TASOC \\ rvar & photometric activity range \(R_{var}\) \\ cdpp & combined differential photometric precision \\ Tmag & TESS magnitude \\ contratio & TIC flux contamination ratio \\ instability & instability strip flag \\ parallax & \(Gaia\) DR3 parallax \\ ruwe & \(Gaia\) DR3 renormalized unit weight error \\ phot\_g\_mean\_mag & \(Gaia\) DR3 apparent \(G\) magnitude \\ bp\_rp & \(Gaia\) DR3 \(G_{BP}-G_{RP}\) color index \\ teff & APOGEE DR17 effective temperature \\ teff\_err & temperature uncertainty \\ m\_h & APOGEE DR17 metallicity [M/H] \\ m\_h\_err & metallicity uncertainty \\ alpha\_m & APOGEE DR17 \(\alpha\) enhancement [\(\alpha\)/M] \\ alpha\_m\_err & \(\alpha\) enhancement uncertainty \\ star\_bad & APOGEE DR17 stellar parameter fit flag \\ snr\_bad & APOGEE DR17 spectral signal-to-noise flag \\ \hline \end{tabular} Note. – The “star_bad” flag represents the APOGEE stellar parameter fit flag, set when a best-fit model is close to a grid edge. It is set for 772 stars. The “snr_bad” flag represents the APOGEE spectral signal-to-noise flag and is set for only 123 stars. The “instability” flag marks stars whose TIC temperatures and luminosities place them in the instability strip. It is set to 1 for 1,997 stars within the \(\delta\) Scuti instability strip characterized by Murphy et al. (2019), 2 for 3,412 stars in the \(\gamma\) Doradus strip of Balona et al. (2011), and 3 for 6,161 stars in the \(\gamma\) Dor strip of Bradley et al. (2015). This table is available in its entirety in machine-readable format.
\end{table}
Table 5: Properties of 53,141 Rotationally Nondetected TESS SCVZ Stars
not fully capture the sector effects in TESS, or at the very least the sector effects make period estimates much less certain beyond 27 days.
### Features of the Period Distribution: Populations
The period-temperature distribution displays a sloped, short-period edge, similar to what was seen in _Kepler_(McQuillan et al., 2014; Santos et al., 2019, 2021). This edge represents the point at which field stars converge onto the slowly-rotating sequence (Curtis et al., 2020).
The distribution also displays a gap in rotation period, occurring at roughly 12 days at 5,000 K and increasing to 20 days at 4,000 K. McQuillan et al. (2013) first identified this gap in the _Kepler_ field, and it has also been recovered in other field star samples using K2 (Reinhold and Hekker, 2020; Gordon et al., 2021). Lu et al. (2022) showed that the gap may close in fully convective star samples. We present here the first detection of the rotation period gap using TESS.
Figure 3 shows another look at the rotation period distribution, now colored by the photometric variability amplitude \(S_{\rm ph}\), in comparison with the distribution from the _Kepler_ field (Santos et al., 2019, 2021). As we expect, stellar variability generally decreases with increasing periods at fixed temperature, since slowly rotating stars are less magnetically active than faster stars. There is a significant dip in the variability between 3,500 K and 4,500 K, most notably near the location of the rotation period gap, which goes from about (5,000 K, 12 d) and curves upward to (4,000 K, \(\sim\)20 d) (refer to Figure 2). This is consistent with Reinhold et al. (2019); Reinhold and Hekker (2020), who found a similar dip in variability near the period gap in _Kepler_ and K2 stars. They argued that the dip in variability causes the apparent gap in rotation periods, where stars in the gap exhibit modulation too small to be detected in rotation.
Figure 4 shows the TESS-SPOC period-temperature distribution using different variability range \(R_{\rm per}\) floor values. Requiring \(\log(R_{\rm per}/{\rm ppm})>3.5\) removes many stars from the top-left corner of the diagram, which are hot but have apparently long rotation periods. While we do not expect to find many stars in this regime based on Galactic population synthesis (e.g., van Saders et al., 2019), the stars that are here should have low variability because they are hot and therefore have thin-to-nonexistent outer convective envelopes, and because they spin relatively slowly. The stars that are lost from the top panel to the middle panel of Figure 4 are likely mostly spurious detections whose measured \(R_{\rm per}\) is actually the photometric noise, as well as a handful of correctly measured, low-variability stars. As we continue to increase the \(R_{\rm per}\) floor, we see two effects. First, we lose more low-variability stars on the hot, long-period edge. This is precisely what we expect to see in a period sample: raising the variability floor, we should lose the highest-Rossby number stars first. These are the slowly rotating, hot stars in the top left "corner" of the distribution. Second, the gap becomes more apparent, consistent with Reinhold and Hekker (2020), although stars are not lost from the gap at a significantly higher rate than stars outside the gap.
### Comparison between TESS-SPOC and TASOC
In the TASOC sample (e.g., in the right panel of Figure 2), we again see a weak presence of the period gap as well as the sloped short-period edge. The TASOC sample also shows the 27-day horizontal detection edge exhibited by the TESS-SPOC sample, resulting from the increase in uncertainty past 27 days from sector-to-sector stitching.
There are 427 stars in common between the TASOC and TESS-SPOC samples. We estimated two periods for each of these stars using different neural networks fit to different training sets tailored to the different light curve lengths. While the underlying pixel data between the two samples were the same, the apertures used to perform photometry were different, and the TESS-SPOC light curves were more than twice as long (13 sectors) as the TASOC light curves (6 sectors). In addition, the two training sets used different underlying samples of stars for noise and systematics. This gives us a sample to compare period estimates for robustness against photometric aperture, training set, and duration of observation.
In Figure 5 we compare the period estimates for the overlap sample. They mostly agree, with a median relative error of 7%. The estimates that disagree have relatively large uncertainties, though the fact that they make our 35% reliability cut means that there will be some contamination in our period sample. 76% of stars in the overlap sample have period estimates agreeing to within 20%. The discrepancies likely arise from the different aperture selection, different light curve durations, or differences in the underlying training sets, although here we do not attempt to isolate the main contributor.
### Comparison with other large field rotation samples
The TESS rotation period distribution is the product of the underlying distribution of periods, the presence of modulation in the light curve, the availability of data products, and the ability to detect periods across various stellar parameters. To try and understand the relative influence of these effects, we compare the TESS
period distribution with other large period data sets, particularly _Kepler_ and K2. Figure 6 shows the period distributions from _Kepler_ and K2, while Figure 7 shows our newly obtained TESS distribution. We represent temperature bins as vertical histograms in the style of McQuillan et al. (2014) to increase the clarity of the period gap in the cool-temperature regime. The number of temperature and period bins is adjusted in each panel to account for the total number of stars in each sample.
The top panel of Figure 6 displays 52,338 carefully vetted _Kepler_ rotation periods from Santos et al. (2021). The _Kepler_ period distribution exhibits a pileup on its upper edge for stars hotter than \(\sim 5,500\) K, which is a prediction of the weakened magnetic braking hypothesis (van Saders et al., 2016, 2019) and has been well-studied in the _Kepler_ field (Hall et al., 2021; Masuda et al., 2022; David et al., 2022). Also present is the rotation period gap, clearly visible at \(\sim 15\) days at 5,000 K, \(\sim 17\) days at 4,500 K, and continuing to increase and widen at cooler temperatures.
The bottom panel of Figure 6 shows 13,847 rotation periods from stars in K2 measured by Reinhold and Hekker (2020). These represent a high-fidelity subsample with normalized Lomb-Scargle peak height \(>0.5\) and variability range \(R_{\rm var}>0.1\)%. Peak heights range from 0 to 1 and quantify how sinusoidal a light curve is, with a perfect sinusoid returning unit peak height, and noisy, non-sinusoidal data returning values close to zero. \(R_{\rm var}\) is defined similarly to \(R_{\rm per}\), except that \(R_{\rm var}\) is the variability range over the entire light curve, rather than a median of ranges per period bin. The K2 distribution shows the period gap most strongly between 5,000 K and about 4,250 K, but it is weakly visible in cooler stars, where it appears to increase in period and widen as in _Kepler_. The hot star pileup is not apparent here. This is likely due to the relatively large temperature uncertainty in the K2 Ecliptic Plane Input Catalog (median 140 K, Huber et al., 2016), which blurs out features in the temperature distribution. (van Saders et al., 2019; David et al., 2022). Finally, periods longer than about 35 days are largely absent from the K2 distribution because of K2's 80-day observing campaigns in each field.
The TESS distribution in the top panel of Figure 7 shows periods for 5,056 TESS-SPOC stars with \(\sigma_{P}/P\leq 35\)%, representing the most credible detections. The period gap is present and is most apparent at temperatures between 4,000 and 5,000 K. It is still visible at temperatures cooler than 4,000 K, but the dearth of reliable detections at periods nearing 30 days makes the gap more difficult to detect. In addition to the gap, we detect a handful of M-dwarfs rotating with periods between 40 and 60 days; similar stars were also observed in the _Kepler_ period distribution. We visually inspected the light curves for these stars and confirmed
Figure 3: The rotation period distribution versus temperature for both the _Kepler_ field (left, Santos et al., 2019, 2021) and TESS SCVZ (right, this work). The bins are colored by photometric variability index \(S_{\rm ph}\). As expected, amplitudes generally decrease with increasing period at fixed temperature. Interestingly, the amplitudes near the rotation gap are smaller than those away from the gap in the same temperature and period ranges, in agreement with _Kepler_ as shown left, and with K2 Reinhold et al. (2019); Reinhold and Hekker (2020). The TESS bins above a temperature of 5,000 K and above a period of 30 days nearly all have only a single star; this region is sparsely populated, and most periods here are likely to be spurious detections.
Figure 4: Distribution of periods and TIC temperatures for the TESS-SPOC sample, with varying lower bound on photometric variability range \(R_{\rm per}\). All stars shown pass our reliability criterion (\(\sigma_{P}/P\leq 35\%\)).
them to be true rotation detections with photometric variability \(R_{\rm per}\) approaching 1%. On the hot end, the distribution lacks the long-period edge seen in _Kepler_ because of the abundance of hot stars apparently rotating with \(\sim\)20-day periods. These are likely spurious detections, as their measured amplitudes are close to the noise floor (close to 100 ppm for \(Tmag=8\) and 1% for \(Tmag=15\)). When we raise the variability floor in the bottom panel of Figure 7, the hot, slow rotators mostly disappear, but the gap and the slowly rotating M-dwarfs remain.
We offer one final view of the TESS period distribution, now plotted over the _Kepler_ distribution of Santos et al. (2019, 2021), in Figure 8. The short-period edge of the TESS distribution has the same location and slope of _Kepler_'s, suggesting that the edge is a result of rotational evolution, rather than arising from details of the star formation history (Davenport, 2017; Curtis et al., 2020). The rotation period gap agrees as well, following _Kepler_'s for as long as the TESS gap remains visible into the hot stars. TESS appears to see stars in regions _Kepler_ does not: the slowly rotating, hot stars (\(T_{\rm eff}>5000\) K and \(P_{\rm rot}>30\) d) have amplitudes close to the noise floor and are likely spurious detections. On the other hand, the slowly rotating M dwarfs, with TESS periods up to 80 days, have been vetted by eye and are mostly real rotation detections. Interestingly, the branch of stars beneath the period gap turns over at temperatures below 3,500 K, which is not seen in _Kepler_ but is seen in some younger samples observed by K2 and MEarth (Reinhold and Hekker, 2020; Newton et al., 2016, 2018).
### Modeling Results
Taking the stars with reliable periods from either TESS-SPOC or TASOC, we cross matched with APOGEE DR17 spectroscopic parameters estimated with ASPCAP (Garcia Perez et al., 2016). To ensure a high quality sample, we removed objects with the ASPCAP STAR_BAD and SNR_BAD flags set. We also checked the MULTIPLE_SUSPECT flag for possible double-lined spectroscopic binaries, but none of our APOGEE rotators were flagged. Some stars in our sample had multiple visits and therefore multiple ASPCAP measurements. For targets with multiple measurements, we averaged the temperatures, metallicities, and \(\alpha\) abundances, then added the standard deviation of those measurements in quadrature with the formal ASPCAP uncertainties to obtain an uncertainty for each measurement. This affected 201 targets out of 2,654 with APOGEE spectroscopy. We then filtered out targets with large Renormalized Unit Weight Error (RUWE \(>\) 1.2, Gaia Collaboration et al., 2022) and high flux contamination ratio (TIC _contratio_\(>\) 10%) to clean the sample of potential binary or nearby flux contaminants. This yielded a sample of 1,227 stars, which we designate as our "Gold" sample. We fit models to these stars according to the procedure in Section 4, taking the posterior medians as the nominal fit parameters. The fit parameters and their uncertainties are presented in Table 4.
#### 5.5.1 The TESS SCVZ age distribution
The ages for our stars, which are estimated using our TESS rotation periods, are shown in Figure 9. We separate stars with rotation periods less than 10 days, which in _Kepler_ were more likely to be spun-up by close binary companions than be true rapid rotators Simonian et al. (2019). The age distribution peaks between 2 and 4 Gyr, which is consistent with other age distributions of Solar neighborhood stars: Buder et al. (2019) used isochrones
Figure 5: Period comparisons for the 427 stars in both the TESS-SPOC and TASOC samples. The solid black line represents perfect agreement, while the dashed red lines are \(\pm 50\%\) error. The black dotted lines are at 27 days on either axis, showing the TESS sector length. There is generally good agreement for most stars, with median percent error of 7%, and most of the disagreeing estimates have relatively large uncertainty. We note that while we do not include periods greater than 80 days in our analysis or table, we show them here to illustrate at what periods the agreement worsens.
for GALAH stars, Claytor et al. (2020) used rotation-based ages for _Kepler_ dwarfs, Berger et al. (2020) used isochrones for _Kepler_ dwarfs, Silva Aguirre et al. (2018) used asteroseismology for _Kepler_ giants, and Mackereth et al. (2021) used seismology in the TESS CVZs; all obtained a distribution peaking between 2 and 4 Gyr. We note that our age distribution lacks many of the old (\(>\) 6 Gyr) stars seen in other samples. This is a consequence of two detection biases: (1) our 27-day detection edge prevents the reliable detection of stars hotter than 4,000 K, and (2) old stars rotate more slowly and are less active, further complicating their detection in rotation.
#### 5.5.2 Galactic chemical evolution
With rotation-based ages and high-resolution APOGEE spectroscopic abundances, we can also look for Galactic chemical evolution trends in TESS (e.g., Silva Aguirre et al., 2018; Claytor et al., 2020). Stars' initial composition patterns are set by the compositions of the clouds in which they form, which are in turn enriched by stars that have lived and died before them. Galactic chemical evolution is often traced using the relative abundances of \(\alpha\) elements (e.g., O, M, Ca) to metals and metals to hydrogen (Bensby et al., 2014). APOGEE provides values for [\(\alpha\)/M] and [M/H], which we adopt. The background Galactic composition was governed by the dominance of core-collapse supernovae in the early Milky Way, followed by dominance of type Ia supernovae beginning about 8 Gyr ago (Feltzing et al., 2017). Both types of supernovae enriched the interstellar medium with metals, but in different ratios. Consequently, stars display decreasing [\(\alpha\)/M] and
Figure 6: Histogram representations for the period–temperature distribution of _Kepler_(Santos et al., 2019, 2021, top) and K2 (Reinhold and Hekker, 2020, bottom).
increasing [M/H] with time (reversed as a function of age). We therefore expect old stars to have low metallicity but high \(\alpha\) enhancement, while young stars should have higher metallicity and lower \(\alpha\)-element abundances (Haywood et al., 2013; Bensby et al., 2014; Martig et al., 2016; Feltzing et al., 2017; Buder et al., 2019). These young and old populations are representative of the classical Galactic "thin" and "thick" disks, respectively.
Figure 10 shows stellar \(\alpha\)-element abundance as a function of rotation-based age in the TESS SCVZ. As expected, young stars are generally \(\alpha\)-poor and metal-rich. There is a slight increasing trend of \(\alpha\) enhancement with age. We detect very few stars in rotation older than 6 Gyr due to the detection biases discussed in Section 5.5.1. Finally, we also detect a few young \(\alpha\)-rich stars. These are known from other samples (e.g., Martig et al., 2015; Silva Aguirre et al., 2018; Claytor et al., 2020) and are likely to be the products of stellar mergers. In this scenario, two old, \(\alpha\)-enhanced stars merge, destroying the stars' rotation histories and yielding a fast-rotating, apparently young, \(\alpha\)-enhanced product (Zhang et al., 2021).
#### 5.5.3 Stellar activity
Finally, with rotationally characterized stars we can begin to investigate trends of photometric activity with model-derived parameters like age and Rossby number. We define the Rossby number as the ratio of the rotation period over the convective overturn timescale \(\tau_{\rm cr}\), where the convective timescale is computed from our models as the pressure scale height \(H_{P}\) divided by the convective velocity evaluated at a distance \(H_{P}\) above the
Figure 7: Histogram representations for the period–temperature distribution of TESS, with no photometric variability cut (top), and restricting to \(\log(R_{\rm per}/{\rm ppm})>3.5\) (bottom). Temperatures are from the TIC.
base of the convection zone. To quantify the photometric activity, we use the photometric activity index \(S_{\rm ph}\) rather than \(R_{\rm per}\) so that we can compare the trends in the TESS SCVZ with those in the _Kepler_ field observed by Santos et al. (2019, 2021), and Mathur et al. (2023). The ages and and Rossby numbers for the Mathur et al. sample were computed using the same procedure and models underlying this work, so we can directly compare the _Kepler_ and TESS distributions. We start with the Gold sample and discard stars with periods less than 10 days as before, leaving 1,065 stars with TESS periods, APOGEE spectroscopic parameters, and well-determined Rossby numbers and ages.
Figure 11 shows the photometric activity index \(S_{\rm ph}\) versus the Rossby number for our binary-cleaned stars, plotted over the distribution of stars from _Kepler_. Activity decreases with increasing Rossby number, as expected. The TESS distribution generally agrees with the _Kepler_ distribution. We have a few stars close to the high-activity saturated regime (e.g., Wright et al., 2011), but most of our stars are magnetically unsaturated. The TESS detection limits are clear here, as our lowest-activity star with a period detection has \(S_{\rm ph}=345\) ppm, compared to _Kepler_'s lower limit in the tens of ppm. We have a few hot stars at \(\mathrm{Ro}\gtrsim 2\) where _Kepler_ has almost none. These are the likely spurious period detections from before (e.g., Figure 2).
We show \(S_{\rm ph}\) as a function of rotation-based age in Figure 12. Photometric activity decreases with age, an effect of stellar spin-down. The TESS distribution follows the range and morphology of the _Kepler_ distribution all the way down to the TESS rotation detection limit of \(S_{\rm ph}\approx 350\) ppm.
## 6 Detectability of Rotation
Here we consider the detectability of rotation as a function of fundamental stellar parameters. At a fixed rotation period, at lower temperature and higher metallicity, we expect deeper convective envelopes, stronger magnetism, more surface spots, and more easily detectable rotational modulation. Besides changing with
Figure 8: The _Kepler_ period–temperature distribution from Santos et al. (2019, 2021) (black points) with our new TESS rotation periods overplotted (red points). TESS temperatures are from the TIC.
Figure 9: The distribution of rotation-based stellar ages in the TESS SCVZ (the Gold sample in Table 1). The stars shown span a temperature range of 3400–6400 K. We separate stars with periods less than 10 days, which are more likely to be tidally synchronized binaries than true rapid rotators (Simonian et al., 2019).
static stellar parameters, the strength of a star's magnetism changes as the star ages. Main-sequence stars with outer convective envelopes are born spinning fast and spin down as they age as they lose angular momentum to magnetized stellar winds (Kraft, 1967; Skumanich, 1972). The decrease in rotation speed results in a weakening magnetic field, fewer or smaller spots, and less flux modulation, making rotation more difficult to detect in older stars than in younger stars at fixed mass and composition.
While we might expect rotation to be harder to detect in lower metallicity stars, an age dependence enters the picture because of the variation of Galactic composition with age. Because the background abundance ratios change with time, any apparent changes in rotation detectability with stellar abundances may actually be caused by the decreasing detectability with age.
To investigate the detectability of rotation with fundamental stellar properties, we consider the fraction of targets for which we detected periods in stellar parameter bins. While the CNN infers a period for each target, we can use the estimated uncertainty to determine whether those periods are reliable. As in Claytor et al. (2022), we label targets with \(\sigma_{P}/P<0.35\) (corresponding to \(\sim\)10% median percent error) as successful detections, and anything else as a nondetection.
Figure 13 shows the rotation detection fraction versus temperature and metallicity for all our rotationally-searched stars with APOGEE spectroscopy. Only bins with at least five targets are shown so that the diagram is not muddled by small number fluctuations. As expected, cooler stars, especially cooler than 5,000 K, are detected in period significantly more often than hotter stars. In the range \(5,000\) K \(<T_{\rm eff}\lesssim 6,000\) K, where the detections begin to decrease as a function of temperature, there appears to be a weak trend in metallicity, with higher-metallicity ([M/H] \(\gtrsim-0.1\)) stars being detected in period more frequently than lower-metallicity stars. This is consistent with Claytor et al. (2020); Amard et al. (2020); See et al. (2021), who found that rotation is more easily detected in _Kepler_ stars with higher metallicity at fixed mass. We see the same bias toward higher metallicity among our rotators, which may be due either to the deeper convective envelope resulting from enhanced opacity, or to more rapid rotation (and therefore higher activity) from increased moment of inertia and slower spin-down.
Another view of the detection fraction is shown in Figure 14, this time as a function of metallicity and \(\alpha\)-element enhancement. At fixed metallicity, we detect fewer stars in rotation at high [\(\alpha\)/M] due to the underlying relationship between age and \(\alpha\) enhancement. High-\(\alpha\) stars tend to be older, spin more slowly, and
Figure 10: Stellar \(\alpha\)-element abundances vs. age in the TESS SCVZ (the Gold sample in Table 1), tracing Galactic chemical evolution of \(\alpha\) enhancement from supernovae. Consistent with trends seen in the _Kepler_ field (e.g., Silva Aguirre et al., 2018; Claytor et al., 2020), there is a weak trend of increasing \(\alpha\) abundance with age below 8 Gyr associated with the classical “thin” Galactic disk. We expect but do not see a strong trend of increasing \(\alpha\) abundance with age above 8 Gyr (the classical “thick” disk), but this because we detect very few older stars due to (1) our 27-day detection edge, and (2) slow rotation and weak activity making stars more difficult to detect.
are less active, so we expect them to be more difficult to detect in rotation. This view also allows us to inspect the period detection fraction across metallicity at fixed \(\alpha\) enhancement. At fixed [\(\alpha\)/M], there is significant scatter in the detection fraction across metallicity. Some bins (e.g., \(0<[\alpha\)/M] \(<0.05\)) show gradually increasing detectability with increasing metallicity, while others (e.g., -0.05 \(<[\alpha\)/M] \(<0\)) worsen in detection at higher metallicity. Due to the amount of noise in the bins, it is difficult to conclude whether the apparently enhanced detection fraction at higher metallicity is caused by higher activity from deeper convection zones or by the underlying age distribution.
## 7 Spot Filling Fraction
The links between temperature, metallicity, age, convection, rotation, and photometric variability shed light on the generation of magnetism in cool, main-sequence stars. The strength of rotational modulation in the light curve, and therefore the detectability of rotation, hint at the presence of cool spots created by magnetic fields concentrated near the stellar surface. Because spots are created by the same dynamo that rotation and convection drive, we can use the prevalence of spots in different temperature and rotation ranges to infer dynamo properties in those regimes.
Cao and Pinsonneault (2022) found that temperature-sensitive spectral features include contributions from the quiet photosphere and cooler spots. Thus, fitting APOGEE spectra with two temperature components, they inferred the surface spot filling fractions and the temperature contrasts of a sample of stars. They used a modified version of the FERRE code (Allende Prieto et al., 2006), the spectral fitting code used by the ASP-CAP pipeline, to infer spot filling fractions for all stars in APOGEE DR17. Following this method, we obtained spot filling fractions and updated effective temperatures for the stars in our sample with APOGEE spectra.
We began with the 1,227 stars in our Gold sample (described in Section 5.5). We made cuts in _Gaia_ DR3 magnitudes and colors using \(M_{G}>4.4\) and \(G_{BP}-G_{RP}>1\) to target below the field main-sequence turnoff and ensure all our stars are securely on the main sequence. This yielded 585 cool, main-sequence stars, but a few (less than 20) showed an excess in \(M_{G}\), indicating that they were likely leftover binary systems (e.g., Berger et al., 2018). To remove these, we fit a line to the main sequence and computed the magnitude excess as \(\Delta M_{G}=M_{G}-\langle M_{G}\rangle\), where \(\langle M_{G}\rangle\) was the fit main-sequence magnitude. The distribution of magnitude excesses had two clear peaks, with a trough we visually identified at \(\Delta M_{G}=-0.4\). We removed stars with
Figure 11: The stellar activity index \(S_{\rm ph}\) versus Rossby number for 1,065 stars in our Gold sample with periods greater than 10 days, plotted over the _Kepler_ sample of Santos et al. (2019, 2021) and Mathur et al. (2023). As expected from theory and as seen in the _Kepler_ field, photometric activity decreases with increasing Rossby number. The gray region denotes our measured TESS noise floor, ranging from 250 ppm at 8th magnitude to 1,100 ppm at 13th magnitude. We do not detect TESS stars with amplitudes as low as _Kepler_ due to TESS’s worse photometric precision and therefore higher noise floor.
Figure 12: The stellar activity index \(S_{\rm ph}\) versus rotation-based age for 1,065 stars in our Gold sample with periods greater than 10 days, plotted over the _Kepler_ sample of Santos et al. (2019, 2021) and Mathur et al. (2023). Photometric activity decreases as stars age, an effect of spin-down seen in the _Kepler_ field and predicted by theory.
\(|\Delta M_{G}|<0.4\), leaving 566 stars. We designate these as our "Platinum" sample, a pure, cool, main-sequence sample robustly free from binary contamination.
With spot filling fractions, we can now investigate the detectability of rotation as a function of surface spot coverage. We might expect more spotted stars to be easier to detect in rotation, as they should have higher photometric variability. Figure 16 shows the 268 Platinum sample K-dwarfs with \(1.5<G_{BP}-G_{RP}<2\), along with the 209 stars in the same regime but with no rotation detection. The left panel shows the subsamples' distributions of spot filling fractions, while the right panel shows the cumulative frequency distributions. A Kolmogorov-Smirnov test returns a \(p\)-value of 0.3, rejecting the null hypothesis that the two samples are drawn from the same underlying distribution with only 70% (i.e., just over 1\(\sigma\)) significance. There are too few stars in this regime to confirm any difference in spot filling fraction between the period detection and nondetection samples.
We show the Platinum sample on a _Gaia_ color-magnitude diagram in Figure 15, with points colored by spot filling fraction. While most stars in our sample have low spot filling fractions less than 10%, the mid-K range (\(1.5<G_{BP}-G_{RP}<2\)) exhibits elevated fractions. Here, filling fractions reach \(\approx\) 0.3-0.4, behavior that was first observed by Cao et al. (2023), which they attributed to internal differential rotation. There is a clear gradient of increasing filling fraction with increasing \(M_{G}\). This may represent an increase of spot coverage with increasing metallicity in this temperature regime; the correlation between spot filling fraction and metallicity is still present after strong binary rejection, and the trend disappears outside this temperature range.
Cao et al. (2023) suggested that core-envelope decoupling gives rise to anomalous rotation behavior in cool stars, evidenced by elevated spot filling fractions in clus
Figure 13: The detectability of rotation across APOGEE temperature and metallicity. We preferentially detect rotation in cool stars, which have deeper convective envelopes and are more active. When the detectability drops off as a function of temperature, we see a weak trend of increasing detectability with increasing metallicity. This may be due to metallicity increasing the convective depth at fixed temperature. It may also be an age effect, as young, active stars tend to be more metal-rich.
ter K-dwarfs between 4,000 and 4,500 K. The process of decoupling and recoupling drives a radial shear layer and enhanced surface magnetism. With field star rotation periods up to 27 days, we can investigate the behavior of rotation and spottedness in the TESS SCVZ. Figure 17 shows the period-temperature distribution of our Platinum sample, again colored by spot filling fraction, with the rotation sequences from benchmark open clusters Pleiades (Rebull et al., 2016), Praesepe (Douglas et al., 2017, 2019), NGC 6811 (Curtis et al., 2019), Ruprecht 147 (Curtis et al., 2020), and M67 (Barnes et al., 2016; Dungee, 2022). Here we use the two-component fit effective temperature, rather than the TIC or ASPCAP values, for consistency with the spot filling fractions. As a function of temperature, spot filling fractions increase in the mid-K range--the same behavior Cao et al. (2023) identified in Praesepe. At fixed temperature, we might expect filling fractions to be higher at shorter periods, where stars rotate faster and are more magnetically active. Instead, spot filling fractions in the mid-K range appear to be elevated across the entire span of recovered periods (\(\sim\)10-30 d). Cao et al. (2023) predict shear-enhanced magnetism to persist in this temperature range until ages of a few Gyr (temperature-dependent), so it is likely we do not reach long enough periods for the spot filling fraction to decrease.
The increase in spot filling fraction occurs in the color and temperature range where open clusters NGC 6811 and Rup 147 were shown to exhibit an unexpected epoch of stalled rotational braking (Curtis et al., 2019, 2020). NGC 6811 is 1 Gyr old (Curtis et al., 2019), but for temperatures cooler than 5,000 K its rotation sequence rests upon that of the 670-Myr-old Praesepe (Douglas et al., 2017, 2019). Somewhere between the ages of these clusters, stellar spin-down departs from the classical picture from gyrochronology. By 2.7 Gyr (Rup 147, Curtis et al., 2020), stars at the hot end have resumed braking, but the cooler stars lag behind, suggesting that the
Figure 14: The detectability of rotation across APOGEE metallicity and \(\alpha\)-element abundace. We detect fewer stars in rotation at high [\(\alpha\)/M] because these stars tend to be old, slowly rotating, and less active. At fixed [\(\alpha\)/M], there is scatter in the trend of detectability with metallicity from bin to bin.
Figure 15: _Gaia_ DR3 color-magnitude diagram of our Platinum cool main-sequence sample, colored by surface spot filling fraction. The sample is carefully cleaned of potential binary systems, which can interlope as falsely high spot filling fractions.
epoch of stalled braking lasts longer for lower-mass stars (e.g., Gallet and Bouvier, 2015; Lanzafame and Spada, 2015; Somers and Pinsonneault, 2016; Spada and Lanzafame, 2020; Cao et al., 2023).
Spada and Lanzafame (2020) showed that a two-zone interior model, which allows the core and envelope to rotate at different rates, can nearly reproduce the stalled spin-down behavior exhibited by these clusters. In these models the core and envelope decouple, and the envelope continues to spin down from magnetic braking while the core maintains its rotation speed. During recoupling, angular momentum is transferred from the core to the envelope, and the apparent spin-down is temporarily slowed or halted. After recoupling, the star again behaves as a solid body and undergoes classical (i.e., power law, Skumanich, 1972) braking. While Curtis et al. (2020) argued in favor of the two-zone model, they could not rule out a temporary reduction in the braking torque, either from reduced wind or weakening of the magnetic field, as a possible cause. We suggest
Figure 16: The distribution of spot filling fractions separated by whether rotation was detected in the _Gaia_ color range \(1.5<G_{BP}-G_{RP}<2\). While we might expect more spotted stars to be easier to detect in rotation, there are too few stars to draw any statistically significant conclusions.
Figure 17: The period distribution of our Platinum sample, colored by spot filling fraction, and plotted with rotation sequences of benchmark open clusters. In the K temperature range (between 4,000 and 4,500 K), NGC 6811 and Rup 147 exhibit stalled rotational braking, a departure from current gyrochronological models. The spot filling fractions are elevated here as well.
that the coincidence of elevated spot filling fractions in field stars with the stalled braking seen in open clusters supports the shear-driven dynamo hypothesis argued by Cao et al. (2023).
## 8 Summary & Conclusion
We used deep learning to infer reliable periods for 7,971 main sequence stars near the southern ecliptic pole from year-long TESS full-frame image light curves. Our periods represent the first large-scale recovery and measurement of rotation in TESS stars rotating more slowly than 13.7 days, the limit previously imposed by TESS's complicated systematics. We fit stellar evolutionary models to the stars using rotation and high-resolution spectroscopic parameters to determine stellar ages, masses, convection timescales, Rossby numbers, and more. We investigated the detectability of rotation as a function of fundamental stellar parameters as well as new spot filling fractions inferred from spectroscopy. Our key results and conclusions are as follows:
* We find evidence for the intermediate rotation period gap, first discovered in the _Kepler_ field and seen in K2 field star samples across the ecliptic plane, the first such detection from TESS stars. The period gap in TESS closely aligns with the gaps from previous missions, cementing the conclusion that the gap is a product of stellar structure and evolution and not star formation history.
* The rotation period gap coincides with a dip in photometric variability, consistent with the findings of Reinhold et al. (2019); Reinhold and Hekker (2020) in other field star populations.
* The distribution of rotation periods in TESS closely resembles the distributions seen by _Kepler_ and K2. Its lower edge features a slope of increasing period with decreasing temperature, similar to the distributions from previous missions, and we detect slowly rotating M-dwarfs with a similar location and distribution as in _Kepler_.
* We detect a higher fraction of stars in rotation at cooler effective temperatures, where stars rotate faster at fixed age and have deeper convective envelopes resulting in higher activity amplitudes. We also preferentially detect rotation in stars at higher metallicities at fixed temperature. This may owe to deepening convective envelopes with increasing metallicity, or to increased moment of inertia with increasing metallicity resulting in slower spin down, and faster period (and therefore higher activity) at fixed age.
* In _Gaia_ color regimes with a range of spot filling fractions, stars detected in rotation showed no significant difference in spot filling fraction compared to stars with no period detection.
* Field stars with elevated spot filling fractions coincide with open cluster stars that exhibit a temporary stall in magnetic braking. These coincide at least partly with the period gap and its variability depression, suggesting a common cause.
While TESS systematics have presented unique challenges that remain difficult to solve with conventional period-finding techniques, deep learning presents a way to circumvent instrument systematics without having to solve systematics removal for every individual case. Since first observing the southern hemisphere in 2018, TESS has also observed the North, revisited both hemispheres, and continues to observe the entire sky in its search for transiting exoplanets. As it does, it continues to build a vast trove of stellar light curves to search for rotation in stars across the entire sky.
Our simulation-driven CNN approach enables the inference of more than just rotation. The existing training sets include activity level, latitudinal differential rotation, spot lifetimes, and activity cycles. These quantities can be probed with minimal modification to our CNN framework and would provide new avenues of investigation of stellar rotational, activity, and magnetic evolution.
Understanding the complicated rotational evolution of low-mass stars and the related anomalies in activity and spot coverage will require more rotation periods for more diverse populations of stars. As we grow the number of rotation periods obtained with TESS, precise and homogeneously derived temperatures and metallicities will be imperative to pinpoint the regimes where stellar rotation and activity processes change. The Milky Way Mapper (MWM) of the Sloan Digital Sky Survey V (Kollmeier et al., 2017) is obtaining APOGEE spectroscopy for 6 million stars across the whole, including 300,000 observed with TESS two-minute cadence in the SCVZ. MWM will provide homogeneous temperatures, metallicities, and detailed chemical abundances for all these stars, offering unprecedented precision on the fundamental parameters of a large rotation sample.
Upcoming space missions will provide crucial avenues to rotation periods as well. The methods in this work will be applicable to photometry obtained by the _Nancy Grace Roman Space Telescope_(Spergel et al., 2015). _Roman_ will perform a Galactic Bulge Time Domain Survey (Penny et al., 2019; Johnson et al., 2020) with cadence similar to TESS with the addition of lower ca
dence photometry in at least one secondary band. Not only will a rotation be made accessible in a relatively unprobed population of stars toward the Galactic bulge, but the multi-band coverage will provide access to time-domain temperature resolution, enabling the study of stellar spot and facula distributions for hundreds of thousands of stars. Furthermore, the potential to observe two globular clusters near the Galactic center with _Roman_(Grunblatt et al., 2023) would provide the first large gyrochronology anchors at both old ages and sub-Solar metallicities.
We gratefully acknowledge Gagandeep Anand, Ashley Chontos, Monique Chyba, Ryan Dungee, Rafael Garcia, Daniel Huber, Corin Marasco, Savita Mathur, Peter Sadowski, Angela Santos, Benjamin Shappee, Xudong Sun, and Jamie Tayar for helpful conversations that improved the quality of this manuscript.
The technical support and advanced computing resources from the University of Hawai'i Information Technology Services - Cyberinfrastructure are gratefully acknowledged.
J.v.S. and Z.R.C. acknowledge support from the National Aeronautics and Space Administration (80NSSC21K0246, 80NSSC18K18584)
This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA's Science Mission Directorate.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss4.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics -- Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
Software:butterpyClaytor et al. (2021); Claytor et al. (2022), kiauhokuClaytor et al. (2020, 2020), NumPyHarris et al. (2020), PandasWes McKinney (2010), MatplotlibHunter (2007), AstroPyAstropyCollaboration et al. (2013, 2018), SciPyVirtanen et al. (2020), PyTorchPaszke et al. (2019), LightkurveLightkurveCollaboration et al. (2018), TESScutBrasseur et al. (2019), iPythonPerez & Granger (2007), starspotAngus2021, AstroqueryGinsburg et al. (2019)
## Appendix A Public TESS Photometry and Tools
There are several publicly available light curve sets, pipelines, and tools designed and optimized for TESS data. We list some of the most widely used in Table 6. Tools like LightkurveLightkurveCollaboration et al. (2018) and eleanorFeinstein et al. (2019) are general tools to download, process, and analyze TESS data. eleanor is a flexible tool that allows for several different systematics correction routines to be used on the same light curves. However, it requires large downloads, making it somewhat inconvenient for working with large data. UnpopularHattori et al. (2022) is a light curve processing pipeline optimized for systematics removal while preserving multi-sector astrophysical signals. It may be ideal for the problem of rotation, but it requires downloading large FFI cutouts, or the entire set of FFIs, for it to work optimally. Lightkurve does no automatic processing and provides simple tools for downloading and interacting with image and light curve data. We use Lightkurve for all our photometry and light curve processing.
Among the many public light curve datasets, the TESS Quick-Look Pipeline (QLP, e.g., Huang et al. 2020) and DIAmanteMontalto et al. (2020) are designed for planet searches, so their light curve processing is aggressive and can remove the stellar signals we are interested in. The difference imaging analysis (DIA) light curves of Oelkers & Stassun (2018) are for general use, but only sectors 1-5 of the first year are available. The GSFC-ELEANOR-LITE light curves Powell et al. (2022) are a brand new data set using eleanor to create general-use light curves for all TESS stars brighter than 16th magnitude in the TESS band pass. They will be worth considering for large scale investigations in TESS, but currently only four sectors are publicly available. The TESS Science Processing Operations Center (TESS-SPOC, Caldwell et al. 2020) has FFI light curves for nearly 40,000 bright SCVZ targets, with background subtraction and systematics correction, as well as underlying pixel data and apertures, available. They are suitable for general use and are easily downloaded from MAST. Finally, the TESS Asteroseismic Science Operations Center (TASOC, e.g., Handberg et al. 2021) is producing data products for all targets brighter than 15th TESS magnitude. They provide two different light curve products optimized for signals at different timescales with varying levels of systematics correction.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline \multicolumn{1}{c}{Name} & Reference(s) & Science Use \\ \hline LightkurveLightkurve & LightkurveCollaboration et al. (2018) & **general** \\ eleanor & Feinstein et al. (2019) & general \\ Unpopular & Hattori et al. (2022) & general \\ \hline DIA & Oelkers \& Stassun (2018) & general \\ QLP & Huang et al. (2020, 2020); Kunimoto et al. (2021) & exoplanet detection \\
**TESS-SPOC** & Caldwell et al. (2020) & **general** \\ DIAmante & Montalto et al. (2020) & exoplanet detection \\
**FDA/TASOC** & Handberg et al. (2021); Lund et al. (2021) & asteroseismology \\ GSFC-ELEANOR-LITE & Powell et al. (2022) & general \\ \hline \end{tabular} Note. – Software tools are listed first, followed by public light curve data sets. Tools and data sets used in this work are highlighted in blue. All light curve data sets are documented and publicly available as MAST High Level Science Products at [https://archive.stsci.edu/hlsp](https://archive.stsci.edu/hlsp), except for DIAOOOelkers & Stassun (2018), which is available at [https://filtergraph.com/tess.ffi](https://filtergraph.com/tess.ffi).
\end{table}
Table 6: TESS Full Frame Image Light Curves, Pipelines, and Tools
## Appendix B Optimizing the Neural Network Architecture
In Section 3.2 we lay out the various convolutional neural network (CNN) architectures that we trained and assessed to optimize our network's performance. Here we discuss the details of that optimization and the justification for our choices of architecture.
For both the TESS-SPOC and TASOC data products, we trained four different CNNs, each with 3 convolution layers, but each CNN had different numbers of convolution kernels to give the networks different flexibility in learning features. The architectures were (A) 8, 16, and 32 kernels; (B) 6, 32, and 64 kernels; (C) 32, 64, and 128; and (D) 64, 128, and 256. We also used four different training sets for both TESS-SPOC and TASOC, each with a different upper limit on rotation period. The period upper limits were 30, 60, 90, and 180 days, intended to optimize different networks for different period ranges. We trained all four architectures on each period range, compared performance metrics, and chose the architecture that had the best performance on average across all four training sets. For performance metrics, we considered, (1) average test loss, (2) median relative uncertainty, (3) percentage of test targets recovered to within 10% and 20% accuracy, and (4) the 1st and 99th percentiles of the filtered period estimates. To illustrate the meaning of these values, we will use the 180-day TESS-SPOC training set as an example.
During training, each training set is partitioned into a training, validation, and test set. The training set is used to fit the network parameters, the validation set is used to determine when to stop training to avoid overfitting, and the test set is used to assess performance. We monitored the average loss for all three partitions during training so that we can construct learning curves, which show the loss values versus training epoch. Figure 18 shows the learning curves for all four architectures on the 180-day training set. The solid lines represent the training loss, while the dashed lines represent the test loss. Left unchecked, training loss will continue to decrease, but the loss on a held-out validation set will plateau or begin to increase once the network begins overfitting, which we use as our stopping criterion. The test loss is highest for run A, the simplest architecture we used. This indicates that run A is not complex enough to fully learn the features in the data, or at least that it begins overfitting before it can fully learn the features. Run B performs better, but is comparable to runs C and D, which fully train in fewer epochs. We can rule out run A for this case, but more metrics are needed to properly assess which run performs best.
One of the strengths of our method is the ability to estimate an uncertainty, which we can use as a metric of predicted reliability (Claytor et al., 2022). Specifically, we use the fractional uncertainty \(\sigma_{P}/P\) to normalize for period dependence. A better-trained network should have lower values of \(\sigma_{P}/P\), indicating more reliable estimates. We use the median \(\sigma_{P}/P\) as an additional metric of performance in addition to using it to filter out bad estimates. Figure 19 shows the _filtered_ period estimates for each run, but note that the median fractional uncertainty listed in each panel
Figure 18: Learning curves of all four CNN architectures for the 180-day training set. The solid lines track the training loss, while the dashed lines show the test loss, which was used to assess performance of the networks once trained.
is computed over the _unfiltered_ periods. Run B has the lowest estimated uncertainty, so by this metric it performs the best and has the most reliable estimates.
We also use accuracy metrics to assess performance. The "acc10" and "acc20" metrics quantify what fraction of test targets are recovered to within 10% and 20% accuracy after filtering by uncertainty. The "acc10" metrics for each run are near 50%, which also means that the median relative error on the period estimates is near 10% for all runs. Run B has the highest accuracy metrics, so it once again performs best.
Estimating uncertainty biases estimates toward the median of the distribution, making period inference near the edges of the training set period range more difficult (Claytor et al., 2022). We attempt to mitigate this by tabulating the 1st and 99th percentiles of each (unfiltered and filtered) inferred period range. Figure 20 shows the distribution of periods for both the unfiltered (left) and filtered (right) estimates. Though it is difficult to assess by eye, run A
Figure 19: Period detections for different CNN architectures, filtered by relative uncertainty. Architectures are increasingly complex from A to D, and recovery statistics are shown in the legend of each panel.
has the lowest 1st percentile (12.1 d) in the filtered sample, although all runs have first percentiles in the 12-13 day range. This also gives us a lower limit for where we can expect successful period estimates from this training set: networks trained on the 180-day set struggle to infer periods less than 12 days, motivating the need for training sets with smaller period ranges.
We prioritized metrics as follows: we considered the average test loss to rule out runs that failed to compete in loss value (e.g., runs B, C, and D achieved comparable loss values, but run A fell short). We then prioritized the accuracy metrics and uncertainty together, then if those were comparable we used the 1st and 99th percentile values to break ties.
When considering all our metrics for the 180-day TESS-SPOC training set, run B performs the best overall. We then repeated this process for each training set and chose the architecture that performed best over all training sets. Following this procedure, we chose architecture C for the TESS-SPOC data and architecture A for TASOC. We note that it may be optimal to use the optimal architecture for each training set, rather than adopt one architecture for all sets. We will consider before publication and release of the final period catalog.
|
2301.10859 | Salesforce CausalAI Library: A Fast and Scalable Framework for Causal
Analysis of Time Series and Tabular Data | We introduce the Salesforce CausalAI Library, an open-source library for
causal analysis using observational data. It supports causal discovery and
causal inference for tabular and time series data, of discrete, continuous and
heterogeneous types. This library includes algorithms that handle linear and
non-linear causal relationships between variables, and uses multi-processing
for speed-up. We also include a data generator capable of generating synthetic
data with specified structural equation model for the aforementioned data
formats and types, that helps users control the ground-truth causal process
while investigating various algorithms. Finally, we provide a user interface
(UI) that allows users to perform causal analysis on data without coding. The
goal of this library is to provide a fast and flexible solution for a variety
of problems in the domain of causality. This technical report describes the
Salesforce CausalAI API along with its capabilities, the implementations of the
supported algorithms, and experiments demonstrating their performance and
speed. Our library is available at
\url{https://github.com/salesforce/causalai}. | Devansh Arpit, Matthew Fernandez, Itai Feigenbaum, Weiran Yao, Chenghao Liu, Wenzhuo Yang, Paul Josel, Shelby Heinecke, Eric Hu, Huan Wang, Stephen Hoi, Caiming Xiong, Kun Zhang, Juan Carlos Niebles | 2023-01-25T22:42:48Z | http://arxiv.org/abs/2301.10859v2 | Salesforce CausalAI Library: A Fast and Scalable Framework for Causal Analysis of Time Series and Tabular Data
###### Abstract
We introduce the Salesforce CausalAI Library, an open-source library for causal analysis using observational data. It supports causal discovery and causal inference for tabular and time series data, of both discrete and continuous types. This library includes algorithms that handle linear and non-linear causal relationships between variables, and uses multi-processing for speed-up. We also include a data generator capable of generating synthetic data with specified structural equation model for both the aforementioned data formats and types, that helps users control the ground-truth causal process while investigating various algorithms. Finally, we provide a user interface (UI) that allows users to perform causal analysis on data without coding. The goal of this library is to provide a fast and flexible solution for a variety of problems in the domain of causality. This technical report describes the Salesforce CausalAI API along with its capabilities, the implementations of the supported algorithms, and experiments demonstrating their performance and speed. Our library is available at [https://github.com/salesforce/causalai](https://github.com/salesforce/causalai).
CausalAI Library: A Fast and Scalable Framework for Causal Analysis of Time Series and Tabular Data
CausalAI Library
## 1 Introduction
Causal inference aims at determining how a change in one part of a system affects another, in isolation, i.e., no other parts of the system are allowed to change independently. Such an inference is fundamentally different from predictions made by machine learning models, which are based on correlation between variables. The reason behind this difference is that correlation does not necessarily imply causation. As a simple example, consider two discrete random variables, \(X\) and \(Y\). \(X\) can independently take two states- \(-1\) and \(+1\) with probability \(0.5\) each. \(Y\) on the other hand takes the state \(0\) when \(X\) is \(-1\), and states \(-1\) and \(+1\) with probability \(0.5\) each, when \(X\) is \(+1\). By design, \(X\) causes \(Y\). However, the correlation between the two variables is \(0\). Another example on the other end of the spectrum is a scenario in which a third variable \(Z\) is a common cause of \(X\) and \(Y\), and there is no causal link between the latter two. In this case, it is likely that \(X\) and \(Y\) are correlated. However, by design, any isolated change in \(X\) cannot cause changes in \(Y\), since \(Z\) is fixed.
The above examples illustrate the fundamental limitations of using correlation based models for the problem of predicting the causal impact of one variable on another. This problem has important applications in several domains such as sales, medicine, diagnosis, etc. For instance, a business might be interested in finding out whether more customers would be likely to purchase their products if they offered a discount, versus if they showed ads on the television. As another example, finding out if certain chemicals cause harmful effects on health can benefit the broader society. This type of knowledge facilitates interventions, which are actionable items that can be used to change future outcomes, as opposed to correlation based machine learning models, which are typically used for automation. Causal analysis tools help us discover which variables in a system can be intervened to achieve the desired outcome for a particular variable of interest.
Given the importance of this problem, several libraries exist on causal discovery and causal inference (Kalainathan and Goudet, 2019; Sharma and Kiciman, 2020; Beaumont et al., 2021). However, certain limitations still exist in existing libraries, such as computationally heavy implementation, and the lack of support for different data types and code-free user interface, which are required for a unified end-to-end system that supports a fast and easy to use system for the various types of problems in the domain of causal analysis.
We introduce the Salesforce CausalAI Library, a Python library for causal analysis that supports causal discovery and causal inference using observation data. The Salesforce CausalAI library pipeline is shown in figure 1. Some of the key features of our library are:
* **Data**: Causal analysis on tabular and time series data, of both discrete and continuous types.
* **Missing Values**: Support for handling missing/NaN values in data.
Figure 1: Salesforce CausalAI Library Pipeline. We support causal discovery and causal inference. The causal discovery module takes as input a data object (containing observational data) and a prior knowledge object (containing any expert partial prior knowledge, optional), and outputs a causal graph. The causal inference module takes a causal graph as input (which could be directly provided by the user or estimated using the causal discovery module) along with the user specified interventions, and outputs the estimated effect on the specified target variable.
* **Data Generator**: A synthetic data generator that uses a specified structural equation model (SEM) for generating tabular and time series data. This can be used for evaluating and comparing different causal discovery algorithms since the ground truth values are known.
* **Distributed Computing**: Use of multi-processing using the Ray (Moritz et al., 2018) library, that can be optionally turned on by the user when dealing with large datasets or number of variables for faster compute.
* **Targeted Causal Discovery**: In certain cases, we support targeted causal discovery, in which the user is only interested in discovering the causal parents of a specific variable of interest instead of the entire causal graph. This option reduces computational overhead.
* **Visualization**: Visualize tabular and time series causal graphs.
* **Domain Knowledge**: Incorporate any user provided partial prior knowledge about the causal graph in the causal discovery process.
* **Code-free UI**: Provide a code-free user interface in which users may directly upload their data and perform their desired choice of causal analysis algorithm at the click of a button.
## 2 Related Work
Table 1 summarizes the main features supported by the Salesforce CausalAI library versus the existing libraries for causal analysis. The key differentiating features of our library are parallelization and a user interface (UI), that are aimed at making causal analysis more scalable and user friendly. Our UI allows the users to run causal discovery and causal inference algorithms on their local machines without the need to write code. In terms of parallelization, we use the Python Ray library (Moritz et al., 2018) (optional) such that the implementation of the algorithm is tied to it, which makes the user's interaction with the API simple, i.e., the user can simply specify as an argument whether or not to use multi-processing. Tigramite also supports parallelization, but uses MPI4Py (Dalcin and Fang, 2021), which we found to be slower compared to Ray. Additionally, in Tigramite's implementation of the PC algorithm, the process of finding the causal neighbors of a variable is run in parallel for each variable. Note that each causal neighbor discovery may require an exponential number (in terms of the number of variables) of conditional independence (CI) tests. Our implementation on the other hand runs the CI tests themselves within each causal neighbor discovery process in parallel. This makes our implementation more efficient. CausalNex on the other hand uses Pytorch for parallelization.
## 3 Library Architecture and API Description
The Salesforce CausalAI Library API is composed of four main components: the data layer, prior knowledge, causal discovery, and causal inference, as described in the subsections below. The data layer includes the data generator, data transform, and data specification modules.
The prior knowledge layer helps the user specify any partial knowledge about the causal relationship between variables. The causal discovery layer aims at retrieving the underlying causal graph from observational data. Finally, the causal inference layer aims at estimating the average treatment effect (ATE) and conditional ATE (CATE) of interventions on a subset of variables on a specified target variable. Aside from these components, we also support causal graph visualization and evaluation of the correctness of the estimated causal graph when the ground truth graph is available.
### Data Layer
#### Data Generator
The function DataGenerator can be used to generate synthetic time series and tabular data according to a user-specified additive noise structural equation model (SEM). As an example, consider a tabular data system with random variables \(A\), \(B\) and \(C\); then an instance of an additive noise SEM is,
\[A=\mathcal{N}_{1}() \tag{1}\]
\[B=k_{1}\times F_{1}(A)+\mathcal{N}_{2}() \tag{2}\]
\[C=k_{2}\times F_{2}(A)+k_{3}\times F_{3}(B)+\mathcal{N}_{3}() \tag{3}\]
where \(\mathcal{N}_{i}\)'s and \(F_{i}\)'s are callable functions and \(k_{i}\)'s are constants, all of which are user specified. This SEM is passed as the following dictionary: \(\texttt{sem}=\{A:[],B:[(A,k_{1},F_{1})],C:[(A,k_{2},F_{2}),(B,k_{3},F_{3})]\}\). The noise functions \(\texttt{noise\_fn}=[\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}]\) are passed as a list. The procedure for generating data from this SEM is to traverse a topologically sorted order of the variables in the corresponding causal graph and sample their values: first sample the values of \(A\), since it has no parents, then \(B\), and then \(C\). The number of data samples is specified by T (int). The user may provide a random seed value seed (int) for reproducibility. DataGenerator accepts intervention if the goal is to generate data from a specified SEM while intervening on certain variables. This is passed as a dictionary, where keys are the variable names to be intervened on, and values are 1D NumPy arrays of length equal to T. The way this is achieved is identical to the previous case, except that DataGenerator intervenes on the intervention variables whenever they are encountered during the graph traversal. Finally, the user may specify whether the generated data should be discrete or continuous via discrete (bool). In the discrete case, the DataGenerator function first generates continuous data, and then discretizes them by binning (the number of states can be specified via nstates).
| Library | CI | DK | Time Series | Tabular | Parallelization | UI |
| --- | --- | --- | --- | --- | --- | --- |
| Tetrad (tet) | ✓ | ✓ | ✓ | ✓ | - | ✓ |
| Causal-learn (cau) | - | ✓ | ✓ | ✓ | - | - |
| Tigramite (tig) | ✓ | - | ✓ | - | ✓ | ✓ |
| CausalNex (Beaumont et al., 2021) | ✓ | ✓ | - | ✓ | ✓ | - |
| DoWhy (Sharma and Kiciman, 2020) | ✓ | ✓ | - | ✓ | - | - |
| Salesforce CausalAI (ours) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

Table 1: A comparison of features supported by the Salesforce CausalAI library vs. existing libraries for causal analysis. Note that all libraries support causal discovery. CI: Causal Inference, DK: Domain Knowledge, UI: User Interface.
There are cases where it is desirable to generate data using a given SEM, both with and without interventions. To achieve this, the DataGenerator function is called twice, once without the intervention argument and once with it. This gives the user access to observational data from the SEM, but also provides the ground truth values of the variables after a subset of them is intervened on. This is useful, for instance, when we want to evaluate the accuracy of causal inference algorithms. In such cases, it is important to specify the same random seed to the DataGenerator function in both calls.
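The snippet below is a minimal usage sketch of the DataGenerator API described above. The import path, the returned values, and the noise-function calling convention are assumptions made for illustration; only the sem, noise_fn, T, seed, intervention, and discrete arguments follow the description in this section.

```python
import numpy as np
from causalai.data.data_generator import DataGenerator  # assumed import path

fn = lambda x: x          # F_i: identity (linear) causal mechanisms
coef = 0.5                # k_i
sem = {
    'A': [],
    'B': [('A', coef, fn)],
    'C': [('A', coef, fn), ('B', coef, fn)],
}
noise_fn = [np.random.randn, np.random.randn, np.random.randn]  # N_i callables

# Observational data: 1000 samples, fixed seed for reproducibility.
# The returned triple (data, names, graph) is an assumption for illustration.
data, var_names, graph = DataGenerator(sem, noise_fn=noise_fn, T=1000,
                                       seed=0, discrete=False)

# Interventional data from the same SEM and the same seed: intervene on B.
intervention = {'B': np.full(1000, 2.0)}
data_int, _, _ = DataGenerator(sem, noise_fn=noise_fn, T=1000, seed=0,
                               discrete=False, intervention=intervention)
```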
#### Data Transform

For time series data, two transform modules are supported: StandardizeTransform and DifferenceTransform. StandardizeTransform subtracts the corresponding feature-wise mean from each element and divides by the feature-wise standard deviation. DifferenceTransform performs temporal differencing, i.e., it computes the difference between two time steps separated by a given interval, which can be specified by the user. Note that in case there are NaN values in the data array, the output of these transformations only contains NaNs at the locations of the original NaNs. These modules use the NumPy array format as input and output. Finally, multiple arrays may be passed to these modules in the case of disjoint time series data (e.g., one time series is a daily record from January to March 2001, and another from January to March 2002).
For tabular data, only StandardizeTransform is currently supported, which works identically to the time series case.
#### Data Object

In order to feed observational data to the causal discovery algorithms in our API, the raw data (a NumPy array) and an optional list of variable names are used to instantiate a CausalAI data object. Note that any data transformation must be applied to the NumPy array prior to instantiating the data object. TimeSeriesData and TabularData must be initialized with the aforementioned data for time series and tabular data, respectively. For time series data, multiple arrays may be passed to the TimeSeriesData class in the case of disjoint time series data (e.g., one time series is a daily record from January to March 2001, and another from January to March 2002). Users may also optionally specify whether the data contains or may contain NaNs, so that they can be handled. These data objects contain some useful methods (e.g., for extracting the data corresponding to the parents of a variable for a given observation or time step), but they are mainly designed to be used internally by the causal discovery algorithms, and users may treat the data object as a black box.
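As a concrete illustration, the sketch below standardizes a raw array and wraps it in a time series data object. The import paths and the fit/transform calling convention are assumptions for illustration.

```python
import numpy as np
from causalai.data.transforms.time_series import StandardizeTransform  # assumed path
from causalai.data.time_series import TimeSeriesData                   # assumed path

raw = np.random.randn(500, 3)         # 500 time steps, 3 variables

# Transforms are applied to the NumPy array before building the data object.
transform = StandardizeTransform()
transform.fit(raw)                    # assumed fit/transform convention
processed = transform.transform(raw)  # feature-wise zero mean, unit std

data_obj = TimeSeriesData(processed, var_names=['A', 'B', 'C'])
```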
### Prior Knowledge
The module PriorKnowledge can be optionally used to specify any partial prior knowledge about the causal graph. This module is used by the causal discovery algorithms supported by our API. Specifically, there are four types of prior knowledge that users may specify in the form of arguments to the PriorKnowledge class: forbidden_links, existing_links, root_variables, and leaf_variables. The argument forbidden_links should specify
which variables cannot be the parents of a variable. The argument existing_links should specify which variables are known to be the parents of a variable. The argument root_variables should specify which variables cannot have any causal parents. Finally, the argument leaf_variables should specify which variables cannot have any causal children.
Note that these specifications are accepted in the same format for both tabular and time series data. Specifically, in the time series case, the user may specify that \(A\to B\) is forbidden, which implies that no time lagged or instantaneous state of variable \(A\) can cause \(B\) at any time step. Further, existing_links are not utilized for time series data since time lag information cannot be specified in PriorKnowledge. The reasoning behind this design choice is that it is more likely that domain experts may know whether two time series variables are causally related, but less likely that they know which specific time lag of one variable causes another.
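For illustration, a PriorKnowledge object for three variables A, B, and C might be constructed as below; the import path and the exact argument formats are assumptions based on the description above.

```python
from causalai.models.common.prior_knowledge import PriorKnowledge  # assumed path

prior = PriorKnowledge(
    forbidden_links={'C': ['B']},   # B cannot be a causal parent of C
    existing_links={'B': ['A']},    # A is known to be a parent of B (not used for time series)
    root_variables=['A'],           # A cannot have any causal parents
    leaf_variables=['C'],           # C cannot have any causal children
)
```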
### Causal discovery
All the supported causal discovery algorithm classes take a data object (TimeSeriesData or TabularData) and optionally PriorKnowledge as input during instantiation. In order to make a decision on whether a causal edge exists between two variables or not, all implemented algorithms internally make use of statistical hypothesis testing, specifically by computing p-values. Further, all these classes have a run method that takes pvalue_thres as input argument (among other case specific inputs), and returns a causal graph (potentially with some undirected edges) along with the strength of the discovered causal edges and their p-values.
#### 3.3.1 Time Series Data
For time series data, the run method takes a max_lag argument, a non-negative integer, which specifies the maximum possible time lag allowed for causal parents.
**Continuous data**: For time series continuous data, we currently support the PC algorithm (Spirtes et al., 2000), Granger causality (Granger, 1969), and VARLINGAM (Hyvarinen et al., 2010). Granger causality and VARLINGAM only support linear causal relationships between variables, while PC in general supports non-linear relationships. For targeted causal discovery (see section 1), PCSingle and GrangerSingle retrieve the causal parents of a given target variable. Their run method takes the additional argument target_var, which specifies the target variable name. For full causal discovery, the PC, Granger and VARLINGAM classes should be used. Note that of the three algorithms, only VARLINGAM supports instantaneous causal edge discovery in our library.
Finally, the PCSingle and PC APIs take two additional arguments: max_condition_set_size and CI_test. Since the PC algorithm performs conditional independence (CI) tests to find causal dependence between two variables, we need to specify which test to use. This is done through the CI_test argument. Currently we support two choices, PartialCorrelation and KCI, but users may also specify their own CI test class. PartialCorrelation assumes a linear causal relationship between variables, while KCI allows non-linear relationships, depending on the kernel used (see the tutorials in our repository for examples). The argument max_condition_set_size, on the other hand, specifies the maximum size of the condition set to be used during CI tests (the upper limit of max_condition_set_size is \(N-2\), where \(N\) is
the total number of variables). Larger values make the PC algorithm exponentially slower, but typically more accurate. We also support specifying max_condition_set_size as None. In this case, CI tests are performed using all \(N-2\) variables1. While this can speed up the PC algorithm in the worst case, the results may be less accurate. Ideally, we recommend using a small integer (default value is 4) as a trade-off between speed and accuracy.
Footnote 1: Note that performing exactly one CI test for a pair of variables using a condition set containing all the remaining \(N-2\) variables is not fundamentally incorrect in the case when only time lagged parents are considered. More discussion on this can be found in Section 4.1.
**Discrete data**: For time series discrete data, PCSingle and PC can be used. In this case, we support the DiscreteCI_tests class, in which one of the following CI tests can be specified in the method argument- Chi-squared test (pearson, default), log-likelihood test (log-likelihood), Modified Log-likelihood (mod-log-likelihood), Freeman-Tukey Statistic (freeman-tukey), and Neyman's statistic (neyman). The additional arguments to be specified to the run method for these API's are similar to the continuous case above.
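Tying the pieces of this subsection together, the sketch below runs the PC algorithm on continuous time series data; the import paths and the exact structure of the returned result are assumptions for illustration.

```python
import numpy as np
from causalai.data.time_series import TimeSeriesData                 # assumed path
from causalai.models.time_series.pc import PC                        # assumed path
from causalai.models.common.CI_tests.partial_correlation import PartialCorrelation  # assumed path

raw = np.random.randn(1000, 3)
data_obj = TimeSeriesData(raw, var_names=['A', 'B', 'C'])

pc = PC(data=data_obj,
        prior_knowledge=None,              # optionally a PriorKnowledge object
        CI_test=PartialCorrelation(),      # linear; use KCI for non-linear relations
        max_condition_set_size=4)
result = pc.run(pvalue_thres=0.05, max_lag=2)
# result is expected to map each variable to its estimated (lagged) causal
# parents along with edge strengths and p-values.
```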
#### 3.3.2 Tabular Data
For both the continuous and discrete cases, we support the PC algorithm. The details of the PC API are identical to the time series case above.
### Causal inference
We support the class CausalInference to perform causal inference for tabular and time series data, which can be used to compute the average treatment effect (ATE) and conditional ATE (CATE). This class can be initialized using a 2D NumPy data array data, a list of variable names var_names, a Python dictionary causal_graph specifying the causal graph corresponding to the data, a prediction_model to be used for learning the mapping function between different variables, and the argument discrete (bool) specifying whether the data is continuous or discrete. Another argument, use_multiprocessing (bool), may also be specified at initialization to speed up computation. Typically, multi-processing has an advantage when prediction_model is a non-linear model.
To compute the ATE after initialization, the method ate should be called with the arguments target_var and treatments. Here target_var is the name of the variable for which the ATE needs to be estimated, and treatments is a list of Python dictionaries (or a single dictionary) specifying the intervention variables and their corresponding treatment and control values. Specifically, each such dictionary has three keys: var_name, treatment_value, and control_value.
To compute the CATE after initialization, the method cate should be called. This method takes two additional arguments beyond those described for the ate method above: conditions and condition_prediction_model.
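A minimal end-to-end sketch of the inference API as described above follows; the import path, the causal-graph dictionary format (variable to list of parents), and the return types are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from causalai.models.tabular.causal_inference import CausalInference  # assumed path

data_array = np.random.randn(1000, 3)
graph = {'A': [], 'B': ['A'], 'C': ['A', 'B']}   # assumed format: parents per variable

ci = CausalInference(data=data_array, var_names=['A', 'B', 'C'],
                     causal_graph=graph, prediction_model=LinearRegression,
                     discrete=False, use_multiprocessing=False)

# ATE of do(B=1.0) vs do(B=0.0) on target variable C.
treatments = {'var_name': 'B', 'treatment_value': 1.0, 'control_value': 0.0}
ate = ci.ate(target_var='C', treatments=treatments)
# CATE additionally takes conditions and condition_prediction_model.
```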
## 4 Causal Discovery Algorithms
Causal discovery aims at finding the underlying directed causal graph from observational data, where the variables (or features) are treated as nodes in the graph, and the edges are unknown. For two variables \(A\) and \(B\) in a graph, the edge \(A\to B\) denotes \(A\) causes
\(B\). Observational data is simply a set of observations recorded in the past without actively making any interventions. Typically, finding causal relationships between variables would require performing interventions. But under certain assumptions, it is possible to extract the underlying causal relationships between variables from observational data as well. In this section, we describe some of the algorithms of this kind that are supported by our library, their assumptions, and their implementation details.
### PC Algorithm
The Peter-Clark (PC) algorithm (Spirtes et al., 2000) is one of the most general purpose algorithms for causal discovery that can be used for both tabular and time series data, of both continuous and discrete types. Briefly, the PC algorithm works in two steps, it first identifies the undirected causal graph, and then (partially) directs the edges. In the first step, we check for the existence of a causal connection between every pair of variables by checking if there exists a condition set (a subset of variables excluding the two said variables), conditioned on which, the two variables are independent. In the second step, the edges are directed by identifying colliders. Note that the edge orientation strategy of the PC algorithm may result in partially directed graphs. In the case of time series data, the additional information about the time steps associated with each variable can also be used to direct the edges.
The PC algorithm makes four core assumptions: (1) the causal Markov condition, which implies that two variables that are d-separated in a causal graph are probabilistically independent, (2) faithfulness, i.e., the only conditional independencies in the data are those implied by d-separation in the causal graph, (3) no hidden confounders, and (4) no cycles in the causal graph. For time series data, it makes the additional assumption of stationarity: the properties of a random variable are agnostic to the time step.
Our implementation of the PC algorithm for time series supports lagged causal relationship discovery. Consider a dataset with \(N\) variables. To decide whether there is a causal edge between two variables, this algorithm typically involves searching for condition sets (which always exclude the two variables in question) for which the two variables are conditionally independent. If such a set is found at any point, the search is terminated for those two variables and it is concluded that there is no causal edge between them; otherwise there is one. As can be seen, in the worst case, this search may require an exponential number of steps to determine the existence of a causal edge between each pair of variables. In our implementation, we support the following ways to speed up this algorithm:
**1.** The user may specify max_condition_set_size (see section 3.3) to be a small integer (default is 4) to terminate the search once the condition set size max_condition_set_size is reached. The intuition behind this idea is that one may expect the causal graph to be sparsely connected, in which case max_condition_set_size can be set to the maximum number of edges attached to a node.
**2.** Multi-processing: For any given condition set size, all the conditional independence tests are independent of one another. We allow the user to exploit this fact and run these tests in parallel (see the sketch after this list). This is optional, and is beneficial when the number of variables or samples is large. Since instantiating and terminating a Ray object (we use the Ray library for multi-processing) adds a small overhead (typically a few seconds), using multi-processing may be slower for small datasets.
**3.** In the case of time series specifically, we also support a mode in which conditional independence tests are performed using all the \(N-2\) variables. This can be done by setting max_condition_set_size = None. This way, the number of CI tests reduces from exponential to one per pair of variables, thus saving compute time significantly in the worst case. Note that this technically makes sense in our implementation of PC because it supports only time-lagged causal discovery (assuming no contemporaneous relationships). Therefore, no colliders are possible for the two variables under test, since one of the variables is always at the current time step and all other variables are at past time steps. However, note that using this full CI test may reduce the power of the independence test and result in less accurate causal discovery, as described in (Runge et al., 2019). To avoid this, one solution is to set max_condition_set_size to be a small integer (similar to the description above). The difference in the time series case is that once the condition set size max_condition_set_size is reached, our implementation automatically performs the full CI test using all the variables that have not been eliminated as candidate parents during the greedy search process. Thus our implementation serves as a middle ground between the traditional PC algorithm implementation and the implementation with the full CI test.
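The sketch below illustrates item 2: for a fixed condition-set size, the CI tests are embarrassingly parallel and can be dispatched with Ray. The toy partial-correlation test here is a stand-in for the library's internal CI tests, not its actual API.

```python
import numpy as np
import ray
from scipy import stats

ray.init(ignore_reinit_error=True)

@ray.remote
def ci_pvalue(data, i, j, cond):
    """Toy linear CI test: regress the condition set out of variables i and j,
    then test the correlation of the residuals."""
    x, y = data[:, i], data[:, j]
    if cond:
        Z = np.column_stack([data[:, list(cond)], np.ones(len(data))])
        x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return stats.pearsonr(x, y)[1]

data = np.random.randn(500, 5)
cond_sets = [(2,), (3,), (4,), (2, 3), (2, 4), (3, 4)]  # in practice, one batch per size
futures = [ci_pvalue.remote(data, 0, 1, c) for c in cond_sets]  # launch all tests at once
pvalues = ray.get(futures)
ray.shutdown()
```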
### Granger Causality
Granger causality (Granger, 1969) can be used for causal discovery in time series data without contemporaneous causal connections. The intuition behind Granger causality is that, for two time series random variables \(X\) and \(Y\), if including the past values of \(X\) when predicting \(Y\) improves the prediction performance over using only the past values of \(Y\), then \(X\) causes \(Y\). In practice, to find the causal parents of a variable, this algorithm involves performing linear regression to predict that variable using the remaining variables, and using the regression coefficients to determine causality.
Granger causality assumes: (1) a linear relationship between variables, (2) covariance stationarity: a temporal sequence of random variables all have the same mean, and the covariance between the random variables at any two time steps depends only on their relative positions, and (3) no hidden confounders.
This algorithm supports lagged causal relationship discovery. Since this algorithm involves running regression problems independently for each variable for the full causal graph, our implementation optionally allows the user to perform these optimization tasks using multi-processing for large datasets to speed up causal discovery.
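As a toy illustration of this regression-based idea (our own sketch, not the library's implementation or API), one can build a lagged design matrix and inspect the fitted coefficients:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def lagged_coefficients(data, target, max_lag=2):
    """Toy Granger-style check: regress data[:, target] on the lagged values of
    all variables and return the coefficient magnitudes per (lag, variable)."""
    T, N = data.shape
    y = data[max_lag:, target]
    # Column blocks: lag-1 values of all N variables, then lag-2 values, etc.
    X = np.column_stack([data[max_lag - l:T - l, :] for l in range(1, max_lag + 1)])
    coefs = LinearRegression().fit(X, y).coef_.reshape(max_lag, N)
    return np.abs(coefs)   # a large entry at (lag, j) suggests j -> target
```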
### Varlingam
VARLINGAM (Hyvarinen et al., 2010) can be used for causal discovery in time series data with contemporaneous causal connections. This algorithm can be broadly divided into two steps. First, we estimate the time lagged causal effects using vector autoregression. Second, we estimate the instantaneous causal effects by applying the LiNGAM (Shimizu et al., 2006) algorithm on the residuals of the previous step, where LiNGAM exploits the non-Gaussianity of the residuals to estimate the instantaneous variables' causal order.
This algorithm makes the following assumptions: (1) linear relationship between variables, (2) non-Gaussianity of the error (regression residuals), (3) no cycles among contemporaneous causal relations, and (4) no hidden confounders. We do not support multi-processing for this algorithm.
## 5 Causal Inference Algorithms
Causal inference involves finding a numerical estimate of the effect of intervening on one set of variables, on another variable. This type of inference is fundamentally different from what machine learning models do when they predict one variable given another as input, which is based on the correlation between the two variables found in the training data. In contrast, causal inference tries to estimate how a change in one variable propagates to the target variable while traversing the causal graph from the intervened variable to the target variable along the directed edges. This means that even if two or more variables are correlated, intervening on one may not have any effect on another variable if there is no causal path between them.
Specifically, in this library, we support estimating the average treatment effect (ATE) and conditional ATE (CATE). Suppose we are interested in estimating the ATE of some intervention on a set of variables denoted as \(X\), on a variable \(Y\). The treatment and control values of \(X\) are denoted as \(x_{t}\) and \(x_{c}\). Then the ATE is defined as,
\[\texttt{ATE}=\mathbb{E}[Y|\texttt{do}(X=x_{t})]-\mathbb{E}[Y| \texttt{do}(X=x_{c})] \tag{4}\]
where do denotes the intervention operation. In words, ATE aims to determine the relative expected difference in the value of \(Y\) when we intervene \(X\) to be \(x_{t}\) compared to when we intervene \(X\) to be \(x_{c}\). Similarly, consider the scenario in which we want to estimate the same ATE as above, but conditioned on some set of variables \(C\) taking value \(c\). Then CATE is defined as,
\[\texttt{CATE}=\mathbb{E}[Y|\texttt{do}(X=x_{t}),C=c]-\mathbb{E}[Y| \texttt{do}(X=x_{c}),C=c] \tag{5}\]
Notice here that \(X\) is intervened on but \(C\) is not.
Finding such effects from observational data alone can be challenging, and performing causal inference requires knowledge of the causal graph. There are existing methods for causal inference, for instance, backdoor adjustment, which involves finding a set of variables that blocks all non-causal associations between the treatment and target variables. In our current implementation, we support a more straightforward solution, in which we learn a set of relevant conditional models that together are able to simulate the data generating process, and we then estimate ATE and CATE by performing interventions explicitly within this simulated process.
**Tabular**
For tabular data, we first find the topologically sorted ordering of the variables in the causal graph. We then extract the causal paths from the treatment variables \(X\) to the target variable \(Y\). A causal path consists of a sequence of variables with directed edges. For each variable in these paths, we learn a statistical model (e.g., linear regression) that predicts this variable from all its parents using the given observational data. Once
this is done, we once again traverse the causal graph in the topologically sorted order, and predict the value of each variable using the learned statistical models, except intervene the treatment variables \(X\) whenever it is encountered. This process results in the estimated values of the target variable \(Y\) under interventions. To compute ATE, we simply compute the mean under each intervention, and take the difference as shown in Eq. 4.
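A schematic sketch of the procedure just described (our paraphrase in NumPy/scikit-learn, not the library's internal code; a single treatment variable is assumed for brevity):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def estimate_ate(data, parents, order, target, treat_var, x_t, x_c):
    """data: (samples, variables) array; parents: {var index: [parent indices]};
    order: topologically sorted variable indices."""
    # Learn one model per non-root variable, predicting it from its parents.
    models = {v: LinearRegression().fit(data[:, parents[v]], data[:, v])
              for v in order if parents[v]}

    def simulate(value):
        sim = data.copy()
        for v in order:                     # traverse in topological order
            if v == treat_var:
                sim[:, v] = value           # explicit intervention: do(X = value)
            elif parents[v]:
                sim[:, v] = models[v].predict(sim[:, parents[v]])
        return sim[:, target].mean()

    return simulate(x_t) - simulate(x_c)    # Eq. (4)
```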
To compute CATE, in addition to the above process, we learn another statistical model that learns to predict the interventional value of the target variable \(Y\) for each value of the condition variable \(C\) in the observational data. We then simply use the desired value of the condition variable as input, to predict CATE.
**Time Series**
The causal inference process is performed similarly for the time series case, with one difference: the data generation process is done iteratively, as opposed to the tabular case, where all observations are generated at once in topologically sorted order.
Figure 2: A comparison of the speed (seconds) of the PC algorithm implementation of our library with existing libraries, with a varying number of variables (left) and samples (right). CausalAI (MP) indicates that the PC method uses multi-processing, while CausalAI (No MP) indicates no multi-processing.
Figure 3: A comparison of the F1 score achieved by the PC algorithm implementation of our library with existing libraries, with a varying number of variables (left) and samples (right).
## 6 Experiments
### Causal Discovery for Time Series
**Experimental Settings** We conduct empirical studies on synthetic data to verify the efficiency of the CausalAI library. Specifically, we compare the PC algorithm in the CausalAI library with that in the _causal-learn_ library2 and the _tigramite_ library3. To achieve a fair comparison, the p-value threshold and maximum condition set size are set to 0.05 and 4, respectively. We report results in two settings, considering the effect of the number of variables and the effect of the number of samples, respectively. We evaluate the running time and the F1 score of the estimated graphs.
Footnote 2: [https://github.com/cmu-phil/causal-learn](https://github.com/cmu-phil/causal-learn)
Footnote 3: [https://github.com/jakobrunge/tigramite](https://github.com/jakobrunge/tigramite)
**Results** Figure 2 reports the time cost of each method. We find that CausalAI and tigramite are much faster than causal-learn. This is because both of these libraries take the characteristics of time series into account and reduce the potential search space when uncovering causal relationships. We also observe a speedup from using multi-processing in CausalAI, especially when handling data with a large sample size or a large number of variables. This verifies the efficacy of the proposed method in CausalAI for handling large-scale data.
In Figure 3, we also show the F1 score between the estimated causal graphs and the ground truth causal graph. We find that CausalAI and tigramite perform much better than causal-learn. This is because the PC algorithm in causal-learn is a more general implementation that does not consider the characteristics of time series. Therefore, it is necessary to manually include suitable prior knowledge to further improve its performance.
## 7 Conclusions and Future Work
We introduce the Salesforce CausalAI Library, an open source library for causal analysis of time series and tabular data. The Salesforce CausalAI Library aims to provide a one-stop solution to the various needs in causal analysis including handling different data types, data generation, multi-processing for speed-up, utilizing domain knowledge and providing a user-friendly code-free interface.
We continue to develop this library and invite the research community to contribute by submitting pull requests to our GitHub repository. Some of our future plans are to include support for heterogeneous data types (mixed continuous and discrete types), support for GPU based computing, more algorithms for causal discovery and inference, and support for latent variables.
## 8 Contributions
**Devansh Arpit**: Implemented the code for Salesforce CausalAI Library v1.0, except some parts of the VARLINGAM algorithm. Created the Sphinx documentation for the GitHub repository. Wrote the blog. Wrote this tech report, except for the experiments section.
**Matthew Fernandez**: Implemented the UI to test and showcase the CausalAI Library.
**Chenghao Liu**: Implemented the VARLINGAM algorithm, wrote the experiments section and helped in the literature review.
**Weiran Yao**: Coded algorithm which will be added to the next version of the CausalAI library.
**Wenzhuo Yang**: Coded algorithm which will be added to the next version of the CausalAI library.
**Paul Josel**: Helped in the UI design and organization process.
**Shelby Heinecke**: Contributed to the conception and initial direction of the library. Contributed to the literature review.
**Eric Hu**: Coordinated UI design and implementation.
**Huan Wang**: Contributed to the library design regarding tabular/time series data structures, choices of algorithms, and UI features.
**Stephen Hoi**: Contributed to the high-level discussions on the project. General feedback for the project and the report.
**Caiming Xiong**: Initiated project idea and contributed to the high-level direction of the library. General project feedback.
**Kun Zhang**: Contributed to causal discovery algorithm selection and design. General project feedback.
**Juan Carlos Niebles**: Contributed to the high-level project direction and scope, oversaw project execution and coordinated team effort. |
2306.12031 | Large-volume focus control at 10 MHz refresh rate via fast line-scanning
amplitude-encoded scattering-assisted holography | The capability of focus control has been central to optical technologies that
require both high temporal and spatial resolutions. However, existing varifocal
lens schemes are commonly limited to the response time on the microsecond
timescale and share the fundamental trade-off between the response time and the
tuning power. Here, we propose an ultrafast holographic focusing method enabled
by translating the speed of a fast 1D beam scanner into the speed of the
complex wavefront modulation of a relatively slow 2D spatial light modulator.
Using a pair of a digital micromirror device and a resonant scanner, we
demonstrate an unprecedented refresh rate of focus control of 31 MHz, which is
more than 1,000 times faster than the switching rate of a digital micromirror
device. We also show that multiple micrometer sized focal spots can be
independently addressed in a range of over 1 MHz within a large volume of 5 mm
x 5 mm x 5.5 mm, validating the superior spatiotemporal characteristics of the
proposed technique - high temporal and spatial precision, high tuning power,
and random accessibility in a three-dimensional space. The demonstrated scheme
offers a new route towards three-dimensional light manipulation in the 100 MHz
regime. | Atsushi Shibukawa, Ryota Higuchi, Gookho Song, Hideharu Mikami, Yuki Sudo, Mooseok Jang | 2023-06-21T05:43:31Z | http://arxiv.org/abs/2306.12031v1 | # Large-volume focus control at 10 MHz refresh rate
###### Abstract
The capability of focus control has been central to optical technologies that require both high temporal and spatial resolutions. However, existing varifocal lens schemes are commonly limited to the response time on the microsecond timescale and share the fundamental trade-off between the response time and the tuning power. Here, we propose an ultrafast holographic focusing method enabled by translating the speed of a fast 1D beam scanner into the speed of the complex wavefront modulation of a relatively slow 2D spatial light modulator. Using a pair of a digital micromirror device and a resonant scanner, we demonstrate an unprecedented refresh rate of focus control of 31 MHz, which is more than 1,000 times faster than the switching rate of a digital micromirror device. We also show that multiple micrometer-sized focal spots can be independently addressed in a range of over 1 MHz within a large volume of 5 mm \(\times\) 5 mm \(\times\) 5.5 mm, validating the superior spatiotemporal characteristics of the proposed technique - high temporal and spatial precision, high tuning power, and random accessibility in a three-dimensional space. The demonstrated scheme offers a new route towards three-dimensional light manipulation in the 100 MHz regime.
## Introduction
High-speed, high-precision optical focus control has long served as a basis for constructing optical systems with high temporal and spatial resolutions. 3D laser-scanning microscopy[1, 2] and laser micromachining[3] are prominent examples where a high spatiotemporal resolution plays a critical role in observing fast sub-cellular dynamics and achieving high-throughput material processing. Traditionally, 3D focus control has been implemented with a 2D beam scanner (e.g., galvanometer scanner[1, 2], acousto-optic deflector (AOD)[4]) and an objective lens driven by a piezo actuator[1, 2, 5]. In this traditional scheme, the speed of axial focus control (typically \(<\)1 kHz) is much slower than that of transverse control (typically \(\sim\)1 MHz), as it involves precise mechanical positioning of bulky focusing optics.
Many varifocal lens schemes have been proposed to overcome this speed discrepancy[6]. Their working principles are broadly categorized into mechanical, electro-optic (EO), and acousto-optic (AO) modulation. In the mechanical type, the shapes of varifocal elements are directly modulated through electrostatic[7, 8] or mechanical forces[9, 10, 11]; however, due to the necessity of precise physical positioning, the operation speed is typically limited to 10 kHz. In contrast, EO[12]- and AO[13, 14]-based approaches directly modulate the refractive index profile of a fixed medium via an electric or acoustic field, achieving control speeds of up to 1 MHz. However, considering the fundamental limiting factors, i.e., the finite capacitance of EO ceramics and the speed of sound in AO materials, realizing operation speeds beyond 1 MHz is highly challenging. Moreover, the axial scan power (i.e., tuning power) is restricted to approximately 10 m\({}^{-1}\) because the materials' responses to applied fields, characterized by the Kerr coefficient or the piezo-optic coefficient, are often very small[6], thereby limiting the axial scan range in 3D multiphoton microscopy[13, 15] and
laser micromachining[14]. Furthermore, the effects of the speed-limiting factors become more pronounced with the varifocal elements of larger apertures, resulting in the fundamental trade-off between the response time and the tuning power.
Another route towards active focus control is the use of programmable spatial light modulators (SLMs) (e.g., liquid crystal on silicon, LCoS[16, 17]; digital micromirror device, DMD[18, 19]; and deformable mirror, DM[20]). The ability to arbitrarily reconfigure a complex 2D wavefront allows not only transverse control but also axial control in a random-access fashion, unlike conventional varifocal elements. The random-access capability allows for the selective illumination of desired points in a 3D space, leading to, for example, reduced photodamage to a given specimen in biological microscopy[15, 21] and high-throughput laser micromachining[22]. Unfortunately, despite these capabilities, the tuning power of SLMs is even lower, at less than 10 m\({}^{-1}\), due to their limited spatial degrees of freedom (DOF) (i.e., the number of independently controllable pixels). We note that, recently, the method of utilizing multiple AODs modulated with chirped sinusoidal signals[13, 15, 21] has also been widely used in fast 3D random-access scanning, but its tuning power is limited to a similar value as for the SLMs[6].
Interestingly, in conjunction with a scattering medium, SLM-based approaches have been shown to provide superior spatial characteristics - high-NA focusing and an extremely broad 3D scan range - regardless of the magnitude of their intrinsic spatial DOF[23, 24]. However, the focus control speed is inherently limited by the temporal DOF (i.e., the refresh rate) of conventional SLMs (e.g., LCoS and DMD), which typically ranges from 10 Hz to 20 kHz, orders of magnitude lower than that of EO- and AO-based varifocal lenses. Only recently has the use of a grating light valve (GLV) enabled 1D wavefront modulation and holographic optical
focusing at 350 kHz[25]. Some novel designs for EO-based SLMs have shown response times shorter than 1 \(\upmu\)s[26, 27, 28], albeit with only a handful of spatial DOFs, which is insufficient for 3D focus control.
In this work, we propose a novel ultrafast wavefront modulation method that effectively transforms a 2D-SLM into an ultrafast 1D-SLM via line-beam scanning. Through the process of reallocating the spatial DOF to the temporal DOF, our method amplifies the speed of wavefront modulation by a factor of up to a few thousand, surpassing the current speed limit of 1D-SLMs (e.g., GLV) by two orders of magnitude. Using a scattering medium as a holographic focusing element, we further develop a 3D focus control technique, termed fast line-scanning amplitude-encoded scattering-assisted holographic (FLASH) focusing, and achieve record-high speeds of up to 31 MHz for 3D focus control. Using the FLASH focusing technique, we also demonstrate random-access control of micrometer-sized focal spots over an axial range of 0.01 mm to 10 mm (i.e., a tuning power of around 100,000 m\({}^{-1}\)) and a transverse range of 5 mm \(\times\) 5 mm at refresh rates higher than 10 MHz, opening up new opportunities for optical interrogation with extremely high spatial complexity and fast dynamics.
## Principle
Fig. 1 represents the principle of the proposed method. First, to achieve ultrafast wavefront modulation, a resonant scanner (RS) performs transverse scanning of a line beam on a 2D-DMD through a cylindrical lens. During the oscillatory scanning, the amplitude profile of the line beam is sequentially modulated by independent binary patterns on each DMD region illuminated by the line beam. The amplitude-encoded line beam then reverses the incident path and is expanded back into a circular beam with a striped pattern such that its projected area is fixed regardless of the line beam position on the DMD (i.e., the scanning position of the RS). The RS and the DMD can be replaced by any 1D beam scanner and 2D-SLM to implement the proposed high-speed wavefront modulation technique.

Figure 1: **Schematic of the FLASH focusing.** A resonant scanner and a cylindrical lens achieve fast line beam scanning across a digital micromirror device (DMD). As the line beam moves across the DMD columns, its amplitude profile is sequentially modulated with different pre-calibrated binary patterns on the DMD. The amplitude-encoded line beam is then expanded back into a 2D striped beam and projected onto a fixed area on a scattering medium regardless of the motion of the resonant scanner. Finally, as the line beam oscillates on the DMD, the time-varying amplitude-modulated beam is holographically focused onto different 3D positions beyond the scattering medium in a random-access manner.
There are four important parameters that determine the overall performance of our wavefront modulation technique: the line-scan frequency in bidirectional scanning of the beam scanner (\(f_{scan}\)), the number of pixels on the SLM (\(N\times M\); where \(N\) and \(M\) denote the numbers of rows and columns along the major and minor axes of the line beam, respectively), and the number of columns illuminated with a single line beam (\(M_{col}\)). The wavefront of the line beam is encoded by the illuminated columns on the SLM with the spatial \(\mathrm{DOF}\), \(\mathrm{DOF}_{\mathrm{spatial}}\), of \(N\times M_{col}\). For a single line-scan over the SLM which is performed within \(1/f_{scan}\), the line beam scans across \(M/M_{col}\) independent wavefront-modulating columns, resulting in a refresh rate of the wavefront modulation of
\[f_{mod}=f_{scan}\times\kappa, \tag{1}\]
where \(\kappa=\frac{M}{M_{col}}\) is the speed gain.
In general, the spatiotemporal DOFs (\(\mathrm{DOF}_{\mathrm{spatiotemporal}}\) ) of wavefront modulation techniques can be dictated by the product of the number of spatially independent wavefronts (\(\mathrm{DOF}_{\mathrm{spatial}}\)) and the number of independent time points within the unit time (\(\mathrm{DOF}_{\mathrm{temporal}}\)) that can be addressed with the SLM,
\[\mathrm{DOF}_{\mathrm{spatiotemporal}}=\mathrm{DOF}_{\mathrm{spatial}}\times \mathrm{DOF}_{\mathrm{temporal}}. \tag{2}\]
Practically, \(\mathrm{DOF}_{\mathrm{spatial}}\) is identical to the number of independently controllable pixels on the SLM and \(\mathrm{DOF}_{\mathrm{temporal}}\) is identical to the intrinsic refresh rate of the SLM, \(f_{\mathrm{SLM}}\), or the number of independently controllable frequencies per unit time using the SLM. In this regard, with an assumption of \(f_{scan}\geq f_{SLM}\), our method provides a simple but powerful way to implement the
gain in \(\mathrm{DOF_{temporal}}\) (i.e., speed gain, \(\kappa\)) at the cost of \(\mathrm{DOF_{spatial}}\) under the trade-off relationship, \(\mathrm{DOF_{spatial}}\times\kappa=N\times M\), the intrinsic \(\mathrm{DOF_{spatial}}\) of the SLM. For example, if we set \(M_{\mathrm{col}}\) to 1, the speed gain \(\kappa\) is maximized to \(M\) while \(\mathrm{DOF_{spatial}}\) is minimized to \(N\). In this setting, assuming a standard DMD with \(N\) and \(M\) over 1,000 and a RS with \(f_{scan}\) equal to \(f_{SLM}\) (typically over 10 kHz), we can expect a refresh rate (i.e., \(\mathrm{DOF_{temporal}}=\kappa\times f_{SLM}\) ) of over 10 MHz (i.e., \(\kappa>\) 1,000) with \(\mathrm{DOF_{spatial}}\) of \(\sim\) 1,000.
Another important feature of the proposed method is its capability to flexibly reallocate \(\mathrm{DOF_{spatiotemporal}}\) by tuning \(M_{col}\) to address a wide range of \(\mathrm{DOF_{temporal}}\) and \(\mathrm{DOF_{spatial}}\). Ideally, standard LCoS and DMD provide the \(\mathrm{DOF_{spatiotemporal}}\) of \(\sim\) 10\({}^{8}\) and \(\sim\)10\({}^{10}\) from Eq. (2) respectively. Therefore, one may purposefully set \(M_{col}\) or \(f_{scan}\) to address a spatiotemporal domain within the range where \(\mathrm{DOF_{temporal}}>10^{6}\) and \(\mathrm{DOF_{spatial}}>10^{2}\), which cannot be easily addressed with existing wavefront modulation techniques (see Supplementary Fig. 1 for the comparison).
The FLASH focusing technique is implemented using the proposed wavefront modulation method for random-access holographic focus control through a scattering medium. To create scattering-assisted focal spots, the wavefront solutions can be pre-calibrated using previously developed wavefront shaping techniques based on iterative methods[29, 30], optical phase conjugation[31], or the transmission matrix formalism[32, 33]. As every \(M_{col}\) columns of the SLM can be optimized for different focal positions (i.e., \(\frac{M}{M_{col}}\) focal spots can be addressed within a single line-scan), the rate of focus control, \(f_{spot}\), is equal to the rate of wavefront modulation \(f_{mod}\) in Eq. (1). As holographic focusing involves a process of constructive interference of many speckle fields
contributed by different spatial input modes[34], the focal contrast \(\eta\), quantified as the peak-to-background intensity ratio, is proportional to \(\text{DOF}_{\text{spatial}}\), as follows:
\[\eta=\beta\times\text{DOF}_{\text{spatial}}, \tag{3}\]
where \(\beta\) is a focusing fidelity factor that depends on the type of wavefront modulation (e.g., \(\pi\)/4 for phase-only[34] and 1/2\(\pi\) for binary-amplitude modulation[35]). Therefore, similar to the trade-off in our wavefront modulation scheme, the focal contrast and the refresh rate of focus control are in a trade-off relationship.
## Results
Figure 2: **Validation of the FLASH focusing technique.** a, Intensity profile of a 2D stripe beam on the projection plane located at the input surface of a scattering medium. The illuminated DMD
column was modulated with a 1D binary pattern with eight-pixel period. Scale bar: 100 \(\mathrm{\SIUnitSymbolMicro m}\). **b**, Correlation matrix of intensity profiles between every column pair. The intensity profile projected from each DMD column was separately recorded while the same binary pattern with two-pixel period was displayed on every DMD column. **c** and **e**, Speckle intensity distributions on the target plane of \(\mathrm{z}=2\mathrm{mm}\) when the binary patterns shown in the insets are respectively displayed on the DMD. Scale bar: 1 \(\mathrm{\SIUnitSymbolMicro m}\). **d** and **f**, 2D autocorrelation function of the speckle distribution in **c** and **e**. Scale bar: 1 \(\mathrm{\SIUnitSymbolMicro m}\). **g**, Schematic diagram of holographic focusing through a scattering medium based on the transmission matrix (TM) approach. Here, \(N_{in}\), \(S_{in}^{+}\) and \(K^{*}\) denote the number of input binary patterns for the TM measurement, the pseudo-inverse matrix of \(S_{in}\) and conjugated transpose of \(K\), respectively. **h**, Focal spot reconstructed on \(\mathrm{z}=2\) mm based on the measured TM of the medium.
To implement the setup for fast wavefront shaping, we used an RS and a DMD with \(f_{\mathrm{scan}}\) and \(f_{\mathrm{SLM}}\) both at 24 kHz. A line beam was projected onto the DMD through a 4\(\times\) objective lens. A volume holographic grating was placed in front of the DMD to construct a retroreflective configuration in which the tilt angle of each DMD micromirror is offset by the first-order diffraction angle of the incident beam (see Methods and Supplementary Fig. 2a for details about the experimental setup). The active numbers of columns and rows on the DMD, \(M\) and \(N\), were set to 340 and 352, respectively, considering the field of view (FOV) of the objective lens. We projected the amplitude-encoded striped beam onto an opal diffusing glass over an area of 4 mm \(\times\) 4 mm. We chose opal glass among other scattering media owing to its isotropic scattering profile (see Supplementary Fig. 4 for the angular scattering profile) and high transmittance.
To validate that line beam scanning effectively converts the spatial array of columns on the 2D-DMD into a temporal array of 1D binary patterns, we characterized the correlation matrix between the striped intensity patterns on the projection plane while illuminating different columns with the same periodic binary pattern (Figs. 2a and b). A 1D binary pattern on the DMD was shown to be projected into a high-definition stripe pattern and the correlation values for every column
pair were close to 1, confirming that the optical intensity response from each of the \(M\) columns was fixed regardless of the line beam position (i.e., the RS scanning position).
Although the illumination pattern is highly asymmetric, the target plane behind the scattering medium can be addressed with spatially isotropic resolving power. In our configuration, each on-state pixel on a column is projected as a single thin stripe. This thin stripe caused a spatially elongated speckle field due to the short-range angular correlation of the medium (Figs. 2c and 2d). However, when multiple pixels were set to the on-state, interference between the elongated speckle fields resulted in a circularly symmetric autocorrelation function (Figs. 2e and 2f), serving as a basis for addressing the target plane holographically.
We then validated the focusing capability of the FLASH technique based on the procedure described in Fig. 2g. First, we measured the input-output response of the medium by displaying many different random binary patterns, \(S_{in}=\{s_{1},...,s_{N_{in}}\}\), and measuring the complex amplitude of the speckle patterns generated on the target plane, \(U_{out}=\{u_{1},...,u_{N_{in}}\}\) with a reference beam. Next, with basis transformation, \(U_{out}\times S_{in}^{+}\), we constructed the transmission matrix \(K\) of the position basis from which a binarized amplitude pattern of the phase-conjugated field for a desired focus (i.e., a column of \(K^{*}\)) is computed and displayed on a DMD column[19]. Fig. 2h shows a focal spot reconstructed on the target plane of z = 2 mm from the medium. The focal spot presented a circular shape with a full-width at half-maximum (FWHM) of around 400 nm, which corresponds to a numerical aperture (NA) of 0.68. The measured focal contrast \(\eta\) of 40 was consistent with the expected value of \(\sim\)56 from Eq. (3) (i.e., \(N/2\pi\) for \(M_{col}=1\)). Lastly, with correction of the column-wise wavefront distortion, the focal contrast was shown to be preserved regardless of the line beam position, implying that every DMD column is treated identically in terms of the intensity
and phase response for complex wavefront shaping (see Methods and Supplementary Fig. 6 for details).
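To make the calibration procedure of Fig. 2g concrete, the following NumPy sketch emulates the TM measurement and the binary-amplitude pattern computation. It is an illustrative toy (the medium is simulated as a random complex matrix, and the positive-real-part rule is our reading of the binarized phase conjugation in ref. [19]), not the experimental code.

```python
import numpy as np

N, N_in, N_out = 352, 1024, 256   # input pixels per column, calibration patterns, output modes
rng = np.random.default_rng(0)

# Random binary calibration patterns S_in (one per displayed DMD column).
S_in = rng.integers(0, 2, size=(N, N_in)).astype(float)

# Complex speckle fields U_out on the target plane, measured with a reference
# beam in the experiment; here emulated through a random transmission matrix.
T_med = (rng.normal(size=(N_out, N)) + 1j * rng.normal(size=(N_out, N))) / np.sqrt(2 * N)
U_out = T_med @ S_in

# Basis transformation to the position basis: K = U_out @ pinv(S_in).
K = U_out @ np.linalg.pinv(S_in)

# Binary-amplitude pattern focusing on output mode m: switch on the pixels whose
# transmitted fields arrive in phase at the focus (Re t > 0).
m = 100
pattern = (np.real(K[m, :]) > 0).astype(float)

focus_field = T_med @ pattern
contrast = np.abs(focus_field[m])**2 / np.mean(np.abs(np.delete(focus_field, m))**2)
print(f"focal contrast ~ {contrast:.0f} (theory N/(2*pi) ~ {N / (2 * np.pi):.0f})")
```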
### Spatial and temporal capabilities
We experimentally tested the spatial performance of the FLASH focusing technique in terms of the spot size and addressable 3D volume (Fig. 3a). When the spot was transversely scanned from x = 0 to 2.5 mm at a fixed target plane of z = 1.5 mm, the FWHM spot sizes of the reconstructed foci increased from 410 nm to 560 nm, corresponding to NA values ranging from 0.66 to 0.48 (Fig. 3b). When the spot was axially scanned from z = 0.01 mm to z = 10 mm along the optical axis, FLASH focusing provided effective NAs of 0.8 to 0.21, corresponding to FWHM spot sizes of 340 nm to 1,290 nm, respectively (Fig. 3c). This extremely large tuning range corresponds to a tuning power of around 100,000 m\({}^{-1}\), more than two orders of magnitude higher than those of conventional varifocal lenses. For 3D volumetric focusing capability, an effective NA larger than 0.45 was maintained over a 3D cylindrical space with a diameter of 5 mm and a height of 2 mm (Supplementary Fig. 5), far exceeding the lateral FOV of \(\sim\)1 mm and the depth of field of a commercial objective lens with 0.45 NA. This validates the feasibility of the FLASH focusing technique for use in application areas such as 3D laser-scanning microscopy and laser micromachining, where a large 3D scan range is critical.
Next, we experimentally tested the temporal performance of the proposed FLASH focusing technique. In our configuration with \(f_{\text{scan}}=24\text{ kHz}\), \(M=340\), and \(M_{col}=1\), the refresh rate \(f_{mod}\) is expected to be around 8 MHz based on Eq. (1), ideally, assuming a constant scanning speed of the line beam on the DMD plane and a line-scan range exactly matching the active DMD area of \(M=340\) columns. However, as the DMD requires a finite time of around 26 \(\upmu\)s to update the 2D binary patterns on the active area, we set the scanning angle range of the RS to be approximately 2.7 times larger than the ideal range to ensure the necessary time for the continuous updating of the binary patterns on the DMD with every directional change of line beam scanning. Also, considering the sinusoidal profile of the RS's scan speed, the maximum local refresh rate (i.e., the inverse of the
time required to scan the pixel pitch of 13.7 \(\upmu\)m) is expected to be around 30 MHz with a maximum line-beam scanning speed of 474 m/s. With these settings, at the expense of an improvement in the local refresh rate and continuous modulation capability, the duty cycle \(D\), defined as the percentage of the period of active wavefront modulation relative to the total period of a single line-scan, decreases to \(\sim\)24% as expected. However, it should be noted that \(\text{DOF}_{\text{temporal}}\) is given as a fixed value of \(\sim\)8 MHz from Eq. (1) regardless of the value of the duty cycle or the angle scan range of the RS, considering that the fundamental \(\text{DOF}_{\text{temporal}}\) can be determined as the product of the duty cycle and the averaged local refresh rate during active modulation.
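As a back-of-the-envelope check on these numbers (our own derivation, assuming a purely sinusoidal scan whose bidirectional line-scan frequency of 24 kHz corresponds to a mirror oscillation frequency of 12 kHz over the full scan range \(L=2.7\times 340\times 13.7\ \upmu\mathrm{m}\approx 12.6\ \mathrm{mm}\)):

\[x(t)=\frac{L}{2}\sin(2\pi\times 12\ \mathrm{kHz}\times t),\qquad v_{max}=\frac{L}{2}\times 2\pi\times 12\ \mathrm{kHz}\approx 474\ \mathrm{m/s},\]
\[f_{local,max}=\frac{v_{max}}{13.7\ \upmu\mathrm{m}}\approx 3.5\times 10^{7}\ \mathrm{s}^{-1},\]

consistent with the quoted maximum scanning speed and with the measured local refresh rates of 25-31 MHz reported below.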
Fig. 3e shows the on/off modulation signal acquired with a photodetector (PD) during line beam scanning over DMD columns alternately encoding optimized and random binary patterns (Fig. 3d). From the sinusoidal fitting shown in Fig. 3f, the on/off switching time for the focal spot was measured to be 32.5 ns at the central columns, corresponding to a refresh rate of 31 MHz. Over the entire period of active modulation in Fig. 3e, the local refresh rate ranged from 25 to 31 MHz due to the sinusoidal profile of the RS scan speed. This refresh rate (or response time) represents an improvement of one to two orders of magnitude over state-of-the-art varifocal elements[12, 14] or wavefront shaping techniques for optical focusing[25]. Moreover, with the angle scan range purposefully set such that it exceeds the active DMD area, the binary patterns on the DMD could be synchronously refreshed during any directional change of line beam scanning. Accordingly, the number of independently addressable foci is not limited to the number of active columns \(M\) on the DMD (Supplementary Figs. 7a-7c; see Continuous Spatial Modulation in the Methods section).
### Control of spatial and temporal degrees of freedom
We demonstrated the tunability of the FLASH focusing technique in setting the focal contrast and the refresh rate under the trade-off relationship in Eq. (2) by simply varying the number of illuminated columns, \(M_{col}\) (see Methods for details). Figs. 4a-4c present the focus
control results with \(M_{col}=2\) and 4. As expected from Eqs. (1)-(3), the focal contrast increased nearly proportionally to 64 and 140, while the local on/off refresh rate around the central column correspondingly decreased to 15.6 MHz and 7.6 MHz for \(M_{col}=2\) and 4, respectively. The \(\text{DOF}_{\text{spatiotemporal}}\) for different values of \(M_{col}\) was experimentally characterized as the product of \(\eta_{\text{exp}}/\beta\) and \(f_{\text{exp}}\times D\), representing the experimental \(\text{DOF}_{\text{spatial}}\) and \(\text{DOF}_{\text{temporal}}\), respectively, where \(\eta_{\text{exp}}\) and \(f_{\text{exp}}\) are the measured focal contrast and the averaged local refresh rate. As shown in Fig. 4e, the experimental \(\text{DOF}_{\text{spatiotemporal}}\) was estimated to be \(\sim\)2\(\times\)10\({}^{9}\) regardless of \(M_{col}\), which is consistent with the theoretical \(\text{DOF}_{\text{spatiotemporal}}\) of \(N\times M\times f_{SLM}\) at around 2.8\(\times\)10\({}^{9}\). The discrepancy may be attributed to practical imperfections in the phase conjugation process that degrade the focal contrast[36]. It is worth noting that the experimentally demonstrated DOFs (pink stars in Fig. 4e) cannot be addressed with existing modulation techniques such as the LCoS-, MEMS-, and AO-based SLM types, and the demonstrated focus control speed (i.e., \(\text{DOF}_{\text{temporal}}\)) exceeds those of state-of-the-art varifocal lenses (as shown in Fig. 4e and Ref.[6]). With an optimal implementation of the FLASH technique using an objective lens with lower magnification and a DMD with a higher resolution (i.e., with larger \(N\) and \(M\)), the gap between our experimental DOF and the ideal DOF for a DMD can be narrowed even further.
### Demonstration of \(>\)10 MHz random-access focusing
In contrast to conventional varifocal lenses, scattering-assisted holographic focusing is typically associated with background intensity fluctuations and lower transmittance. Also, unlike
conventional lenses, the scattering-assisted lens does not directly form an image of the entire 2D plane at a desired depth, preventing its use in single-shot imaging applications such as bright-field microscopy. Nevertheless, the holographic focusing scheme provides distinctive advantages over conventional varifocal lenses, including random-access focusing, high tuning power, and a large transversal FOV[24]. Considering that the FLASH focusing technique can selectively scan target positions without wasting time on continuous scanning (e.g., raster scanning), it is particularly well suited for applications in which the regions (or spots) of interest are sparsely distributed in a large volume, such as laser micromachining[3, 22], optical tweezers[38], and cell-targeted activity monitoring and stimulation[39].
To demonstrate the 3D random-access capability of the FLASH focusing technique, we first demonstrated focus control over two separate planes within 1 \(\upmu\)s. The DMD was programmed in such a way that every two columns are individually optimized to scan focal spots in elliptic patterns on the two planes of z = 3 mm and 3.2 mm (shown in Fig. 5a and Supplementary Fig. 2b). As the expected refresh rate of \(\sim\)16 MHz (with the setting of \(M_{col}\) = 2) greatly exceeds the maximum frame rate of a standard camera, we instead counted the number of focal spots that could be captured over the short exposure time of 1 \(\upmu\)s. Consistently, we observed a total of 17 focal spots in the two camera images, as shown in Fig. 5b, confirming the capability of 3D random-access focus control at more than 10 MHz.
Secondly, to demonstrate the versatility of the FLASH technique on the basis of the superposition principle, we performed the frequency-multiplexed modulation of two distant focal spots as shown in Fig. 5c. Specifically, we encoded the frequency information as a column period of an optimal wavefront for a certain spot location such that the modulation frequency is set to
\(f_{spot}\) divided by the column period. Then, we superposed and binarized the wavefronts for the individual spots to simultaneously address multiple frequencies and spot locations. Figs. 5d and 5e present the temporal profiles of optical intensities and the corresponding frequency spectra, as measured through two single-mode fibers placed at (x, y, z) = (0, 2.5, 2) mm and (0, 0, 2) mm. As the column periodicities for the two positions were set to 4 and 8, the local modulation frequencies were distinctly measured to be 8 and 4 MHz, respectively. We also demonstrated single-spot modulation over a large volume of 5 mm \(\times\) 5 mm \(\times\) 5.5 mm, as shown in Supplementary Fig. 8. These results indicate that the FLASH focusing technique unlocks, with extreme flexibility, a spatiotemporal domain that is inaccessible using conventional optics, practically enabling frequency-encoded imaging[40] or frequency-multiplexed switching over unprecedentedly large volumes and bandwidths.
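The frequency-encoding rule above can be illustrated with a few lines of arithmetic. The 32 MHz base column rate used here is our inference from the measured 8 MHz and 4 MHz peaks, not a value stated explicitly in the text.

```python
# Frequency multiplexing: local modulation frequency = base column rate / column period.
base_rate_mhz = 32.0            # assumed maximum column update rate [MHz]
for period in (4, 8):           # column periods used for the two spot locations
    print(f"column period {period} -> {base_rate_mhz / period:.0f} MHz")  # 8 and 4 MHz
```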
## Discussion
In this study, we proposed a method to reallocate the large \(\text{DOF}_{\text{spatial}}\) of the 2D-SLM to the DOF in the spatiotemporal domain in a tunable manner, thereby achieving ultrafast spatial light modulation with a refresh rate of around 10 MHz and a \(\text{DOF}_{\text{spatial}}\) value exceeding 100. Our method, without a scattering medium, can serve as a general-purpose 1D-SLM such as a GLV, albeit with a much higher refresh rate. Therefore, it can be broadly used in high-definition panoramic display[41], maskless lithography[42], and spectral shaping[43] applications.
Combined with scattering media, the FLASH focusing technique achieves random-access control of micrometer-sized focal spots over a large addressable volume of 5 mm \(\times\) 5 mm \(\times\) 5 mm at a maximum refresh rate of 31 MHz. The demonstrated results represent an improvement of more than one order of magnitude over existing varifocal lens schemes in terms of the refresh rate and tuning power. The FLASH technique, as a versatile holographic display, is broadly applicable to various light manipulation techniques, such as complex point-spread-function engineering[44], non-mechanical wide-angle beam steering for LiDAR, and the interrogation of large-scale dynamic biological phenomena on the nanosecond time scale and submicron length scale. The demonstrated scheme of binarized TM measurements also paves the way toward achieving real-time closed-loop wavefront shaping through a highly dynamic scattering medium, requiring only 100 \(\upmu\)s to characterize a TM with \(\sim\)1,000 input modes. For comparison, state-of-the-art wavefront shaping systems require more than 1 ms for TM measurements of the same size[45, 46].
The energy efficiency of the FLASH focusing technique was measured to be on the order of \(10^{-5}\). This low energy efficiency can be easily improved by one order of magnitude by using a wider-FOV objective lens and a 4K-resolution DMD (i.e., by using larger \(N\)). Moreover, the use of stimulated-emission light amplification with an energy-gain medium can amplify the energy of
scattered beams by four orders of magnitude without losing wavefront information, as demonstrated in recent work[47]. Additionally, the spatial characteristics and energy efficiency could be greatly improved using an engineered scattering medium instead of the opal diffuser glass used in this work[24]. In particular, using metasurface technology, the scattering profile can be precisely controlled to achieve the optimal transformation for specific manipulation tasks while also realizing the benefits of high transmittance and long-term stability. The focus control speed can be enhanced as well by incorporating a low-magnification objective lens for line beam projections and a faster 1D scanner such as a polygon scanner. Assuming the scan rate of a typical 24-facet polygon mirror, \(f_{scan}=60\) kHz, and a number of columns \(M\) of 2,000 with the extended FOV of a 2x objective lens, the proposed scheme can practically achieve a modulation speed of 120 MHz.
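The 120 MHz estimate follows directly from multiplying the scan rate by the number of columns swept per scan; a minimal sketch of the arithmetic:

```python
# Projected modulation speed with a polygon scanner: each scan sweeps all
# M columns, so the column rate is f_scan x M.
f_scan = 60e3   # polygon scan rate [Hz], 24-facet mirror as assumed in the text
M = 2000        # DMD columns within the extended FOV of a 2x objective lens
print(f"{f_scan * M / 1e6:.0f} MHz")  # 120 MHz
```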
To conclude, the present work has demonstrated that the conventional speed limit in spatial light modulator technology can be bypassed by exchanging the spatial DOF for the gain in the temporal DOF. We anticipate that the ability to handle DOFs in two domains in an interchangeable and tunable manner will pave the way for novel opportunities to investigate or manipulate systems that exhibit high spatial and temporal (or equivalently, spectral) complexity.
## Methods
### Detailed experimental setup
The overall experimental setup is depicted in Supplementary Fig. 2a. We used a 532 nm green laser (Verdi G5 SLM, Coherent) as the laser source. A collimated laser beam was split into two laser beams with a beam splitter. The laser beam travelling downward served as the reference beam for measuring the transmission matrix of a scattering medium using a phase-shifting holography technique. The reference beam's phase was controlled by a DMD (V-7001, Vialux). Meanwhile, the laser beam travelling to the right was employed for wavefront shaping. This beam was initially converted into a line beam using a cylindrical lens (ACY254-100-A, Thorlabs) and then relayed onto the DMD plane through an achromatic lens and a 4x objective lens (PLN4X, Olympus). The resonant scanner (RS) used here was positioned at the back focal plane of the objective lens to scan the line beam across the DMD. We introduced a volume holographic grating (WP-360/550-25.4, Wasatch Photonics) between the objective lens and the DMD to establish a retroreflective configuration, where the tilt angle of each DMD micromirror was offset by the first-order diffraction angle of the line beam. It is worth noting that the diffraction efficiency of the first-order beam diffracted through the holographic grating was measured to be approximately 75% at perpendicular incidence. After binary modulation of the illuminated columns of the DMD, the modulated line beam propagated backward along the incident optical path and was expanded back into a collimated beam through the cylindrical lens and RS, ensuring a fixed projected area regardless of the line beam position on the DMD. Here, the measured light utilization efficiency of our modulation method, defined as the ratio of the output power to the input power, was found to be around 10%. The input power was measured on the left side of the polarizing beam splitter, while the output power was measured on the projection plane at the scattering medium's surface.
Spatial filtering took place between two achromatic lenses (L2 and L3) to remove unwanted diffracted beams from the DMD. The scattered beam behind the scattering medium was imaged on a CMOS camera (acA1440-220um, Basler) using a microscopic setup consisting of a 60x objective lens (MPLAPON60X, Olympus) and a tube lens (TTL200, Thorlabs). A set comprising a photomultiplier tube (PMT1001, Thorlabs) and a pinhole with an appropriate diameter was placed in the conjugate plane of the camera for detecting the fast MHz modulation of focal spots behind the medium. The analogue voltage signal from the photomultiplier tube was recorded by an oscilloscope (Tektronix, TBS2102B). All voltage signals shown in this work were low-pass filtered with a cutoff frequency of 40 MHz to eliminate spike-like noise due to stray light.
### Alignment of the line beam on the DMD
We detail the process for implementing the precise alignment of the line beam onto a specific column of the DMD. First, we constructed an alignment unit comprising a pellicle beam splitter, a 4x objective lens, and a camera within the experimental setup to visually monitor the amplitude-modulated line beam generated by the DMD. Using this alignment unit, we performed spatial mapping between the DMD plane and the camera plane. During this process, the cylindrical lens was temporarily removed from the setup, allowing a collimated beam to illuminate the entire region of the DMD. The mapping between the two planes was achieved by displaying a specific binary pattern on the DMD and physically adjusting the spatial position and tip/tilt of the camera device using goniometric and translational stages. Upon completing the mapping, the cylindrical lens was reinserted into the setup to illuminate the line beam on the DMD. To ensure precise alignment of the line beam, the cylindrical lens was physically rotated with a rotation stage until
the line beam was accurately aligned parallel to the center column of the DMD. By operating the RS, we confirmed that the scanned line beam remained parallel to every DMD column. Precise implementation of this alignment procedure is essential for achieving 1D spatial modulation from a single DMD column without unwanted modulation from neighboring columns.
### Width of the line beam scanned on the DMD
To achieve 1D modulation by a single DMD column for \(M_{col}=1\) without unwanted modulation (i.e., crosstalk) from adjacent columns, the width of the line beam in the scanning direction must be narrower than the DMD pixel size. To verify that this condition is fulfilled, we monitored the intensity profile of the line beam on the DMD plane using a microscopic setup (Supplementary Figs. 3a and 3b). Supplementary Fig. 3c shows the FWHM for the 1D profile of the line beam in the scanning direction. The FWHMs of the line beam were confirmed to be narrower than each micromirror pitch of 13.7 \(\upmu\)m across the entire scanning range of around \(\pm 2.5\) mm (i.e., the physical width of all active DMD columns). For \(M_{col}\) values higher than 1, the width of the line beam illuminated on the DMD was properly adjusted by controlling the diameter of the incident beam onto the 4x objective lens in front of the DMD.
### Correction of column-wise wavefront distortion
We describe here the process of correcting column-dependent wavefront distortions. Each modulated beam from individual DMD columns propagates along different optical paths between the RS and the DMD. Moreover, there is wavefront curvature across the entire DMD surface. Due
to these factors, the modulated beam experiences column-dependent wavefront distortion. In this experiment, we corrected these wavefront distortions using a typical iterative optimization method based on Zernike modes. Initially, we measured the transmission matrix of the scattering medium using the center column \(\#m\) of the DMD without running the RS. We then calculated the phase pattern \(\varphi_{m}\) based on the measured transmission matrix and displayed its binary amplitude pattern \(A_{m}\) on the next DMD column \(\#m+1\) for focusing at a specific position behind the medium. Upon starting the RS, we observed a focal spot with a slightly lower peak intensity compared to that for the center column \(\#m\), as the DMD column \(\#m+1\) introduces a small amount of wavefront distortion. To correct this distortion and recover the degraded peak intensity, we optimized the correction pattern \(C_{m+1}\) using several Zernike modes to maximize the spot intensity at a specific position on the camera or on the photodetector. We only used the first-order and second-order Zernike modes representing 'defocus' and 'astigmatism' aberrations along the major axis of the line beam (i.e., the modulation direction). After correcting the distortion on column \(\#m+1\), the amplitude pattern \(A_{m+1}\), converted from the superposition of the focusing pattern \(\varphi_{m}\) and the correction pattern \(C_{m+1}\), was displayed on the next column \(\#m+2\). Similarly, the correction pattern \(C_{m+2}\) on column \(\#m+2\) was optimized using the two aberration modes to offset the wavefront distortion. By repeating this optimization for every column, we eventually acquired a set of wavefront correction patterns \(\{C_{1},C_{2},C_{3},...,C_{M}\}\) for all columns. In practice, as the correction patterns of neighboring columns are highly correlated, it was not necessary to calibrate every column individually. Instead, we measured wavefront correction patterns \(\{C_{1},C_{11},C_{21},...,C_{M}\}\) for every tenth column and calculated the remaining correction patterns by interpolation.
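A minimal sketch of this column-by-column correction loop is given below. The measurement routine and the mode profiles are simulated stand-ins for the actual hardware calls and Zernike polynomials; only the overall logic (a greedy amplitude search over two aberration modes per column, with calibration every tenth column and interpolation in between) follows the procedure described above.

```python
import numpy as np
from scipy.interpolate import interp1d

rng = np.random.default_rng(0)
N_PIX, N_COLS = 352, 340
true_aberration = rng.uniform(-1.0, 1.0, size=(N_COLS + 1, 2))  # simulated per-column amplitudes

def mode_profile(i, n=N_PIX):
    """Toy 1D stand-ins for the 'defocus' and 'astigmatism' modes."""
    x = np.linspace(-1.0, 1.0, n)
    return 2.0 * x**2 - 1.0 if i == 0 else x**3 - x

def spot_intensity(col, correction):
    """Hypothetical measurement: highest when `correction` cancels the
    column's simulated aberration (stands in for a camera/PMT reading)."""
    target = sum(a * mode_profile(i) for i, a in enumerate(true_aberration[col]))
    return float(np.exp(-np.linalg.norm(correction - target) ** 2))

def correct_column(col, scan=np.linspace(-1.5, 1.5, 31)):
    """Greedy amplitude search over the two modes, as described in the text."""
    correction = np.zeros(N_PIX)
    for i in (0, 1):
        best = max(scan, key=lambda a: spot_intensity(col, correction + a * mode_profile(i)))
        correction = correction + best * mode_profile(i)
    return correction

# Calibrate every tenth column, then interpolate the remaining corrections.
anchors = np.arange(1, N_COLS + 1, 10)
measured = np.array([correct_column(c) for c in anchors])
interpolator = interp1d(anchors, measured, axis=0, fill_value="extrapolate")
C_all = {c: interpolator(c) for c in range(1, N_COLS + 1)}
```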
### Continuous ultrafast spatial modulation
Here, we describe the process of synchronously updating DMD patterns in accordance with the movement of the RS for continuous spatial modulation. First, it is crucial to note that the DMD needs a finite time to refresh the frames. Specifically, since the DMD controller operates at a 400 MHz clock speed and takes 40 ns to load a single row with 16 clocks, loading a binary image consisting of all 352 rows in the active DMD area requires approximately 14 \(\upmu\)s. After loading, each micromirror takes around 4 \(\upmu\)s to change its state and an additional 8 \(\upmu\)s to mechanically settle down. Considering all these time intervals, a complete frame refresh requires a total of about 26 \(\upmu\)s. To allocate sufficient time for the continuous updating of DMD frames upon every directional change of the RS, we set the RS angular range to be approximately 2.7 times broader than the minimum angular range precisely matched to the active DMD area of M=340. With this setting, we can safely reserve around 30 \(\upmu\)s, which is longer than the DMD frame refresh time (26 \(\upmu\)s).
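The 26 \(\upmu\)s frame-refresh budget quoted above can be reproduced from the stated timing figures:

```python
# DMD frame refresh budget from the figures quoted in the text.
clock_hz = 400e6                 # DMD controller clock
rows, clocks_per_row = 352, 16   # active rows, clocks needed to load one row
load = rows * clocks_per_row / clock_hz   # ~14.1 us to load a binary frame
switch, settle = 4e-6, 8e-6               # micromirror transition and settling times
print(f"total refresh ~ {(load + switch + settle) * 1e6:.1f} us")  # ~26 us
```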
Supplementary Figs. 7a and 7b show the system control diagram and the electrical signal flow diagram, respectively. The analogue voltage output from channel 1 (CH1) of the function generator (Tektronix, AFG1062) controls the amplitude (i.e., angular range) of the RS. The synchronization signal with a frequency of 12 kHz from the RS controller is fed to the trigger input terminal of the function generator (FG). Upon detecting the rising edge of the synchronization signal from the RS, CH2 of the FG outputs a 24 kHz pulse train to the trigger input terminal of the DMD. In this experiment, the time delay between the RS synchronization signal and the pulse train is adjusted to 30 \(\upmu\)s, enabling the DMD to start refreshing once the RS has passed through the DMD active area. Once the DMD detects the rising edge of the pulsed signal from the FG, it spends the first 26 \(\upmu\)s on a complete refresh, which is followed by a stable projection time for the next 13 \(\upmu\)s. CH2 of the FG is also connected to the trigger input terminal of the oscilloscope
(Tektronix, TBS2102B) to control the timing of the acquisition of the analogue voltage signal from the photomultiplier tube (PMT). The output of the PMT is connected to CH1 of the oscilloscope to record the voltage signal waveform.
Lastly, we consider the available number of 1D patterns for continuous modulation under the current setting. A DMD module can only achieve an update rate of around 24 kHz when using binary frames stored in on-board memory. Thus, the total number of available 1D patterns can be calculated as the product of the number of prestored binary frames and the number of active columns on each frame. In the current setting, as a total of 175,000 binary frames can be prestored with 16 GB of on-board memory, each containing 340 columns, the total number of available 1D patterns is around 60 million. This total number could be further improved by streaming new frames from the computer into the on-board memory while sequentially displaying the prestored frames on the DMD from memory.
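The quoted \(\sim\)60 million figure is simply the product of the two counts:

```python
frames_in_memory = 175_000   # binary frames prestored in 16 GB of on-board memory
columns_per_frame = 340      # active DMD columns per frame
print(frames_in_memory * columns_per_frame)  # 59,500,000 -> around 60 million 1D patterns
```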
## References
* [1] Göbel, W., Kampa, B. M. & Helmchen, F. Imaging cellular network dynamics in three dimensions using fast 3D laser scanning. _Nat. Methods_**4**, 73-79 (2007).
* [2] Kim, K. H., Buehler, C. & So, P. T. C. High-speed, two-photon scanning microscope. _Appl. Opt._**38**, 6004-6009 (1999).
* [3] Sugioka, K., Cheng, Y. & Midorikawa, K. Three-dimensional micromachining of glass using femtosecond laser for lab-on-a-chip device manufacture. _Appl. Phys. A Mater. Sci. Process._**81**, 1-10 (2005).
* [4] Lechleiter, J. D., Lin, D. T. & Sieneart, U. Multi-photon laser scanning microscopy using an acoustic optical deflector. _Biophys. J._**83**, 2292-2299 (2002).
* [5] Helmchen, F., Fee, M. S., Tank, D. W. & Denk, W. A miniature head-mounted two-photon microscope: High-resolution brain imaging in freely moving animals. _Neuron_**31**, 903-912 (2001).
* [6] Kang, S. Y., Duocastella, M. & Arnold, C. B. Variable optical elements for fast focus control. _Nat. Photon._**14**, 533-542 (2020).
* [7] Ren, H., Xianyu, H., Xu, S. & Wu, S.-T. Adaptive dielectric liquid lens. _Opt. Express_**16**, 14954-14960 (2008).
* [8] Hao, C. _et al._ Electrowetting on liquid-infused film (EWOLF): Complete reversibility and controlled droplet oscillation suppression for fast optical imaging. _Sci. Rep._**4**, 6846 (2014).
* [9] Bernet, S., Harm, W. & Ritsch-Marte, M. Demonstration of focus-tunable diffractive Moire-lenses. _Opt. Express_**21**, 6955-6966 (2013).
* [10] Oku, H. & Ishikawa, M. High-speed liquid lens with 2ms response and 80.3 nm root-mean-square wavefront error. _Appl. Phys. Lett._**94**, 221108 (2009).
* [11] Ee, H. S. & Agarwal, R. Tunable Metasurface and Flat Optical Zoom Lens on a Stretchable Substrate. _Nano Lett._**16**, 2818-2823 (2016).
* [12] Imai, T. _et al._ Fast response varifocal lenses using KTa1-xNbxO3 crystals and a simulation method with electrostrictive calculations. _Appl. Opt._**51**, 1532-1539 (2012).
* [13] Duemani Reddy, G., Kelleher, K., Fink, R. & Saggau, P. Three-dimensional random access multiphoton microscopy for functional imaging of neuronal activity. _Nat. Neurosci._**11**, 713-720 (2008).
* [14] Chen, T. H., Fardel, R. & Arnold, C. B. Ultrafast z-scanning for high-efficiency laser micro-machining. _Light Sci. Appl._**7**, e17181 (2018).
* [15] Nadella, K. M. N. S. _et al._ Random-access scanning microscopy for 3D imaging in awake behaving animals. _Nat. Methods_**13**, 1001-1004 (2016).
* [16] Reutsky-Gefen, I. _et al._ Holographic optogenetic stimulation of patterned neuronal activity for vision restoration. _Nat. Commun._**4**, 1509 (2013).
* [17] Packer, A. M., Russell, L. E., Dalgleish, H. W. P. & Häusser, M. Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo. _Nat. Methods_**12**, 140-146 (2015).
* [18] Guo, Z. V., Hart, A. C. & Ramanathan, S. Optical interrogation of neural circuits in Caenorhabditis elegans. _Nat. Methods_**6**, 891-896 (2009).
* [19] Conkey, D. B., Caravaca-Aguirre, A. M. & Piestun, R. High-speed scattering medium characterization with application to focusing light through turbid media. _Opt. Express_**20**, 1733-1740 (2012).
* [20] Perreault, J. A., Bifano, T. G., Levine, B. M. & Horenstein, M. N. Adaptive optic correction using microelectromechanical deformable mirrors. _Opt. Eng._**41**, 561-566 (2002).
* [21] Reddy, G. D. & Saggau, P. Fast three-dimensional laser scanning scheme using acousto-optic deflectors. _J. Biomed. Opt._**10**, 064038 (2005).
* [22] Malinauskas, M. _et al._ Ultrafast laser processing of materials: From science to industry. _Light Sci. Appl._**5**, e16133 (2016).
* [23] Yu, H., Lee, K., Park, J. & Park, Y. Ultrahigh-definition dynamic 3D holographic display by active control of volume speckle fields. _Nat. Photon._**11**, 186-192 (2017).
* [24] Jang, M. _et al._ Wavefront shaping with disorder-engineered metasurfaces. _Nat. Photon._**12**, 84-90 (2018).
* [25] Tzang, O. _et al._ Wavefront shaping in complex media with a 350 kHz modulator via a 1D-to-2D transform. _Nat. Photon._**13**, 788-793 (2019).
* [26] Smolyaninov, A., El Amili, A., Vallini, F., Pappert, S. & Fainman, Y. Programmable plasmonic phase modulation of free-space wavefronts at gigahertz rates. _Nat. Photon._**13**, 431-435 (2019).
* [27] Benea-Chelmus, I. C. _et al._ Electro-optic spatial light modulator from an engineered organic layer. _Nat. Commun._**12**, 5928 (2021).
* [28] Panuski, C. L. _et al._ A full degree-of-freedom spatiotemporal light modulator. _Nat. Photon._**16**, 834-842 (2022).
* [29] Vellekoop, I. M. & Mosk, A. P. Phase control algorithms for focusing light through turbid media. _Opt. Commun._**281**, 3071-3080 (2008).
* [30] Conkey, D. B., Brown, A. N., Caravaca-Aguirre, A. M. & Piestun, R. Genetic algorithm optimization for focusing through turbid media in noisy environments. _Opt. Express_**20**, 4840-4849 (2012).
* [31] Cui, M. & Yang, C. Implementation of a digital optical phase conjugation system and its application to study the robustness of turbidity suppression by phase conjugation. _Opt. Express_**18**, 3444-3455 (2010).
* [32] Popoff, S. M., Lerosey, G., Fink, M., Boccara, A. C. & Gigan, S. Controlling light through optical disordered media: Transmission matrix approach. _New J. Phys._**13**, 123021 (2011).
* [33] Kim, M., Choi, W., Choi, Y., Yoon, C. & Choi, W. Transmission matrix of a scattering medium and its applications in biophotonics. _Opt. Express_**23**, 12648-12668 (2015).
* [34] Vellekoop, I. M. & Mosk, A. P. Focusing coherent light through opaque strongly scattering media. _Opt. Lett._**32**, 2309-2311 (2007).
* [35] Akbulut, D., Huisman, T. J., Van Putten, E. G., Vos, W. L. & Mosk, A. P. Focusing light through turbid media by binary amplitude modulation. _Opt. Express_**19**, 4017-4029 (2011).
* [36] Jang, M., Ruan, H., Zhou, H., Judkewitz, B. & Yang, C. Method for auto-alignment of digital optical phase conjugation systems based on digital propagation. _Opt. Express_**22**, 14054-14071 (2014).
* [37] Wang, Y. M., Judkewitz, B., Dimarzio, C. A. & Yang, C. Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light. _Nat. Commun._**3**, 928 (2012).
* [38] Leite, I. T. _et al._ Three-dimensional holographic optical manipulation through a high-numerical-aperture soft-glass multimode fibre. _Nat. Photon._**12**, 33-39 (2018).
* [39] Packer, A. M., Roska, B. & Häusser, M. Targeting neurons and photons for optogenetics. _Nat. Neurosci._**16**, 805-815 (2013).
* [40] Ducros, M., Houssen, Y. G., Bradley, J., De Sars, V. & Charpak, S. Encoded multisite two-photon microscopy. _Proc. Natl. Acad. Sci. U. S. A._**110**, 13138-13143 (2013).
* [41] Kikuchi, H. _et al._ High-pixel-rate grating-light-valve laser projector. _J. Soc. Inf. Disp._**17**, 263-269 (2012).
* [42] Kim, K. R. _et al._ SLM-based maskless lithography for TFT-LCD. _Appl. Surf. Sci._**255**, 7835-7840 (2009).
* [43] Frumker, E. & Silberberg, Y. Femtosecond pulse shaping using a two-dimensional liquid-crystal spatial light modulator. _Opt. Lett._**32**, 1384-1386 (2007).
* [44] Mounaix, M., Boniface, A., Blochet, B., Piestun, R. & Gigan, S. Point-spread-function engineering through a complex medium. _Optica_**4**, 54-59 (2017).
* [45] Feldkhun, D., Tzang, O., Wagner, K. H. & Piestun, R. Focusing and scanning through scattering media in microseconds. _Optica_**6**, 72-75 (2019).
* [46] Wei, X. _et al._ Real-time frequency-encoded spatiotemporal focusing through scattering media using a programmable 2D ultrafine optical frequency comb. _Sci. Adv._**6**, eaay1192 (2020).
* [47] Cheng, Z., Li, C., Khadria, A., Zhang, Y. & Wang, L. V. High-gain and high-speed wavefront shaping through scattering media. _Nat. Photon._**17**, 299-305 (2022).
### Acknowledgments
This work was supported by JST-FOREST (Grant No. JPMJFR205E) and JST-CREST (Grant No. JPMJCR1656), by JSPS KAKENHI (Grant No. JP21H00404 and JP21H02446 to Y.S. and JP21H01393 to A.S.), by the Nakajima Foundation, the Research Foundation of Opto-Science and Technology, by Konica Minolta Science and Technology Foundation, by the Ozawa and Yoshikawa Memorial Electronics Research Foundation, by the Cooperative Research Project of Research Center for Biomedical Engineering, by the National Research Foundation of Korea (NRF) grants funded by the Korea government (MSIT) (Grant No. NRF-2021R1A5A1032937 and 2021R1C1C1011307), by the Korea Agency for Infrastructure Technology Advancement grant funded by the Ministry of Land, Infrastructure and Transport (Grant No. 22NPSS-C163379-02), by the Photo-excitonix Project in Hokkaido University, and by Crossover Alliance to Create the Future with People, Intelligence and Materials from MEXT.
### Author contributions
A.S. conceived of the initial idea. A.S. and M.J. expanded and developed the concept. A.S. and M.J. developed the theoretical model and designed the experiments. A.S. carried out all experimental processes and analyzed the experimental data with the help of R.H. and M.J. A.S. and M.J. wrote the manuscript with the help of R.H., G.S., H.M., and Y.S. H.M., Y.S., and M.J. supervised the project.
### Competing financial interests
The authors declare no competing financial interests.
## Supplementary Figure 1
## Supplementary Figure 2
and detected by two photomultiplier tubes (PMT1 and PMT2) through relay optics with collimating lenses (Thorlabs, F240APC-532, CL) and doublet lenses (L) with f = 50 mm.
## Supplementary Figure 3
**Supplementary Figure 3 \(|\)** Validation of the line beam scanning width over the DMD plane in the setting of \(\mathbf{M_{col}=1}\). **a**, Optical setup for measuring the width of the line beam on the DMD plane. A microscopic setup composed of a 20x objective lens (Olympus, PLN20X), a tube lens (Thorlabs, TTL180-A), and a camera (Basler, acA1440-220um) was constructed to observe the 2D intensity profile of the line beam on the DMD plane. To measure the width of the line beam in a stationary state at specific positions, the resonant scanner was replaced with a Galvano scanner (GS). **b**, 2D profiles of the line beam at representative positions in the scan direction. Scale bar: 10 \(\upmu\)m. **c**, FWHM of the 1D profiles along the blue dotted lines in **b**. The micromirror pitch (pixel size) of the DMD is indicated by the dashed green line. Within the entire width (\(\sim 4.6\) mm) of all 340 DMD columns, denoted as the 'DMD active area', the FWHM was consistently smaller than the micromirror pitch of 13.7 \(\upmu\)m.
## Supplementary Figure 4
**Supplementary Figure 4 \(|\)** Observation of a uniform and highly scattering profile of our scattering medium. **a**, Optical setup for measuring the scattering profile of the scattering medium (SM). A pencil laser beam with a diameter of 0.5 mm was injected onto the medium. The intensity of the scattered beam on the back focal plane (BFP) of the 60x objective (OBJ) was imaged on the camera (CAM) through the achromatic lenses L1 and L2. The focal lengths of L1 and L2 are both 150 mm. **b**, 2D scattering intensity profile of the scattering medium detected by the 60x objective with a numerical aperture (NA) of 0.9.
## Supplementary Figure 5
**Supplementary Figure 5** \(|\) Measured FWHM of foci at different x positions behind a scattering medium for the target planes of z = 0.5 mm, 1 mm, 1.5 mm, 2 mm, and 3 mm. The theoretical FWHM for a diffraction-limited focal spot with an NA of 0.45 at a wavelength of 532 nm is indicated by the dotted blue line for reference.
## Supplementary Figure 6
**Supplementary Figure 6** \(|\) Correction of column-dependent wavefront distortions using a few Zernike modes. **a**, Focal spots reconstructed from representative DMD columns without our correction method. The left panel shows the phase pattern \(\varphi_{spot}\) calculated from the measured transmission matrix of the scattering medium. The binarized amplitude pattern converted from this phase pattern was individually displayed onto the representative DMD columns. Scale bar: 1 \(\upmu\)m. **b**, Focal spots reconstructed with our correction method. The left panel shows the same phase pattern \(\varphi_{spot}\), while the upper panels show the correction phase patterns \(\varphi_{cor}\) expressed by the superposition of a few Zernike modes with different optimized amplitudes. Scale bar: 1 \(\upmu\)m. **c**, Contrast of focal spots for different DMD column numbers with and without the wavefront correction method.
## Supplementary Figure 7
## Supplementary Figure 8
**Supplementary Figure 8 \(|\)** Demonstration of ultrafast focus control over a large 3D volume using two distantly placed single-mode fibers. **a-b**, Optical signals for the on/off modulation of a focal spot over the proximal ends of the fibers. The two fibers were positioned at (x, y, z) = (0, 2.5, 2) mm and (0, 0, 2) mm in **a**, and at (x, y, z) = (0, 2.5, 2) mm and (0, 0, 7.5) mm in **b**, respectively. Inset plots show the frequency spectra of the optical signals in **a** and **b**. Orange stars indicate peaks in the frequency spectra. |
2310.20003 | Early Detection of Depression and Eating Disorders in Spanish: UNSL at
MentalRiskES 2023 | MentalRiskES is a novel challenge that proposes to solve problems related to
early risk detection for the Spanish language. The objective is to detect, as
soon as possible, Telegram users who show signs of mental disorders considering
different tasks. Task 1 involved the users' detection of eating disorders, Task
2 focused on depression detection, and Task 3 aimed at detecting an unknown
disorder. These tasks were divided into subtasks, each one defining a
resolution approach. Our research group participated in subtask A for Tasks 1
and 2: a binary classification problem that evaluated whether the users were
positive or negative. To solve these tasks, we proposed models based on
Transformers followed by a decision policy according to criteria defined by an
early detection framework. One of the models presented an extended vocabulary
with important words for each task to be solved. In addition, we applied a
decision policy based on the history of predictions that the model performs
during user evaluation. For Tasks 1 and 2, we obtained the second-best
performance according to rankings based on classification and latency,
demonstrating the effectiveness and consistency of our approaches for solving
early detection problems in the Spanish language. | Horacio Thompson, Marcelo Errecalde | 2023-10-30T20:38:31Z | http://arxiv.org/abs/2310.20003v1 | # Early Detection of Depression and Eating Disorders in Spanish: UNSL at MentalRiskES 2023
###### Abstract
MentalRiskES is a novel challenge that proposes to solve problems related to early risk detection for the Spanish language. The objective is to detect, as soon as possible, Telegram users who show signs of mental disorders considering different tasks. Task 1 involved the users' detection of eating disorders, Task 2 focused on depression detection, and Task 3 aimed at detecting an unknown disorder. These tasks were divided into subtasks, each one defining a resolution approach.
Our research group participated in subtask A for Tasks 1 and 2: a binary classification problem that evaluated whether the users were positive or negative. To solve these tasks, we proposed models based on Transformers followed by a decision policy according to criteria defined by an early detection framework. One of the models presented an extended vocabulary with important words for each task to be solved. In addition, we applied a decision policy based on the history of predictions that the model performs during user evaluation.
For Tasks 1 and 2, we obtained the second-best performance according to rankings based on classification and latency, demonstrating the effectiveness and consistency of our approaches for solving early detection problems in the Spanish language.
Early Risk Detection, Classification Problem, Transformers, Decision Policy
IberLEF 2023, September 2023, Jaen, Spain [email protected]
[email protected]
## 1 Introduction
According to the World Health Organization, one in every eight people worldwide suffers from a mental disorder. Anxiety, depression, bipolar disorder, and eating disorders are the most frequent [1]. Different social networks have become mass media chosen by people to share information and express their emotions, and several studies show the relationship between the use of social networks and mental disorders [2, 3, 4]. Therefore, there is a growing interest in the early identification of users suffering from these disorders in order to provide them with appropriate help. Evaluation conferences such as CLEF eRisk have encouraged research groups to tackle early detection challenges in different domains [5, 6, 7, 8, 9, 10, 11]. However, no challenge of these characteristics currently exists for Spanish, highlighting the urgent need to promote initiatives in this language.
MentalRiskES is a novel challenge that proposes to solve problems of early risk detection of mental disorders in Spanish [12]. In this first edition, three tasks were defined with the same
objective: to detect Telegram users who show signs of mental disorders as early as possible. Task 1 consisted of the detection of users with eating disorders, Task 2 was related to depression detection, and Task 3 to the detection of an unknown disorder. Each task was divided into subtasks for solving the problem considering different approaches:
**Binary classification (subtask A):** To decide whether or not a user suffers from a mental disorder by considering the positive and negative classes.
**Simple regression (subtask B):** To provide an affectation probability on positive and negative classes.
**Multi-class classification (subtask C):** To decide whether a user suffers from a mental disorder and evaluate their attitude towards it by considering additional classes.
**Multi-output regression (subtask D):** To provide a confidence probability for the additional classes.
Subtasks A and B were defined for Tasks 1, 2, and 3, while subtasks C and D were also included for Task 2.
Early risk detection can be analyzed as a multi-objective problem, where the challenge is to find an adequate balance between the precision in identifying risky users and the minimum time required for that decision to be reliable. Our research group achieved notable results in the 2021 [13], 2022 [14], and 2023 (article currently under review) editions of CLEF eRisk. In the last two editions, we used an early detection framework [15] which establishes two necessary components: one dedicated to solving a user classification problem (classification with partial information, or CPI), and another implementing a decision policy to decide when to stop evaluating a user (deciding the moment of classification, or DMC). In particular, we applied the framework by using a BERT model [16] with an extended vocabulary (CPI component) and a decision policy based on a historic rule (DMC component). In this first edition of MentalRiskES, we participated in Tasks 1 and 2 according to subtask A. Following our Transformers-based approaches, we used the BETO model [17], a variant of BERT trained on large Spanish corpora, and we adjusted the historic rule according to the tasks to be solved.
The present work describes the approaches used by our research group to solve Tasks 1 and 2. Section 2 details the datasets, classification models, and decision policies applied. Section 3 shows the results obtained in both tasks and Section 4 presents the conclusions and future works.
## 2 Resolution Method
The challenge was divided into two stages: a _training stage_, where the participants experimented with data provided by the Organizers, and a _test stage_, where a client application interacted with a server, emulating an early detection environment. This last process was carried out in rounds, during which the client requested the next post of each user, evaluated it with each predictive model, and returned the corresponding responses to the server.
### Datasets
Three corpora were available for each of Tasks 1 and 2, as shown in Table 1. The _Train_ and _Trial_ corpora were available for the participants to implement their proposals. The _Trial_ corpora were intended to test the connection between the client application and the server. The _Test_ corpora were used by the Organizers to evaluate the participating models. It should be noted that, for both tasks, the number of samples is limited. In the training stage, 185 samples between _Train_ and _Trial_ were available, while in the test stage, 150 users were evaluated. Furthermore, in contrast to what typically occurs in these classification problems, the classes exhibit a relatively balanced distribution, as evidenced by the number of positive and negative users. On the other hand, for Tasks 1 and 2, the median number of posts per user in the _Test_ corpora is approximately 21 and 31, respectively. This fact is relevant because a model with acceptable performance should finish the evaluation of a user within a smaller number of posts. The maximum number of posts per user in the _Test_ corpus indicates the total number of rounds in the test stage: 50 for Task 1 and 100 for Task 2. Finally, it is observed that the posts were relatively short (between 8 and 9 words per post for each task).
### CPI components: Models
**Training set.** Due to the limited data available, we augmented the number of samples. For each user, we divided the list of posts into three equal parts according to the list length. Each portion was labeled with the user's label and added to the training set. In this way, we obtained approximately 500 new samples to train the models. In addition, this allowed the models to be trained on different contexts of the users' history, helping to overcome the limitation of BERT architectures, which only admit 512 input tokens. Each model was then trained and validated using an 85/15 split of the _Train_ corpus with the added samples.
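The following is a minimal sketch of this augmentation step; the function and data layout are our own, and only the splitting-into-thirds logic follows the description above.

```python
def augment_users(users):
    """Split each user's post history into three equal parts, each part
    inheriting the user's label (a sketch of the augmentation above)."""
    augmented = []
    for posts, label in users:
        third = max(1, len(posts) // 3)
        for start in range(0, 3 * third, third):
            augmented.append((posts[start:start + third], label))
    return augmented

# e.g., a 30-post positive user yields three 10-post positive samples
samples = augment_users([([f"post {i}" for i in range(30)], 1)])
assert len(samples) == 3 and all(label == 1 for _, label in samples)
```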
**Preprocessing.** Some preprocessing actions were performed before the _fine-tuning_ process. Characters were converted to lowercase, while Unicode and HTML codes were transformed into their corresponding symbols. Web pages and numbers were replaced by the _weblink_ and _number_ tokens, respectively. Repeated words and spaces were also removed.
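A sketch of these preprocessing actions using standard Python utilities is shown below; the exact regular expressions are our own approximation of the rules described above.

```python
import html
import re
import unicodedata

def preprocess(text):
    text = html.unescape(text)                  # HTML codes -> symbols
    text = unicodedata.normalize("NFKC", text)  # Unicode codes -> symbols
    text = text.lower()                         # lowercase
    text = re.sub(r"https?://\S+|www\.\S+", "weblink", text)  # web pages
    text = re.sub(r"\d+", "number", text)                     # numbers
    text = re.sub(r"\b(\w+)(\s+\1\b)+", r"\1", text)          # repeated words
    return re.sub(r"\s+", " ", text).strip()                  # repeated spaces

print(preprocess("Mira   esto www.ejemplo.com 100 veces veces"))
# -> "mira esto weblink number veces"
```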
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
 & \multirow{2}{*}{\textbf{Corpus}} & \multicolumn{3}{c|}{\textbf{\#Users}} & \multirow{2}{*}{\textbf{\#Posts}} & \multicolumn{3}{c|}{\textbf{\#Posts per user}} & \multicolumn{3}{c|}{\textbf{\#Words per post}} \\ \cline{3-5} \cline{7-12}
 & & \textbf{Total} & \textbf{Pos} & \textbf{Neg} & & \textbf{Med} & \textbf{Min} & \textbf{Max} & \textbf{Med} & \textbf{Min} & \textbf{Max} \\ \hline
\multirow{3}{*}{\textbf{Task 1}} & Train & 175 & 74 & 101 & 5931 & 35.0 & 11 & 50 & 9.0 & 2 & 899 \\
 & Trial & 10 & 5 & 5 & 389 & 48.5 & 18 & 50 & 9.0 & 3 & 753 \\
 & Test & 150 & 64 & 86 & 4179 & 21.5 & 11 & 50 & 9.0 & 2 & 894 \\ \hline
\multirow{3}{*}{\textbf{Task 2}} & Train & 175 & 94 & 81 & 6248 & 26.0 & 11 & 100 & 9.0 & 1 & 783 \\
 & Trial & 10 & 6 & 4 & 624 & 68.0 & 11 & 100 & 9.0 & 3 & 201 \\
 & Test & 149 & 68 & 81 & 5164 & 31.0 & 11 & 100 & 8.0 & 1 & 368 \\ \hline
\end{tabular}
\end{table}
Table 1: Details of the corpora of Tasks 1 and 2. The number of users (total, positive, and negative) and the total number of posts in each corpus are reported, together with the median, minimum, and maximum number of posts per user and words per post.
**Classifier type.** We used the BETO model (version: _dccuchile/bert-base-spanish-wwm-uncased_), applying the _fine-tuning_ process to adjust it to each task. Different hyperparameters were considered, and a scheduler was used to automatically adjust the learning rate during _fine-tuning_, improving the model convergence. For Tasks 1 and 2, we presented two proposals:
* **Classic BETO model**. We imported the pre-trained model and applied the _fine-tuning_ process. It was a baseline model.
* **BETO model with an extended vocabulary**. Important words were added according to the task to be solved. They were extracted from an external model known as SS3 [18]. We trained SS3 to classify users on the available corpora, and we selected the best words according to the confidence values on the positive class. For Task 1, _ayuno_ (fasting), _cals_ (calories acronym), and _atracones_ (binge eating), and for Task 2, _decepcionada_ (disappointed), _sucidarme_ (to commit suicide), and _daño_ (damage) are some examples of important words. We evaluated the number of words added to each model, in a range of 5 to 50, considering the validation performance (a sketch of this vocabulary extension is shown after this list).
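The sketch below illustrates the vocabulary extension with the Hugging Face _transformers_ API; the word list is illustrative, as the actual words were selected from SS3 confidence values as described above.

```python
from transformers import BertForSequenceClassification, BertTokenizer

name = "dccuchile/bert-base-spanish-wwm-uncased"  # BETO
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForSequenceClassification.from_pretrained(name, num_labels=2)

# Illustrative task words (the real lists came from SS3 confidence values).
new_words = ["ayuno", "cals", "atracones"]
added = tokenizer.add_tokens(new_words)

# Make room for the new rows in the embedding matrix before fine-tuning.
model.resize_token_embeddings(len(tokenizer))
print(f"{added} tokens added; vocabulary size is now {len(tokenizer)}")
```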
Finally, the best CPI model for each proposal was chosen according to the F1 metric over the positive class (F1+). Table 2 shows a summary of the hyperparameters selected for each task.
### DMC component: Decision Policy
The next step was to find the best decision policy for each task using a mock server (available at: [https://github.com/jmloyola/erisk_mock_server](https://github.com/jmloyola/erisk_mock_server)). This tool simulates the eRisk challenge through rounds of post requests and answer submissions, and it allows the calculation of the final results according to decision- and ranking-based metrics. It was helpful since the performance of CPI models can change drastically when they are evaluated in an early environment. A client application was defined to manage the interaction with the server. When it receives a round of posts, the system preprocesses the writings, invokes the predictive models (CPI), and applies a decision policy (DMC). To take advantage of the 512 input tokens that the BETO architecture admits, the application uses the last \(N=10\) posts (posts window), concatenating the current post with the previous ones. With the mock server, the client application, and the predictive models, different decision policies were evaluated using the F1+, ERDE-5, ERDE-50, and latency-weighted F1 metrics. It should be noted that the client application was also used in the test stage of MentalRiskES.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
 & \textbf{Team\#Run} & \textbf{Model type} & \textbf{Batch size} & \textbf{Learning rate} & \textbf{\#Epochs} & \textbf{\#Added words} & \textbf{Scheduler} \\ \hline
\multirow{2}{*}{\textbf{Task 1}} & \textbf{UNSL\#0} & Classic BETO & 8 & 3E-5 & 5 & -- & \multirow{4}{*}{Linear scheduler with warmup} \\
 & \textbf{UNSL\#1} & BETO with extended vocabulary & 8 & 2E-5 & 3 & 25 & \\ \cline{1-7}
\multirow{2}{*}{\textbf{Task 2}} & \textbf{UNSL\#0} & Classic BETO & 8 & 2E-5 & 5 & -- & \\
 & \textbf{UNSL\#1} & BETO with extended vocabulary & 8 & 2E-5 & 5 & 25 & \\ \hline
\end{tabular}
\end{table}
Table 2: Hyperparameters of the models presented for Tasks 1 and 2.
**Decision policy: Historic rule**
"_If the current prediction and last \(M\) predictions exceed \(T\) times a Threshold, the client application must issue a risky user alarm; otherwise, it is necessary to continue the user evaluation_".
The parameter \(M\) is the number of past predictions that the rule considers, \(T\) is the tolerance, i.e., how many predictions can exceed the _Threshold_ before issuing an alarm, and _Threshold_ is the limit probability to predict a user as positive. In addition, the rule has the _min_delay_ parameter, which defines the moment when it will start to apply. Table 3 shows the best parameters for each task, which were found by evaluating the models with the mock server on the _Trial_ corpus.
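A sketch of this decision policy, together with the \(N=10\) posts window used by the client application, is shown below; the names are ours and the model invocation is left as a stand-in.

```python
def historic_rule(probs, threshold=0.7, tolerance=5, min_delay=5):
    """Issue an alarm once `tolerance` predictions in the history `probs`
    exceed `threshold`, and never before `min_delay` predictions."""
    if len(probs) < min_delay:
        return False
    return sum(p > threshold for p in probs) >= tolerance

def evaluate_round(user_posts, history, predict, window=10):
    """One client round: build the posts window, query the model, decide."""
    context = " ".join(user_posts[-window:])  # current post plus previous ones
    history.append(predict(context))          # positive-class probability
    return historic_rule(history)             # True -> flag the user as risky

history = []
toy_predict = lambda text: 0.9                # stand-in for the BETO model
for k in range(1, 8):
    posts = [f"post {i}" for i in range(k)]
    if evaluate_round(posts, history, toy_predict):
        print(f"alarm issued at post {k}")    # fires at post 5 with these settings
        break
```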
In summary, the final models to solve Tasks 1 and 2 were: UNSL#0, the classic BETO model (CPI) combined with the historic rule (DMC), and UNSL#1, the BETO model with an extended vocabulary (CPI) combined with the historic rule (DMC).
## 3 Results
The Organizers evaluated the teams considering metrics based on classification and latency for subtask A of Tasks 1 and 2. The first metrics evaluate the models according to classification performance, while the second ones penalize performance considering the number of posts required to detect positive users. The Organizers published a results report with team rankings ordered according to the Macro-F1 (classification-based evaluation) and ERDE-30 (latency-based evaluation) metrics.
### Task 1 - Subtask A
Table 4 shows the results obtained considering the classification metrics. The models with the best Macro-F1 were CIMAT-NLP-GTO#0 with 0.966, followed by UMUTeam#0 (0.918) and UNSL#1 (0.913). Considering the mean and median values among all the proposals (in total, 25), the three models showed excellent classification performance. For its part, UNSL#0 obtained
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
 & \textbf{M} & \textbf{T} & \textbf{Threshold} & \textbf{min\_delay} \\ \hline
\textbf{HistoricRule\_T1} & all predictions & 5 & 0.7 & after 5 predictions \\ \hline
\textbf{HistoricRule\_T2} & all predictions & 10 & 0.7 & after 10 predictions \\ \hline
\end{tabular}
\end{table}
Table 3: Best parameters to define the decision policy based on the historic rule. The rule was applied for the UNSL\#0 and UNSL\#1 models in both tasks. \textit{M}: number of past predictions; \textit{T}: tolerance; \textit{Threshold}: the limit probability to predict a positive user; \textit{min\_delay}: the moment when the rule starts to apply.
0.751, a similar performance to the teams' average. According to the latency metrics (Table 5), the best ERDE-30 was obtained by CIMAT-NLP-GTO#0 with 0.018, followed by UNSL#1 (0.045) and CIMAT-NLP-GTO#1 (0.065). The best latency-weighted F1 results were achieved by CIMAT-NLP-GTO#0 (0.863), BaseLine-RobertaLarge#1 (0.792), and UNSL#1 (0.776), while the best ERDE-5 was obtained by BaseLine-RobertaLarge#1 with 0.163. The UNSL#0 model achieved a better ERDE-30 than the mean and median of the proposals. In summary, the most outstanding models for Task 1 were CIMAT-NLP-GTO#0 and UNSL#1, achieving notable performance in classification and latency.
than the mean and median among all the teams for the ERDE-30 metric. In summary, the best models for this task were SINAI-SELA#0 and UNSL#1. Our model achieved the best classification results and remarkable latency performance with the second-best ERDE-30.
Finally, the performance of our proposals in terms of efficiency metrics for Tasks 1 and 2 is shown in Table 8. It is observed that UNSL#1 and UNSL#0 outperformed the mean among all the proposals, demonstrating the capability to solve both tasks while minimizing resource requirements and reducing environmental impact.
\begin{table}
\begin{tabular}{c c c c c c} \hline
\textbf{Ranking} & \textbf{Model\#Run} & \textbf{Accuracy} & \textbf{Macro-P} & \textbf{Macro-R} & \textbf{Macro-F1} \\ \hline
1 & UMUTeam\#0 & \textbf{0.738} & 0.756 & 0.749 & \textbf{0.737} \\
2 & UNSL\#1 & \textbf{0.738} & \textbf{0.791} & \textbf{0.756} & 0.733 \\
3 & UNSL\#0 & 0.732 & 0.752 & 0.742 & 0.731 \\ \hline
5 & SINAI-SELA\#0 & 0.725 & 0.775 & 0.742 & 0.720 \\ \hline
 & \textit{Mean} & 0.617 & 0.710 & 0.637 & 0.579 \\
 & \textit{Median} & 0.631 & 0.731 & 0.658 & 0.616 \\ \hline
\end{tabular}
\end{table}
Table 6: Classification-based evaluation results for Task 2 (subtask A). The best teams according to the Accuracy, Macro-P, Macro-R, and Macro-F1 metrics are shown (values in bold), as well as the \textit{mean} and \textit{median} values of the results report for MentalRiskES 2023. The SINAI-SELA\#0 model is also included due to its results on latency-based metrics.
\begin{table}
\begin{tabular}{c c c c c c c} \hline
\textbf{Ranking} & \textbf{Model\#Run} & \textbf{ERDE-5} & \textbf{ERDE-30} & \textbf{latencyTP} & \textbf{speed} & \textbf{latency-weighted F1} \\ \hline
1 & SINAI-SELA\#0 & 0.395 & \textbf{0.140} & 4 & 0.951 & \textbf{0.720} \\
2 & UNSL\#1 & 0.567 & 0.148 & 14 & 0.791 & 0.609 \\
3 & BaseLine-Deberta\#0 & 0.303 & 0.153 & 2 & 0.984 & 0.719 \\ \hline
8 & VICOM-nlp\#2 & \textbf{0.275} & 0.173 & 2 & 0.984 & 0.706 \\
14 & UNSL\#0 & 0.551 & 0.188 & 14 & 0.791 & 0.591 \\ \hline
 & \textit{Mean} & 0.383 & 0.232 & 8 & 0.902 & 0.599 \\
 & \textit{Median} & 0.362 & 0.205 & 3 & 0.967 & 0.627 \\ \hline
\end{tabular}
\end{table}
Table 7: Latency-based evaluation results for Task 2 (subtask A). The best teams according to the ERDE-5, ERDE-30, and latency-weighted F1 metrics are shown (values in bold), together with the \textit{mean} and \textit{median} values of the results report for MentalRiskES 2023. The second- and third-best teams are also included.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline
\textbf{Task} & \textbf{Model\#Run} & \textbf{Duration (min)} & \textbf{Emissions} & \textbf{CPU energy} & \textbf{GPU energy} & \textbf{RAM energy} & \textbf{Consumed energy} & \textbf{CPU count} & \textbf{GPU count} & \textbf{RAM total size} \\ \hline
\multirow{3}{*}{\textbf{1}} & \textbf{UNSL\#1} & 4.640 & 2.78E-05 & 6.13E-05 & 0 & 1.34E-06 & 6.26E-05 & 16 & 1 & 23.545 \\
 & \textbf{UNSL\#0} & 4.631 & 2.77E-05 & 6.11E-05 & 0 & 1.34E-06 & 6.24E-05 & 16 & 1 & 23.545 \\
 & \textit{Mean} & 50.840 & 38.45E-05 & 33.66E-05 & 48.9E-05 & 4.66E-06 & 83.02E-05 & 38 & 4 & 164.198 \\ \hline
\multirow{3}{*}{\textbf{2}} & \textbf{UNSL\#1} & 3.349 & 2.01E-05 & 4.42E-05 & 0 & 9.98E-07 & 4.52E-05 & 16 & 1 & 23.545 \\
 & \textbf{UNSL\#0} & 3.347 & 2.01E-05 & 4.42E-05 & 0 & 9.98E-07 & 4.51E-05 & 16 & 1 & 23.545 \\
 & \textit{Mean} & 34.704 & 50.37E-05 & 38.78E-05 & 129.97E-05 & 1.17E-05 & 169.92E-05 & 29 & 3 & 123.458 \\ \hline
\end{tabular}
\end{table}
Table 8: Efficiency metrics for Tasks 1 and 2. The \textit{mean} among all proposals for each metric is shown. Both UNSL runs used an AMD Ryzen 7 1700K Eight-Core Processor as CPU and 1 x GeForce GTX 1080 Ti as GPU.
### Error analysis
Analyzing the proposals of our team, UNSL#1 obtained better performance than UNSL#0 in both tasks. As an illustrative example, Figure 1 shows the evaluation of two users for Task 2, where UNSL#1 correctly classified the users misclassified by UNSL#0. Furthermore, it can be observed that UNSL#1 tends to minimize the probabilities for the negative user (Figure 1a) and maximize those for the positive user (Figure 1b). It also shows that UNSL#1 detected the positive user at post 23 (decision delay = 23), a reasonable instance considering the number of user posts.
Considering the latency-based metrics, our proposals demonstrated satisfactory results, particularly in the ERDE-30 and latency-weighted F1 metrics. However, the results for ERDE-5 were less favorable. It would be interesting to explore potential strategies to enhance the performance of the models for ERDE-5 without compromising the other metrics. This could
Figure 1: Comparison between UNSL#0 and UNSL#1 to evaluate users of Task 2. The graphics show the evaluation of each model on a negative (a) and positive (b) user. The rounds of posts (_number of publication_) are observed on the \(x\)-axis, and the model probability at each prediction instance is on the \(y\)-axis. The dashed lines show the moment of the model’s final decision: green (correct prediction) and red (incorrect prediction).
involve optimizing the classification performance of the models and aligning them with the decision policy proposed in this work. Additionally, it would be worth analyzing new decision policies prioritizing speed and efficiency.
Finally, considering the mean values among all the teams, it is observed that Task 2 was more challenging than Task 1. This was probably due to the level of subjectivity with which users expressed themselves in each domain, which may have impacted the performance of the models. For example, the post _"Me gustaría poder comer sin sentir culpa como antes"_ (I wish I could eat without feeling guilty like before) could be linked to a user at risk for an eating disorder; however, for the text _"Esta semana fue difícil para mí"_ (This week was hard for me), it would be hasty to associate it directly with a user with depression.
## 4 Conclusion
In this first edition of the MentalRiskES challenge, our research group solved Tasks 1 and 2. We applied the BETO model by extending its vocabulary with important words, and we used a decision policy based on a historic rule to detect users with depression and eating disorders as early as possible. The method obtained excellent results, demonstrating its effectiveness and consistency in solving these problems in a challenging and underexplored language such as Spanish.
As future work, the classification models could be refined, analyzing the important words considered to extend the vocabulary, improving the representation of the analyzed instances during user evaluation, and testing the performance of other classification models. Furthermore, it would be interesting to evaluate other decision policies to improve the performance of the models in terms of latency.
|
2307.12318 | Frequencies analysis of the hybrid delta Sct-gamma Dor star
CoRoT-102314644 | Observations from space missions have allowed significant progress in many
scientific domains due to the absence of atmospheric noise contributions and
having uninterrupted data sets. In the context of asteroseismology, this has
been extremely beneficial because many oscillation frequencies with small
amplitudes, not observable from the ground, can be detected. One example of
this success is the large number of hybrid delta Sct-gamma Dor stars
discovered. These stars have radial and non-radial p- and g-modes
simultaneously excited to an observable level allowing us to probe both the
external and near-to-core layers of the star. We analyse the light curve of
hybrid delta Sct-gamma Dor star CoRoT ID 102314644 and characterise its
frequency spectrum. We detected 29 gamma Dor type frequencies in the range
[0.32-3.66] cycles per day (c/d) and a series of 6 equidistant periods with a
mean period spacing of DeltaPi=1612 s. In the delta Sct domain we found 38
frequencies in the range 8.63-24.73 c/d and a quintuplet centred on the
frequency p_1=11.39 c/d and derived a possible rotational period of 3.06 d. The
frequency analysis of this object suggests the presence of spots at the stellar
surface, nevertheless we could not dismiss the possibility of a binary system.
The initial modelling of the frequency data along with external constraints has
allowed us to refine its astrophysical parameters giving a mass of
approximately 1.75 solar masses, a radius of 2.48 solar radii and an age of
1241 Myr. The observed period spacing, a p-mode quintuplet, the possible
rotation period and the analysis of the individual frequencies provide
important input constraints for the understanding of different transport
phenomena in A-F-type stars.[abridged] | Julieta Sánchez Arias, Orlagh Louise Creevey, Eric Chapellier, Bernard Pichon | 2023-07-23T13:14:48Z | http://arxiv.org/abs/2307.12318v1 | # Frequencies analysis of the hybrid \(\delta\) Sct-\(\gamma\) Dor star CoRoT-102314644.
###### Abstract
Context:Observations from space missions have allowed significant progress in many scientific domains due to the absence of atmospheric noise contributions and having uninterrupted data sets. In the context of asteroseismology, this has been extremely beneficial because many oscillation frequencies with small amplitudes, not observable from the ground, can be detected. One example of this success is the large number of hybrid \(\delta\) Sct-\(\gamma\) Dor stars discovered. These stars have radial and non-radial \(p\)- and \(g\)-modes simultaneously excited to an observable level allowing us to probe both the external and near-to-core layers of the star.
Aims:We analyse the light curve of hybrid \(\delta\) Sct-\(\gamma\) Dor star CoRoT ID 10231464 and characterise its frequency spectrum. Using the detected frequencies, we perform an initial interpretation developing stellar models.
Methods:The frequency analysis is obtained with a classical Fourier analysis through the Period04 package after removing residual instrumental effects from the CoRoT light curve. Detailed analysis on the individual frequencies is performed by using phase diagrams and other light curve characteristics. An initial stellar modelling is then performed using the Cesam2k stellar evolution code and the GYRE pulsation code, considering adiabatic pulsations.
Results:We detected 29 \(\gamma\) Dor type frequencies in the range \([0.32-3.66]\) cycles per day (c/d) and a series of 6 equidistant periods with a mean period spacing of \(\Delta\Pi=1612\) s. In the \(\delta\) Sct domain we found 38 frequencies in the range \([8.63-24.73]\) c/d and a quintuplet centred on the frequency \(p_{1}=11.39\) c/d and derived a possible rotational period of 3.06 d. The frequency analysis of this object suggests the presence of spots at the stellar surface, nevertheless we could not dismiss the possibility of a binary system. The initial modelling of the frequency data along with external constraints has allowed us to refine its astrophysical parameters giving a mass of approximately 1.75 \(\mathcal{M}_{\odot}\), a radius of 2.48 \(\mathcal{R}_{\odot}\) and an age of 1241 Myr.
Conclusions:The observed period spacing, a \(p\)-mode quintuplet, the possible rotation period and the analysis of the individual frequencies provide important input constraints for the understanding of different phenomena such as the transport of angular momentum, differential rotation and magnetic fields operating in A-F-type stars. Nevertheless, is fundamental to accompany photometric data with spectroscopic measurements in order to distinguish variations between surface activity from a companion.
## 1 Introduction
In the last decade, several space missions such as the COnvection ROtation and planetary Transits (CoRoT) satellite (Auvergne et al., 2009) and NASA's Kepler space telescope (Borucki, 2016), have revolutionised asteroseismology, thanks to their high-precision allowing the detection of very small amplitude modes that are not detectable from ground-based instruments. Indeed \(\delta\) Sct stars have been known for many decades now due to the high amplitude of some of their oscillation modes which reach up to tenths of a magnitude, while \(\gamma\) Dor stars are known only since 1999 (Kaye et al., 1999) and thanks to uninterrupted data from space it was possible the detection of their low amplitude periodicities near one day (Aerts et al., 2010). The existence of hybrid \(\delta\) Sct-\(\gamma\) Dor stars has been known since 2002 (Handler et al., 2002). Their unique character of exhibiting both radial and non-radial pressure (\(p\)) oscillation modes typical of \(\delta\) Sct variable stars, and gravity (\(g\)) pulsation modes characteristic of \(\gamma\) Dor variable stars simultaneously allows one to probe their stellar structure from the core to the envelope.
The \(\delta\) Sct stars lie on and above the main sequence with masses of \(1.5-2.5M_{\odot}\) approximately and spectral types between A2 and F5. They exhibit radial and non-radial \(p\)- and \(g\)- modes driven by the \(\kappa\) mechanism operating in the He II partial ionisation zone (Baker & Kippenhahn, 1962) and the turbulent pressure acting in the hydrogen ionisation zone (Antoci et al., 2014).
The \(\gamma\) Dor variables are generally cooler than \(\delta\) Sct stars, with \(T_{\rm eff}\) centred between 6700 K and 7400 K (spectral types between A7 and F5) and masses in the range 1.5 to 1.8 \(M_{\odot}\) approximately (Catelan & Smith, 2015). They pul
sate in low-degree, high-order \(g\) modes apparently driven by a flux modulation mechanism called convective blocking and induced by the outer convective zone (Guzik et al., 2000; Dupret et al., 2004; Grigahcene et al., 2005). The high-order g modes (\(n\gg 1\)) excited in these stars, allow the use of the asymptotic theory (Tassoul, 1980) and the departures from uniform period spacing to explore the possible chemical inhomogeneities in the structure of the convective cores (Miglio et al., 2008).
The aforementioned distinction between \(\delta\) Sct and \(\gamma\) Dor stars is a topic of debate. Diverse studies on samples of \(\delta\) Sct and \(\gamma\) Dor stars suggest that the hybrid behaviour on these stars is very common (Grigahcene et al., 2010; Uytter-hoveen et al., 2011; Bradley et al., 2015; Balona et al., 2015). Moreover, in 2016, Xiong et al. (2016) calculated a theoretical instability strip using a non-local and time-dependent convection theory and concluded that the \(\kappa\) mechanism operates significantly in warm \(\delta\) Sct and \(\gamma\) Dor stars while the coupling between convection and oscillations is responsible for excitation in cool stars. Furthermore, the instability strips of \(\delta\) Sct and \(\gamma\) Dor stars partially overlap in the Hertzprung-Russell (HR) diagram (see, for instance, Fig. 1 of Grigahcene et al., 2010), explaining the existence of hybrid \(\delta\) Sct-\(\gamma\) Dor stars. As we mentioned, the simultaneous presence of both \(g\) and \(p\) non-radial, along with radial excited modes, allows one to place strong constraints on the whole interior structure. In addition, some of these objects show rapid rotation, making these objects excellent targets for modelling stellar structure and to test different physical phenomena such as the effect of angular transport induced by rotation (Aerts et al., 2019; Ouazzani et al., 2019).
Although a significant number of hybrid \(\delta\) Sct-\(\gamma\) Dor stars is currently known (Grigahcene et al., 2010; Balona, 2014), the analysis of low frequencies in A-F stars still represents a challenge due to the different origins that these frequencies can have, e.g. spots, field stars contaminating the light apertures of the main target, a companion forming a non-eclipsing binary system, Rossby modes usually present in moderate to rapid rotating stars and more (Li et al., 2019; Chowdhury et al., 2018; Saio et al., 2018). Our aim in this paper is to present for the first time a complete observational analysis of the light curve and the frequencies of the hybrid \(\delta\) Sct-\(\gamma\) Dor CoRoT 102314644 along with the corresponding interpretation.
The paper is laid out as follows: both literature and CoRoT data are presented in Sect. 2, followed by the description of the frequency analysis in Sec. 3. Detailed analysis of the frequencies including their mode identification is then presented and discussed in Sect. 4. An initial interpretation of the oscillation modes with stellar models is presented in Sect. 5, and we then conclude in Sect. 6.
## 2 Literature data
### Known stellar quantities from the literature
CoRoT 102314644 (\(V\sim 12.2\), \(\alpha=\) 6h10m26.73s and \(\delta=\) +4deg18'12.19") was observed during the third CoRoT long run, LRa03, which targeted the Anti-Galactic centre (see Fig 1). The observations lasted 148 days from 2009, October 10th to 2010, March 1st. The EXODAT database (Deleuil et al., 2009) indicates the star has an A5V spectral type and 2MASS photometry of \(J\)=11.394, \(H\)=11.18, \(K=\) 11.131. It also indicates a star with reddening of \(E(B-V)=0.4\) mag, however, more recently, Lallement et al. (2019) estimated \(E(B-V)=0.248\pm 0.079\) mag based on the distance of the star 1. The sky map given by the CoRoT database is shown in Fig. 1 upper panel which clearly identifies the target. We also give a wider angle sky map showing our target at the centre and the positions of Gaia Data Release (GDR2/GDR3) identified sources (Gaia Collaboration et al., 2018, 2021, 2022).
Footnote 1: [https://stillism.obspm.fr/reddening?frame=galacticvlong=204.3733&ulong=deg&vlat=-7.104248&ulat=deg&valid=](https://stillism.obspm.fr/reddening?frame=galacticvlong=204.3733&ulong=deg&vlat=-7.104248&ulat=deg&valid=)
The photometry and various identifications of the star are given in Table 1.
### Fundamental stellar parameters
Gaia eDR3 also provides additional properties of the star: its parallax \(\pi\), its radial velocity \(v_{\rm rad}\) and photometry \(G\), \(G_{\rm BP}\) and \(G_{\rm RP}\), given in Table 1. For \(\pi\) we applied the recommended parallax zero-point correction of -0.027 mas based on the magnitude, colour and sky position of the star (Lindegren et al., 2021). Using the extinction, we dereddened the photometry and used the colour-\(T_{\rm eff}\) relations from Casagrande et al. (2020) to derive \(T_{\rm eff}\). To convert the extinction from E(B-V) to other bands, we assumed a reddening law R = 3.1 and we used the coefficients from Danielski et al. (2018). The colour-\(T_{\rm eff}\) relations require \(\log g\) and [Fe/H] as input, and so we used \(\log g=3.9\) (see below) and assumed solar metallicity in the absence of literature values. Then, using \(G\), extinction \(A_{G}\), the parallax and a bolomet
Figure 1: Upper: Star map showing the star’s position and coordinates, from the ExoDAT database. Lower: Star map showing a slighter wider view, showing also the Gaia DR2 identified sources (red).
ric correction, we calculated the luminosity, \(L\). Using the Stefan-Boltzmann law with these values we estimated the stellar radius. Finally, using an estimate of mass between 1.7 and 2.1 \(M_{\odot}\) we calculated a surface gravity of 3.9 \(\pm\) 0.1 using the derived radius.
\(T_{\rm eff}\) and \(L\) are highly correlated because they both depend on the extinction value. To calculate the uncertainties and correlations in the \(T_{\rm eff}\)\(L\) plane, we performed simulations where we perturbed the input values (\(E(B-V)\), \(\pi\), \(G\), \(G_{\rm BP}\), and \(G_{\rm BP}\)) by their errors. Then we propagated these perturbed values to the \(T_{\rm eff}\), \(L\), radius, and \(\log g\). The values obtained for \(L\) and \(T_{\rm eff}\)are in agreement with the assumption of the star being a hybrid \(\gamma\) Dor-\(\delta\) Scuti. The derived values and their 1-D uncertainties are: \(L_{\star}=13.6\pm 2.9\)\(L_{\odot}\); \(T_{\rm eff}=7065\)\(\pm\) 460 and \(R_{\star}=2.27\pm 0.07\)\(R_{\odot}\). In our interpretation of the models in Sect. 5 we used these values as a first approximation to constrain the models2.
Footnote 2: Since the finalisation of the work, Gaia DR3 proposes \(L/L_{\odot}=11.9\pm 0.4\) and \(T_{eff}=6842^{+300}_{-200}\) K which are in good agreement with ours, and the slight differences have little impact on the results.
### CoRoT Light curve
We followed a similar analysis of this CoRoT light curve to that performed in Chapellier et al. (2012) and Chapellier & Mathias (2013). We used the reduced N2 light curves from Auvergne et al. (2009). The light curve consists of a total of 386 381 measurements obtained with a temporal resolution of 32 s. We retained only 342 598 points, those flagged as "0" by the CoRoT pipeline that were not affected by instrumental effects such as stray-light or cosmic rays. We then corrected the measurements by long-term trends (systematic trends). Individual measurements considered outliers (primarily high-flux data points caused by cosmic ray impacts) were removed by an iterative procedure. We retained a total of 340 257 measurements in total, which gives an approximate frequency resolution of 0.008 c/d.
The resulting light curve is represented at different timescales in Fig. 2. The amplitude has been calculated by converting from flux to magnitudes and subtracting the mean. The timescale is labelled in units of the CoRoT Julian day (JD), where the starting CoRoT JD corresponds to HJD 2445545.0 (2000, January 1st at UT 12:00:00). On the top panel, we show the full corrected light curve spanning 148 days. In the middle and lower panels, we show 20 and 5 days time spans, respectively. Here we can distinguish two kinds of periodic time scales: one corresponding to low frequencies, characteristic of \(\gamma\) Dor stars (middle panel), and one due to higher frequencies, which are characteristic of the \(\delta\) Sct star (lower panel).
## 3 Light curve analysis
We analysed the frequency content of the light curve using the package Period04 (Lenz & Breger, 2005). We searched frequencies in the interval [0;100] c/d. For each detected frequency, the amplitude and the phase were calculated by a least squares sine fit. The data were then cleaned of this signal (this is known as pre-whitening) and a new analysis was performed on the residuals. This iterative procedure was continued until we reached the signal to noise (S/N) equal to 5.2 as it is recommended (Baran & Koen, 2021). The first Fourier transform in the range 0 - 30 c/d is depicted in Fig.3, with the y-axis showing amplitude.
We eliminated frequencies lower than 0.25 c/d. These correspond to trends in the CoRoT data (Chapellier et al., 2012), and the satellite orbital frequency (\(f_{\rm sat}=13.97213\) c/d) along with its harmonics. In addition, small-amplitude frequencies with a separation from large-amplitude frequencies less than the frequency resolution were ignored. These smaller amplitude frequencies are not real and are due to the spectral window or to amplitude or frequency variability of the pulsations during the observations (Bowman et al., 2016).
As a result, we obtained a total of 68 stellar frequencies. The first 10 frequencies with the highest amplitude are shown in Table 2 and the complete list with uncertainties is given in Tables 1 and 2.
We also included in Tables 1 and 2 an identity for each frequency (see next Section). Briefly, we identified two ranges of frequencies: \(\delta\) Sct and \(\gamma\) Dor frequency ranges, which are labelled with "p" and "g", respectively; and the frequency with the highest amplitude in each range has the sub-index "1" and subsequent frequencies with lower amplitudes are labelled with increasing sub-index.
The uncertainties in the frequencies were calculated by performing Monte-Carlo-like simulations on the light curve and recalculating the frequency content of each simulated light curve. More concretely, we created a fake signal \(s_{j}\) by adding background noise to the original signal. We calculated the periodogram and then fit the individual frequencies of the simulated periodogram. The fit to each frequency
\begin{table}
\begin{tabular}{l l r} \hline \hline Parameter & Value & Ref. \\ \hline Id & CoRoT 102314644 & \\ & GDR2 3317411131453435008 & \\ & GEDR3 3317411131453435008 & \\ & USNO-A2 0900-02423283 & \\ & 2MASS 06102674+0418122 & \\ \(\alpha\) [deg] & 92.611376 & 1 \\ \(\delta\) [deg] & +4.303372 & 1 \\ \(\alpha\) [hr mn ss] & 6h 10m 26.73 s & \\ \(\delta\) [hr mn ss] & +4h 18m 12.19s & \\ \(l\) [deg] & 204.373325 & \\ \(b\) [deg] & –7.104326 & \\ Spectral Type & A5V & 2 \\ E(B-V) [mag] & 0.248 \(\pm\) 0.079 & 3 \\ \(C\) [mag] & 12.3779 & \\ \(R\) [mag] & 12.3779 & \\ \(J\) [mag] & 11.394 \(\pm\) 0.023 & \\ \(H\) [mag] & 11.18 \(\pm\) 0.023 & \\ \(K\) [mag] & 11.131\(\pm\) 0.023 & \\ \(G\) [mag] & 12.451 & 1 \\ \(G_{BP}\) [mag] & 12.7584 & 1 \\ \(G_{RP}\) [mag] & 11.977113 & 1 \\ \(G_{BP}-G_{RP}\) [mag] & 0.781295 & \\ \(v_{\rm rad}\) [km s\({}^{-1}\)] & 32.9 \(\pm\) 10.2 & 4 \\ \(\pi_{GEDR}\) [mas] & 0.988 \(\pm\) 0.013 & 1 \\ \(\pi_{sys}\) [mas] & –0.271 & 5 \\ \hline \hline \end{tabular} References: 1Gaia Collaboration et al. (2021), 2Deleuil et al. (2009), 3Lallement et al. (2019), 4Gaia Collaboration et al. (2018), 5Lindegren et al. (2021)
\end{table}
Table 1: Identification and literature data for CoRoT 102314644.
\(f_{j,i}\), where \(i\) runs over the list of independent frequencies, was retained for each \(j\) = 1,... \(N\) simulation. We used \(N\) = 500 as this provided a good balance between computation time and enough sampling. We then analysed the resulting distributions of each \(f_{i}\), by calculating the 68%, 95% and 99.7% confidence intervals. We checked first that these values scaled roughly as we expect them to. We report the 99.7% interval (\(\sim\pm 3\sigma\)) in the second column in Tables 1 and 2.
## 4 Analysis of extracted frequencies
We analyze the frequencies derived in Sect. 3 and we distinguish four main regimes to discuss: \(\delta\) Sct type frequencies, \(\gamma\) Dor type periods, a regime with a coupling of "p" and "g" modes, and frequencies whose nature we discussed in terms of surface activity or gravitational effects provoked by a companion. One of the tools we used for the analysis
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & Frequency & Amplitude & Phase & Ident \\ & [c/d] & [mmag] & \(\Phi\)[rad] & \\ \hline \(F_{1}\) & 11.39107 & 8.680 & 0.991701 & \(p_{1}\) \\ \(F_{2}\) & 0.65259 & 4.470 & 0.819798 & \(2f_{rot}\) \\ \(F_{3}\) & 11.89972 & 3.726 & 0.572764 & \(p_{2}\) \\ \(F_{4}\) & 1.00595 & 2.002 & 0.601673 & \(g_{1}\) \\ \(F_{5}\) & 0.87286 & 1.881 & 0.772675 & \(g_{2}\) \\ \(F_{6}\) & 0.90251 & 1.522 & 0.581354 & \(g_{3}\) \\ \(F_{7}\) & 0.93445 & 1.496 & 0.237861 & \(g_{4}\) \\ \(F_{8}\) & 0.32629 & 1.374 & 0.489074 & \(f_{rot}\) \\ \(F_{9}\) & 0.88683 & 1.238 & 0.682482 & \(g_{5}\) \\ \(F_{10}\) & 11.25403 & 1.165 & 0.117229 & \(p_{3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: List of the first ten frequencies with the highest amplitudes.
Figure 3: First Fourier transform of CoRoT 102314644.
Figure 2: Light curve of the star CoRoT 102314644 corrected for long-term trends and outliers (see text) for different timescales. From top to bottom, the complete light curve over 148d, then a set over 20 d and finally a zoom into 5 d subset.
of the frequencies is the phase diagram. The construction of these diagrams consists in taking all the observations and folding the light curve modulo a single standardized period (in time). Each time point is then assigned a phase with respect to this chosen period, and it takes a value of between 0 and 1, (\(0<\phi<1\)). All measurements are then plotted with phase as the independent variable.
### Spots or binarity?
We noted that the first low frequency \(F_{2}=0.65259\) c/d with \(A=4.47\) mmag has a half frequency harmonic \(F_{8}=0.32630\) c/d with \(A=1.37\) mmag. Such a combination of a frequency and a lower amplitude half frequency corresponds to a double wave curve typical for spotted or eclipsing stars (see e.g Paunzen et al., 2017). Figure 4 shows the phase diagram corresponding to \(F_{8}=0.32630\) c/d after removing all frequencies corresponding to pulsation modes (see Sect. 4.2 and 4.3). It clearly shows a double wave curve which can be explained in terms of spots or a companion of an ellipsoidal variable, assuming that \(F_{8}=0.32630\) c/d is the orbital frequency. In the case of spots, the star appears slightly fainter when a large dark spot is on the visible side, and slightly brighter when it is not. Note that the phase diagram corresponding to the rotation frequency in a regular single star without pulsation frequencies or surface activity, should be flat. A similar effect would be produced by a companion in an ellipsoidal variable system. These systems are non-eclipsing close binaries whose components are distorted by their mutual gravitation and the variations observed in the light curve are due to the changing variations are therefore due to the changing cross-sectional areas and surface luminosities that the distorted stars present to the observer at different phases (Morris, 1985).
We explored the possibility of being in the presence of one of these systems. We followed the equations in Morris (1985) assuming \(P=3.06d\), \(R_{1}=2.27R_{\odot}\) as derived in Sec. 2.2, \(M_{1}=1.75M_{\odot}\), \(\tau=0.2\) and \(\mu=0.4\) from Claret & Bloemen (2011) and \(\Delta m=0\). We found possible solutions for a mass companion, resulting impossible to dismiss this hypothesis, for example, \(M_{2}=0.7M_{\odot},1.4M_{\odot}\) for \(A=12R_{\odot},13R_{\odot}\) respectively, being \(A\) the semimajor axis.
With the aim to explore the existence of spots, we examined the behaviour of the star over several rotational periods, assuming a rotational frequency equal to \(F_{8}=0.32630\) c/d (\(\sim 3.06466\) d). We binned the data of the light curve in groups of ten measurements by assigning the average in time and magnitude to each group, and then we pre-whitened the data with all the pulsational frequencies. The result is presented in Fig. 5 for the duration of 3 rotational periods, each of them separated with horizontal lines. Two phenomena are present: amplitude variations from one orbit to another and moving bumps. The moving bumps might be explained by spots located at different latitudes. Additionally, the changes shown in Fig. 5 can be due to spots with a short lifetime. In the Sun, for example, the lifetime of the spots can vary between hours to months and it is known that they usually migrate (Solanki, 2003). Besides, it has been shown that for hot stars the lifetime tends to decrease, especially for those stars with short rotational periods (Giles et al., 2017) as the case of CoRoT 102314644. This suggests that CoRoT 102314644 can be a spotted star with a rotation period of \(P_{rot}=3.0647\) d. Nevertheless, we found frequencies (\(F_{49}\), \(F_{55}\) and \(F_{64}\)) that are linear combination of \(f_{rot}\) and this strongly suggest that the origin of \(f_{rot}\) is not surface activity (Kurtz et al., 2015) but, possibly, the beating of undetected pulsation frequencies. In order to determine properly the origin of these variabilities, spectroscopic measurements are required.
### \(\gamma\) Doradus domain
We found a total of 29 frequencies in the range of 0.3262 - 3.6631 c/d. From these frequencies, those we consider g-modes oscillations are labelled as "g" modes in Tables 1 and 2. The frequency with the highest amplitude in this domain, after \(F_{2}=2f_{rot}\), is \(F_{4}=1.0059\) c/d with \(A=2.0\) mmag.
Light variabilities from orbital or rotational variation are typically non-sinusoidal, thus, in order to distinguish between possible real \(g\)-modes and the frequencies corresponding to the spots in this domain, we analyse the phase diagram for each frequency. The phase diagrams for typical \(g\) and \(p\) modes frequencies have a sinusoidal behaviour. For instance, in Fig. 9 we have folded the light curve at the period corresponding to \(F_{1}\), and here we can clearly observe
Figure 4: Phase diagram using the rotational frequency \(f_{\rm rot}=0.326\) c/d after removing of all the pulsational frequencies.
Figure 5: Extract of the light curve corresponding to three rotational periods separated with vertical lines. We used the residuals after removing all pulsational frequencies of the binned data.
sinusoidal behaviour. This suggests that \(F_{1}\) is an oscillation eigenmode. On the other hand, for \(F_{18}=0.4638\) c/d, a non-sinusoidal can be spotted. In Fig. 6 the phase diagram for \(F_{18}\) for different amplitude scales is depicted. It seems that there is a maximum around 0.1 and a minimum between 0.3 and 0.4. This suggests that \(F_{18}\) may corresponds to periods related to spots. Nevertheless, we note that this test provides only hints about the origin of the frequency and is not conclusive. In fact, if \(F_{18}\) were originated by spots, it would imply over 40% in differential rotation, which is a value slightly high for A-F stars (Reinhold et al. 2013).
Considering \(F_{18}\) as originated from spots and dismissing the rotational frequency and its harmonics, we retain a total of 26 frequencies in the \(\gamma\) Doradus domain, possibly \(g\)-modes, depicted in black in Fig. 7. In addition, we searched for frequency combinations in this range, but no frequency couplings or splittings were found among these \(g\)-modes. We labelled the frequencies '\(F_{k}\)' as a combination of frequencies after finding a fit of at least two significant digits among all the possible combinations of type '\(mF_{i}\pm nF_{j}\)', for the given frequency '\(F_{k}\)'.
Hybrid \(\delta\) Sct-\(\gamma\) Dor stars, as well as \(\gamma\) Dor stars, are characterised by having high-order \(g\) modes. For these modes, with high radial order (\(k\)) and long periods, the separation of consecutive periods (\(|\Delta k|=1\)) becomes nearly constant and it depends on the harmonic degree (\(\ell\)), given the asymptotic theory of non-radial stellar pulsation (Tassoul 1980) in which the asymptotic period spacing is:
\[\Delta\Pi_{l}=\frac{\Pi_{0}}{\sqrt{\ell(\ell+1)}}, \tag{1}\]
with
\[\Pi_{0}=2\pi^{2}\left(\int_{r_{1}}^{r_{2}}N\frac{dr}{r}\right)^{-1}, \tag{2}\]
where r is the distance from the stellar centre, N is the Brunt-Vaisala frequency and \(r_{1}\) and \(r_{2}\) are the boundaries of the propagation region.
Motivated by this fact, we searched for equidistant \(\gamma\) Dor periods, by analysing the differences between all the periods found in the \(\gamma\) Dor domain. We found a series of 6 equidistant periods with a mean separation of \(\Delta\Pi=1621\) sec (see Table 3). These periods correspond to \(g\)-modes of the same harmonic degree \(\ell\) and consecutive radial orders \(k\). The asymptotic series is depicted in Fig 8. In the top panel of this figure, we show the periods (II) versus an arbitrary radial order (\(k\)). We can see that these periods are almost equally spaced forming a line. In the bottom panel of this figure, we show the forward period spacing (\(\Delta\Pi=\Pi_{k+1}-\Pi_{k}\)) versus \(k\), and we denote the corresponding average period spacing with the red horizontal continuous line. According to Van Reeth et al. (2016), the value we found is more likely to correspond to an asymptotic series with \(\ell=2\). In this paper the authors determine values of about 3100 s and 1800 s for the asymptotic period spacing calculated with \(\ell=1\) and \(\ell=2\) respectively, employing Eq. 1 and 2. In fact, our models predict a harmonic value \(\ell=2\) for this series.
### \(\delta\) Scuti domain
In the \(\delta\) Scuti domain, we found a total of 38 frequencies in the range 8.6 - 24.73 c/d. The highest amplitude frequency in this range is \(F_{1}=11.3910\) c/d with \(A=0.008\) mag. A phase diagram folded with this frequency shows sinusoidal behaviour (Fig. 9), indicating thus that \(F_{1}\) is an eigenmode.
Stellar rotation induces rotational splitting of the frequencies in the pulsation spectra. Considering rigid rotation
Figure 6: Data phased with \(F_{18}=0.46385\) c/d, a frequency possible related to spots in the \(\gamma\) Doradus domain using different scale rages.
Figure 7: Amplitude versus frequency in the \(\gamma\) Dor range of [0:4] c/d. Black lines represent all the \(g\)-mode frequencies found in this range. Grey data corresponds to the frequency spectrum obtained from the FT.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Period & A & Ident \\ & [sec] & [mmag] & \\ \hline \(F_{14}\) & 90878.5 & 0.841 & \(g_{8}\) \\ \(F_{7}\) & 92460.8 & 1.496 & \(g_{4}\) \\ \(F_{32}\) & 94061.3 & 0.387 & \(g_{19}\) \\ \(F_{6}\) & 95733.0 & 1.522 & \(g_{3}\) \\ \(F_{9}\) & 97425.6 & 1.238 & \(g_{5}\) \\ \(F_{5}\) & 98984.9 & 1.881 & \(g_{2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: List of the six periods of the asymptotic series.
and the first-order perturbation theory, the components of the rotational multiplets are:
\[\nu_{nlm}=\nu_{nl}+m(1-C_{nl})\frac{\Omega}{2\pi} \tag{3}\]
where \(\nu_{ln}\) is the central mode of the multiplet and \(\Omega/2\pi\) is the rotational frequency. We found a quintuplet centred on \(p_{1}=F_{1}\) (see Table 5), which clearly indicates that this frequency is a non-radial mode with \(\ell=2\). The differences between the central mode and the components of the quintuplets are given in the last column of Table 5. Considering \(C_{nl}\approx 0\) for \(p\) modes, we find a very good agreement with the value for \(f_{rot}=0.32629\) c/d derived in Sec. 4.1. However, this match does not dismiss the possibility of CoRoT 102314644 being an ellipsoidal variable. In fact, an alternative interpretation of this splitting would be tidally deformed oscillation modes that have variable amplitude over the orbit, in case 0.32629 c/d is indeed a binary orbital period.
We also found 4 combinations between \(p\) modes exclusively, and the harmonics for \(p_{1}\) and \(p_{2}\) (see Table 6). The linear combination between two frequencies, yields a third frequency whose amplitude is smaller than those that form it. It is important to distinguish between mode-coupled frequencies from "pure" frequencies because when developing asteroseismic modelling, only frequencies that come from pulsation, i.e. "pure" frequencies can be accurately calculated and thus used.
Removing the couplings, the harmonics and the splitting corresponding to \(p_{1}\), we retain a total of 15 independent frequencies in the range of 10.9 - 21.4 c/d, depicted in black in Fig. 10.
### \(P\) and \(g\) modes combinations
The coupling between \(p\) and \(g\) modes was originally proposed as a way to explore \(g\) modes in the Sun, see Kennedy et al. (1993) and more recently Fossat et al. (2017). According to these studies, internal solar \(g\)-modes produce frequency modulation of \(p\)-modes which results in a pair of side-lobes symmetrically placed about each \(p\)-mode frequency. We explored this feature of \(g\)-modes in \(p\)-modes by searching combinations of frequencies in the \(\delta\) Sct domain. We found these combinations in the form of \(p_{1}\pm g_{i}\), with \(i=1,2,3\) and \(p_{1}-g_{4}\) and \(p_{1}-g_{7}\). The list of coupled \(p\) and \(g\) modes is given in Table 7. This same interaction has also been found in two other hybrid stars, namely, CoRoT-100866999 and CoRoT-105733033 studied in detail in Chapellier & Mathias (2013) and Chapellier et al. (2012), respectively. This indicates that the coupling mechanism
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & Frequency & A & Phase & Ident \\ & [c/d] & [mmag] & [rad] & \\ \hline \(F_{38}\) & 23.29078 & 0.306 & 0.938 & \(p_{1}+p_{2}\) \\ \(F_{46}\) & 22.64486 & 0.143 & 0.362 & \(p_{1}+p_{3}\) \\ \(F_{48}\) & 22.80735 & 0.125 & 0.498 & \(p_{1}+p_{4}\) \\ \(F_{66}\) & 24.73414 & 0.052 & 0.559 & \(p_{1}+p_{5}\) \\ \(F_{23}\) & 22.78214 & 0.539 & 0.406 & \(2p_{1}\) \\ \(F_{52}\) & 23.79931 & 0.102 & 0.865 & \(2p_{2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: List of combinations between \(p\) modes and harmonics.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Parameter & Value \\ \hline \(\Delta\Pi\) & 1621 s \\ \(P_{\rm rot}\) & 3.064 d \\ p-mode & labelled as ’\(p_{i}\)’ in Tables 1 and 2 \\ g-mode & labelled as ’\(g_{i}\)’ in Tables 1 and 2 \\ p-g-modes & see Table 7 \\ quintuplet & see Table 5 \\ \hline \end{tabular}
\end{table}
Table 4: Summary of the variable content of the star.
Figure 8: Top Panel: Period versus an arbitrary radial order for the equally spaced series of periods founded. Bottom panel: forward period spacing versus radial order. The horizontal red line indicates the average period spacing along with the associated error in dashed lines.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & Frequency & A & Ident & \(p_{1}-F_{i}\) \\ & [c/d] & [mmag] & & [c/d] \\ \hline \(F_{19}\) & 10.73844 & 0.667 & \(p_{1}-2f_{rot}\) & 0.65263 \\ \(F_{63}\) & 11.06506 & 0.081 & \(p_{1}-f_{rot}\) & 0.32601 \\ \(F_{1}\) & 11.39107 & 8.680 & \(p_{1}\) & – \\ \(F_{58}\) & 11.71775 & 0.083 & \(p_{1}+f_{rot}\) & –0.32668 \\ \(F_{47}\) & 12.04353 & 0.133 & \(p_{1}+2f_{rot}\) & –0.65246 \\ \hline \hline \end{tabular}
\end{table}
Table 5: List of frequencies of the quintuplet.
Figure 9: Data phased with \(F_{1}=11.3910\) c/d, the highest amplitude frequency in the \(\delta\) Scuti domain.
first proposed by Kennedy et al. (1993) also operates in hybrid \(\delta\) Sct and \(\gamma\) Dor stars.
Is important to notice that the detection of a combination between \(p\) and \(g\)-modes, i.e. \(p_{i}\pm g_{j}\), implies that \(p_{i}\) and \(g_{j}\) originated in the same star.
Additionally, we found one frequency between the \(\delta\) Sct and \(\gamma\) Dor domains, \(i_{1}=5.038\) c/d in Table 6, whose position in the frequency spectrum did not allow us to safely classify them.
## 5 Interpretation of frequency data
### Rotational period and critical velocity
The analysis of low frequencies in A-F stars is a tricky task. It requires several considerations, especially when analyzing hybrid pulsators and this problem arises not only with CoRoT observations but also with TESS data. Many phenomena can mimic stellar oscillations and additional data than photometry is required to disentangle the possible phenomena (Skarka et al. 2022).
In Sec. 4.1 we interpreted the period found \(P_{\rm rot}=3.064\) d, or \(f_{\rm rot}=0.326\) c/d in two different ways: the rotational period of the star or the orbital period of a binary system. Given that the splitting found can also be interpreted as tidally deformed oscillation modes that have variable amplitude over the orbit of a binary system, we could not rule out the possibility of CoRoT 102314644 being a binary system.
With the aim to test further the case of a single star, we calculated the rotational and critical velocities for the values obtained in Sec. 2. By considering the estimated radius, \(R_{*}\sim 2.27R_{\odot}\), we obtain a linear rotational velocity (\(v=2\pi R/P_{rot}\)) of \(\sim 37\) km s\({}^{-1}\). In this case, the corresponding rotational critical velocity (\(v_{crit}=\sqrt{GM_{*}/R_{*}}\)) for a mass of \(1.75M_{*}\) would be \(\sim 383\) km s\({}^{-1}\), meaning that the linear velocity is less than 10% of the critical velocity.
The effect of rotation in main sequence stars varies parameters involved in the modelling of stars such as the mean period spacing and the splitting of \(p\)-modes even at linear velocities which are a low percentage of the critical velocity. Nevertheless, in this work, we present a preliminary model of CoRoT 102314644 without considering rotation, as a first approximation.
### Use of stellar models to constrain the mass and age
With the aim to perform a preliminary modelling of CoRoT 102314644 we first explore the position of this star in the HR diagram for masses and overshooting parameters.
The stellar structure and evolution models were calculated with Cesam2k code (Morel & Lebreton 2008)3. We considered masses between 1.5 and \(1.8M_{\odot}\) with a mass step of \(0.05M_{\odot}\) and overshoot parameters of \(\alpha=0.0\), 0.1 and 0.3. Overshooting phenomena were considered as an extent of the chemical mixing region around the convective core through the expression for the overshooting distance:
Footnote 3: The following physics were considered: The opacities are those from Iglesias & Rogers (1996) and Alexander & Ferguson (1994), we used the equation of state of OPIAL project (Rogers et al. 1996) and a nuclear network with the following elements: \({}^{1}H\), \({}^{2}H\), \({}^{3}He\), \({}^{4}He\), \({}^{7}Li\), \({}^{7}Be\), \({}^{12}C\), \({}^{13}C\), \({}^{14}N\) to describe the H (proton-proton chain and CNObi-cycle), and He burning and C ignition with reaction rates extracted from (Angulo et al. 1999). In addition, we adopted the classical mixing length theory (MLT) (Böhm-Vitense 1958) for convection with a free parameter \(\alpha=1.85\). The occurrence of diffusion and mass loss during the evolution was dismissed and the solar metallicity distribution considered Grevesse & Sauval (1998). We used MARCS atmosphere models (Gustafsson et al. 2008). All of our models have an initial H and He abundances per mass unit of 0.72 and 0.26 with an initial value \(Z/X=0.0028\).
\[d_{OV}=\alpha_{OV}\times min(H_{P},r_{S}) \tag{4}\]
where \(H_{P}\) is the local pressure scale height and \(r_{S}\) is the Schwarzschild limit of the core.
Fig. 11 shows the HR diagram with the evolutionary sequences for different masses and overshooting parameters from the pre-main sequences up to an abundance of H of \(10^{-6}\) in the core, along with the error boxes centred on the values of \(Log(L/L_{\odot})\) and \(Log(T_{\rm eff})\) derived in Sec. 2.
In order to find a representative model for CoRoT 102314644, we selected different models indicated with circles inside the box shown in Fig. 11, and then we calculated their oscillation modes with GYRE code (Townsend & Teitler 2013). We computed adiabatic radial and non-radial (\(\ell=0,1\) and 2) \(p\)- and \(g\)-modes in the frequencies range [0.3, 23] c/d, thus encompassing the range of observed frequencies.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Frequency & A & Ident \\ & [c/d] & [mmag] & \\ \hline \(F_{50}\) & 10.38536 & 0.113 & \(p_{1}-g_{1}\) \\ \(F_{55}\) & 12.39788 & 0.0920 & \(p_{1}+g_{1}\) \\ \(F_{49}\) & 10.51816 & 0.115 & \(p_{1}-g_{2}\) \\ \(F_{54}\) & 12.26440 & 0.096 & \(p_{1}+g_{2}\) \\ \(F_{62}\) & 10.48902 & 0.0808 & \(p_{1}-g_{3}\) \\ \(F_{60}\) & 12.29424 & 0.0836 & \(p_{1}+g_{3}\) \\ \(F_{57}\) & 10.45718 & 0.0881 & \(p_{1}-g_{4}\) \\ \(F_{59}\) & 8.62954 & 0.0848 & \(p_{1}-g_{7}\) \\ \hline \end{tabular}
\end{table}
Table 7: List of \(p\) and \(g\) mode coupling for the highest amplitude frequency.
Figure 10: Amplitude versus frequency diagram zoomed into the \(\delta\) Sct range of [10 – 25] c/d. Black lines represent the pure \(p\) mode frequencies found in this range. Grey data corresponds to the frequency spectrum obtained from the FT.
### Asteroseismic analysis
The presence of a series of equidistant periods in CoRoT 102314644 (see Sect. 4.2) provides us with a useful tool for the search of a representative model: \(\overline{\Delta\Pi}\), the mean period spacing of high order \(g\)-modes.
As stars evolve in the main sequence and consume H in the core, the Brunt-Vaisala (B-V) frequency, which governs the behaviour of \(g\) modes, is affected by the change of the convective core. For masses greater than \(\sim 1.5M_{\odot}\), the core shrinks and its edge moves inward as the star evolves. The period can be expressed as:
\[\Pi_{n}\approx\frac{2\pi^{2}|n|}{\sqrt{l(l+1)}}\left[\int_{a}^{b}\frac{N}{r}dr \right]^{-1} \tag{5}\]
where N is the Brunt-Vaisala frequency, and a and b are the lower and upper boundary of the propagation zone of the \(g\)-mode. Thus, during the evolution, the integral increases since it expands toward inner regions resulting in a decreasing period and therefore a decreasing period spacing of \(g\) (see Miglio et al. 2008, for example).
We used this parameter as an indicator of the evolutionary status of stars at the main sequence (Saio et al. 2015; Kurtz et al. 2014; Sanchez Arias et al. 2017) which allowed us to place constraints in the search for a representative model. For each model inside the box in Fig. 11, we calculate the mean period spacing of \(g\)-modes for \(\ell=1,2\), as follows:
\[\overline{\Delta\Pi_{\ell}}=\frac{P_{j}-P_{i}}{n-1} \tag{6}\]
where \(P_{j}\) and \(P_{i}\) are the closest periods to the extremes inside the observed interval [90878.5:98984.9] s where the asymptotic series lie; and \(n\) is the number of periods found in this range.
Table 8 summarizes the mass, the overshooting parameter, the age, \(\overline{\Delta\Pi_{\ell}}\) and the difference between \(\overline{\Delta\Pi_{\ell}}\) and the value found in Sect. 4.2 for modes with \(\ell=1\) and 2 for CoRoT 102314644.
Another parameter we employed to select our best model is the ratio between the period spacing for \(\ell=1\) and \(\ell=2\), which should be equal to \(\sqrt{3}\) in the asymptotic regime. We also included this value in Table 8 for the selected models. We decided to use this criterion due to the possible deviation from the asymptotic regime with the adopted search. Our model was selected by the one with the lowest \(D_{l=2}\) among those ones closest to \(\frac{\Delta\Pi_{l=1}}{\Delta\Pi_{l=2}}=\sqrt{3}\). This model has \(1.75M_{\odot}\), no core overshooting, \(1241.24\times 10^{6}\) yrs
Figure 11: HR diagram showing evolutionary sequences for different stellar masses. Sequences in solid lines correspond to cases without overshooting, those in short-dashed lines have \(\alpha_{OV}=0.1\) and long-dashed lines correspond to evolutionary sequences with \(\alpha_{OV}=0.3\). The box indicates the values of \(\log(T_{\rm eff})\) and \(\log(L/L_{\odot})\) derived in Sect. 2. Colour coding shows the age of each evolutionary sequence. Selected models listed in Table 8 are shown by diamonds. The green circle shows the position of our best-fit model (see main text).
and its luminosity and radius are \(11.36\rm L_{\odot}\) and \(2.48\rm R_{\odot}\). We notice that mode-trapping or other internal mode-selection mechanisms might prevent us from detecting more periods belonging to the observed asymptotic series resulting in a mean period spacing of \(g\)-modes apart from the asymptotic value.
## 6 Summary and Conclusions
In this work, we have presented a detailed analysis of the light curve of CoRoT 102314644 and its frequencies. This star exhibits a rich frequency spectrum, with characteristics typical of hybrid \(\delta\) Sct-\(\gamma\) Dor stars. Such objects offer a great opportunity to explore both the outer regions as well as their deep interior, due to the simultaneous presence of \(p\) and \(g\) modes. We performed an in-depth analysis of the frequency and variable content of the time series:
- We detected two separate frequency domains, corresponding to \(\gamma\) Dor domain and \(\delta\) Sct type oscillations. We detected 26 pure frequencies in the \(\gamma\)-Dor range of [0.32,3.66] c/d, and 15 pure frequencies in the \(\delta\)-Stc range [9.38, 21.39] c/d (Fig. 3 and Tables A.1 and A.2).
- In the \(\gamma\) Dor domain, we found an asymptotic series of 6 equidistant periods with a mean separation of \(1621\rm s\pm 20s\) (Fig. 8 and Table 3) which most likely corresponds to \(\ell=2\).
- In the \(\delta\) Sct domain, we found a quintuplet centred in the highest amplitude frequency of this domain, \(p_{1}\). The splitting in the frequencies of this quintuplet suggests that \(f_{rot}=0.32629\) c/d is a rotational frequency (Table 5).
- The phase diagram corresponding to \(f_{rot}\) (Fig 4) along with the moving bumps and the amplitude variation from one orbit to another in Fig. 5 suggest the presence of spots in this hybrid star, in the case of \(f_{rot}\) being a rotational frequency.
- Another remarkable characteristic of this hybrid star is the presence of coupling between \(p\) and \(g\) modes in the \(\delta\) Sct domain (Table 7). This phenomenon, probably common among hybrid \(\delta\) Sct-\(\gamma\) Dor stars, should provide information about their internal structure and the resonant cavities in these kinds of stars.
- We developed a preliminary modelling for CoRoT 102314644 by employing our frequency analysis along with the parameters derived in Sec. 2.2, corrected for extinction. We obtained a mass and age of \(1.75M_{\odot}\) and \(1241\times 10^{6}\) yrs, without overshooting. The model parameters are \(L=11.36L_{\odot}\), \(T_{\rm eff}=6726\) K, \(R=2.48R_{\odot}\) and mean period spacing \(\Delta\Pi=1624\) s, which of course reproduce the derived parameters in Sec. 2.2 within their uncertainties.
Finally, we highlight the need to follow up this star with spectroscopic measurements in order to detect orbital radial velocities deviations from a possible companion or widthline variations over a rotational period from a line corresponding to surface activity in the case of CoRoT 102314644 being a spotted star.
###### Acknowledgements.
JPSA acknowledges the Henri Poincare Junior Fellowship Program at the Observatoire de la Cote d'Azur. We thank the referee for their valuable time in reviewing the manuscript and providing suggestions for improvement. The Astronomical Institute Ondrejov is supported by the project RVO:67985815. This paper is dedicated to the memory of Eric Chapellier.
|
2305.04303 | Superoscillating Quantum Control Induced By Sequential Selections | Superoscillation is a counterintuitive phenomenon for its mathematical
feature of "faster-than-Fourier", which has allowed novel optical imaging
beyond the diffraction limit. Here, we provide a superoscillating quantum
control protocol realized by sequential selections in the framework of weak
measurement, which drives the apparatus (target) by repeatedly applying optimal
pre- and post-selections to the system (controller). Our protocol accelerates
the adiabatic transport of trapped ions and adiabatic quantum search algorithm
at a finite energy cost. We demonstrate the accuracy and robustness of the
protocol in the presence of decoherence and fluctuating noise and elucidate the
trade-off between fidelity and rounds of selections. Our findings provide
avenues for quantum state control and wave-packet manipulation using
superoscillation in quantum platforms such as trapped ions. | Yongcheng Ding, Yiming Pan, Xi Chen | 2023-05-07T15:07:28Z | http://arxiv.org/abs/2305.04303v1 | # Superoscillating Quantum Control Induced By Sequential Selections
###### Abstract
Superoscillation is a counterintuitive phenomenon for its mathematical feature of "faster-than-Fourier", which has allowed novel optical imaging beyond the diffraction limit. Here, we provide a superoscillating quantum control protocol realized by sequential selections in the framework of weak measurement, which drives the apparatus (target) by repeatedly applying optimal pre- and post-selections to the system (controller). Our protocol accelerates the adiabatic transport of trapped ions and adiabatic quantum search algorithm at a finite energy cost. We demonstrate the accuracy and robustness of the protocol in the presence of decoherence and fluctuating noise and elucidate the trade-off between fidelity and rounds of selections. Our findings provide avenues for quantum state control and wave-packet manipulation using superoscillation in quantum platforms such as trapped ions.
The concept of superoscillation (SO) was originally proposed as a footnote in a celebrated study on quantum measurement by Y. Aharonov et al. [1] in the late 1980s. The phenomenon occurs when a band-limited wave function varies arbitrarily faster than its fastest Fourier components, as allowed by its spectral content. In other words, a superoscillatory wave is a _local_ feature that oscillates at a much higher frequency than the overall frequency of the global band-limited wave. This counterintuitive but physically allowed property is dubbed as _faster-than-Fourier_ by M. Berry [2] and others [3], which offers promising optical applications by breaking the diffraction barrier [4]. It has been applied in super-resolution imaging, manipulating nanoparticles, electrons, and atoms with spatiotemporally shaped light beams [5; 6; 7; 8].
Meanwhile, with the advent of state-of-the-art quantum technologies, ingenious protocols using weak measurements are now attainable for a variety of quantum applications [9], including quantum steering [10; 11], quantum tomography [12; 13], geometric information [14; 15; 16], and transition detection [17; 18]. Moreover, a sequential weak measurement enables the production of SO by encoding the amplified weak value of an operator from the repeatedly pre- and post-selected system into the coupled quantum states of the apparatus [19]. This inspires us to construct a general quantum control framework using SOs for the application scenarios where quantum information encoding is essential for specific proposals of quantum state steering and wave-packet manipulation.
In this Letter, we propose a framework called superoscillating quantum control (SQC). We exert sequential pre- and post-selections on the system to construct a superoscillating operator function that can uniformly shift the apparatus after each round of selections, resembling shortcuts to adiabaticity [20]. This SQC framework requires the design of optimal selections for efficient quantum control while gently perturbing the ground state. We demonstrate two preliminary results: a fast nonadiabatic transport of a single trapped ion and a speed-up quantum search algorithm for general quantum computing. At the same energy cost [21], SQC delivers higher fidelity to conventional adiabatic control and still outperforms when the probabilistic cost is included. Furthermore, by investigating noise in open systems, we reveal the trade-off between selection rounds and fidelity, showing two types of mechanisms and their consequences for the speed and robustness of SQC protocols.
_High-fidelity fast quantum transport.--_ Let us start by considering the use of SQC to perform the fast transport of a single trapped ion without final motional excitation. This is motivated by the need for coherent manipulation of trapped ions for quantum information processing, simulations,
and metrology. In the Lamb-Dicke limit, where \(\eta\sqrt{\langle(a+a^{\dagger})^{2}\rangle}\ll 1\), as discussed Ref. [22], the two-level interaction of a trapped ion with a monochromatic photon mode of the light field is described by the Hamiltonian
\[H_{\rm LD}=\frac{\hbar}{2}\Omega\hat{\sigma}_{+}\left[1+i\eta\left(\hat{a}e^{- i\pi}+\hat{a}^{\dagger}e^{i\pi}\right)\right]e^{i(\phi-\delta t)}+\text{H.c.}, \tag{1}\]
where \(\Omega\) is the Rabi frequency, \(\hat{\sigma}_{+}\) is the spin raising operator of the two-level system, \(\eta=kx_{0}\) is the Lamb-Dicke parameter with \(x_{0}=\sqrt{\hbar/(2M\nu)}\) being the characteristic length, \(\hat{a}\) and \(\hat{a}^{\dagger}\) the motional annihilation and creation operator, \(\nu\) the trap frequency, \(\phi\) and \(k\) the laser phase and wave vector, \(\delta\) the detuning, \(M\) the ion's mass. A spin-motion coupling can be implemented by employing red sideband (\(\delta=-\nu\)) and blue sideband (\(\delta=\nu\)) resonances with laser phases (\(\phi_{r},~{}\phi_{b}\)) = (\(-\pi/2,~{}\pi/2\)), resulting in a Hamiltonian with spin-orbit coupling \(H=g\hat{\sigma_{x}}\otimes\hat{p}\)[23]. In the weak coupling regime when \(g=\eta\Omega x_{0}\), this enables high-fidelity fast transport of trapped ion using SQC.
To realize SQC with the setup shown in Fig. 1(a), we apply \(N\) rounds of sequential quantum-state pre- and post-selections by alternatively projecting the system into the initial state \(|i\rangle\) and \(|f\rangle\), and coupling the apparatus weakly to the system between two selections. The sequential selections on the system can exert a longstanding influence on the quantum state of apparatus, leading to the final state given by
\[|\Psi_{A}^{F}\rangle=|\langle f|i\rangle|^{N}\underbrace{\left[\cos\left(\frac{ gT}{\hbar}\frac{\hat{p}}{N}\right)-i\sigma_{xw}\sin\left(\frac{gT}{\hbar}\frac{ \hat{p}}{N}\right)\right]^{N}}_{=:f(p)}|\Psi(x)\rangle, \tag{2}\]
where \(\sigma_{xw}=\langle f|\hat{\sigma_{x}}|i\rangle/\langle f|i\rangle\) is the weak value of \(\hat{\sigma_{x}}\), \(T\) is the total operation time, and \(|\Psi(x)\rangle\) is the initial motional wave function of the apparatus. The sequential selections evolves the wave function by a superoscillating operator function \(f(\hat{p})\) in Eq. (2). Mathematically, it allows the parameter \(\sigma_{xw}>1\) and \(\sigma_{xw}\in\mathcal{R}\), even though \(\sigma_{xw}\) as a weak value can be complex in practice [24]. The SO occurs with a low probability of \(|\langle f|i\rangle|^{2N}\) since we discard the wave function and initialize the system once any selection on internal states fails. It accumulates a weak value on the apparatus by kicking its position \(\delta x=gT\sigma_{xw}/N\) in each round, amplifying it by \(N\) times without an ensemble of \(N\) ions.
In addition to compensating force and inverse engineering [25; 26; 27], the SO described in Eq. (2) provides an alternative shortcut-to-adiabaticity transport by periodically kicking the particle without excitation. We can treat the motional wave function as the target since there is a system-apparatus duality, making the selections on the internal states the effective controller. To determine
Figure 1: (a) Schematic diagram of SQC in trapped ion quantum platform using sequential selections (2), where two lasers are tuned to the first red and blue sideband to couple the internal spin states to the motional mode. Iterative pre-selection of the two-level system for \(|i\rangle\) and post-selection for \(|f\rangle\) (or vice versa) accumulates weak value on the position of the Gaussian-type apparatus, realizing SQC with the probability of \(P=|\langle i|f\rangle|^{2N}\) after \(N\) rounds without an ensemble of \(N\) particles. (b) Final apparatus states for transporting a distance of \(d=x_{0}\) with interaction strength \(g=1\), where SQC achieves a fidelity of \(F=|\langle\Psi(x,T)|\Psi_{G}\rangle|^{2}=0.995\) with \(g\delta T=0.341\) and \(N=20\). Selections (3) on the two-level system shift the initial ground state (grey) to final states by single shot selection (red) and \(N=20\) rounds(blue) after \(T=3.536\) and \(6.830\), respectively. As a calibration, single-round selections show two non-overlapping red wave packets that cannot result in the unidirectional ion transport. The relevant dimensionless mass and trap frequency are \(M=\nu=1\), and \(P=0.25\). (c) Fidelity dependence on \(N\) via SQC for transport distances of \(d=7.5x_{0}\) (dot), \(10x_{0}\) (triangle), and \(15x_{0}\) (cross) with \(g=0.75,~{}1,~{}1.5\), and \(P=0.9\). (d) Fidelity dependence on \(T\) via AQC for the same transport distances as in (c). We move the center of the trap by defining \(x_{\rm trap}=3dt^{2}/T^{2}-2dt^{3}/T^{3}\).
the cost of using SO, we need to analyze the relation between probabilities and transport distance and optimize the selections accordingly. A successful post-selection that projects an arbitrary state from \(|i\rangle\) to \(|f\rangle\), has a probability of \(p=|\langle f|i\rangle|^{2}\). For our concern, to obtain a weak value of both real-valued and larger-than-one, we construct the two selected states
\[|i\rangle = \sqrt{\frac{1-\sqrt{1-p}}{2}}|0\rangle+\sqrt{\frac{1+\sqrt{1-p}}{ 2}}|1\rangle,\] \[|f\rangle = \sqrt{\frac{1+\sqrt{1-p}}{2}}|0\rangle+\sqrt{\frac{1-\sqrt{1-p}}{ 2}}|1\rangle, \tag{3}\]
where \(p\) is the probability of successful projection, which can result in an optimal weak value of \(\sigma_{xw}=1/\sqrt{p}\). Therefore, the SQC protocol can shift the trapped ion by \(d=gT/\sqrt{p}\), at the cost of \(P=p^{N}\). At first glance, this may seem problematic because the displacement is independent of the number of selections \(N\), while the total probability falls exponentially. That is, one could instead couple the internal states and the motional mode for the full time interval \(T\) in a single shot of measurement. However, the coupling strength \(gT\) may then be too large for the coupling to still act as a translation operator on the motional wave function. From an intuitive perspective, the post-selection on the system after evolving under the system-apparatus Hamiltonian, determined by \(\langle f|\exp(-igT\hat{\sigma}_{x}\otimes\hat{p})|\Psi(x)\rangle|i\rangle\), splits the apparatus into the cat state (or kitty state when the two packets are not well separated):
\[|\text{cat}\rangle=\frac{1+\sqrt{p}}{2\mathcal{N}}|\Psi(x-gT)\rangle+\frac{-1+ \sqrt{p}}{2\mathcal{N}}|\Psi(x+gT)\rangle, \tag{4}\]
where \(\mathcal{N}\) is the coefficient for normalization. Notably, we present the exact final quantum state of the apparatus, which is applicable for measurements with arbitrary strengths, ranging from weak to strong coupling. The overlap between two wave packets is crucial to define the measurement transition, given by \(\langle\Psi(x-gT)|\Psi(x+gT)\rangle=\exp(-\Gamma^{2})\), where \(\Gamma=gT/(\sqrt{2}x_{0})\) is an interference factor. Using this, we derive the expected net shift of the apparatus:
\[\langle\delta x\rangle=\langle\text{cat}|x|\text{cat}\rangle=\frac{2\sqrt{p}gT }{1+p+\exp(-\Gamma^{2})(-1+p)}, \tag{5}\]
which corresponds to the weak-valued readout \(\sigma_{xw}gT=gT/\sqrt{p}\) asymptotically as \(\Gamma\to 0\), while in the opposite limit of strong measurement \(\Gamma\rightarrow\infty\), it results in a bizarre readout of \(2\sqrt{p}gT/(1+p)\) that does not match the expectation value of \(\hat{\sigma_{x}}\) in the initial or final state. More importantly, the net shift at weak measurement limit can be amplified when \(p\to 1\). To preserve the unidirectional shift, we construct the SQC by periodically decoupling the system and the apparatus after every
\(\delta T=T/N\), ensuring each measurement is in the weak measurement regime. Fig. 1(b) plots the difference between single-shot measurement and SQC for a target distance of \(d=10x_{0}\) with a low total probability of \(P=0.25\) to increase result contrast.
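Equation (5) is straightforward to verify numerically; the sketch below is our own check, using the Gaussian width convention \(\Psi(x)\propto e^{-x^{2}/4x_{0}^{2}}\) (an assumption chosen so that the overlap equals \(e^{-\Gamma^{2}}\)) and illustrative parameter values.

```python
# A quick numerical check (ours) of the net shift in Eq. (5). The Gaussian
# width convention exp(-x^2 / 4 x0^2) reproduces the overlap exp(-Gamma^2).
import numpy as np

x = np.linspace(-30, 30, 20001)
dx = x[1] - x[0]
x0, g, T, p = 1.0, 1.0, 2.0, 0.8                     # assumed values

packet = lambda c: np.exp(-(x - c)**2 / (4 * x0**2))
cat = (1 + np.sqrt(p)) / 2 * packet(g * T) \
    + (-1 + np.sqrt(p)) / 2 * packet(-g * T)         # cat state of Eq. (4)
cat /= np.sqrt(np.sum(np.abs(cat)**2) * dx)          # the normalization N

numeric = np.sum(x * np.abs(cat)**2) * dx
Gamma = g * T / (np.sqrt(2) * x0)
closed = 2 * np.sqrt(p) * g * T / (1 + p + np.exp(-Gamma**2) * (-1 + p))
print(f"numeric {numeric:.4f} vs. Eq. (5) {closed:.4f}")   # these agree
```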
In Fig. 1(c), we set the probability to \(P=0.9\) and vary the number of selections from \(N=1\) to \(16\) to test the ideal performance of our protocol under different measurement strengths. We fix the operation time corresponding to \(N\) in each set of settings by letting the coupling strength be proportional to the transport distance as (i): \(g=0.75,\ d=7.5x_{0}\), (ii): \(g=1,\ d=10x_{0}\), (iii): \(g=1.5,\ d=15x_{0}\). The results support our theory that the fidelity should increase with \(N\) and decrease with transport distance.
We notice that the extra time cost (compared with \(T=6.708\) for \(N=1\)) for SQC is not remarkable since the nonoverlapping wave packet on the left is almost negligible with near-parallel selection for \(P\approx 1\). Furthermore, we highlight that the SQC converges to the lossless expectation value amplification (also the eigenvalue \(\langle\hat{\sigma_{x}}\rangle=1\)) with near-parallel selection of the system states. This convergence bounds the quantum speed limit of lossless transport as \(\dot{x}=g\). In this case, the apparatus (4) reduces to a single wave packet \(|\Psi(x-gT)\rangle\), and the weak value \(1/\sqrt{p}\) and the previous bizarre readout \(2\sqrt{p}/(1+p)\) coincide in either limit. Also, in Fig. 1(d), we benchmark our protocol by moving the center of the harmonic trap to the target along the trajectory \(x_{\rm trap}=3dt^{2}/T^{2}-2dt^{3}/T^{3}\), where \(T=N\delta T\) is the operation time for the SQC of \(N\) rounds. Our protocol significantly outperforms such adiabatic quantum control (AQC) in fidelity \(F=|\langle\Psi(x,T)|\Psi_{G}\rangle|^{2}\), where \(|\Psi_{G}\rangle\) is a Gaussian ground state at the target site, and prevails even when taking the probability of the selections into account.
_Speeding up adiabatic quantum search algorithm.--_ Now we generalize the superoscillating quantum control to include the angular parameters that characterize the wave function and enable the processing of quantum information. Similar to the weak value accumulation on the spatial parameter, the accumulation on the angular parameter amplifies the corresponding coefficient of the target basis. This is analogous to the adiabatic version of Grover's algorithm [28], which searches for the target \(|t\rangle\) in the database \(|\Psi\rangle\) with a complexity of \(\mathcal{O}(\sqrt{N_{G}})\) in the context of digitized quantum computing, where \(N_{G}\) is the number of entries. This algorithm iteratively applies the Grover operator \(\hat{G}=U_{\Psi}U_{t}\), where the oracle operator \(U_{t}=I-2|t\rangle\langle t|\) and the diffusion operator \(U_{\Psi}=2|\Psi\rangle\langle\Psi|-I\) flip the phase of the wave function in the target subspace and the phase of the initial database, respectively. By decomposing \(\hat{G}\) into a gate sequence, the algorithm rotates the
wave function from the database to the target:
\[\hat{G}^{N}|\Psi\rangle=\sin\left(\frac{2N+1}{2}\theta\right)|t\rangle+\cos \left(\frac{2N+1}{2}\theta\right)|\tilde{t}\rangle. \tag{6}\]
We aim to implement an interaction Hamiltonian \(H=g\hat{\sigma}\otimes\hat{J}\) that couples the system and the apparatus, where \(\hat{\sigma}\) is the Pauli operator and \(\hat{J}\) is the angular momentum. By sequentially selecting the system state, we can manipulate the quantum information encoded in angular parameters, resulting in the following SO on the apparatus
\[|\Psi_{A}^{F}\rangle=|\langle f|i\rangle|^{N}|\Psi(\theta/2+N\sigma_{w}g\delta T )\rangle, \tag{7}\]
where the interaction lasts for a duration of \(\delta T=T/N\) in each sequence. If we define \(g\delta T\sigma_{w}=\theta\), the SO is equivalent to Eq. (6). This oscillating quantum search requires the design of the selections and the angular momentum operator, together with the implementation of the interaction Hamiltonian, which avoids decomposing the oracles as in digitized quantum computing. In Fig. 2(a), we demonstrate the algorithm using a two-qubit system, as a minimal quantum model that searches for the target state \(|t\rangle=|0\rangle\) out of \(N_{G}=2\) entries. Assuming that the initial database is \(|\Psi\rangle=|+\rangle=\sin(\pi/4)|0\rangle+\cos(\pi/4)|1\rangle\), we can achieve a perfect query by setting \(g\delta T\sigma_{yw}=\pi/4N\) instead of using the standard Grover's algorithm. Accordingly, we use the spin-spin interaction for implementing the SQC, which is indeed of the Heisenberg type \(H_{\rm SQC}=-g\hat{\sigma_{y}}\otimes\hat{\sigma_{y}}\) [29], when an auxiliary qubit is introduced as the controller (system). We add the relative phase term \(\exp(-i\pi/2)\) to the coefficients of Eq. (3), and the trade-off between the probability and the weak value accumulation remains.
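For this minimal \(N_{G}=2\) case, the full selection sequence can be simulated directly. The sketch below is our own illustration; the phase and sign conventions are our assumptions, chosen so that \(\sigma_{yw}=1/\sqrt{p}\) is real and the rotation accumulates toward \(|0\rangle\), and it uses \((\sigma_{y}\otimes\sigma_{y})^{2}=I\) to exponentiate the coupling exactly.

```python
# A sketch (our conventions, illustrative values) of the two-qubit
# superoscillating search: N weak couplings, each followed by post-selection.
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
g, N, P_total = 1.0, 8, 0.9
p = P_total**(1 / N)                         # per-round selection probability
a = np.sqrt((1 - np.sqrt(1 - p)) / 2)
b = np.sqrt((1 + np.sqrt(1 - p)) / 2)
ket_i = np.array([a, 1j * b])                # Eq. (3) with a relative phase so
ket_f = np.array([b, 1j * a])                #   that sigma_yw = 1/sqrt(p) is real
dT = np.pi * np.sqrt(p) / (4 * N * g)        # g * dT * sigma_yw = pi / (4N)

# exp(-i H dT) with H = -g sy (x) sy, using (sy (x) sy)^2 = I
U = np.cos(g * dT) * np.eye(4) + 1j * np.sin(g * dT) * np.kron(sy, sy)
bra_f = np.kron(ket_f.conj(), np.eye(2))     # <f| (x) I, post-selection map

psi = np.array([np.sin(np.pi / 4), np.cos(np.pi / 4)], dtype=complex)  # |+>
prob_run = 1.0
for _ in range(N):
    psi = bra_f @ (U @ np.kron(ket_i, psi))  # fresh |i>, couple, select |f>
    prob_run *= np.linalg.norm(psi)**2
    psi /= np.linalg.norm(psi)
print(f"fidelity with |0>: {abs(psi[0])**2:.4f}, run probability: {prob_run:.3f}")
```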
We hereby present a quantitative study comparing the energy cost of SQC with AQC. In the AQC approach, we evolve the time-dependent Hamiltonian \(H(t)=[1-\lambda(t)]H_{i}+\lambda(t)H_{f}\), with \(\lambda(t)\) ranging from zero to a certain value within the operation time. We use the type-I Hamiltonian as a benchmark for the specific quantum search [30], given by
\[H_{\rm I}(t)=(1-t/T)\Omega\hat{\sigma}_{x}+(t/T)\Delta\hat{\sigma}_{z}, \tag{8}\]
with Rabi frequency \(\Omega\), and the detuning \(\Delta\). Additionally, we use the more general type-II Hamiltonian [31]
\[H_{\rm II}(t)=(1-t/T)K(I-|\Psi\rangle\langle\Psi|)+(t/T)K(I-|t\rangle\langle t|), \tag{9}\]
as the baseline, where \(K\) is a scaling coefficient. We evaluate the performance of SQC by measuring fidelities \(F=|\langle\Psi(T)|t\rangle|^{2}\) for different \(N\) with the same probability \(P=0.9\) and coupling
strength \(g=1\) in Fig. 2(b). It is evident that the fidelity can be improved without increasing the operation time \(T\) by using a larger Rabi frequency and detuning in the type-I Hamiltonian, or equivalently scaling up \(K\) in the type-II Hamiltonian. Therefore, it is critical to define the energy cost [21] that bounds the input energy to the system for a fair comparison. We use the Frobenius norm of the total Hamiltonian to define the instantaneous cost of the evolution \(\partial_{t}C=\|H(t)\|\), integrate and average it over the operation time as
\[C=\frac{1}{T}\int_{0}^{T}\|H(t)\|dt, \tag{10}\]
yielding \(C_{\rm{SQC}}=2g\) for the superoscillating quantum search. Assuming \(\Omega=\Delta\), we derive the costs of the type-I and type-II Hamiltonians as \(C_{\rm I}=\Omega[2\sqrt{2}-\log(-1+\sqrt{2})+\log(1+\sqrt{2})]/4\) and \(C_{\rm II}=K[4+3\log 3]/8\), respectively. Fig. 2(c) compares the energy cost of the adiabatic algorithms to that of SQC, along with the fidelities of both types of Hamiltonian for adiabatic quantum search within \(T\), corresponding to selections of \(N\) rounds. Numerical results show that SQC dramatically accelerates the adiabatic Grover's algorithm, even when both the probabilistic and energetic costs are considered. We analyze its extension to arbitrary databases using multiple qubits in the Supplementary Material [32]. The model is physically feasible in the trapped ion platform, where Molmer-Sorensen gates serve naturally as analog simulators for the interaction.
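Both closed forms are easy to check numerically. The sketch below is ours, setting \(\Omega=\Delta=K=1\) and using \(\|\alpha\sigma_{x}+\beta\sigma_{z}\|_{F}=\sqrt{2(\alpha^{2}+\beta^{2})}\) together with \(\mathrm{tr}(P_{\Psi}P_{t})=1/2\) for \(|\Psi\rangle=|+\rangle\) and \(|t\rangle=|0\rangle\).

```python
# A numerical check (ours) of the averaged Frobenius-norm costs in Eq. (10)
# for the two AQC Hamiltonians, with Omega = Delta = K = 1.
import numpy as np

n = 1_000_000
s = (np.arange(n) + 0.5) / n                 # midpoints of s = t/T in [0, 1]

# ||(1-s) sx + s sz||_F = sqrt(2 ((1-s)^2 + s^2))
C_I = np.mean(np.sqrt(2 * ((1 - s)**2 + s**2)))
# ||(1-s) P_psi + s P_t||_F with tr(P_psi P_t) = 1/2  ->  sqrt(1 - s + s^2)
C_II = np.mean(np.sqrt(1 - s + s**2))

# closed form for C_I below is equivalent to the text, since
# -log(sqrt(2) - 1) = log(1 + sqrt(2))
print(C_I,  (2 * np.sqrt(2) + 2 * np.log(1 + np.sqrt(2))) / 4)   # ~1.1478
print(C_II, (4 + 3 * np.log(3)) / 8)                              # ~0.9120
```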
_Possible experimental implementation.--_ Both examples in the ideal simulations approach the lossless SO in the limit of large \(N\). To evaluate the protocol in a laboratory environment, we generalize it to open quantum systems governed by the Lindblad master equation. We account for imperfect selections that can introduce atomic loss and quantum noises, reducing fidelity by a fixed proportion \(P_{\rm error}\). These mechanisms can create a trade-off between the fidelity and the number of selections, where the fidelity reaches a maximum and then decreases as \(N\) increases beyond a critical value. Note that the trade-off of the first type, induced by quantum noises, does not necessarily exist in all cases. As \(N\) increases, the protocol retains the favorable property of SQC, improving fidelity with extra operation time, while quantum noises affect the system for longer and reduce fidelity. Whether this trade-off exists depends on which factor dominates: the extra fidelity gain or the loss.
To characterize the dephased two-level system (internal state) and the damped quantum harmonic oscillator (motional mode) involved in the ion transport task, we use the collapse operators \(C_{\rm TLS}=\sqrt{\gamma}\hat{\sigma}_{z}\) and \(C_{\rm HO}=\sqrt{\gamma}\hat{a},\ \sqrt{\gamma}\hat{a}^{\dagger}\), respectively. Solving the Lindblad master equation enables us to obtain the fidelity within the density-matrix formalism. As a result of the damping, the expected
shift of the damped apparatus is smaller than that of the ideal apparatus. As shown in Fig. 3(a), we set the dephasing rate and the damping rate to the same value \(\gamma=0.01\), and the additional fidelity loss per selection to \(P_{\text{error}}=5\times 10^{-3}\). Although the only trade-off in this parameter setting is induced by imperfect projection, the trade-off of the first type can be observed at a small critical \(N\) by tuning down the probability of SQC. In the quantum search algorithm, we impose local dephasing on the control qubit (system) and the target qubit (apparatus) via \(C_{\text{TLS}}=\sqrt{\gamma}\hat{\sigma}_{z}\), resulting in a decrease in fidelity due to purity loss. As calibrations, the evaluation of the algorithm with the same parameters is shown in Fig. 3(b,c), which confirms the existence of trade-off mechanisms of both types.
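One convenient way to reproduce this kind of open-system evaluation is a master-equation solver. The sketch below is our own rough reconstruction (assuming the QuTiP library), keeping only the kick Hamiltonian of Eq. (2) during each coupling window and omitting the trap dynamics and \(P_{\text{error}}\); all parameter values are illustrative.

```python
# A rough open-system sketch (ours, assuming QuTiP) of one SQC transport run:
# N rounds of pre-selection, weak kick under Lindblad noise, post-selection.
import numpy as np
import qutip as qt

Nfock, gamma, g = 60, 0.01, 1.0
N, P_total, d = 10, 0.9, 2.0                 # rounds, total probability, distance
p = P_total**(1 / N)
ca = np.sqrt((1 - np.sqrt(1 - p)) / 2)
cb = np.sqrt((1 + np.sqrt(1 - p)) / 2)
ket_i = ca * qt.basis(2, 0) + cb * qt.basis(2, 1)   # Eq. (3)
ket_f = cb * qt.basis(2, 0) + ca * qt.basis(2, 1)

a = qt.destroy(Nfock)
p_op = 1j * (a.dag() - a) / np.sqrt(2)       # momentum, with x = (a + a+)/sqrt(2)
H = g * qt.tensor(qt.sigmax(), p_op)         # kick Hamiltonian of Eq. (2)
c_ops = [np.sqrt(gamma) * qt.tensor(qt.sigmaz(), qt.qeye(Nfock)),  # dephasing
         np.sqrt(gamma) * qt.tensor(qt.qeye(2), a),                # damping
         np.sqrt(gamma) * qt.tensor(qt.qeye(2), a.dag())]
dT = d * np.sqrt(p) / (N * g)                # N kicks of g*dT/sqrt(p) sum to d

proj_f = qt.tensor(qt.ket2dm(ket_f), qt.qeye(Nfock))
rho_app = qt.ket2dm(qt.basis(Nfock, 0))      # motional ground state
for _ in range(N):
    rho = qt.tensor(qt.ket2dm(ket_i), rho_app)        # fresh pre-selection
    rho = qt.mesolve(H, rho, [0.0, dT], c_ops=c_ops).states[-1]
    rho = proj_f * rho * proj_f                        # post-select |f>
    rho_app = (rho / rho.tr()).ptrace(1)

target = qt.coherent(Nfock, d / np.sqrt(2))  # ground state displaced by d
print("fidelity:", qt.expect(qt.ket2dm(target), rho_app))
```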
_Conclusion and outlook.--_ In summary, we have introduced a general SQC framework and demonstrated its efficiency in two applications with trapped ions. Our approach utilizes pre- and post-selections of the system to achieve an optimal projection design, and applies SQC to atomic non-adiabatic transport and the quantum search problem. We have shown that SQC can speed up conventional adiabatic control, offering a promising alternative toward shortcuts to adiabaticity. Numerical simulations demonstrate that SQC has advantages in terms of energy cost, even when considering the probability of occurrence. We have also extended the SQC protocol to open quantum systems, where noise affects its performance, resulting in two types of trade-offs between selections and fidelity.
Figure 2: (a) Schematic diagram of quantum search algorithm using SQC (7). \(N=8\) rounds of selections on the controller qubit, driven by the spin-spin interaction Hamiltonian \(H_{\text{SQC}}=-g\hat{\sigma_{y}}\otimes\hat{\sigma_{y}}\), rotate the target qubit from the initial database \(|\Psi\rangle=|+\rangle\) to the target \(|t\rangle=|0\rangle\). (b) Fidelity dependence on \(N\) via SQC. The probability of SQC and coupling strength are set to \(P=0.9\) and \(g=1\), respectively. (c) Fidelity dependence on \(T\) via AQC. The energetic cost is bounded, and the AQC Hamiltonians (8) (red triangle), (9) (red cross) are evolved for \(T\) that depends on the rounds of selections in the SQC algorithm.
While our focus is on the trapped ion system, insights can be gained from cold atoms in spin-dependent optical potentials [33], and electrons in semiconductor quantum dots [34]. Additionally, our work can be extended to coherent matter-wave splitting, as we have presented the analytical formulation of the apparatus after post-selection. Furthermore, SQC for collective behaviour has applications in the emergence or suppression of the superradiant phase transition in the Dicke model [35]. We believe that SQC can provide a deeper understanding of quantum foundations and offer potential for various quantum technologies.
_Acknowledgements.--_ This work has been financially supported by NSFC (12075145), EU FET Open Grant EPIQUS (899368), QUANTEK project (KK-2021/00070), the Basque Government through Grant No. IT1470-22, and the project grant PID2021-126273NB-I00 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe" and "ERDF Invest in your Future". X.C. acknowledges the Ramon y Cajal program (RYC-2017-22482).
Figure 3: (a) Fidelity vs. \(N\) in SQC-based quantum transport, with quantum noises characterized by dephasing and damping rates of \(\gamma=0.01\). Simulation results for the open quantum system are multiplied by \(1-P_{\text{error}}=0.995\) after each selection (red markers) to mimic atomic loss induced by imperfect projections. Parameters and markers are the same as in Fig. 1(c). (b) Fidelity vs. \(N\) in SQC-based quantum search, with local dephasing modeled by \(C_{\text{TLS}}=\sqrt{\gamma}\hat{\sigma}_{z}\) on the controller and target qubits. Results are shown for perfect (blue) and imperfect (red) projections with the same settings as in (a) and in Fig. 2(b). |
2304.13556 | The Systematic Review-lution: A Manifesto to Promote Rigour and
Inclusivity in Research Synthesis | The field of human-computer interaction (HCI) is maturing. Systematic
reviews, a staple of many disciplines, play an important and often essential
role in how each field contributes to human knowledge. On this prospect, we
argue that our meta-level approach to research within HCI needs a revolution.
First, we echo previous calls for greater rigour in primary research reporting
with a view towards supporting knowledge synthesis in secondary research.
Second, we must decide as a community how to carry out systematic review work
in light of the many ways that knowledge is produced within HCI (rigour in
secondary research methods and epistemological inclusivity). In short, our
manifesto is this: we need to develop and make space for an inclusive but
rigorous set of standards that supports systematic review work in HCI, through
careful consideration of both primary and secondary research methods,
expectations, and infrastructure. We call for any and all fellow systematic
review-lutionaries to join us. | Katja Rogers, Katie Seaborn | 2023-04-22T10:03:14Z | http://arxiv.org/abs/2304.13556v1 | # The Systematic Review-lution: A Manifesto to Promote Rigour and Inclusivity in Research Synthesis
###### Abstract
The field of human-computer interaction (HCI) is maturing. Systematic reviews, a staple of many disciplines, play an important and often essential role in how each field contributes to human knowledge. On this prospect, we argue that our meta-level approach to research within HCI needs a revolution. First, we echo previous calls for greater rigour in primary research reporting with a view towards supporting knowledge synthesis in secondary research. Second, we must decide as a community how to carry out systematic review work in light of the many ways that knowledge is produced within HCI (rigour in secondary research methods and epistemological inclusivity). In short, our manifesto is this: we need to develop and make space for an inclusive but rigorous set of standards that supports systematic review work in HCI, through careful consideration of both primary and secondary research methods, expectations, and infrastructure. We call for any and all fellow systematic review-lutionaries to join us.
research synthesis, systematic review, rigour, literature, epistemology
In other large(r) fields, Chu and Evans [18] warn that increased publication output can lead to "_ossification_" because novel ideas cannot gain traction against the entrenched canon of the papers most often cited. This can have severe consequences for the field as a whole: "_too many papers published each year can lead to stagnation rather than advance [knowledge creation]_" [18]. There are already hints of this trend in HCI: the papers most cited are cited quite a bit more often than the average paper, while the number of citations papers receive per year is declining overall [66]. Based on other fields, this suggests that it is getting more difficult for new ideas to break through and shake up established ones in HCI. However, in HCI, we might actually have the opposite problem (too) because of the _kinds_ of papers we publish at this rapid pace. In recent years, some researchers have become concerned that we are focusing too much on novelty [5, 62]. Consider the most recent (2022) proceedings of the Conference on Human Factors in Computing Systems (CHI), where searching for "novel" yields 602 results--based on 697 papers. When 86% of papers are characterized as "novel," we need to ask what this means for knowledge gains and consensus-building. Playing devil's advocate, we might say that our field incentivizes, if not requires, the publication of a never-ending stream of flashy one-offs. Instead of putting effort into rigorous incremental research to confirm evidence across multiple studies, we find ourselves dancing after the novelty carrot1.
Footnote 1: Keeping in mind that our field does not necessarily agree on one meaning of “novelty.”
The repercussions of this state of affairs deserve careful and critical attention. Focusing so strongly on novelty may be part of what makes it difficult to provide definitive answers about what we actually know so far in HCI [41]. This also makes it difficult for HCI to situate itself within and participate alongside other fields of study, and limits the kind of research that we do. We echo DiSalvo et al. [24]'s statement on sustainable HCI as relevant for HCI as a whole: "_[t]o avoid reinventing the wheel, there is a need for the field to take stock of what is known and to identify major unknown questions or issues, which arise from what has been established, as a basis for future work._" Whittaker et al. [84] raised similar criticisms: that the HCI community "_[overemphasizes] 'radical invention' at the price of achieving a common research focus._" They go on to point out that in the absence of "_such a focus, it is difficult to build on previous work, to compare different interaction techniques objectively, and to make progress in developing theory._" This is not merely a problem of research praxis; it also has practical ramifications when we, as a field of study, cannot provide clear guidelines or implications. We find ourselves in a liminal space, where we all are carrying out research and producing a variety of outcomes, but an outsider looking in may find the overall picture difficult to make out. Such an outsider may then move on to a more clearly defined space, leaving our work unacknowledged and overlooked. From within, we may not be able to see the forest for the trees, leaving no clear path forward. Research synthesis can clarify the work conducted in a field of study, not only for others but ourselves, as well.
Yet replication studies, follow-up work, corrections and expansions, and other explicitly _not novel_ forms of inquiry remain sidelined, despite calls to action that go back more than a decade [25, 42, 85, 86]. This has grave implications in light of larger patterns and hiccups in research practice, including p-hacking [38], the replication [2, 72] and publication bias [60] crises, and adverse effects resulting from the preprint server explosion [1]. This is not only a problem for experimental or quantitative work typically housed within positivist frameworks. We recognize that not all research projects within HCI aim for generalization or consensus. Still, many if not all of them hold valuable insights on their own that could be productively synthesized. All epistemological and methodological lenses should be embraced in knowledge synthesis work if we wish to provide a full picture of HCI research.
Globalized computer-based information technology--the very heart of our discipline--has created new drivers and tensions for scholarship. Solutions to this phenomenon might be found by embracing _slow science_ [77], to some extent. Yet even if we were to stop publishing altogether tomorrow--and pull the plug on the Internet--we would still need to sift through and synthesize the existing work published so far. Many of us in HCI are taking up this task, but have little guidance or standardization. This is not because guidelines or standards do not exist--they do [13, 37, 45, 63, 73, 75, 79, 80]. However, these are premised on the work developed (and valued) in other fields, e.g., randomized-controlled trials in the medical field [76]. Part of the challenge inherent to HCI is the _sheer variety_ of work available (perhaps even leading to what Reeves [69] and Fallman and Stolterman [27] refer to as "_disciplinary anxiety_"). Closely related to this, another key challenge is the lack of consensus on _how_ to carry out research synthesis in general, and systematic reviews more specifically: our field has not yet embarked on an explicit conversation about what we expect from systematic reviews, nor how to handle the different kinds of knowledge our field produces.
Figure 2. Results per year for the keyword “human-computer interaction” in the ACM Digital Library. A total of 658,883 results were found as of 14:24 on February 7th, 2023. The year 2000 alone featured 8,519 results, with 27,009 for 2010; 35,183 for 2020; and 37,215 results for 2022.
This is our manifesto. We propose to begin a community-driven conversation to determine how to depart from our typical research praxis to support research synthesis at the meta level. We argue that a rigorous and inclusive systematic review approach to research synthesis in HCI is the way forward. We pose two critical questions at this juncture: _1) How can we package our work in such a way that meaningful research synthesis can be practiced based on the wonderful diversity of work that we produce in CHI and adjacent spaces?_ Secondly, in light of the many forms of knowledge produced in our field, we believe and hope to convince the reader that we must all come together as a community to develop a shared set of research practices for planning, conducting, and reporting research synthesis within HCI: _2) What should research synthesis look like when it is grounded in plurality: quantitative studies, qualitative studies, design research, ethnography, and the development of interactive artifacts and systems?_ We raise these concerns, challenges, and desires for a different way in the scholarly tradition of denouncing the "_confusing and harmful abundance_" of literature, a form of self-reflective discourse that dates back several centuries [9]. Our goal is to remind ourselves about the bigger picture and re-orient each other as members of a community of practice.
With the above, we make the case for research synthesis, why it matters for the field of HCI, and why our answers to this issue may differ from other fields. We next present established methodologies for research synthesis, focusing on the current global standard across most fields of study: the systematic review. We then raise critical issues about relying on a systematic review approach for HCI research, and provocations that anchor to its abundance in topics, methods, and epistemological points of view. We end with a call for action on a novel framing of research synthesis: inviting you, dear reader, and everyone participating in HCI research. We aim to start a conversation at this year's conference that we expect will lead into the development of a future workshop, special interest group, committee and/or collaborations with the goal of establishing a community of practice invested in HCI research synthesis. By gathering the multitude of disciplinary voices and epistemological perspectives in our community, we hope to make a disciplinary impact in terms of knowledge creation and methodology that reverberates back to the larger research community.
Launching pad: what is a systematic review? or: "But I have a PRISMA figure, surely that makes it systematic"
Systematic reviews are a staple of research synthesis in many fields of study. In medicine, they are considered "_indispensable_" [44] as the "_gold standard_" [59] means of arriving at consensus across individual studies on a specific topic, typically an intervention of some kind, so as to enable decision-making grounded in evidence-based work [44]. Yet there is no clear agreed-on definition for what a systematic review is as an outcome and what it should entail as a process [56]. As Martinic et al. [56] explain, "_definitions of [systematic reviews] are vague and ambiguous, often using terms such as_ clear, explicit _and_ systematic, _without further elaboration_." This presents those of us in HCI with a challenge and an opportunity: we may struggle to understand and apply systematic review methodologies to our work, even though we have much to learn from other disciplines in the history and practice of the systematic review. Yet perhaps we do not need to adopt all of these methods as they are, and in some cases perhaps it would even be inappropriate to try; we may instead chart a new path forward.
The first step towards understanding and productive deviation is a definition. Let us begin with what a systematic review is not: It is not merely a description of previous work on a certain topic, within a field of study, or around a certain research question or hypothesis. It is not an annotated bibliography in which we comment on papers that we have read. It is also not conducted ad hoc without an _a priori_ plan, especially not if the procedure was changed mid-process so that, in the end, the research question had to be adjusted to fit the method. It is not the summary alone; it cannot be without a description of how the results were constructed, magicking its way from search process to outcomes with no in-between. It is not a narrative of a curated selection of works.
Then, what might a systematic review _be_? In other words, what is its nature as an outcome and method of scholarship? We start with a pair of concepts: primary and secondary research--their relationship, and a basic distinction between the two. Primary research is the _material_ of a systematic review. We define _primary research_ as any paper2 that reports directly on collected and analyzed data, e.g., a paper reporting a user study. _Secondary research_, then, is one step removed: a paper that reports on a collected and analyzed sample of primary research papers: systematic reviews are one example of secondary research. We see no issue in carrying these concepts forward for research synthesis within HCI.
Footnote 2: While we use “paper” here, we do not mean that the paper itself is “the research.” We use “paper” for simplicity in writing and in recognition that most output of scholarship is packaged in paper form, particularly in the context of systematic review work and research synthesis.
With these foundational concepts in hand, how then do we process the material that is primary research into the outcome that is secondary research? Unfortunately, the structure of a typical systematic review process is more contested than ideal [56]. Martinic et al. [56] suggest the following components, listed in procedural order: "_i) a research question; ii) sources that were searched, with a reproducible search strategy (naming of databases, naming of search platforms/engines, search date and complete search strategy); iii) inclusion and exclusion criteria; iv) selection (screening) methods; v) [a critical appraisal of] the quality/risk of bias of the included studies; vi) information about data analysis and synthesis that allows the reproducibility of the results._" Haddaway and Bilotta [36] instead compare requirements posed by institutions that promote evidence-based research through systematic reviews, e.g., the Cochrane Collaboration. They suggest three basic standards: "_(i) [...] methods should be described in sufficient detail to allow full repeatability and traceability; ii) [...] a systematic approach to identifying and screening relevant academic and grey literature, and iii) [...] critical appraisal of the validity (quality and generalisability) of included studies to give greater weight to more reliable studies._" With the plurality of our field in mind, we draw out the following general characteristics: (i) an _a priori_ developed and pre-registered protocol, i.e., full documentation of the planned review procedure, as well as clearly and comprehensively articulated research questions, search processes, screening processes, data extraction processes, and the means of
quality appraisal (or a rationale for its omission); (ii) data analysis and synthesis methods; and (iii) a discussion that transparently addresses limitations in both search and synthesis, and for both method choices and results.
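To make these characteristics concrete, one hypothetical way to capture them is as a pre-registerable, machine-readable record; the field names below are our own illustration, not an established standard.

```python
# A hypothetical protocol record (our illustration, not a standard) capturing
# the characteristics above, suitable for pre-registration before screening.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReviewProtocol:
    research_questions: list[str]
    databases: list[str]                 # e.g., ["ACM DL", "IEEE Xplore"]
    search_strings: dict[str, str]       # verbatim query per database
    search_date: str                     # ISO date each search was executed
    inclusion_criteria: list[str]
    exclusion_criteria: list[str]
    screening_process: str               # e.g., "double screening; conflicts discussed"
    data_extraction_fields: list[str]
    quality_appraisal: Optional[str]     # instrument used, or rationale for omission
    synthesis_method: str                # named and rationalized analysis method
    deviations_log: list[str] = field(default_factory=list)  # transparent changes
```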
The components necessary for a review to be "systematic" remain an open question in HCI. Is pre-registration necessary, especially if a similar registration already exists? Does every review require an assessment of quality of the primary research? Are certain tools or platforms required, such as the use of the ACM Digital Library or IEEE Xplore, which are foundational for primary and secondary research publishing but not without their quirks and outright glitches? Further, the specifics of _how_ the steps should be conducted in practice are similarly unclear and in disarray. As one example in HCI, there is no consensus or even weighing in on the trade-offs for the choice between single and double screening--is one person's decision enough, or is at least one other required? Can the other(s) simply review rejected items, i.e., to avoid false negatives? Can the work be divided up between different people? Should there be a "storming and norming" process to get people on the same page or even some form of inter-rater reliability metric? Do we let go of generalizability and accept epistemological diversity? Should we adopt the aspirations to be practical and flexible and simply transparent, as advocated by Braun and Clarke [10] in their _reflexive_ thematic approach? On that note, _can_ we meaningfully and appropriately draw from other methodologies to inform research synthesis? These are just some of many methodological questions that researchers in other fields have been exploring in recent years [32, 52], yet each one alone already raises an array of questions and provocations for the context of HCI research.
**Systematic Review.** /ˌsɪstəˈmætɪk rɪˈvjuː/.
*DefinitionError: term 'systematic review' is not defined.
#### Human-Computer Interaction, Probably
The systematic review in its modern form can primarily be traced back to the medical field, where the goal is to synthesize the results of multiple randomized controlled trials to better estimate the effect of a specific intervention [76]. When the effect sizes in very similar studies are synthesized via statistical methods, it is considered a systematic review with meta-analysis [22]. The term "meta-analysis" is sometimes used in HCI to refer to review work without statistical aggregation of effect sizes, presumably in a more literal interpretation of the term "meta" to account for a paper that reports on one or more analyses, e.g., [20, 83]. Other fields have adjusted synthesis methodology or created their own to suit their needs, for example fields and subfields that do not conduct (m)any randomized controlled trials [79, 80]. This parallel methodological evolution in multiple fields has led to a dizzying array of closely related but different synthesis methods and review types: scoping review, rapid review, mapping review, review of reviews, (best-fit) framework synthesis, mixed-method synthesis, among many others [78]. Uptake of these methods as well as guidelines for their usage varies wildly, as do opinions on which of these are or are not "systematic." As these fields have matured, they have started to face another flood of papers, this time with secondary rather than primary research [44]. The waterfall does not stop at the pond, but cascades ever further: for a while already, academic research literature has featured tertiary research [4, 23, 74] and even occasional examples of "quaternary" research [58]. We have no reason to believe this will not be the case for HCI as well; now is the time to act and seek a new path forward.
## Input coordinates: Tracing out open questions and countering objections
The alt.chi website expects submissions to be "_controversial, risk-taking, and boundary pushing_" [15]--so why are we writing about systematic reviews, when they are an established methodology, even a gold standard, that can highlight existing knowledge ("backward-looking" [61]) as well as create new forms of knowledge from what came before ("forward-looking" [61])? Surely this is not a controversial topic? Yet somehow it is: in our own experience when submitting and reviewing papers in HCI, we have come across a broad range of expectations and opinions about:
* whether systematic reviews, as a form of secondary research that heavily relies on primary research, have a place in HCI, since such work does not always lead to a novel outcome in the traditional sense;
* what systematic reviews are for (_providing an objective and comprehensive overview of a subfield vs. providing an opinionated narrative vs. providing an estimation that answers a very specific question; establishing consensus vs. providing a subjective but substantiated perspective_),
* how they should be conducted (_based on a range of specific guidelines; ignoring or including qualitative research; with or without meta-analysis; with or without critical appraisal or double screening or data extraction forms or..._)
* what forms of knowledge they can and should produce ("maps" vs. synthesized effect size estimates vs. taxonomies, theories or frameworks vs. new research questions and directions vs. new primary research or instruments or prototypes), and
* basic terminology and definitions (_when should a review be considered systematic; what is a meta-analysis; etc._)
Let us invoke an imaginary HCI researcher, who sees no benefit to systematic reviews and considers them procrustean:
**Procrustean.** /prə(ʊ)ˈkrʌstɪən/. Of, relating to, or resembling the practices of Procrustes (see Procrustes n.); (hence) enforcing uniformity or conformity without regard to natural variation or individuality.
#### Oxford English Dictionary
As we have outlined above, there are reasons to arrive at such a position within HCI, so we give this perspective a platform and trace out likely concerns. This researcher might reasonably ask: Will systematic reviews lead to no one reading the original papers anymore? We again emphasize that it is generally not possible to stay up to date and read all papers in HCI. Sorry. That ship has sailed. Yet it may be too simplistic and disillusioned to respond that "nobody reads anything anyway"--even though it seems that we do not engage with cited work as critically and comprehensively
as we should [54]. Systematic reviews _could_ indeed shift or divert citations from primary research papers. Reviews are easy to cite for general overview purposes, and without systematic reviews, the same authors might cite a couple of hand-picked primary research papers instead. However, researchers tackling a particular topic or carrying out work within the same domain will still cite the most relevant papers directly--or should. Still, we acknowledge that an increase in systematic reviews might affect citation practices, especially if we consider _who_ is writing them (and who is not) as well as _how_: "_citations have politics_" [19]. As noted by Kumar and Karusala: "_How work is written about also matters because it can distort or even erase contributions over time_" [46]. However, a well-conducted systematic review should gather and give a platform to a broad and unbiased selection of papers grounded in a comprehensive search strategy and self-reflective quality assessments. It could thus help to reduce biases in how we cite and pay attention to existing research, i.e., be self-correcting in the same spirit as the scientific method. Rather than encouraging us to cite what (or who) we know, which may not represent the diversity of the field but rather our social networks [31], systematic review procedures can broaden our horizons and create greater inclusion in citation practice. Further, a well-conducted systematic review is itself a form of in-depth critical reflection and engagement with the primary research in its corpus. While it may "steal" some citations, it should itself cite the primary work and likely also elicit future citations for it going forward3.
Footnote 3: We might also question when citations are truly meaningful or useful, given that they can just as much indicate social power differentials as scholarly engagement. Systematic reviews could help us dodge our natural inclination as social animals towards popularity metrics, as operationalized in citation counts.
Our imaginary researcher might next ask: Will systematic reviews lead us to enforce a procrustean norm in our synthesized results that entirely ignores all the beautiful variations in each of the individual papers? This may be true. But maybe such variation does not always help us with our goal in the moment. When seeking a good (enough) answer to a specific question based on the field's currently available research, perhaps those variations are not always useful or relevant at a meta level. In fact, extending the metaphor of Procrustes to user studies can show why these objections should not be an issue. As a field of study, we generally do not shirk the individual user when drawing on the results of an n=30 user study to infer how it might work for the user group as a whole. By posing implications and conclusions about a specific research question based on a single user study, we are not aiming to define or enforce a norm that ignores the beautiful variations of the individual participants. Rather, we offer a slice of the available experience with the resources at our disposal and then turn to other methods to explore and showcase the variations we could not get to in one study. Similarly, we can combine systematic reviews with specific methods of analysis to draw conclusions, and gain nuance and rich, situated understanding.
Finally, we turn towards Blackwell's perspective on HCI as a field of study: Perhaps the goal of HCI should _not_ be to "_develop and maintain a stable body of knowledge, but rather to be the catalyst or source of innovation_". This would instead require that we as HCI researchers engage in scholarship that is "_questioning, provocative, disruptive and awkward_" [8]. This could be an argument against systematic reviews, as the goal of synthesis is often to stabilize and find firm ground in the shifting sands of our field. Still, Blackwell [8] also emphasizes the importance of "_reflective practice_"--which itself is something that knowledge synthesis through systematic reviews can deliver and structure. We suggest that systematic reviews, with all of their own methodological diversity, have the potential to be part of both the development of stable ground _and_ disruptive practice within knowledge production in HCI.
_Primary Research Reporting._ Researchers have repeatedly critiqued our research reporting practices, e.g., with regards to the reporting of race and ethnicity data [14], brain signal experiment data [68], participant compensation data [65], inter-rater reliability in qualitative research [57], specific measures [71] and questionnaires [43; 47], engagement with self-determination theory [81], artifact descriptions [33], and inferential statistics [12], to list just a few. These issues may arise in part due to page limits or efforts to ensure paper length matches perceived contribution, but may also be due to a lack of community-driven standardization and education.
This complicates research synthesis in secondary research because it makes results difficult to compare and weigh. Again, we recognize that this may not always be the goal, but it often is in the HCI world. Yet how can we point to what works and what does not if we cannot synthesize results with a high degree of rigour or systematicity? The good news is that making greater use of existing reporting guidelines will likely also help with secondary research, simply by making the reported primary research more comprehensive and comparable. Still, it may be worth examining to what extent existing guidelines for reporting primary research can support follow-up secondary research.
Further, given that criticism of reporting in HCI has been expounded for many years, perhaps it is time to consider pointing authors and reviewers in HCI to such guidelines more explicitly. There are already hints of conferences adding a bit more structure to the submission process. For example, since 2021, at least one ACM conference has required authors to indicate "_the primary and secondary contribution type of their paper_" (empirical-qualitative; empirical-quantitative; empirical-mixed-methods; artifacts-technical; artifacts-design; theoretical; or meta-research4), to assist with reviewer fit to assigned papers [17]. We could add a requirement for papers to include a structured abstract of sorts as supplementary material--tailored to the contribution type. This could support not only future secondary research, but also the reviewing process itself, by providing a concise overview of the conducted work that reviewers can quickly and easily digest.
Footnote 4: Adapted from Wobbrock and Kientz [87]’s classification of research contributions in HCI
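As one hypothetical illustration of what such a structured abstract could look like as machine-readable supplementary material (the field names and values below are ours, tailored to an empirical-quantitative contribution):

```python
# A hypothetical structured abstract (our illustration, not a standard) for an
# empirical-quantitative paper, attached as supplementary material.
import json

structured_abstract = {
    "contribution_type": {"primary": "empirical-quantitative",
                          "secondary": "artifacts-technical"},
    "research_questions": ["Does technique X reduce task time relative to Y?"],
    "participants": {"n": 24, "recruitment": "university mailing list"},
    "design": "within-subjects, counterbalanced",
    "measures": ["task completion time", "System Usability Scale"],
    "analysis": "repeated-measures ANOVA",
    "key_findings": ["X was faster than Y (illustrative placeholder)"],
    "limitations": ["convenience sample", "single session"],
}
print(json.dumps(structured_abstract, indent=2))
```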
_Epistemological Diversity of CHI._ We next address the role that our different ways of knowing in HCI play in the synthesis of primary work. This needs to be considered both from the perspective of the diversity of research _within_ primary work in HCI, but also in the diversity of methods that we draw on for synthesizing it in secondary research.
How do we approach any kind of formalization of systematic reviews for as broad a field as HCI? Research in other fields has in recent years looked at the methodology of mixed methods reviews in more detail [40; 70]. To conduct a review that accommodates research results from quantitative as well as qualitative methods, we can point to the Joanna Briggs Institute (JBI) guidelines for mixed methods systematic reviews [50] as a starting point that covers some approaches. Currently there are only a few reviews in HCI that use these or related guidelines. Still, we think it deserves more attention from our mixed methods-inclined field of study and could greatly benefit the way we do synthesis.
But HCI also features approaches that are situated more in design research methods, like participatory design and research through design. Wolf et al. [88] describe the field as featuring an "_inherent tension [that is] reflected in the distinctive practices and disciplinary orientations of engineering and creative design._" We agree and also highlight that this "_is not an insurmountable conflict [...] both perspectives are valid_" [88]. Excluding the knowledge created through design research methods when we do synthesis decentres a significant section of our field and prevents us from accessing a truly full picture of HCI practice. However, to our knowledge, there are currently no methods or guidelines designed to handle and synthesize evidence and knowledge created through design research methods. We may have to develop new methods to integrate this work into systematic reviews. We call on researchers familiar with each approach: "_any notion of rigour has to be developed within a firm understanding of the particular purpose of each approach_" [30] (Frauenberger et al. [30] citing Fallman and Stolterman [27]).
There is no short supply of work in these fields for exploring what rigour means within different HCI approaches and how to evaluate such work for synthesis purposes. For example, Wolf et al. [88] outline qualities in design praxis that aim to achieve "_design rigor_", among them the design critique: "_a designer's reflective, evaluative and communicative explanation of her design judgments and the activities in which she has engaged_". Similarly, Zimmerman et al. [89]'s criteria or lenses for evaluating interaction design research (process, invention, relevance and extensibility) may be a useful tool for synthesis. For approaches like participatory design, Frauenberger et al. [30] write about how traditionally positivist understandings of rigour need to be re-interpreted: "_accountability and rigour in a post-modern scientific context is delivered through debate, critique and reflection_", and make the case for "_acknowledging different ways of knowing_" [30]. We extend this argument: not only do we need to acknowledge these different ways of knowing, we need to develop methods of synthesizing and integrating different ways of knowing, as well.
_Secondary Research Reporting._ Perhaps the closest match for existing methodological guidelines towards which synthesis guidance in HCI could be oriented are efforts within software engineering (e.g., Kitchenham and Charters [45]'s work), qualitative health research (e.g., Tong et al. [79]'s ENTREQ, or Cooke et al. [21]'s SPIDER framework), and quite recently, Topor et al. [80]'s NIRO-SR for non-intervention studies (still a preprint). However, we strongly believe that HCI will need to also draw on synthesis methods that more explicitly _combine_ quantitative and qualitative work: To quote Reeves [69], we need "_more reviews of and reflections upon the landscape of different forms of reasoning in HCI and through this better ways of managing how potentially competing disciplinary perspectives meet together_." Guidance for pulling together evidence from different disciplines and methodologies _does_ exist (e.g., [50]) although it is rare. Yet how well this works in HCI is an open question; currently, there is essentially no uptake in our field.
When existing systematic reviews at CHI cite a guideline for their method, they primarily reference PRISMA [63] (e.g., [7; 39; 51]). The PRISMA figure, specifically, is popular, as it can be a great way to illustrate the search and screening process. However, the PRISMA guidelines as a whole were made for reporting systematic reviews and meta-analyses of intervention studies in the medical field [63]. Most HCI reviews--even if they state that they followed the PRISMA
guidelines!--do not actually answer all (or even most) of the PRISMA checklist items [63, 64]. For example, how often have you seen a systematic review at CHI report a quality assessment and/or risk of bias in each study5 or certainty in the evidence as a whole6? Further, because of the medical world's focus on meta-analyses, several PRISMA items are designed for statistical synthesis methods that reviews in HCI only very rarely employ (e.g., explorations of causes of statistical heterogeneity7), and are thus simply not applicable to the kinds of reviews that we (can) conduct. The PRISMA figure may be useful, but the guidelines are, for the most part, not actually appropriate for our field--at least not past the search procedure when it comes to the synthesis methods at the heart of the review.
Footnote 5: “_Item 18. Present assessments of risk of bias for each included study_” [64]
Footnote 6: “_Item 15. Describe any methods used to assess certainty (or confidence) in the body of evidence for an outcome_” [64], e.g., via the GRADE framework for evaluating quality of evidence in a review [35].
A quick search for "systematic review" in the ACM DL shows a sharp increase in systematic reviews being produced. This means that now is an important moment to **STOP** and reflect on the methods we use for systematic reviews in our field. We need to figure out what we mean when we use the term "_systematic_" in the context of review work, and what we expect in terms of best practices. We need to report methods clearly and comprehensively, including how we adapted guidelines to our own use. We need to look more deeply into synthesis methods and carefully choose, name, and rationalize our choices. We may also want to look into structured abstracts as supplementary materials for secondary research (for example, Haddaway et al. [37]'s ROSES could either be borrowed directly or adapted for HCI research).
Oulasvirta and Hornbæk [62] put forth that HCI needs more "_conceptual contributions that link empirical findings and the design of technology_" to make our research findings actionable and create "_integrative types of knowledge_." We argue that by putting effort into developing and abiding by guidelines and standards for review synthesis and its reporting that work for HCI specifically, we will be able to improve the conceptual contributions that HCI can make as a field. If we view HCI as a field defined by its problem-solving capacity [62], then systematic reviews--when done rigorously--can directly help to improve several of the criteria they propose as important for problem-solving: they can help us develop a better understanding of how well solutions _transfer_ and inform our degree of _confidence_ in them.
_Venues and Subcommittees._ Marshall et al. [55] commented that HCI has few explicit publication formats that invite critical discussion: "_none of the major [venues] have any format for critical response to published articles [...] once a piece of HCI work is in publication, it is unlikely to attract any critical discussion_." Critical discussion instead is more likely to take place in social media, Slack workspaces and Discord channels, and other unofficial venues. Systematic reviews could perform the function of critical discussion in a rigorous and formalized way, accessible to the community of practice as a whole. Yet there is no clear place for them, either. Perhaps the only publication venue in HCI that explicitly welcomes reviews ("_survey papers_") is the ACM Computing Surveys (CSUR) journal, but they make no mention of systematicness in their author guidelines [3]. CHI as the "_flagship conference of the discipline_" [49] features only one subcommittee--Health--that mentions (systematic) reviews as a method in their description [16]. Even subcommittees that describe themselves as "_epistemologically pluralistic [and] welcoming of a range of perspectives, approaches, and contributions_" [16] can recruit associate chairs and reviewers who do not consider systematic reviews as a methodology per se and may be inclined to reject them for that reason alone. Reviewers in HCI as a whole have wildly different expectations and methodological expertise when it comes to reviews; a little more agreement would go a long way.
A perception of systematic reviews not producing "_novel_" work may be a partial reason for this issue. For example, the TOCHI journal warns that they "_rarely publish[...] survey papers unless they offer a major original contribution._" We note that reviews _absolutely_ can produce original contributions based on the synthesis, e.g., intermediate-level knowledge like taxonomies [11]. When it should be considered "_major_", and whether or when a systematic overview of existing work should be considered an "_original contribution_" is something that might be helpful for TOCHI to describe in more detail for potential authors, and indeed something that we should discuss as a field.
_Infrastructure: Digital Libraries and Machine Learning Approaches._ Our digital libraries are poorly documented and barely evaluated. Results can vary wildly over time. This is sometimes expected (i.e., numbers go up as more research is published); however, results sometimes also _decrease_ due to adjustments in metadata8. Metadata in publication databases often has errors and cannot necessarily be relied on [28, 29]. Additionally, what databases cover is not always entirely clear and can vary based on institutional access--e.g., the "_Web of Science Core Collection_" consists of different sub-data sets depending on university subscription [48]. This makes one of the fundamental goals of systematic reviews--namely, that it should be possible to reproduce the results--rather difficult. It is considered best practice in other fields to conduct searches on multiple databases. Perhaps we need to consider doing multiple searches over several days to try to mitigate database fluctuation. However, perhaps we also need to re-consider or be clearer about what we require for a systematic review to be "reproducible": What do we mean when we talk about reproducing results? For example, as long as the search queries themselves are reported, and the records of papers found in each step, then perhaps we should not require the search to yield the same number of results, simply because we cannot rely on the databases to be consistent.
Footnote 8: And on some days, databases are simply buggy: on one memorable occasion, we noted the ACM DL reporting 0, then 200+, then 0, then 500+ results for the same search within a single day.
Still, there are additional issues with designing multiple searches to be _comparable_ across databases. Databases use a variety of different keyword and filter options, and often they are only poorly documented. Guidance for creating comparable searches across, for example, ACM and Scopus, would be highly beneficial for synthesis in our field. Current ACM DL tutorials are not sufficient for this purpose, and contacting the ACM DL team about database and search specifics has been unproductive. One option is to work in concert with publishers in a participatory design project with ourselves as the target "end-users" directing the design of these
systems in a more fruitful direction for supporting review work. Another option to consider is automation. With the growth in artificial intelligence and machine learning, the landscape of digital infrastructure surrounding databases and publication searches now also features tools for (semi-)automated search (e.g., Research Rabbit9) or screening (e.g., ASReview10). These may be of interest for reviewing the field, but to what extent they can and should be used in formal systematic reviews is an open question--especially as the exact data sources and how often they are updated is often not made explicit. Perhaps a participatory design approach can again be useful as a starting point.
Footnote 9: [https://www.researchrabbit.ai/](https://www.researchrabbit.ai/), last accessed: 23 Nov, 2022
Finally, we note that there is little information on what kinds of publications relevant to HCI are found within which databases. Gusenbauer [34] created a discipline-based coverage map of a wide range of academic databases, giving us a first hint. However, HCI was not included in this disciplinary coverage map; it may be worth creating a disciplinary coverage map of databases for HCI specifically. This would give us a better idea of what kind of HCI research can be found in which database, and provide guidance on which databases to choose for specific research questions.
## 6. Engage: A call to action for research synthesis in HCI
As a research community, we need to come together and decide what actions need to be taken towards building a set of standards that is rigorous yet inclusive of the diversity of work that we do in HCI. We do not aim to be prescriptive in this manifesto, but we do offer some ideas for what to aim for based on the discussion so far:
* a shared understanding of what should be considered a systematic review, the desired and possible outcomes of systematic reviews, and the forms that systematic reviews can take when exploring diverse evidence resulting from different research paradigms (quantitative, qualitative, mixed-methods, as well as design research methods)
* a shared understanding of what best practices we want to encourage in secondary research methods: double screening, extraction, critical appraisal, protocol development and preregistration, etc., specifically through an agreement on standards (e.g., for critical appraisals of primary research: what kind and when)
* unearthing how the digital libraries relevant to HCI work (e.g., query filters) and what they cover
* better infrastructure in our publishing ecosystem: is it time for a subcommittee or track for research synthesis and meta-science? Should we require structured abstracts or checklists for primary and/or secondary research?
* robust descriptions of and/or access to the interactive artifacts reported on in primary research papers to support research synthesis about them
* exploration of the design and use of living reviews [26]--as interactive systems, HCI expertise could be particularly beneficial here
Our goal is to begin a discussion and gather different experiences and opinions of researchers on the role that systematic reviews should play, on what a systematic review should look like, and how systematic reviews are currently valued and received within the CHI community--and more broadly, within HCI as a whole.
## Acknowledgments
Many thanks to Maximilian Altmeyer for feedback on an earlier draft of this alt.chi paper, and to all of our colleagues for the many lively discussions on these topics over the years.
|
2304.03745 | Assessing Perceived Fairness from Machine Learning Developer's
Perspective | Fairness in machine learning (ML) applications is an important practice for
developers in research and industry. In ML applications, unfairness is
triggered due to bias in the data, curation process, erroneous assumptions, and
implicit bias rendered within the algorithmic development process. As ML
applications come into broader use, developing fair ML applications is critical.
Literature suggests multiple views on how fairness in ML is described from the
user's perspective and students as future developers. In particular, ML
developers have not been the focus of research relating to perceived fairness.
This paper reports on a pilot investigation of ML developers perception of
fairness. In describing the perception of fairness, the paper performs an
exploratory pilot study to assess the attributes of this construct using a
systematic focus group of developers. In the focus group, we asked participants
to discuss three questions- 1) What are the characteristics of fairness in ML?
2) What factors influence developer's belief about the fairness of ML? and 3)
What practices and tools are utilized for fairness in ML development? The
findings of this exploratory work from the focus group show that to assess
fairness, developers generally focus on the overall ML application design and
development, i.e., business-specific requirements, data collection,
pre-processing, in-processing, and post-processing. Thus, we conclude that the
procedural aspects of organizational justice theory can explain developer's
perception of fairness. The findings of this study can be utilized further to
assist development teams in integrating fairness in the ML application
development lifecycle. It will also motivate ML developers and organizations to
develop best practices for assessing the fairness of ML-based applications. | Anoop Mishra, Deepak Khazanchi | 2023-04-07T17:30:37Z | http://arxiv.org/abs/2304.03745v1 | # Assessing Perceived Fairness from Machine Learning Developer's Perspective
###### Abstract
**Fairness** in machine learning (ML) applications is an important practice for developers in research and industry. In ML applications, unfairness is triggered due to bias in the data, curation process, erroneous assumptions, and implicit bias rendered within the algorithmic development process. As ML applications come into broader use, developing fair ML applications is critical. Literature suggests multiple views on how fairness in ML is described from the **user's perspective** and students as **future developers**. In particular, ML developers have not been the focus of research relating to perceived fairness. This paper reports on a pilot investigation of **ML developers' perception of fairness**. In describing the perception of fairness, the paper performs an exploratory pilot study to assess the **attributes** of this construct using a systematic focus group of developers. In the focus group, we asked participants to discuss **three questions**: 1) What are the characteristics of fairness in ML? 2) What factors influence developer's belief about the fairness of ML? and 3) What practices and tools are utilized for fairness in ML development?
The findings of this exploratory work from the focus group show that, to assess fairness, developers generally focus on the overall ML application design and development, i.e., business-specific requirements, data collection, pre-processing, in-processing, and post-processing. Thus, we conclude that the procedural aspects of **organizational justice theory** can explain developer's perception of fairness. The findings of this study can be utilized further to assist development teams in integrating fairness in the ML application development lifecycle. It will also motivate ML developers and organizations to develop best practices for assessing the fairness of ML-based applications.
machine learning perceived fairness perception procedural justice theory ML developers
## 1 Introduction
Machine learning (ML) is practiced in the research, industry, and education sectors to make complex decisions and assist humans [1, 2, 3, 4, 5]. Machine learning techniques in natural language processing and computer vision are employed in developing decision-making systems and decision-support systems [2, 3]. Measuring fairness and developing fair ML applications have become widespread practices in research, industry, and academia. However, the literature suggests that fairness in ML is a very subjective term that possesses many definitions [6, 7, 8]. Mehrabi _et al._ 2021 define fairness in ML as the absence of favoritism toward an individual or a group based on acquired characteristics [7]. Similarly, Pessach _et al._ 2022 describe different notions of fairness, such as individual and group fairness. The subjectivity in fairness is caused by the introduction of bias. An unfair model in ML is triggered mainly by bias in the data and erroneous assumptions rendered within the algorithmic development process [4]. Literature suggests that bias in data takes many shapes and forms; thus, algorithmic bias and algorithmic fairness are discussed in the literature [6, 7, 9]. Algorithmic fairness is an area of research applied to mitigate bias and explain fairness in AI systems [7, 4]. Researchers have multiple views and descriptions of algorithmic fairness, so it lacks a rigid definition. Mehrabi _et al._ 2021 define a large class of biases by representing a feedback loop relationship between data, algorithm, and users [7].
They argue that most of the definitions and work on fairness were developed in the West. When these are applied to different problem types, historical bias, contextual bias, and representation bias are introduced [7, 4], which may lead to an unfair AI decision-making system. However, a few researchers see fairness in ML systems as having multi-dimensional aspects spanning psychology, political science, and economics [10, 4].
Investigating notions of fairness is important because, if not considered, incorrect outcomes or perceptions may lead to severe societal and business concerns. The literature includes scenarios discussing severe effects on society: Amazon's AI-based recruiting system reportedly exhibited bias against women in the recruitment process; Apple's credit card approval process was reportedly biased against women; and Stanford's COVID-19 vaccination algorithm was biased towards a specific group [11, 12, 13, 4].
Researchers have tried to understand fairness from the sociotechnical domain by conducting studies on the human perception of fairness. The literature describes that different stakeholders have distinct interpretations of fairness concerning the same ML model [14, 4]. These studies are conducted to understand the relationship between human perceptions of fairness and the notions of fairness proposed in the literature [8, 14, 15, 16, 17, 18]. Harrison _et al._ 2022 conducted an empirical user study to investigate the trade-off between competing notions of fairness and the human perception of fairness, such that ML models can embed these trade-offs to build fair ML applications [19]. Based on the literature, perceived fairness in ML is described as the human perception and interpretation of ML models based on the outcomes predicted by the ML model.
Prior studies have investigated users' perception of fairness. Most of these studies performed a randomized between-subject experiment on Amazon's Mechanical Turk; the validity of research on Amazon's Mechanical Turk has been questioned in the past [20, 4]. Kasinidou _et al._ 2021 and Kleanthous _et al._ 2022 investigated students' (as future developers) perception of fairness and justice in algorithmic decision-making in three separate scenarios. However, a potential gap lies in the understanding of the human perception of fairness: the ML developer's perception of fairness has not been studied. ML developers are crucial actors for investigating perceived fairness because they are responsible for designing, developing, and evaluating ML models in the ML development process. We conducted virtual focus groups to explore and assess the characteristics and factors that influence ML developers' perceived fairness. The goal of this research study is to assess the characteristics of perceived fairness from the developer's perspective. Thus, we asked ML developers three questions in the systematic focus groups:
1. How would you describe the fairness of ML applications from your (developer's) perspective?
2. What are the factors that influence your (developer's) belief about the perceived fairness of ML applications?
3. What practices or tools do you utilize to practice fairness?
Inductive thematic analysis and LDA-based topic modeling are utilized to accomplish the research objective of assessing the developer's perceived fairness. Section 3.2 discusses this approach in more detail. Our findings help us understand the relation between actual ML developers' perceptions and the notions of fairness proposed in the literature. Researchers explain that notions of fairness are associated with distributive fairness from organizational justice theory, which measures fairness based on outcomes [21, 22, 6]. The present trend in fairness advocates for developing procedural notions of fairness, i.e., procedural fairness, which explains fairness based on the process [23, 21, 24]. This research study's findings conclude that the developer's perceived fairness relates to the procedural fairness of organizational justice theory in the decision-making process. Based on the findings, our research contributions are:
1. Developed attributes (themes) of ML developer's perceived fairness,
2. Investigated the association between ML developer's perceived fairness and procedural fairness,
3. Proposed definition of perceived fairness from ML developer's perspective
This research study's findings will help researchers and motivate ML developers and practitioners to understand perceived fairness from a multi-dimensional perspective. This research will also help organizations understand perceived fairness and provide insight into how fairness is addressed when interpreting ML applications. The following sections include related work discussing prior research on fairness, practices and tools for fairness, and the human perception (users and students) of fairness; they also discuss the gaps in existing methods. Section 3.2 presents the approach of inductive thematic analysis. Further sections include findings and discussions, the association of procedural fairness and perceived fairness, the definition of perceived fairness, and conclusions.
## 2 Background
### Algorithmic fairness in decision-making
In artificial intelligence (AI), machine learning (ML) is described as a data-driven process that learns from experience with data without being explicitly programmed [3; 1]. Advances in ML approaches allow models to assist humans in decision-making tasks. Useful representations are learned from data, and these determine the ML model's performance. However, if the data contain biases, then models produced by the data-driven process may inherit the bias. Since ML models like neural networks are black boxes whose intermediate processes are opaque, it becomes difficult to assess whether decisions are justified or biased [4; 7]. Prior research indicates critical concern about data collection methods, because flawed data include bias, which can result in unfair decisions [7; 25; 6]. Thus, it is essential to understand the definition of fairness in ML. The literature suggests that an ML model is unfair if it produces unfavorable treatment of people based on specific demographics [26; 27; 28; 4]. Fairness in ML is a popular and multi-dimensional concept that depends on cultures, objectives, contexts, and problem definitions [10; 14; 29; 30]. The black-box nature of ML models, which lack explainability, can lead to harmful consequences. Techniques utilizing computational and mathematical frameworks are used to support interpretability and bias identification, improving algorithmic fairness; examples include IBM's AI Fairness 360, the TensorFlow constrained optimization framework, and Fairlearn [31; 32; 33; 34; 4]. Deng _et al._ 2022 empirically explored the use of ML fairness toolkits by ML practitioners. They concluded that practitioners need toolkits that allow them to contextualize, collaborate, and communicate about explainability with non-technical peers for ML fairness [34].
### Human perception of fairness
Prior research advocates that, for developing a fair decision-making system, consultation with subject-matter experts, ethical checks, planning, and human checks must be considered [12; 25; 2; 4]. Lee _et al._ 2017 found that different stakeholders perceive distinct interpretations of fairness for the same ML model [14]. For example, an online experiment conducted by Wang _et al._ focusing on AI/ML fairness concluded that if algorithmic outcomes were inclined towards an individual, they were rated as fair by that user [10; 4]. Researchers have made efforts to understand fairness from the sociotechnical domain by conducting studies that investigate humans' perception of fairness. These studies are organized to understand the relationship between humans' perception of fairness and the notions of fairness proposed in the literature [8; 14; 15; 16; 17; 18]. Woodruff _et al._ 2018 explored the perceived fairness of users from a marginalized population; they found that ML fairness interacts with users' trust [18]. Berkel _et al._ 2021 performed an online crowdsourcing study to investigate how information presentation influences humans' perceived fairness [35]. Srivastava _et al._ 2019 conducted a user study to investigate the relationship between mathematical notions and human perceived fairness [8]. Lee _et al._ 2021 performed user experiments focusing on the perception of fairness among Black Americans in AI healthcare for skin cancer diagnosis. This research seeks to examine individual-level differences in trustworthiness between human decision-making and AI-based algorithmic decision-making [17].
### Fairness from organizational justice theory
Piero _et al._ 2014 explain that fairness is concerned with social norms and governing rules [36; 4]. They discuss four forms of fairness considering the fourfold model of justice theory: distributive justice, procedural justice, interpersonal justice, and informational justice. They claim that perceived fairness is highly correlated with psychological well-being and distress. Algorithmic fairness has been explored broadly through distributive (justice) fairness; only a few attempts have been made to explore procedural fairness [23; 21]. Distributive fairness is defined as the perceived fairness of the process for distributing rewards across group members [37; 38; 39; 40; 4], whereas procedural justice is the perceived fairness of the rules and decision processes used to determine outcomes [38; 23; 41; 39; 22].
#### 2.3.1 Procedural Fairness
A few studies explain the advantages of procedural fairness over distributive fairness. Morse _et al._ 2021 and Rueda 2022 suggest that procedural fairness in ML augments explainability and transparency in the ML model [24; 21]. Biran _et al._ 2017 claim that bias and fairness are highly related to interpretation and explainability [29; 4]. In machine learning, explanation is complex due to the black-box nature of models. Doshi-Velez _et al._ 2017 explain the relation of interpretability to reliability and fairness by discussing real-world scenarios [30; 4]. The definitions of fairness, explainability, and interpretability are motivated by multiple theoretical lenses, including psychology, philosophy, cognitive science, and ethics [30; 1; 2; 4]. However, in this study, we are targeting organizational justice theory. We identified components of procedural fairness from the literature to accomplish our objective. Lee _et al._ 2019 describe procedural fairness using transparency, control, and principle [23]. Rueda 2022 explains procedural fairness as avoidance of bias, accountability, and transparency [24]. Morse _et al._ 2021 discuss the components of procedural fairness from Leventhal 1980 as bias suppression, consistency, representativeness, correctability, accuracy, and ethicality [41; 21]. These components are utilized and discussed in detail in Section 4.2.
Most of the studies discussed in Section 2.2 include participants who are users or students. A few studies utilized Amazon Mechanical Turk for user studies; however, the validity of research on Amazon's Mechanical Turk has been questioned in the past [20; 4]. No studies have explored the ML developer's perception of fairness. The literature suggests that consulting with subject-matter experts, ethical checks, planning, and human checks must be considered for developing a fair decision-making model [25; 2; 4]. Studying perceived fairness through the lens of organizational justice theory will help define and describe attributes of perceived fairness and develop a conceptualization of the factors that influence the beliefs of ML developers. Thus, it is essential to understand the characteristics of perceived fairness from the ML developer's perspective.
## 3 Methods
### Virtual Focus Group
A focus group in research is a group discussion of people with similar characteristics, where they share experiences and discuss topics in order to generate data [42; 43]. Focus group discussions are utilized as a qualitative approach to gain an in-depth understanding of social issues [44; 43]. In this research study, focus groups are used to explore ML developers' perceptions of fairness. The participants targeted for the focus groups are ML developers, data scientists, and ML engineers from industry who participate in the design and development of ML applications.
#### 3.1.1 Focus group participants
Motivated by the literature and to explore the attributes of ML developers' perceived fairness, we conducted three virtual focus groups on Zoom. Anonymity is ensured for all participants, companies, and intermediate representatives. Table 1 shows the participation details for each company.
Three companies located in the United States participated in the virtual focus groups. All three companies have more than 1000 employees. The number of participants who actually participated is lower than the number who committed to participate. In total, nine participants from three distinct companies participated in three virtual focus groups. The diversity of the companies is shown in Table 1. The job-role portfolio of the participants includes data scientists, senior data scientists, and software developers. Out of 9 participants, 3 are female and 6 are male. Overall, \(75\%\) were extremely familiar with ML concepts, had 3 to 5 years of professional ML experience, and had taken a college-level ML course; \(25\%\) of participants are experts with more than 5 years of professional experience in ML development.
#### 3.1.2 Study Design
The research has Institutional Review Board (IRB) approval for conducting the focus groups. The virtual focus group was designed to develop an understanding of perceived fairness and its attributes among developers. This research study utilized MIRO as a brainstorming tool and Zoom to conduct the focus groups. MIRO is an online visual platform where teams can connect, collaborate, create, and brainstorm together (see [https://miro.com/](https://miro.com/)). All sessions were conducted synchronously, and each company along with its participants was assigned one focus group. Each focus group was given a 75-minute window to participate. Participants from each company were provided with their own unique session ID on MIRO. Participants were notified by email, along with MIRO and Zoom web links, in advance of the session so they could plan for their participation. Reminder e-mails were sent during the session to encourage more participation. After logging in at Zoom, each company's participants received an introductory session about the focus group agenda. Each participant was asked to first fill out a questionnaire for demographic details. The second step on the agenda was the brainstorming questions, and each participant was asked to enter as many ideas as possible on MIRO during the 75-minute window, as well as to comment on and discuss fellow participants' ideas. The participants recorded their ideas and discussion in the form of notes on MIRO. These notes include phrases, short sentences, and long sentences. The final step on the agenda was a closing discussion based on the brainstorming session. Table 2 shows the agenda and instructions for each focus group; the agenda in Table 2 was repeated for each company listed in Table 1.
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**Company ID** & **Type of Company** & **Participants committed to participate** & **Participants who actually participated** \\ \hline
1 & Life insurance, finance, media & 5 & 3 \\ \hline
2 & Railroad & 11 & 3 \\ \hline
3 & Transportation and logistics & Unknown & 3 \\ \hline \end{tabular}
\end{table}
Table 1: Participants in the focus groups
### Developing attributes/themes from focus group data
#### 3.2.1 Thematic analysis
The focus group data collected from the participants' brainstorming are qualitative in nature. Thus, the data are further analyzed utilizing thematic analysis. Thematic analysis is broadly used for the analysis of qualitative data, recognizing different patterns and allowing researchers to formulate rich, detailed, and transparent meanings [45]. Braun et al. [45] explain that thematic analysis involves familiarization, code formulation, generation of themes, theme review, defining and naming themes, and report formation. In this research study, thematic analysis is used to identify emerging themes to formulate and assess the perceived fairness of ML developers.
#### 3.2.2 Topic Modeling
Topic modeling is a statistical modeling approach for discovering abstract "topics" in a collection of documents, such as newspapers or a digital corpus [46]. The Latent Dirichlet Allocation (LDA) approach is used for conducting the topic modeling on the focus group data. LDA is a popular topic modeling technique for extracting topics from a collection of documents [47]. The details of LDA-based topic modeling can be found in [47].
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline
**Activity** & **Instructions** \\ \hline Introductory session & Welcoming the participants; Making participants aware of Institutional Review Board (IRB) approval for conducting the focus group; Introduction to the focus group with agendas; Introduction of the topic and research objective; Tools introduction and demo; Request to fill out demographic questionnaire; \\ \hline Beginning Questionnaire & Please answer all questions to the best of your knowledge. All questions are voluntary. After you have answered all questions, ”Notify” the host, and you will be taken back to the agenda. When you are back at the agenda, go to Brainstorming Question 1 and you can start discussing and entering ideas. \\ \hline Brainstorming Question 1 & How would you describe the fairness of ML applications from your perspective? Alternatively, what are the characteristics of the perceived fairness of ML applications? Think broadly to include individual behaviors and processes based on your past experiences like projects and team meetings. Enter 5 to 10 separate ideas. Comment on ideas other people have entered and/or enter more of your own ideas. Feel free to expand on other people’s ideas. \\ \hline Brainstorming Question 2 & What are the factors that influence your belief about the perceived fairness of ML applications? Think broadly to include individual behaviors and processes based on your past experiences like projects and team meetings. Enter 5 to 10 separate ideas. Comment on ideas other people have entered and/or enter more of your own ideas. Feel free to expand on other people’s ideas. \\ \hline Brainstorming Question 3 & What practices or tools do you utilize to mitigate bias and practice fairness in ML application development? Think broadly to include individual behaviors, processes, technologies, and tools based on your past experiences like projects and team meetings. Enter 5 to 10 separate ideas. Comment on ideas other people have entered and/or enter more of your own ideas. Feel free to expand on other people’s ideas. \\ \hline Closing Discussion & Summarizing all participant’s ideas and requesting their agreement for closing the agenda. \\ \hline \end{tabular}
\end{table}
Table 2: Focus group agenda and instructions for participants
An inductive approach is used to derive the themes by coding qualitative data into clusters of similar entities and conceptual categories. The theme derivation is done by integrating the thematic analysis and LDA-based topic modeling approaches. The themes helped formulate the theoretical explanation and definition of ML developers' perceived fairness in machine learning.
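To make the topic-modeling step concrete, the following is a minimal, hypothetical sketch of LDA applied to focus-group notes using scikit-learn. The example notes, the number of topics, and the preprocessing are illustrative assumptions and do not reproduce the study's actual pipeline.

```python
# A minimal sketch of LDA-based topic modeling on focus-group notes.
# The notes below are hypothetical placeholders, not the study's data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

notes = [
    "data collection must represent the population to avoid skewed results",
    "choose the appropriate ML algorithm for balanced vs unbalanced data",
    "residual analysis and performance metrics across protected groups",
    "training data must precisely represent the business requirements",
]

# LDA operates on bag-of-words counts.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(notes)

# The topic count (3 here) would be tuned in practice.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(counts)

# Inspect the top words per topic to support theme derivation.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```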
## 4 Findings and Discussions
### Results and findings
In this section, we present the findings from the focus group data collected from ML developers. The focus groups suggest that participants' ideas and discussions are influenced by their personal experience, knowledge base, and practice gained through developing ML applications. The inductive approach, using thematic analysis and LDA-based topic modeling, helped derive themes from the developers' discussions in the focus groups. Table 3 describes the derived themes, including their attributes, which are the sub-themes derived from the focus group data. The supporting evidence from the focus groups in Table 3 shows transcripts from the focus group discussions. The themes are bias mitigation, data, model design, model validity, business rules, and user interaction, which describe the attributes of the developer's perceived fairness. All the themes, based on the ML developers' discussions, are described below.
#### 4.1.1 Bias mitigation
: An unfair model in ML arises due to bias in the data [6; 7; 9]. The findings imply that developers are concerned with training ML models that are true representatives of the population. Key bias forms involved in the discussion are historical bias, unintended bias (e.g., bias due to location), implicit bias, and human-annotated bias. The bias mitigation techniques used by the developers include hold-out sampling, rigorous residual analyses, chi-square tests on residuals (to ensure no statistically significant patterns exist in residual distributions), checking for implicit bias by protected classification, and examining differences in model predictions across groups.
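A sketch of one such residual check is given below. It illustrates the general idea rather than the participants' actual tooling; the residuals, group labels, and sign-based binning are synthetic assumptions.

```python
# Illustrative chi-square test for patterns in residual signs across a
# protected attribute; all data here are synthetic placeholders.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
residuals = rng.normal(size=200)            # model residuals
group = rng.choice(["A", "B"], size=200)    # hypothetical protected attribute

# Cross-tabulate residual sign against group membership.
table = np.array([
    [np.sum((group == g) & (residuals >= 0)) for g in ("A", "B")],
    [np.sum((group == g) & (residuals < 0)) for g in ("A", "B")],
])

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # a large p suggests no significant pattern
```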
#### 4.1.2 Data
: This theme explains data and its properties, which highly depend on how the data are collected. The developers' perception suggests that data collection (sampling) is important to capture true representatives of the population. ML developers discussed how data are stored, processed, and transmitted in the ML development process. Data representation in the input data and project requirements is important for building an ML model. Feature engineering and data wrangling techniques are used widely to understand the data. The developers noted that feature engineering helps them analyze how certain variables are weighted based on the historical understanding of the data/problem. Key data representation practices discussed are data transformation, cross-validation, data wrangling, and dimensionality reduction.
#### 4.1.3 Model design
: Model design covers the model development process, incorporating algorithm selection, parameter selection, and training of the ML model. ML developers discussed that a developer must be a "blank slate" while performing the quantitative evaluation of the model with proper metrics, so that no bias is introduced through the developer's actions.
#### 4.1.4 Model validity
: This theme illustrates whether the ML model accomplishes its intended business objective. The developers' discussion indicates that developing explainable models is a key aspect of practicing fairness. Key practices discussed are risk assumptions from use-case testing, evaluating fairness against the true objectives of the scenario, peer review of code by fellow developers, marginal analysis, analyzing true predictions based on demographics, human-in-the-loop evaluation, and evaluating model performance with multiple metrics over time.
#### 4.1.5 Business rules
: Business rules frame the ML problem by defining constraints, rules, ethics, privacy, and stakeholders' goals. The developers' discussion explains that the true objectives and goals of the ML application must be set such that evaluations can be done within the boundaries of the business rules, not based on human choices. They explained that development should not include features that violate privacy.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Themes** & **Attributes** & **Supporting evidence from focus groups** \\ \hline Bias Mitigation & Historical bias, asymmetric bias, selection bias, unintended bias, human bias, implicit bias, outliers & 1) “Data wrangling and derivations are not done in such a way as to be cherry-picking data or unduly biasing the results based on human desire”. \\ \hline Data & Demographics, population, data source, sampling, protected category, balanced/unbalanced data, representation, data review, data diversity, data collection, anonymity, features & 1) “if the data is coming into the model is skewed towards a certain group, the results will reflect it”. 2) “I know there is a belief that machine learning models are biased. To me if the coming data is unbalanced, but the model isn’t doing anything to skew the output results, it is fair”. \\ \hline Model Design & Algorithmic selection, Adaptability, Blank Slate, Model structure, hyperparameters, auto/manual design, active design & 1) “active design changes should be considered for the sake of fairness, as unfair”. 2) “choose the appropriate ML algorithm for the data such that if data is balanced vs unbalanced Data in any stratification factors, use a ML Model appropriate for that design”. \\ \hline Model validity & Residual analysis, performance metrics, human feedback, explainability, risk assumption, output measures, human choices, boundary conditions & “Microsoft Azure studio is utilized as a tool for practicing fairness, as the developers stated that for outcome analysis it has a feature to control fairness by ensuring the accuracy/recall is similar across the protected groups.” \\ \hline Business Rules & Project requirements, business constraints, user’s feedback, requirements, construct, user’s usability, use-case analysis, goal-specific selection, target objective, explainability, ethics, privacy & “training data used to construct the model must precisely represent the requirements of the business product.” \\ \hline User interaction & user’s feedback, user’s usability, explainability to users, case dependent & “one more concern with data insufficiency is when populating the data with median or average for the crucial features, results are not acceptable by the users. So, data MUST be collected appropriately by the application. In Freight Acquisition model (FAM) which is currently deprecated due to data inconsistencies, users believed that model would be giving exact yes/no to call a customer. It took some time to explain the process”. \\ \hline \end{tabular}
\end{table}
Table 3: Themes derived from thematic analysis and LDA topic modeling with transcripts as supporting evidence
#### 4.1.6 Users Interaction
: The findings suggest that developers are user oriented. The developers noted that the business directly deals with people. Thus, a fair ML model should be beneficial to end-users, not biased towards certain groups, and explainable to users. One important aspect of their perceived fairness was explaining the ML flow process to users and domain experts. They further explained that user feedback is recorded and then integrated into the ML process such that ML applications are built with the appropriate context.
**Developers' discussion on _"fairness"_**: Based on the findings of our study and the discussion above, we conclude that the developers' perceived fairness comprises the complete ML process, including privacy, ethics, the intention of ML development, business constraints and goals, explainability to users, and usability for users. Interestingly, one of the developers claimed that fairness in machine learning is a subjective term and that the evaluation of ML models must include the ML-pipeline process.
### Defining Perceived Fairness
In Section 2.3, we reviewed the components of procedural fairness from organizational justice theory identified in the literature. Lee _et al._ 2019 describe procedural fairness using transparency, control, and principle [23]. As per Lee _et al._ 2019, transparency means that the rules of the decision-maker are perceived as fair and warranted, including an explanation of decision outcomes and information representativeness. Control is described as the degree of control over the decision that individuals receive, and principle is defined as demonstrations of consistency, competency, benevolence, and voice. Rueda 2022 explains procedural fairness as avoidance of bias, accountability, and transparency in medical scenarios [24]. Rueda 2022 defines transparency as explaining the workings and processing of ML algorithms that lead to the outcome. Accountability is related to the robustness of the model, and avoidance of bias means not including attributes that can cause unfavorable decisions [24]. Morse _et al._ 2021 discuss the components of procedural fairness proposed by Leventhal 1980 as bias suppression, consistency, representativeness, correctability, accuracy, and ethicality [41; 21]. Consistency refers to the uniformity of decision procedures across people and time; accuracy is the measure of validity and high-quality information; ethicality describes practicing moral standards and values; representativeness describes proper population representation; bias suppression refers to preventing favoritism by the decision maker; and, lastly, correctability refers to approaches for correcting flawed decisions [41; 21].
Table 4 shows an association of the themes describing developers' perceived fairness with the components of procedural fairness proposed by Lee _et al._ 2019, Rueda 2022, and Leventhal 1980. These associations are derived from the union of the procedural fairness components' descriptions in the literature discussed above and the ML developers' discussions in the focus groups. For example, the theme "Data" describes the properties and characteristics of data in the ML development process, as discussed in Section 4.1. This theme aligns with Lee et al.'s 2019 transparency and control because of information representativeness and its impact on decisions in the data-driven process; with Rueda's 2022 avoidance of bias for fair decision-making; and with Leventhal's 1980 consistency and representativeness for uniform decision-making across people and time. Thus, we conclude that the associations in Table 4 illustrate that the procedural aspects of organizational justice theory, i.e., procedural fairness, can explain the developer's perception of ML fairness.
**Definition of _perceived fairness_**: In Section 4.1, we discussed the characteristics of perceived fairness from the developer's perspective. Based on the findings of this study, a developer's perception of fairness relates to aspects of data, user characteristics, understanding of ML model design and validity, and understanding of business rules that impact their behaviors and ability to build ML systems that are free from bias. This implies that the ML systems are designed and built to be fair in processes, transparent in actions (explainable), open to multiple voices being integrated into their development, and impartial to all users in their outcomes.
## 5 Implications and Future Works
### Implications
In this research study, we acknowledge the relationship between the themes mentioned in Table 3 and the ML development process used to develop ML models and applications. The relationship shown in Table 5 is validated by the definitions discussed in the literature [48]. In the literature, the ML process is divided broadly into define-and-plan, pre-processing, in-processing, and post-processing [3; 49; 50]. Each of these ML process stages has distinct objectives for any framed ML problem [48]. The findings will help and motivate ML practitioners, researchers, and organizations to develop and further explore research on formulating fairness in ML. This research study will also help researchers understand perceived fairness in ML in a realistic setting and provide insights into how perceived fairness is
addressed while evaluating ML applications. It will also motivate other ML developers and organizations to develop and practice ethical and fair ML decision-making models to benefit society and businesses.
### Future Works
This is a **work-in-progress** article. As noted, there were only 9 participants, and the focus groups were organized to perform a pilot study of the existing research. However, the results suggest some promising future directions, including a larger survey study.
## 6 Conclusion
In this **pilot research study**, we explored ML developers' perception of fairness through focus groups. Three companies, with nine ML developers in total, participated in the focus groups. An inductive approach, integrating thematic analysis and LDA-based topic modeling, is utilized to derive themes that describe attributes of ML developers' perceived fairness. The findings of the study support two major conclusions: 1) developers' perceived fairness generally focuses on the overall ML application design and development, i.e., business-specific requirements, pre-processing, in-processing, and post-processing; 2) the procedural aspects of organizational justice theory can explain the developer's perception of fairness. Finally, we proposed a definition of perceived fairness from the ML developer's perspective.
## 7 Acknowledgement
This pilot research study acknowledges and thanks all the companies and their participants, along with their managers, whose participation made this study possible. This article acknowledges the GRACA 2021 grant under the title _Perceived Fairness from Developer's Perspective in Artificial Intelligent Systems_. Portions of this research paper were included in the GRACA grant application and in an oral presentation at the research fair of the Office of Research and Creative Activity under the title _Perceived Fairness from Developer's Perspective in Artificial Intelligent Systems_ at the University of Nebraska at Omaha (UNO) in Spring 2022. The virtual focus group for this research study was approved by the Institutional Review Board (IRB 263-21-EX).
\begin{table}
\begin{tabular}{|p{85.4pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Themes** & **Lee et al. 2019** & **Rueda 2022** & **Leventhal 1980; Morse et al. 2021** \\ \hline Bias Mitigation & Control & Avoidance of bias, Accountability & Bias suppression \\ \hline Data & Control, Transparency & Avoidance of bias & Consistency, Representativeness \\ \hline Model Design & Transparency & Transparency & Correctability \\ \hline Model Validity & Transparency, Control, Principle & Transparency, Accountability & Accuracy, Correctability \\ \hline Business Rules & Transparency & Avoidance of bias, Accountability & Ethicality \\ \hline Users Interaction & Transparency, Control, Principle & Accountability & Ethicality \\ \hline \end{tabular}
\end{table}
Table 4: Relationship between themes of fairness and procedural (justice) fairness components from literature
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline
**Themes** & **Relationship with ML process** \\ \hline Bias Mitigation & Define and plan, pre-processing \\ \hline Data & Define and plan, pre-processing \\ \hline Model Design & In-processing \\ \hline Model Validity & Post-processing \\ \hline Business Rules & Define and plan \\ \hline Users Interaction & Define and plan, post-processing \\ \hline \end{tabular}
\end{table}
Table 5: Explaining relationships between attributes of perceived fairness of developers and ML development process |
2304.12890 | MRI Recovery with Self-Calibrated Denoisers without Fully-Sampled Data | Objective: Acquiring fully sampled training data is challenging for many MRI
applications. We present a self-supervised image reconstruction method, termed
ReSiDe, capable of recovering images solely from undersampled data.
Materials and Methods: ReSiDe is inspired by plug-and-play (PnP) methods, but
unlike traditional PnP approaches that utilize pre-trained denoisers, ReSiDe
iteratively trains the denoiser on the image or images that are being
reconstructed. We introduce two variations of our method: ReSiDe-S and
ReSiDe-M. ReSiDe-S is scan-specific and works with a single set of undersampled
measurements, while ReSiDe-M operates on multiple sets of undersampled
measurements and provides faster inference. Studies I, II, and III compare
ReSiDe-S and ReSiDe-M against other self-supervised or unsupervised methods
using data from T1- and T2-weighted brain MRI, MRXCAT digital perfusion
phantom, and first-pass cardiac perfusion, respectively.
Results: ReSiDe-S and ReSiDe-M outperform other methods in terms of
reconstruction signal-to-noise ratio and structural similarity index measure
for Studies I and II, and in terms of expert scoring for Study III.
Discussion: We present a self-supervised image reconstruction method and
validate it in both static and dynamic MRI applications. These developments can
benefit MRI applications where the availability of fully sampled training data
is limited. | Sizhuo Liu, Muhammad Shafique, Philip Schniter, Rizwan Ahmad | 2023-04-25T15:02:33Z | http://arxiv.org/abs/2304.12890v3 | # MRI Recovery with Self-Calibrated Denoisers without Fully-Sampled Data
###### Abstract
In many MRI applications, acquiring fully sampled training data is challenging. We introduce a self-supervised image reconstruction method, termed ReSiDe, capable of recovering images solely from undersampled data. Results from brain and cardiac MRI, along with those from a digital perfusion phantom, demonstrate the performance advantages of ReSiDe over other self-supervised or unsupervised methods.
Sizhuo Liu, Philip Schniter, and Rizwan Ahmad, The Ohio State University
self-supervised, MRI, reconstruction, plug-and-play
## 1 Introduction
Magnetic resonance imaging (MRI) is a well-established non-invasive imaging technique that provides several advantages over other imaging modalities, including exquisite soft-tissue contrast, multiple contrast mechanisms, and radiation-free acquisition. MRI is used in a broad range of clinical applications, including neuro, musculoskeletal, abdominal, and cardiovascular imaging. However, long scan times remain a challenge in MRI as they can reduce patient comfort, make acquisition sensitive to motion, and decrease throughput. Consequently, accelerating MRI has become a highly active area of research, with various acquisition and processing techniques being explored to reduce scan times.[1]
Parallel MRI (pMRI) acquires data in parallel across multiple receive coils and is now available on all commercial MRI scanners.[2] The resulting multi-coil data, even when subsampled below the Nyquist rate, can be jointly processed to reconstruct the underlying image. Typically, pMRI can speed up the acquisition process by a factor of two to three. To achieve further acceleration, pMRI can be combined with methods that utilize prior information about the image. For example, compressed sensing (CS) leverages sparsity-based priors and can be readily paired with pMRI.[3] The combination of pMRI and CS yields higher acceleration rates than pMRI alone, and such reconstruction methods are becoming increasingly available on commercial scanners.
More recently, deep learning (DL) methods have been developed to reconstruct images from highly undersampled MRI data. Several studies suggest that DL methods can outperform sparsity-driven CS methods.[4] Most of the DL methods are based on supervised learning, where a reconstruction network is trained on a large fully sampled training dataset.[5] Outside of a handful of 2D applications, such training datasets are not available. For other applications, e.g., cardiac imaging, collecting fully sampled data may not be feasible.[6] Therefore, self-supervised DL (SDL) techniques have recently gained significant interest for MRI reconstruction.[7] These techniques do not require fully sampled training datasets and instead leverage the redundant information within the undersampled data to guide the training process.
Several SDL methods have been proposed recently for image denoising, including single-instance deep generative prior methods such as deep image prior (DIP) and deep decoder [8, 9]. These methods model an image as the output of a generator network, with both network parameters and input code vectors trained on an image-specific basis. Another popular SDL method is Noise2Noise [10], which denoises images using two noisy copies of the same image. However, acquiring multiple copies of an image is not efficient for MRI. To address this issue, other SDL-based denoising methods have been recently proposed, including Noise2Void [11], Noise2Self [12], and Self2Self [13], which operate on a single noisy image. These methods train a network to predict a pixel from its neighboring pixels or predict one group of pixels from another. Since the noise is assumed to be independent across pixels and thus hard to predict, the network denoises the image by implicitly learning the underlying structure in the image. For SDL-based denoising, Xu et al. took a different approach and proposed a method called Noisy-As-Clean [14]. This method works by adding synthetic noise to the noisy images and training a denoising network to remove the added noise. The trained network is then used to denoise the images that it was trained on. In a separate development, Stein's unbiased risk estimator (SURE)-based loss has been used for unsupervised training of denoising networks [15].
The application of SDL extends beyond denoising, and many of these methods can be applied to solve inverse problems. For instance, DIP can readily solve inverse problems with a known forward operator and has been used for dynamic MRI reconstruction [16]. Scan-specific robust artificial neural network for k-space interpolation (RAKI) is similar to Noise2Void but operates in k-space [17]. RAKI trains on the fully sampled auto-calibration signal (ACS) region to predict missing k-space data from acquired data. Both RAKI and its recent extension, called residual RAKI [18], can be viewed as nonlinear extensions of traditional GRAPPA. However, due to their scan-specific nature, DIP and RAKI are computationally slow. Recently, Yaman et al. proposed self-supervised learning via data undersampled (SSDU), a self-supervised learning method that resembles Noise2Self but with a loss function defined in k-space [19]. In SSDU, the acquired undersampled k-space is divided into two subsets, and an unrolled network is trained to infer images from the first subset such that those images are consistent with the second subset. At the inference stage, the trained network in SSDU can rapidly map an undersampled, aliased image to a fully sampled image. Cole et al. [20] also proposed training a network to map undersampled, aliased images to fully sampled images but with an adversarial loss, where the discriminator is fed two unrelated undersampled images: one from the image reconstruction network output and one from an independent set of measurements. Finally, SURE-based loss has also been recently used for unsupervised training of image reconstruction networks [21, 22].
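To illustrate the k-space splitting idea at the core of SSDU [19], the following is a conceptual NumPy sketch of partitioning the acquired samples into a network-input set and a held-out loss set; the array size and the split ratio are assumptions for illustration only, not the authors' code.

```python
# Conceptual sketch of SSDU-style k-space splitting (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
mask = rng.random((128, 128)) < 0.3      # acquired k-space locations
acquired = np.flatnonzero(mask)

# Assign ~60% of acquired samples to the network-input set (Theta);
# the remainder (Lambda) defines the self-supervised loss.
theta = rng.choice(acquired, size=int(0.6 * acquired.size), replace=False)
lam = np.setdiff1d(acquired, theta)

mask_theta = np.zeros(mask.shape, dtype=bool)
mask_theta.flat[theta] = True
mask_lambda = np.zeros(mask.shape, dtype=bool)
mask_lambda.flat[lam] = True

# The two sets are disjoint and together cover all acquired samples; training
# would then penalize the mismatch on the held-out Lambda locations only.
assert not np.any(mask_theta & mask_lambda)
assert np.array_equal(mask_theta | mask_lambda, mask)
```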
In this work, we propose an SDL method, called recovery with a self-calibrated denoiser (ReSiDe), for image reconstruction. This approach leverages the denoising scheme in Noisy-As-Clean [14] to solve the inverse problem in MRI reconstruction. Additionally, we employ the discrepancy principle to automatically adjust the strength of the denoiser. Finally, we propose a faster version of ReSiDe that is applicable in cases where multiple undersampled sets of measurements are available. Using data from brain MRI, MRXCAT perfusion phantom, and first-pass perfusion MRI, we demonstrate that ReSiDe outperforms other self-supervised and unsupervised image reconstruction methods. These developments significantly expand our preliminary description of ReSiDe [23], which did not include auto-tuning, was applicable only to a single set of measurements, and utilized only one T1- and one T2-weighted image for validation.
## 2 Theory
In MRI, the data are sampled in the spatial frequency domain, called k-space, and the MRI reconstruction entails estimating the underlying image from noisy and potentially undersampled k-space measurements. The measured noisy data are related to the image via
\[\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{\eta}, \tag{1}\]
where \(\mathbf{x}\in\mathbb{C}^{N}\) is a vectorized \(N\)-pixel image, \(\mathbf{y}\in\mathbb{C}^{M}\) is the pMRI data measured from \(C\) receive coils, \(\mathbf{\eta}\in\mathbb{C}^{M}\) is circular white Gaussian noise with variance \(\sigma^{2}\), and \(\mathbf{A}\in\mathbb{C}^{M\times N}\) is a known forward operator that subsumes coil sensitivity maps, discrete Fourier transform, and k-space undersampling. In MRI, the value of \(\sigma^{2}\) can be reliably estimated from noise pre-scan, which typically takes a fraction of a second and can be integrated with any pulse sequence.
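For concreteness, the following is a minimal NumPy sketch of this forward operator and its adjoint, assuming 2D Cartesian sampling and precomputed coil sensitivity maps; the function and variable names are illustrative and not part of any released ReSiDe code.

```python
import numpy as np

def forward_operator(x, smaps, mask):
    """A = M F S: coil-weight the image (S), take the 2D DFT (F),
    and keep only the sampled k-space locations (M)."""
    ny, nx = smaps.shape[1:]
    coil_imgs = smaps * x.reshape(ny, nx)          # S: apply sensitivity maps
    kspace = np.fft.fft2(coil_imgs, norm="ortho")  # F: per-coil 2D DFT
    return kspace[:, mask].ravel()                 # M: sampled entries only

def adjoint_operator(y, smaps, mask):
    """A^H: re-grid the sampled data, inverse DFT, and coil-combine."""
    C, ny, nx = smaps.shape
    kspace = np.zeros((C, ny, nx), dtype=complex)
    kspace[:, mask] = y.reshape(C, -1)
    coil_imgs = np.fft.ifft2(kspace, norm="ortho")
    return np.sum(np.conj(smaps) * coil_imgs, axis=0).ravel()
```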
To reduce acquisition time, the k-space data are often prospectively undersampled to achieve an acceleration rate \(R\triangleq\frac{CN}{M}>1\). At high acceleration rates, the problem in Equation (1) becomes ill-posed. In that case, a common remedy is to inject prior knowledge about \(\mathbf{x}\) using a regularizer, resulting in the optimization problem of the form
\[\widehat{\mathbf{x}}=\operatorname*{arg\,min}_{\mathbf{x}}\frac{1}{\sigma^{2}}\|\mathbf{y }-\mathbf{A}\mathbf{x}\|_{2}^{2}+\mathcal{R}(\mathbf{x}), \tag{2}\]
where the first term enforces data consistency and \(\mathcal{R}(\cdot)\) represents the regularizer. For CS-based MRI reconstruction, popular choices for \(\mathcal{R}(\mathbf{x})\) include total variation and \(\lambda\|\mathbf{\Phi}\mathbf{x}\|_{1}\), where \(\mathbf{\Phi}\) represents a sparsifying transform and \(\lambda>0\) controls the regularization strength [3]. It has been shown that simple sparsity-based regularizers may not fully capture the rich structure in medical imaging [24]. To leverage more complex priors, Venkatakrishnan et al. [25] proposed an algorithmic framework called plug-and-play (PnP). In PnP, an off-the-shelf image denoiser, \(\mathbf{f}(\cdot)\), is called within an iterative algorithm, e.g., primal-dual splitting (PDS) [26], for solving Equation (2). A PDS-based implementation of PnP is given in Algorithm 1. In the subsequent sections, we will use this algorithm as a starting point to develop ReSiDe. Note, the implementation of PnP or ReSiDe is not limited to PDS and can be carried out using other algorithms, including the alternating direction method of multipliers (ADMM) and the fast iterative shrinkage and thresholding algorithm (FISTA) [27].
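Since Algorithm 1 is not reproduced here, the sketch below shows a generic PnP iteration of this kind; the paper's PDS-based variant differs in its exact update rules, and the step size `gamma` and the callables `A`, `AH`, and `f` are assumptions made for illustration.

```python
def pnp_recon(y, A, AH, f, sigma2, T=80, gamma=1.0):
    """Generic PnP loop: alternate a gradient step on the data-consistency
    term (1/sigma^2)||y - Ax||^2 with a call to the plug-in denoiser f."""
    x = AH(y)                                  # zero-filled initialization
    for _ in range(T):
        u = x - gamma * AH(A(x) - y) / sigma2  # data-consistency step
        x = f(u)                               # denoising replaces the prox of R
    return x
```

In ReSiDe, the denoiser `f` is not fixed in advance but is trained from the very data being reconstructed, as described next.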
Although any off-the-shelf denoiser \(\mathbf{f}(\cdot)\) (Line 3 in Algorithm 1) can be used in the PnP framework, recent evidence suggests that the performance of PnP methods can be improved by using application-specific DL-based denoisers [27]. However, training such denoisers requires high-quality images or image patches. For many MRI applications, such training data are not readily available. To address this issue, ReSiDe aims to iteratively train the denoiser using the image or images being recovered. We propose two variations of ReSiDe, i.e., ReSiDe-S and ReSiDe-M, which are described below.
### ReSiDe-S
ReSiDe-S is a scan-specific technique that operates on a single set of undersampled measurements, \(\mathbf{y}\). A PDS-based implementation of ReSiDe-S is given in Algorithm 2. ReSiDe-S differs from PnP in the way the denoiser \(\mathbf{f}(\cdot)\) is trained. Following the work by Xu et al. [14], we propose training a DL-based denoiser by adding synthetic noise to the image being recovered. The denoiser training process is described in Line 3 of Algorithm 2. In summary, \(\mathbf{u}_{t}\) is an intermediate image at iteration \(t\), and \(\mathcal{I}_{i}[\mathbf{u}_{t}]\) represents the \(i^{\text{th}}\) patch from \(\mathbf{u}_{t}\). For training purposes, \(\mathcal{I}_{i}[\mathbf{u}_{t}]\) and \(\mathcal{I}_{i}[\mathbf{u}_{t}]+\mathcal{N}(\mathbf{0},\sigma_{t-1}^{2}\mathbf{I})\) act as "clean" and "noisy" patches, respectively. Here, \(\mathcal{N}(\mathbf{0},\sigma_{t-1}^{2}\mathbf{I})\) represents complex-valued zero-mean white Gaussian noise with variance \(\sigma_{t-1}^{2}\). The denoiser \(\mathbf{f}(\cdot;\mathbf{\theta})\), parameterized by \(\mathbf{\theta}\), is trained using \(P\geq 1\) patches in a supervised fashion by minimizing the loss \(\mathcal{L}(\cdot,\cdot)\), which measures the difference between the denoiser output and clean patches. Once the denoiser is trained, it is then used to denoise the intermediate image \(\mathbf{u}_{t}\) (Line 4 in Algorithm 2), but this time without the added noise. The process of training and applying the denoiser is repeated in each iteration of ReSiDe-S.
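A minimal PyTorch sketch of one such training round is given below; it assumes \(\mathbf{u}_{t}\) is stored as a 2-channel real tensor of shape (2, H, W), and the network, patch counts, and hyperparameter names are illustrative rather than taken from the released code.

```python
import torch
import torch.nn.functional as F

def train_denoiser_round(net, u_t, sigma_t, P=576, ps=64, epochs=10, lr=1e-3):
    """One training round in the spirit of Line 3 of Algorithm 2: patches of
    the intermediate image u_t serve as 'clean' targets; the same patches
    plus synthetic Gaussian noise of std sigma_t serve as 'noisy' inputs."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    H, W = u_t.shape[-2:]
    for _ in range(epochs):
        ys = torch.randint(0, H - ps + 1, (P,)).tolist()
        xs = torch.randint(0, W - ps + 1, (P,)).tolist()
        clean = torch.stack([u_t[:, y:y + ps, x:x + ps]
                             for y, x in zip(ys, xs)])   # (P, 2, ps, ps)
        noisy = clean + sigma_t * torch.randn_like(clean)
        loss = F.mse_loss(net(noisy), clean)             # supervised MSE loss
        opt.zero_grad(); loss.backward(); opt.step()
    return net
```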
The strength of the denoiser is controlled by \(\sigma_{t}^{2}\), with larger \(\sigma_{t}^{2}\) leading to more aggressive denoising. As evident from our preliminary results [23], the value of \(\sigma_{t}^{2}\) should be larger at the start to speed up convergence and then decreased over iterations to avoid over-smoothing of the recovered images. To address this issue, we propose using the discrepancy principle to auto-tune \(\sigma_{t}^{2}\) (Line 6 in Algorithm 2) [28]. If Algorithm 2 is run with a fixed user-defined \(\sigma_{0}^{2}\), one would expect the value of the final residual sum of squares \(\big{(}\|\mathbf{A}\widetilde{\mathbf{x}}-\mathbf{y}\|_{2}^{2}\big{)}\) to monotonically increase with an increase in \(\sigma_{0}^{2}\). By leveraging this monotonic relationship, we adapt \(\sigma_{t}^{2}\) by using \(\frac{M\sigma^{2}}{\|\mathbf{A}\mathbf{x}_{t}-\mathbf{y}\|_{2}^{2}}\) as a multiplicative corrective term. This way, the value of \(\sigma_{t}^{2}\) is auto-adjusted to drive the ratio \(\frac{M\sigma^{2}}{\|\mathbf{A}\mathbf{x}_{t}-\mathbf{y}\|_{2}^{2}}\) toward one. The value of \(\alpha>0\) (Line 6 in Algorithm 2) controls the contribution of the corrective term, with larger values leading to a more rapid adjustment of \(\sigma_{t}^{2}\). Optionally, a user-defined scalar \(\tau>0\) can be used to provide further control over the strength of the denoiser, with smaller values generating noisy but sharper images. Note that adjusting \(\sigma_{t}^{2}\) is more challenging because it must be updated over iterations, whereas \(\tau\) can be tuned once and then kept constant for a given MRI application.
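The auto-tuning rule can be sketched as follows; this is one plausible multiplicative form consistent with the description above, and the exact expression used in Line 6 of Algorithm 2 may differ. The optional scalar \(\tau\) can simply scale the returned value.

```python
import numpy as np

def update_sigma(sigma_prev, x_t, y, A, sigma2, M, alpha=1.0):
    """Discrepancy-principle auto-tuning: hypothetical multiplicative rule."""
    # ratio > 1 when the residual is still below the expected noise level
    ratio = (M * sigma2) / (np.linalg.norm(A(x_t) - y) ** 2)
    return sigma_prev * ratio ** alpha  # pushes the ratio toward one
```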
### ReSiDe-M
There are two major limitations of ReSiDe-S. First, it requires training a network in each iteration, which is time-consuming and thus unrealistic for clinical deployment. Second, ReSiDe-S is a scan-specific method and strictly operates on a single set of undersampled measurements; however, for most MRI applications, more than one set of undersampled measurements is generally available. To reduce computation time at the time of inference and to leverage the availability of multiple sets of undersampled measurements, we propose ReSiDe-M, which is outlined in Algorithm 3. Here, a tilde over a symbol indicates a joint representation of \(K\geq 1\) sets of measurements. For example, \(\widetilde{\mathbf{A}}\), \(\widetilde{\mathbf{y}}\), and \(\widetilde{\mathbf{\sigma}}^{2}\) indicate the forward operator, k-space measurements, and average noise variance, respectively, from \(K\) sets of measurements.
ReSiDe-M is implemented in two steps: training and inference. The training step is similar to ReSiDe-S, where both image recovery and denoiser training happen in tandem. However, in contrast to ReSiDe-S, the denoiser training in ReSiDe-M is performed using multiple undersampled datasets. For \(K=1\), the training phase of ReSiDe-M is identical to that of ReSiDe-S. Like ReSiDe-S, although the training step in ReSiDe-M reconstructs images, its main objective is to store the resulting sequence of denoisers, parameterized by \(\{\mathbf{\theta}_{t}\}_{t=1}^{T}\). Since the denoiser in ReSiDe-M is trained on a more diverse, larger set of patches from multiple images, it is expected to generalize better on unseen images from the same application without further training. The second step in ReSiDe-M is inference, where an unseen undersampled dataset is reconstructed using a PnP algorithm, with the denoising in the \(t^{\text{th}}\) iteration of PnP (Line 10 in Algorithm 3) performed by the pretrained denoiser \(\mathbf{\theta}_{t}\). The computational complexity of ReSiDe-M at the inference stage is similar to that of sparsity-based iterative CS methods. A high-level description of ReSiDe-M can be found in Figure 1.
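At the inference stage, the stored sequence of denoisers is consumed by an ordinary PnP loop without any training; a sketch, with the same illustrative operator callables as before, is given below.

```python
def reside_m_inference(y, A, AH, denoisers, sigma2, gamma=1.0):
    """Inference sketch: a PnP pass that plugs in the stored denoiser for
    iteration t (Line 10 of Algorithm 3) with no further training."""
    x = AH(y)
    for f_t in denoisers:                      # saved denoisers, t = 1..T
        u = x - gamma * AH(A(x) - y) / sigma2  # data-consistency step
        x = f_t(u)                             # pretrained denoiser for step t
    return x
```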
## 3 Methods
### Study I-Brain MRI
In this study, ReSiDe-S and ReSiDe-M were evaluated on T1- and T2-weighted images from the fastMRI dataset [4]. For each contrast, twenty-two sets of multi-coil measurements were used. All T1 images were cropped to \(320\times 320\), and all T2 images were cropped to \(384\times 304\). The multi-coil k-space data were compressed to 8 virtual coils. The data
were retrospectively downsampled at \(R=4\) using two realistic Cartesian sampling masks, i.e., a 1D pseudo-random mask with a 32-line wide ACS region (S1) and a 1D random mask with a 32-line wide ACS region (S2). The S1 mask
was kept fixed across training and testing sets of measurements, while a different S2 mask was randomly drawn for each training and testing instance. The coil sensitivity maps were estimated using ESPIRiT [29]. Sixteen out of the 22 sets of measurements were used for network training in ReSiDe-M and SSDU. Five measurement sets were used for performance evaluation and for comparing ReSiDe-S and ReSiDe-M with CS with an \(\ell_{1}\) penalty on the wavelet coefficients [30], PnP with BM3D denoiser (PnP-BM3D), ConvDecoder [31], and SSDU [19]. Algorithm 1 was used to implement PnP-BM3D, with \(\mathbf{f}(\cdot)\) in Line 3 of Algorithm 1 representing denoising with BM3D.
For each method, one set of measurements was used for manual parameter tuning. These parameters included: regularization strength for CS, denoising level for PnP-BM3D, number of iterations and input size for ConvDecoder, cardinality of the loss mask and number of epochs for SSDU, and the parameters \(\alpha\) and \(\tau\) and the number and size of patches for ReSiDe-S and ReSiDe-M.
### Study II-MRXCAT Perfusion Phantom
Twenty-two sets of perfusion image series from the MRXCAT digital phantom were considered in this study [32]. Each set of measurements was cropped to \(112\times 168\) pixels, with \(32\) frames and \(4\) receive coils. All data were retrospectively downsampled at \(R=4\) using a 1D pseudo-random Cartesian sampling mask (S3) [33]. Due to the interleaving nature of S3, the ACS region was not acquired for individual frames, and the fully sampled time-averaged k-space was used to estimate coil sensitivity maps. To mimic real data, complex-valued white Gaussian noise was added to the k-space measurements, yielding a k-space signal-to-noise ratio of approximately 12 dB. Sixteen sets of measurements were used to train ReSiDe-M, and five sets of measurements were used for performance evaluation and for comparing ReSiDe-S and ReSiDe-M with PnP with BM4D denoiser (PnP-BM4D) implemented using Algorithm 1 and CS with an \(\ell_{1}\) penalty on the spatiotemporal wavelet coefficients [34]. As described in the previous study, one set of measurements was used to manually optimize free parameters in all methods.
### Study III-First-Pass Perfusion Imaging
This study included 22 first-pass perfusion image series from patients clinically referred for a CMR exam at our institute. All measurements were performed on a commercial 1.5T scanner (MAGNETOM Sola, Siemens Healthcare, Erlangen, Germany) with a fast low angle shot (FLASH) sequence using echo-planar imaging (EPI) readout. The data were collected in three different views, i.e., short-axis (SAX), two-chamber (2CH), and four-chamber (4CH) views. The other imaging parameters were: flip angle 25 degrees, temporal footprint \(75.48\) to \(99.36\) ms, matrix size \(144\times 108\) to \(144\times 144\), field of view \(360\times 270\) to \(420\times 380\), echo train length \(4\), echo spacing \(6.06\) to \(6.29\) ms, slice thickness 8 to 10 mm, and number of frames 60. The images were prospectively undersampled in the \(k_{x}\)-\(k_{y}\) domain with an acceleration rate of two and uniform undersampling that was interleaved across time. Sixteen sets of measurements were used to train ReSiDe-M, and five sets of measurements were used for performance evaluation and comparison with PnP with BM4D denoiser (PnP-BM4D) implemented using Algorithm 1 and CS with an \(\ell_{1}\) penalty on the spatiotemporal wavelet coefficients [34]. As described in the previous study, one set of measurements was used to manually optimize free parameters in all methods.
### Quality Assessment
For Studies I and II, where the fully sampled reference was available, image quality was assessed using the structural similarity index (SSIM) and reconstruction SNR (rSNR) in dB, defined as \(20\log_{10}\left(\|\mathbf{x}\|_{2}/\|\mathbf{x}-\mathbf{\tilde{x}}\|_{2}\right)\). For Study III, where the fully sampled reference was not available, each perfusion image series was blindly scored by three expert reviewers, each with more than ten years of experience in cardiac MRI. Each image series, presented as a movie, was scored on a five-point Likert scale (1: non-diagnostic, 2: poor, 3: adequate, 4: good, 5: excellent) in terms of preservation of small details.
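For reference, the rSNR definition above translates directly into code:

```python
import numpy as np

def rsnr_db(x_ref, x_hat):
    """Reconstruction SNR in dB, as defined in the text."""
    return 20 * np.log10(np.linalg.norm(x_ref) /
                         np.linalg.norm(x_ref - x_hat))
```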
Figure 1: A high-level description of ReSiDe-M. At the training stage (a), a convolutional neural network (CNN)-based denoiser is trained on patches from intermediate images \(\mathbf{\tilde{u}}_{t}\) (Line 3 in Algorithm 3). The resulting sequence of denoisers is stored. At the inference stage (b), the reconstruction is performed using a PnP method, which conceptually alternates between data consistency and denoising steps. The denoising in (b) is performed using the denoiser from (a).
### Implementation of ReSiDe
In Study I, we extracted randomly positioned \(P=576\) patches and \(P=2{,}306\) patches to train the denoiser in ReSiDe-S and ReSiDe-M, respectively. For ReSiDe-M, the \(2{,}306\) patches were evenly distributed across the 16 training images. The patch size was fixed at \(64\times 64\). For Studies II and III, we extracted randomly positioned \(P=288\) patches and \(P=4{,}608\) patches to train the denoiser in ReSiDe-S and ReSiDe-M, respectively. For ReSiDe-M, the \(4{,}608\) patches were evenly distributed across the 16 training image series. The patch size was fixed at \(64\times 64\times 20\). The mean squared error was used as a cost function to train the denoiser. The real and imaginary parts were split into two channels. We trained the network with the structure shown in Supporting Information Figure S1. Each convolutional layer had 128 kernels with size \(3\times 3\) for Study I and size \(3\times 3\times 3\) for Studies II and III. We used the Adam optimizer with a learning rate of \(10^{-3}\) for Study I and \(10^{-4}\) for Studies II and III. The training in ReSiDe-M was performed on an NVIDIA RTX 2080 Ti for Study I and an NVIDIA RTX 3090 for Studies II and III. For Study I, the measurement noise variance \(\sigma^{2}\) was estimated from the outer fringes of k-space. For Study II, the noise was synthetically added; so, the value of \(\sigma^{2}\) was precisely known. For Study III, the value of \(\sigma^{2}\) was estimated from the noise pre-scan, which was included in the raw data file. In Studies I, II, and III, the number of iterations, \(T\), for ReSiDe-S and ReSiDe-M was set to 80, 60, and 60, respectively. Within each iteration, the denoiser was trained for a total of ten epochs. The code for ReSiDe-S can be downloaded from [https://github.com/sizhudiu/ReSiDe](https://github.com/sizhudiu/ReSiDe).
## 4 Results
### Study I-Brain MRI
Figure 2 and Figure 3 display examples of reconstructed T1- and T2-weighted images using undersampling masks S1 and S2, respectively. The first row presents the true image obtained from fully sampled k-space, alongside images reconstructed by CS, PnP-BM3D, ConvDecoder, SSDU, ReSiDe-S, and ReSiDe-M methods. The second row features two magnified regions from the images in the first row. Red arrows indicate visible artifacts or blurring present in the reconstructed images. In the third row, the leftmost panel illustrates the undersampling masks, while the remaining panels depict error maps from various reconstructions after a five-fold amplification. The top section of Table 1 summarizes rSNR and SSIM values averaged over five T1- and T2-weighted images employing S1 and S2 masks.
### Study II-MRXCAT Perfusion Phantom
Figure 4 presents a representative frame from reconstructions of an MRXCAT perfusion phantom. The first row displays the true image derived from fully sampled k-space, as well as images reconstructed using CS, PnP-BM4D, ReSiDe-S, and ReSiDe-M methods. The middle row shows a magnification of two selected regions. Red arrows emphasize details that are partially or entirely lost in some of the reconstructed images. In the third row, the leftmost panel illustrates the undersampling masks in phase encoding (vertical) and temporal (horizontal) dimensions. The readout dimension, which is not shown, is fully sampled. The remaining panels in the third row depict error maps after a five-fold amplification. The last row in Table 1 summarizes rSNR and SSIM values averaged over five MRXCAT datasets with the S3 mask for CS, PnP-BM4D, ReSiDe-S, and ReSiDe-M.
### Study III-First-Pass Perfusion Imaging
Figure 5 shows a representative frame from one of the first-pass perfusion image series. Reconstructions from CS, PnP-BM4D, ReSiDe-S, and ReSiDe-M are shown. The top row shows the entire frame, and the bottom row features two magnified regions from the images in the first row. The red arrows highlight details that are partially or completely lost in some of the reconstructed images. In the case of the green box, the red arrows point to the leaflets of the mitral valve. An additional frame from a different image series is shown in Supporting Information Figure S2. For CS, PnP-BM4D, ReSiDe-S, and ReSiDe-M, Table 2 provides the image quality scores averaged over five image series from three CMR experts, including two cardiologists.
## 5 Discussion
In this work, we present an SDL method, called ReSiDe, for MRI reconstruction. Like PnP methods, ReSiDe integrates a DL-based denoiser into the reconstruction process. However, PnP methods use pretrained denoisers while ReSiDe iteratively trains the denoiser on the images being recovered. We present two variations of ReSiDe, i.e., ReSiDe-S and ReSiDe-M. ReSiDe-S is truly scan-specific and utilizes only a single set of undersampled measurements. The necessity to train the network in each iteration makes ReSiDe-S computationally slow. In contrast, ReSiDe-M operates on multiple sets of undersampled measurements and trains the denoiser on patches from multiple images. More importantly, due to its training on multiple sets of images, the denoiser in ReSiDe-M can potentially generalize to other unseen images. To leverage that, we save the denoiser trained in ReSiDe-M and then utilize it in a PnP algorithm without further training. The computation burden of ReSiDe-M after the training stage is comparable to CS-based iterative methods.
The validation of ReSiDe-S and ReSiDe-M is carried out on three datasets, i.e., T1- and T2-weighted images from fastMRI (Study I), digital perfusion image series from MRXCAT (Study II), and first-pass perfusion data collected from
patients (Study III). In Study I, we compare ReSiDe-S and ReSiDe-M with other methods that do not require fully sampled data, including CS, PnP-BM3D, ConvDecoder, and SSDU. As summarized in Table 1, ReSiDe-M consistently outperforms competing methods, with ReSiDe-S being the second best. All methods perform better with the S1 mask compared to the S2 mask, and the difference between ReSiDe and other methods is more pronounced for the S2 mask. Two examples of reconstructed images are shown in Figure 2 and Figure 3. Compared to PnP-BM3D, ConvDecoder, and SSDU, both ReSiDe-S and ReSiDe-M exhibit fewer artifacts while preserving fine details. The artifacts are particularly pronounced in ConvDecoder. Despite our best efforts to optimize the code provided by the original authors [35], we were unable to improve the performance of ConvDecoder.
The rSNR and SSIM numbers from Study II followed a similar trend, but the difference between ReSiDe-S and ReSiDe-M was smaller. We attribute the smaller performance gap to two factors. First, the simplistic nature of MRXCAT images makes learning from a single image series more effective. Second, each image series in MRXCAT has 32 frames. The availability of multiple frames helps the denoiser training in ReSiDe-S, which uses a single image series. In contrast to Study I, where CS was the best performer outside of ReSiDe, PnP-BM4D outperforms CS by nearly 2 dB in Study II. We hypothesize that being a patch-matching filtering technique, BM4D benefits from the extra redundancies present in a 3D time series. Figure 4 shows a representative frame. As shown in the error map, ReSiDe-S and ReSiDe-M preserve edge
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
**Image** & **Samp.** & **CS** & **PnP-BMXD** & **ConvDecoder** & **SSDU** & **ReSiDe-S** & **ReSiDe-M** \\ \hline \hline Brain T1 & S1 & \(21.49/0.8682\) & \(22.22/0.9157\) & \(16.49/0.7548\) & \(21.50/0.8634\) & \(22.10/0.9134\) & \(22.78/0.9196\) \\ \hline Brain T1 & S2 & \(18.51/0.8231\) & \(17.43/0.8595\) & \(15.77/0.7154\) & \(19.04/0.8435\) & \(19.80/0.8956\) & \(21.32/0.9109\) \\ \hline Brain T2 & S1 & \(23.11/0.9050\) & \(21.68/0.9356\) & \(15.37/0.8042\) & \(20.93/0.8872\) & \(23.30/0.9478\) & \(23.73/0.9479\) \\ \hline Brain T2 & S2 & \(18.76/0.8513\) & \(18.13/0.8927\) & \(14.87/0.7899\) & \(18.87/0.8587\) & \(22.34/0.9428\) & \(22.42/0.9416\) \\ \hline Brain Avg & S1/S2 & \(20.47/0.8619\) & \(19.86/0.9020\) & \(15.62/0.7659\) & \(20.09/0.8632\) & \(21.89/0.9249\) & \(22.56/0.9275\) \\ \hline \hline MRXCAT & S3 & \(22.22/0.7896\) & \(24.21/0.8112\) & – & – & \(25.51/0.8305\) & \(25.77/0.8304\) \\ \hline \end{tabular}
\end{table}
Table 1: Image quality metrics for Studies I (top) and Study II (bottom). In each cell, the first number represents rSNR in dB, and the second number represents SSIM, both averaged over five test samples. The best value in each row is highlighted in bold font. The “Brain Avg” row represents the average of the preceding four rows. BMXD represents BM3D for Study I and BM4D for Study II.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & **CS** & **PnP-BM4D** & **ReSiDe-S** & **ReSiDe-M** \\ \hline \hline E1 & \(3.2\) & \(4.0\) & \(4.4\) & \(\mathbf{5.0}\) \\ \hline E2 & \(2.0\) & \(3.0\) & \(4.0\) & \(\mathbf{4.6}\) \\ \hline E3 & \(2.0\) & \(3.4\) & \(3.6\) & \(\mathbf{3.6}\) \\ \hline Avg & \(2.4\) & \(3.5\) & \(4.0\) & \(\mathbf{4.4}\) \\ \hline \end{tabular}
\end{table}
Table 2: Image quality scoring from three expert reviewers (E1, E2, and E3) on a five-point Likert scale (5: best, 1: worst) averaged over five perfusion image series.
Figure 2: An example showing reconstruction of T1-weighted images with sampling mask S1. To highlight differences, the second row magnifies two areas in the brain. The red arrows point to visible artifacts or blurring. The third row shows the sampling mask (left) and the error map after five-fold amplification.
information more effectively. In Study III, where the image series were subjectively evaluated by three expert readers, ReSiDe-M consistently outperformed other methods, with ReSiDe-S being the second best. The example images provided in Figure 5 and Supporting Figure S2 illustrate that the ReSiDe methods are more effective in preserving small details, e.g., mitral valve leaflets. The PnP-BM4D results, while effective in suppressing noise, have an artificial appearance and lack texture. Similar observations have been made in prior works about BM3D and BM4D. For example, see Figure 3 in a recent work by Xu et al. [36].
In addition to the superior image quality, ReSiDe-M also offers a computation advantage over ReSiDe-S and many other SDL methods. For example, in Study I, ReSiDe-S and ReSiDe-M trainings took 36 minutes and 140 minutes, respectively. However, at the inference stage, the reconstruction from ReSiDe-M took only 11 seconds per image. In comparison, the training and inference from SSDU took 127 minutes and 3.3 seconds, respectively. The inference time for ReSiDe-M for Studies II and III was 25 and 37 seconds,
Figure 4: A representative frame from MRXCAT perfusion phantom reconstructions. The second row magnifies two areas of the phantom, with the red arrows highlighting visible blurring. The third row shows the sampling mask (left) in the phase-encoding (vertical) and temporal (horizontal) dimensions and the absolute error map after five-fold amplification.
Figure 3: An example showing reconstruction of T2-weighted images with sampling mask S2. To highlight differences, the second row magnifies two areas in the brain. The red arrows point to visible artifacts or blurring. The third row shows the sampling mask (left) and the error map after five-fold amplification.
respectively, per image series. In comparison, PnP-BM4D took 40 minutes and 110 minutes for each image series in Studies II and III, respectively.
This study has several limitations. First, our current implementation requires saving a denoiser in each iteration of the training process. Saving a large number of denoisers can be memory intensive, especially if larger networks are employed. Future work could explore saving the denoisers less frequently, e.g., after every tenth iteration. Second, we have used a denoiser architecture that is based on the residual learning approach proposed in 2017 [37]. It is possible that other network architectures can further improve the performance of ReSiDe. Third, although using the discrepancy principle eliminates the need to precisely schedule the denoiser strength [23], both ReSiDe-S and ReSiDe-M still require selecting the values of \(\alpha\) and \(\tau\). For the studies presented, we selected one parameter combination for each application. It is not clear if the values of these parameters will stay reasonable if imaging parameters, e.g., spatial resolution or measurement SNR, change significantly within each application.
## 6 Conclusion
We have presented two self-supervised methods for MRI reconstruction: ReSiDe-S and ReSiDe-M. ReSiDe-S offers a scan-specific implementation where a single set of undersampled measurements is used for denoiser training and image recovery. In contrast, ReSiDe-M trains a denoiser from multiple undersampled measurements and utilizes that denoiser during inference without further training. Our validation studies, which used data from brain MRI, perfusion phantom, and first-pass perfusion, demonstrate that ReSiDe-S and ReSiDe-M outperform other self-supervised or unsupervised methods in terms of both qualitative and quantitative metrics. In comparison to ReSiDe-S, ReSiDe-M also offers better image quality and faster inference.
## 7 Acknowledgments
This work was funded in part by NIH projects R01EB029957, R01HL151697, and R01HL135489.
## Author Contributions
S. Liu implemented the ReSiDe algorithm and drafted the first version of the manuscript. P. Schniter contributed by providing valuable feedback for optimizing ReSiDe and assisted with manuscript preparation. R. Ahmad supported the project by assisting with data acquisition, troubleshooting of ReSiDe, study design, and manuscript preparation.
## Financial Disclosure
The authors declare no potential conflict of interest.
Figure 5: A representative frame from one of the first-pass perfusion image series. The first row shows the entire frame, while the second shows two magnified areas from the frame. The visible loss of detail is highlighted with red arrows. |
2310.17877 | ASPIRO: Any-shot Structured Parsing-error-Induced ReprOmpting for
Consistent Data-to-Text Generation | We present ASPIRO, an approach for structured data verbalisation into short
template sentences in zero to few-shot settings. Unlike previous methods, our
approach prompts large language models (LLMs) to directly produce
entity-agnostic templates, rather than relying on LLMs to faithfully copy the
given example entities, or validating/crafting the templates manually. We
incorporate LLM re-prompting, triggered by algorithmic parsing checks, as well
as the PARENT metric induced consistency validation to identify and rectify
template generation problems in real-time. ASPIRO, compared to direct LLM
output, averages 66\% parsing error rate reduction in generated verbalisations
of RDF triples on the DART dataset. Our best 5-shot text-davinci-003 setup,
scoring BLEU of 50.62, METEOR of 45.16, BLEURT of 0.82, NUBIA of 0.87, and
PARENT of 0.8962 on the Rel2Text dataset, competes effectively with recent
fine-tuned pre-trained language models. | Martin Vejvar, Yasutaka Fujimoto | 2023-10-27T03:39:51Z | http://arxiv.org/abs/2310.17877v1 | ASPIRO: Any-shot Structured Parsing-error-Induced ReprOmpting for Consistent Data-to-Text Generation
###### Abstract
We present ASPIRO, an approach for structured data verbalisation into short template sentences in zero to few-shot settings. Unlike previous methods, our approach prompts large language models (LLMs) to directly produce entity-agnostic templates, rather than relying on LLMs to faithfully copy the given example entities, or validating/crafting the templates manually. We incorporate LLM re-prompting, triggered by algorithmic parsing checks, as well as the PARENT metric induced consistency validation to identify and rectify template generation problems in real-time. ASPIRO, compared to direct LLM output, averages 66% parsing error rate reduction in generated verbalisations of RDF triples on the DART dataset. Our best 5-shot text-davinci-003 setup, scoring BLEU of 50.62, METEOR of 45.16, BLEURT of 0.82, NUBIA of 0.87, and PARENT of 0.8962 on the Rel2Text dataset, competes effectively with recent fine-tuned pre-trained language models.1
Footnote 1: code available at github.com/vejvarm/ASPIRO.
## 1 Introduction
Data-to-text task Reiter (1996) aims to build a faithful natural language interpretation of structured data such as relational tables or Resource Description Framework (RDF) triples Miller (2001). However, without proper context, the given structured data may not sufficiently represent the relationships between entities, leading to ambiguity Dusek et al. (2019). To battle this, some works rely on fine-tuning pre-trained language models (PLMs) on task-specific datasets in supervised or semi-supervised ways Ke et al. (2021); Agarwal et al. (2021), but the domain of the resulting system is limited and requires well-labelled training data Keymanesh et al. (2022). In contrast to finetuning, Kasner and Dusek (2022) prove that zero-shot neural systems are a possible solution, where in-domain data is introduced via simple human-crafted templates for each unique relation in the knowledge graph. Xiang et al. (2022) nullify the requirements for human labelling entirely by utilising GPT3-davinci Brown et al. (2020), a large language model (LLM) with broad general knowledge, to disambiguate RDF triples into short sentences and automatically parse them into reusable sentence templates as an alternative to human-crafted templates. In this paper we introduce ASPIRO, a robust \(N\)-shot variant of the data disambiguation step presented by Xiang et al. (2022) and a promising alternative to fine-tuning PLMs for crafting RDF verbalisations Kasner et al. (2023). At its core, ASPIRO uses simple rules to algorithmically flag errors in the templates (such as missing subject, multiple objects, etc.) and re-prompt the LLM until all errors are alleviated or maximum (\(N\)) retries have been reached. We evaluate changes in automated metrics and reduction of parsing errors in different configurations of ASPIRO on DART Nan et al. (2021) and Rel2Text Kasner et al. (2023) and compare the original RDF verbalisation prompt used by Xiang et al. (2022) with our prompt focused on enforcing structured json output with intermediate fields as guidelines.
## 2 Related Work
Single triple verbalisation: Mainly leveraged for reducing ambiguity in structured data before a specific D2T task Laha et al. (2019); Dusek and Kasner (2020); Xiang et al. (2022) as well as transforming inputs to be better suited for existing NLG models Gupta et al. (2020); Kasner and Dusek (2022); Xiang et al. (2022), verbalisation templates fall into three main categories:
1. human-crafted Kale and Rastogi (2020); Kasner and Dusek (2022)
2. rule-based Laha et al. (2019); Gupta et al. (2020)
3. neural model-based Xiang et al. (2022); Kasner et al. (2023)
ASPIRO combines aspects of both 2) and 3).
Delexicalization: Einolghozati et al. (2020) and Heidari et al. (2021) find that without delexicalization, generative models can produce incomplete representations of the entities and concepts in the structured data verbalisations, leading to misinterpretation and failures in production. Our JSON structured prompt (§G.2) forces the LLM to directly produce named-entity agnostic templates.
0-shot to \(N\)-shot: Our work is heavily inspired by and builds upon the disambiguation step from Xiang et al. (2022), which is equivalent to the 0-shot setting for our \(N\)-shot Generator. We also use their prompt (§G.1) as a baseline against our JSON prompt (§G.2).
Refining LLM outputs: Madaan et al. (2023) and Shinn et al. (2023) show that iterative prompting and chain-of-thought reasoning can significantly improve the outputs of LLMs. We lean on their findings in designing our ASPIRO pipeline. However, back and forth prompting of LLMs can be expensive, which we counterweight by using our Rule-based parser (§3.1) and the PARENT (Dhingra et al., 2019) F1 score (§3.2) as cost-efficient gateways to decide if additional prompting is necessary.
## 3 Methods
The proposed method (ASPIRO) revolves around the conversion of structured data samples into verbalisation templates using a two-stage pipeline: \(N\)**-shot Generator** (§3.1) and **Consistency Validator** (§3.2). The pipeline processes structured data samples, wherein each sample comprises one or more RDF triples which share the same relation. ASPIRO (see Figure 1) starts with an initial prompt to verbally articulate the structured data. This is equivalent to prompting a single LLM directly. If the zeroth attempt isn't accurate, it will retry a maximum of \(N\) times, refining the previous completion based on parsing errors (§3.1.2). Subsequently, the outputs are validated for consistency, ensuring faithful and reliable verbalisations. We explain the individual stages and their sub-modules in the sections below. Refer to Figure 1 for the full pipeline and terminology on general input. A **step-by-step flow** of the pipeline and an example on specific input are provided in section §3.3 and Figure 2, respectively.
### \(N\)-shot Generator
The \(N\)-shot Generator further splits into an LLM stack and a Rule-based parser. The LLM Stack is tasked with generating verbalisation attempts based on the given initial prompt (§G.1). It does so with the help of the Rule-based parser. This parser checks the generated completions for structural accuracy, ensuring they adhere to expected patterns.
#### 3.1.1 LLM Stack
The LLM stack is a sequence of \(N+1\) LLMs, indexed from \(0\) to \(N\). \(\mathcal{L}_{0}\) is responsible for the initial completion and each further retry shot, initiated by the Rule-based parser (§3.1.2), increments the index by \(1\). Each \(\mathcal{L}_{n}\) is instantiated separately and does not have to be the same model. Equation (1) shows the single completion for structured input sample \(x\) at shot \(n\).
\[y_{n}=\mathcal{L}_{n}(\mathcal{T}(x)) \tag{1}\]
where \(\mathcal{T}\) is a given prompt and can be either \(\mathcal{T}_{I}\) (initial) or \(\mathcal{T}_{R}\) (retry).
#### 3.1.2 Rule-based parser
A purely algorithmic module, which validates \(y_{n}\) against a set of conditions \(\{\mathcal{C}\}\) one by one. If \(y_{n}\) does not pass the condition \(\mathcal{C}_{i}\), a respective parsing error is logged into set \(\mathcal{E}_{n}\). The aggregated rules for each given completion are formally given below (see §A for a detailed Python implementation).
\(\mathcal{C}_{0}\)... has exactly one '<subject>' substring.
\(\mathcal{C}_{1}\)... has exactly one '<object>' substring.
\(\mathcal{C}_{2}\)... has no other '<...>' substrings.
If the parser identifies any error in the structure, the next LLM in the LLM stack is re-prompted with the Retry prompt (§G.3) to generate a new completion.
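The three conditions translate into a few lines of Python; the sketch below follows the description given here, while the paper's own implementation (its Appendix A) may differ in details.

```python
import re

def rule_based_parse(y_n: str):
    """Check conditions C0-C2 and return the list of parsing errors E_n
    (empty if the template is well formed)."""
    errors = []
    if y_n.count("<subject>") != 1:
        errors.append("C0: expected exactly one '<subject>' substring")
    if y_n.count("<object>") != 1:
        errors.append("C1: expected exactly one '<object>' substring")
    extras = [m for m in re.findall(r"<[^<>]*>", y_n)
              if m not in ("<subject>", "<object>")]
    if extras:
        errors.append(f"C2: unexpected placeholders {extras}")
    return errors
```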
### Consistency Validator
Even if the outputs from the \(N\)-shot Generator adhere to the structural patterns, they might still contain inaccuracies, such as hallucinated content. This module assesses the quality of the verbalisations, using the PARENT statistical metric (Dhingra et al., 2019). If the PARENT F1 score is too low, the module will utilise an LLM with a specialised Consistency prompt (§G.4) to improve the sentence.
#### 3.2.1 PARENT\({}_{F1}\) threshold
To gauge the quality of the completion \(y_{n}\) from \(N\)-shot Generator, we set a minimal threshold (\(\mu\)) for the PARENT score of \(y_{n}\). The score is calculated
using eq. (3) against an artificially constructed table and reference.
First, we construct the respective hypothesis, table and reference entries:
\[\begin{split} h&=y_{n}.\texttt{replace}([s,o],e)\\ t&=\langle e,\,r.\texttt{split(" ")},\,e\rangle\\ \rho&=r\end{split} \tag{2}\]
where "<subject>" and "<object>" are replaced with "<entity>" to prevent penalising order discrepancy between hypothesis and table.
We then calculate the PARENT F1 score using equation (3).
\[F1(y_{n})=\texttt{PARENT}(h,\rho,t) \tag{3}\]
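In code, the threshold check of §3.2.1 amounts to a few lines; the sketch below assumes `parent_f1` is any available callable implementing the PARENT F1 metric of Dhingra et al. (2019) and is not tied to a specific library.

```python
def consistency_f1(y_n: str, r: str, parent_f1):
    """Score a completion with Eqs. (2)-(3)."""
    h = y_n.replace("<subject>", "<entity>").replace("<object>", "<entity>")
    t = [("<entity>", r.split(" "), "<entity>")]  # single-row artificial table
    rho = r                                       # the relation as reference
    return parent_f1(h, rho, t)
```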
#### 3.2.2 Consistency LLM
If the calculated PARENT score from §3.2.1 is not sufficient, we call another LLM with prompt \(\mathcal{T}_{C}\) as in eq. (4).
\[y_{C}=\mathcal{L}_{C}(\mathcal{T}_{C}(r,y_{n})) \tag{4}\]
The prompt \(\mathcal{T}_{C}\) is designed to guide \(\mathcal{L}_{C}\) to identify problems with the given completion, provide advice on how to fix it and subsequently produce a fixed completion in a structured json output. See §G.4 for the full version of the prompt.
### Stepwise Pipeline Formulation
Given a dataset of structured data samples \(\{x^{r}\}_{r\in\mathcal{R}}\), where \(x^{r}=\{x_{1}^{r},x_{2}^{r},...,x_{m}^{r}\}\) and \(x_{j}^{r}\) is a single RDF triple \(x_{j}^{r}=\langle s_{j}^{r},r,o_{j}^{r}\rangle\) with relation \(r\in\mathcal{R}\), the pipeline for one \(x^{r}\) is as follows:
**Step 0**: Set \(n=0\) and \(\mathcal{T}_{0}^{r}=\mathcal{T}_{I}(x^{r})\).
**Step 1**: Calculate \(y_{n}^{r}\) using eq. (1).
**Step 2**: Use §3.1.2 to validate \(y_{n}^{r}\) against all conditions \(\mathcal{C}\). If errors (\(\mathcal{E}_{n}^{r}\)) are found, run equation (5) and return to **Step 1**. Otherwise go to **Step 3**.
\[\begin{split}\mathcal{T}_{n+1}^{r}&=\mathcal{T}_{R} (x^{r},y_{n}^{r},\mathcal{E}_{n}^{r})\\ n&=n+1\end{split} \tag{5}\]
**Step 3**: Use §3.2.1 and calculate \(F1(y_{n}^{r})\) via eq. (3). If the calculated \(F1\) score is lower than our chosen threshold \(0\leq\mu\leq 1\), continue to **Step 4**. Otherwise, output the current \(y_{n}^{r}\) as the final completion \(y^{r}\).
**Step 4**: Use §3.2.2 to get a revised completion \(y_{C}^{r}\).
**Step 5**: Compute \(F1\) scores of \(y_{n}^{r}\) and \(y_{C}^{r}\) using eq. (3) and take the completion with higher score via eq. (6) to produce the final completion \(y^{r}\).
\[y^{r}=\operatorname*{argmax}_{\begin{subarray}{c}y\in\{y_{n}^{r},y_{C}^{r}\} \end{subarray}}(F1(y)) \tag{6}\]
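Putting Steps 0-5 together, a compact sketch of the full pipeline reads as follows; it reuses `rule_based_parse` and `consistency_f1` from the sketches above, and the `prompts` object and its attribute names are illustrative stand-ins for the prompt templates of §G.

```python
def aspiro(x_r, relation, llm_stack, llm_c, prompts, parent_f1, mu=0.7):
    """Steps 0-5 for one structured sample x_r sharing `relation`."""
    prompt = prompts.initial(x_r)                        # Step 0
    for L_n in llm_stack:                                # shots n = 0..N
        y_n = L_n(prompt)                                # Step 1, Eq. (1)
        errors = rule_based_parse(y_n)                   # Step 2
        if not errors:
            break
        prompt = prompts.retry(x_r, y_n, errors)         # Eq. (5)
    if consistency_f1(y_n, relation, parent_f1) >= mu:   # Step 3
        return y_n
    y_c = llm_c(prompts.consistency(relation, y_n))      # Step 4, Eq. (4)
    return max((y_n, y_c),                               # Step 5, Eq. (6)
               key=lambda y: consistency_f1(y, relation, parent_f1))
```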
## 4 Experiments
The following sections show results on several setups of ASPIRO. In section §4.1 we compare auto
Figure 1: ASPIRO pipeline for general input sample \(x^{r}\in X\).
Figure 2: Example flow of ASPIRO pipeline with input sample \(x^{r}=[\langle\text{Mario, creator, Shigeru Miyamoto}\rangle]\)
matic metrics on the Rel2Text test set (§D.3) with Kasner et al. (2023)'s fine-tuned BART-BASE models. In section §4.2 we report on the number of parsing errors tagged by our Rule-based parser (§3.1) on both DART (§D.1) and Rel2Text (§D.3) datasets. In §4.3 we also provide a brief ablation study of CV.
Setup: For the \(N\)-shot Generator (§3.1), \(\mathcal{L}_{0}\) marks the initial model choice and \(N\)x\(\mathcal{L}_{n}\) marks a maximum of \(N\) retry shots using model \(\mathcal{L}_{n}\). We limit our experiments to \(\mathcal{L}_{n}\) being the same for all \(N\) shots. For the Consistency Validator (§3.2), we set \(\mu=0.7\) and only use it in some setups (marked by \(\mathcal{L}_{C}\) in brackets). For reference on the LLMs used as \(\mathcal{L}\) in ASPIRO setups, see Tab. 2.
Prompts: While the Retry prompt \(\mathcal{T}_{R}\) (§G.3) and Consistency prompt \(\mathcal{T}_{C}\) (§G.4) are constant across all our experiments, we compare two variants of the Initial prompt \(\mathcal{T}_{I}\):
* ASDOT: proposed by Xiang et al. (2022) in their Data Disambiguation step to produce a short sentence representation of a given triple. (full form §G.1)
* JSON: our proposed prompt, which enforces json-like output with auxiliary fields to guide the creation of named-entity agnostic templates directly. (full form §G.2)
### Automatic Metrics
We evaluate Automatic metrics on the Rel2Text test set (§D.3) with 4 ASPIRO setups (see Table 1 for 5-run averages; Table 7 for standard deviations).
### Parsing Errors
Parsing error analysis does not require specific references from the dataset. After ASPIRO produces the verbalisation templates (\(y^{r}\)), we run them through our Rule-based parser (§3.1) to flag and count the number of errors. As source data (\(X\)), similar to [20], we collect at most 2 triple examples for each unique relation in the dataset and use them to prompt our pipeline.
Parsing error counts: For **DART** (Table 3) we use the full dataset (§D.1), producing 4299 unique template sentences in each experiment run. In **Rel2Text** (Table 4) we only use the test split (§D.3) with 226 unique relations and G3.5 (T2) as the base model with either (**A**)SDOT or (**J**)SON prompts and different \(N\)-shot Generator setups. For Rel2Text, we don't provide RR % as the reduction is evident from counts.
Discussion: Introducing the \(N\)-shot Generator (§3.1) shows significant reduction in parsing error counts (Tables 3 and 4) even with \(N=1\). In the 1 retry shot setting, GPT4 (**G4**) is most effective at reducing parsing errors. However, if we introduce up to 5 retry shots, we can see that gpt-3.5-turbo (**G3.5T**) reduces parsing errors further. The exception is the (**J**)SON prompt on DART, where G4 keeps the lead. Interestingly, while text-davinci-003 (**G3.5**) performs well as a 0-shot model, it generally performs worse than G3.5T in \(N\)-shot settings, contrasted again on DART by the **J** prompt. It is also evident that the **J** prompt provides a more robust 0-shot baseline compared to the (**A**)SDOT prompt. The values in parentheses reveal that including Consistency Validation yields only a slight reduction in error count.
### Ablation of Consistency Validator
To investigate the efficacy of the Consistency Validator, we conduct a brief ablation study on the Rel2Text test set (§D.3). For statistical metrics (Table 5), CV provides only marginal gains. This effect may be attributed to the improvement of the **C**ontradiction score and degradation of the **N**eutrality score, implying that CV moves the templates closer to general statements with less informational value. Conversely, parsing errors (Table 6) are reduced notably by CV, with counts decreasing from 12 to 10 and 23 to 16.
### Limitations
Operational costs: When contrasted with the 0-shot setting, ASPIRO significantly escalates the operational costs (see appendix §F) due to the repeated calls of the \(N\)-shot Generator and the lengthy Consistency prompt (§G.4) associated with the Consistency Validator (§3.2). Following the brief ablation study of CV (§4.3) and the cost analysis, it remains debatable whether the performance of the Consistency Validator reported in this paper justifies the additional expense incurred in prompting the LLM for the flagged examples.
Isolated triples: Generating verbalisations from single isolated triples doesn't account for situations where context from other triples is necessary to fully interpret the final natural language verbalisation. As exemplified by the DART dataset, contextual integration is significant and should be explored further.
Backup template: In instances where the parsing of the <subject> and <object> within the generated completion of the LLM proved unsuccessful, Xiang et al. (2022) introduced a general backup template as fallback. In our research, we did not use any backup templates and did not investigate their potential impact on automated metric scores. Nonetheless, it's important to acknowledge that within a production environment, the incorporation of a backup template is a fundamental necessity, warranting further assessment of its effects.
Direction of relation: The capacity to accurately discern the correct direction of the relation between subject and object is a notable feature of Data-to-text systems. In our experiments, we report on the contradiction statistic (C %), which roughly translates to a measure of this ability. Although ASPIRO generally improves on this statistic, there are no specific guardrails to validate the ambiguity other than the general knowledge of the LLM itself.
Variance of experiment runs: Due to the substantial expenses associated with prompting large language models (LLMs) and the considerable size of the DART dataset, each experiment on DART was conducted only once. The same is true for the Rel2Text parsing error analysis in Table 4. It should be noted that, although the temperature parameter was uniformly set to 0 for all the employed LLMs, the underlying generative process remains reliant on maximum likelihood estimation, which inherently leaves room for potential variation errors in our experimental results.
## Ethics Statement
In the course of this research, we have employed various Generative Pre-trained Transformer models, including GPT3 davinci, InstructGPT text-davinci-003, gpt-3.5-turbo-0301, and gpt-4-0314, each demonstrating inherent biases as outlined in their respective publications, which are listed in Table 2. These biases often favour popular opinions and can lead to a distortion in the model's outputs. This reflects the models' training on large-scale internet text, which is not entirely neutral and contains biased or skewed perspectives. We acknowledge this limitation and highlight that despite implementing a pipeline designed to minimise the inclusion of unnecessary and irrelevant information, the potential for biased outcomes cannot be entirely eliminated.
|
2307.03215 | Holographic Aspects of Even-Dimensional Topological Gravity | In an odd-dimensional spacetime, gravity can be formulated as a proper gauge
theory based on the Chern-Simons action for a suitable gauge group. Performing
dimensional reduction, one obtains, as an effective theory, Chamseddine's
even-dimensional topological gravity with the reduced gauge symmetry. This
theory involves a multiplet of scalar fields that appear as a result of the
dimensional reduction, and it is topological in the sense that its action does
not depend on the metric. Focusing primarily on the four-dimensional case, we
use the holographic dictionary to compute one-point correlation functions of
the relevant boundary operators and find that the spin-current can have a
nonzero expectation value in the dual quantum field theory. We also consider
the generalized holographic Weyl anomaly and find that it vanishes. Finally, we
propose a way of computing two-point correlation functions using the
gravitational Wilson lines. | Dušan Đorđević, Dragoljub Gočanin | 2023-07-06T17:42:03Z | http://arxiv.org/abs/2307.03215v2 | # Holographic Aspects of Even-Dimensional Topological Gravity
###### Abstract
In an odd-dimensional spacetime, gravity can be formulated as a proper gauge theory based on the Chern-Simons action for a suitable gauge group. Performing dimensional reduction, one obtains, as an effective theory, Chamseddine's even-dimensional topological gravity with the reduced gauge symmetry. This theory involves a multiplet of scalar fields that appear as a result of the dimensional reduction, and it is topological in the sense that its action does not depend on the metric. Focusing primarily on the four-dimensional case, we use the holographic dictionary to compute one-point correlation functions of the relevant boundary operators and find that the spin-current can have a nonzero expectation value in the dual quantum field theory. We also consider the generalized holographic Weyl anomaly and find that it vanishes. Finally, we propose a way of computing two-point correlation functions using the gravitational Wilson lines.
## I Introduction
Holographic duality is one of the tenets of modern quantum gravity research, the most prominent example being the AdS/CFT correspondence [1] - a conjectured duality between a theory of quantum gravity in \((\mathrm{D}+1)\)-dimensional asymptotically anti-de Sitter (AdS) spacetime (the bulk) and a conformal field theory (CFT) that resides on the D-dimensional asymptotic boundary. The standard version of this duality works in a regime where the gravity is well approximated by a semi-classical theory while the dual CFT is strongly coupled. In particular, the holographic dictionary [2] states that bulk fields are associated with dual CFT operators, and this matching is usually done by using the most "divergent" component of the bulk field as the source for the CFT operator. Moreover, the gravitational partition function in the saddle point approximation corresponds to the partition function of the dual CFT (if the appropriate boundary conditions are imposed). In the context of string theory, initial considerations established an equivalence between superstring (supergravity) theory on AdS\({}_{5}\times\mathrm{S}^{5}\) and \(\mathcal{N}=4\) supersymmetric Yang-Mills theory in \(\mathrm{D}=4\). Soon after this pioneering work, the holographic duality was successfully applied to various models of gravity and opened a new way of studying condensed matter systems. One notable example is the holographic relation between Jackiw-Teitelboim (JT) gravity [3] and the Sachdev-Ye-Kitaev (SYK) model [4; 5]. The correspondence was also generalised to the case of non-Riemannian geometries, where nonzero torsion plays an important role in sourcing the boundary spin-current [6]. Based on these previous considerations, in this paper, we focus on even-dimensional gauge theories of gravity in the bulk and their holographic description in terms of boundary correlation functions.
Unlike the Standard Model of particle physics, General Relativity (GR) is not formulated as a proper gauge theory. There are, however, gravity theories that are of this type; those were formulated by Chamseddine [7; 8] in any number of spacetime dimensions. In an odd-dimensional spacetime, they coincide (up to a boundary term) with the Chern-Simons (CS) action for a suitable gauge group. The holographic description of 5-dimensional CS gravity was studied in [6; 9; 10], and the role of torsion in the context of AdS/CFT was addressed in [11]. It was argued that torsion could be used to introduce spin-current degrees of freedom at the boundary useful for describing the hydrodynamics of spin systems - in the case of [11], the resulting theory was defined on a 4-dimensional boundary. On the other hand, in an even-dimensional spacetime, besides a gauge connection, a multiplet of scalar fields has to be introduced. However, all those theories have a common property that their action can be written only using differential forms and wedge products without the Hodge dual, and in that sense, we regard them as topological (though they can have local propagating degrees of freedom [12]). For future reference, we will dub the even-dimensional ones - Chamseddine's topological gravity (CTG) theories.
Furthermore, it was demonstrated in [8] that 5-dimensional CS gravity action with conformal \(SO(4,2)\) gauge symmetry can be dimensionally reduced by Kaluza-Klein compactification to 4-dimensional CTG action with \(SO(3,2)\) gauge symmetry. In this paper, we will use this fact to derive the holographic dual of 4-dimensional CTG theory with a nontrivial spin-tensor defined on a 3-dimensional boundary (and, in addition, on any odd-dimensional manifold with a dimension greater than three). We stress the role of the bulk scalars and show that non-vanishing torsion in the bulk is not a necessary condition for having a spin-current on the boundary. One could also take a different point of view and look at the 4-dimensional CTG action as a straightforward generalisation of the 2-dimensional BF theory that produces the equations of motion of JT gravity (section IV). The 2-dimensional BF gravity can also be obtained from the 3-dimensional CS gravity via dimensional reduction, a fact that has been used previously in the liter
ature on holography [13]. As explained in [13], in order to get an interesting boundary theory, one has to deform the original action by introducing appropriate boundary terms. As a more technical part of this paper, we perform this kind of deformation of the CTG theory so that the holographically motivated boundary conditions are satisfied.
The plan of this paper is the following. In section II, we use the relation between 5-dimensional CS gravity and 4-dimensional CTG to identify the Fefferman-Graham expansion of the bulk fields. In section III, we calculate the holographic currents and analyze the obtained results. Moreover, we analyze some solutions of the bulk equations of motion and discuss some of their properties in the context of holography. In section IV, as a way of confirmation, we apply our procedure to the case of 2-dimensional BF theory, whose equations of motion yield JT gravity. Section V is devoted to the study of line defects in the bulk, interpreted as heavy particles. Finally, section VI contains our conclusions and outlook. The summary of notation, conventions and the algebraic setup is presented in Appendix A. Derivation of the Fefferman-Graham gauge is given in Appendix B, and the generalization of the results obtained for the 4-dimensional CTG to any even number of spacetime dimensions can be found in Appendix C.
## II Dimensional reduction and the holographic ansatz
Here we give a short account of the 5-dimensional CS gravity and its dimensional reduction, see [8; 14] for more details. Throughout, we use the first-order formalism where the vielbein, \(\hat{E}\), and the spin-connection, \(\hat{\Omega}\), are treated as independent fields. The notation, conventions and some background algebra can be found in Appendix A. Suppressing the wedge product, the action for the 5-dimensional CS gravity is given by
\[S^{(5\text{D})}_{\text{CS}}=\frac{k}{8}\int_{\mathcal{M}_{5}}\varepsilon_{ABCDE}\Big{(}\frac{1}{l}\hat{R}^{AB}\hat{R}^{CD}\hat{E}^{E}+\frac{2}{3l^{3}}\hat{R}^{AB}\hat{E}^{C}\hat{E}^{D}\hat{E}^{E}+\frac{1}{5l^{5}}\hat{E}^{A}\hat{E}^{B}\hat{E}^{C}\hat{E}^{D}\hat{E}^{E}\Big{)}, \tag{1}\]
where \(\hat{R}=\text{d}\hat{\Omega}+\hat{\Omega}^{2}\) is the curvature 2-form, \(k\) is a dimensionless constant (the CS level), and \(l\) is the appropriate length scale; henceforth, we set \(l=1\). Up to a boundary term, the action (1) is invariant under the conformal gauge group \(SO(4,2)\). The equations of motion for the independent fields \(\hat{E}^{\text{A}}\) and \(\hat{\Omega}^{\text{AB}}\) are
\[\varepsilon_{ABCDE}\big{(}\hat{R}^{AB}+\hat{E}^{A}\hat{E}^{B}\big{)}\big{(}\hat{R}^{CD}+\hat{E}^{C}\hat{E}^{D}\big{)}=0, \tag{2}\]
\[\varepsilon_{ABCDE}\,\hat{T}^{A}\big{(}\hat{R}^{BC}+\hat{E}^{B}\hat{E}^{C}\big{)}=0. \tag{3}\]
Note that the torsion 2-form \(\hat{T}=\text{d}\hat{E}+\hat{\Omega}\hat{E}\) does not necessarily vanish on-shell.
As demonstrated in [8], by compactifying one spatial dimension (the one corresponding to the spacetime index 4) into a circle, the 5-dimensional CS gravity action (1) reduces, up to a boundary term (which will be important later on), to the following 4-dimensional CTG action with \(SO(3,2)\) gauge symmetry,
\[S_{CTG}=\kappa\int_{\mathcal{M}_{4}}\varepsilon_{\hat{A}\hat{B}\hat{C}\hat{D}\hat{E}}\hat{\Phi}^{\hat{A}}\hat{F}^{\hat{B}\hat{C}}\hat{F}^{\hat{D}\hat{E}}, \tag{2.4}\]
where \(\hat{F}\) stands for the \(SO(3,2)\) field strength 2-form and \(\hat{\Phi}\) is a multiplet of spacetime scalars that appear after dimensional reduction. The parameter \(\kappa\) is defined in terms of the CS level \(k\) and the compactification radius; the radius has to be small enough so that we can ignore the higher Kaluza-Klein modes. The effective 4-dimensional CTG action (2.4) describes our bulk theory of gravity and it will be our starting point for the holographic analysis; all bulk fields are denoted by a hat symbol.
The equations of motion obtained by varying action (2.4) with respect to \(\hat{\Phi}^{\hat{A}}\) and the full \(SO(3,2)\) connection \(\hat{\Omega}^{\hat{A}\hat{B}}\) are given by
\[\varepsilon_{\hat{A}\hat{B}\hat{C}\hat{D}\hat{E}}\hat{F}^{\hat{A}\hat{B}}\hat{F}^{\hat{C}\hat{D}} =0, \tag{2.5}\] \[\varepsilon_{\hat{A}\hat{B}\hat{C}\hat{D}\hat{E}}\hat{F}^{\hat{A}\hat{B}}\text{D}\hat{\Phi}^{\hat{C}} =0, \tag{2.6}\]
where D stands for the \(SO(3,2)\) covariant derivative. The \(SO(3,2)\) gauge group index is decomposed as \(\hat{A}=(A,5)\) where again \(A=0,1,2,3\) is the standard Lorentz index. Using the field strength components, \(\hat{F}^{AB}=\hat{R}^{AB}+\hat{e}^{A}\hat{e}^{B}\) and \(\hat{F}^{A5}=\hat{T}^{A}\), the previous two equations can be cast [12] in a more explicit form,
\[\varepsilon_{ABCD}\big{(}\hat{R}^{AB}+\hat{e}^{A}\hat{e}^{B} \big{)}\big{(}\hat{R}^{CD}+\hat{e}^{C}\hat{e}^{D}\big{)} =0, \tag{7}\] \[\varepsilon_{ABCD}\hat{T}^{A}\big{(}\hat{R}^{BC}+\hat{e}^{B}\hat {e}^{C}\big{)}=0,\] (8) \[\varepsilon_{ABCD}\big{(}\hat{R}^{BC}+\hat{e}^{B}\hat{e}^{C} \big{)}\big{(}\text{D}\hat{\phi}^{A}-\hat{\varphi}\hat{e}^{A}\big{)} =0,\] (9) \[\varepsilon_{ABCD}\big{(}2\hat{T}^{B}(\text{D}\hat{\phi}^{A}- \hat{\varphi}\hat{e}^{A})\] \[+(\hat{R}^{AB}+\hat{e}^{A}\hat{e}^{B})(\text{d}\hat{\varphi}-\hat{ \varphi}\hat{e}_{E})\big{)} =0, \tag{10}\]
where \(\hat{R}^{AB}=\text{d}\hat{\omega}^{AB}+\hat{\omega}^{A}{}_{C}\,\hat{\omega}^{CB}\) is the bulk curvature and \(\hat{T}^{A}=\text{d}\hat{e}^{A}+\hat{\omega}^{AB}\hat{e}_{B}\) is the bulk torsion.
Since boundary terms may play an essential role in holography, we will start from the action (4) instead of making a direct dimensional reduction of all the results pertaining to the original 5-dimensional CS gravity case. Nevertheless, those results that are independent of the boundary terms can be obtained directly by using the reduction prescription. In particular, this is true for the Fefferman-Graham (FG) expansion of the bulk fields - an expansion of the bulk fields organized in powers of the radial coordinate \(\rho\). For CS gravity, the FG expansion is
found in [6]. The fact that the on-shell action of this theory is IR divergent means that an appropriate regularisation and renormalisation procedure has to be applied. The FG expansion itself, however, is finite, as opposed to more generic situations. Actually, CS gravity has to be considered separately from other generic Lovelock gravity theories, as its equations of motion are degenerate and the theory is (in a sense that has already been explained) topological [15]. In general, the structure of the FG expansion is based on the diffeomorphism invariance, gauge invariance and the equations of motion of the bulk theory. Since the dimensional reduction of the 5-dimensional CS gravity action to the 4-dimensional CTG action consistently extends to the equations of motion and the symmetry structure of the two theories, we can directly reduce the FG expansions from the 5-dimensional CS gravity [6] and write down the asymptotic FG expansion of the fields appearing in our 4-dimensional CTG theory.
The asymptotic boundary is located at \(\rho=0\). Boundary fields (written without the hat symbol) are finite and do not have a \(\mathrm{d}\rho\) component. The index 1 corresponds to the radial coordinate \(\rho\), and the Lorentz index is decomposed as \(A=(a,1)\), with \(a=0,2,3\) being the boundary index. The asymptotic expansions of the bulk fields (finite, as for the 5-dimensional CS gravity from which they are derived) are given by
\[\hat{e}^{1}=-\frac{\mathrm{d}\rho}{2\rho},\quad\hat{e}^{a}=\frac{1}{\sqrt{\rho}}(e^{a}+\rho k^{a}), \tag{2.11}\] \[\hat{\omega}^{a1}=\frac{1}{\sqrt{\rho}}(e^{a}-\rho k^{a}),\quad\hat{\omega}^{ab}=\omega^{ab}, \tag{2.12}\] \[\hat{\phi}^{1}=\frac{1}{\sqrt{\rho}}(\varphi-\rho\psi),\quad\hat{\phi}^{a}=\phi^{a}, \tag{2.13}\] \[\hat{\varphi}=\frac{1}{\sqrt{\rho}}(\varphi+\rho\psi). \tag{2.14}\]
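In metric terms, the expansion (2.11) corresponds to the familiar FG form of the line element (a direct substitution, with \(\eta_{ab}\) the flat metric on boundary indices):

\[\mathrm{d}s^{2}=\eta_{AB}\hat{e}^{A}\hat{e}^{B}=\frac{\mathrm{d}\rho^{2}}{4\rho^{2}}+\frac{1}{\rho}\,\eta_{ab}(e^{a}+\rho k^{a})(e^{b}+\rho k^{b}),\]

so the leading boundary metric is \(\eta_{ab}e^{a}e^{b}\), with \(k^{a}\) entering only at subleading order.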
This holographic ansatz has to satisfy the 4-dimensional bulk equations of motion (2.7)-(2.10), which gives us a set of constraints on the boundary fields (see subsection III.1).
## III Holographic currents
The typical situation with gravity in asymptotically AdS spacetimes is the following. Integration of the Lagrangian density down to \(\rho=0\) introduces divergences in the on-shell action. In order to be able to interpret the on-shell bulk gravity action as the generating function for the dual CFT, one has to perform holographic renormalization. This is not surprising, as renormalization plays an important role in quantum field theory (QFT), the only difference being that QFT has to be renormalized in the UV regime, while the bulk gravity is IR divergent. An important aspect of the AdS/CFT correspondence is that it relates the IR scales of gravity in the bulk and the UV scales of the corresponding CFT at the boundary. The renormalization is achieved by putting an IR cutoff on the gravitational side and adding appropriate boundary counterterms that do not influence classical equations of motion but lead to a finite expression for the boundary correlation functions. Another important aspect of the boundary terms is their connection to the variation principle. In order to talk about dynamics, one has to define a set of boundary conditions and, in some cases, deform the theory by adding appropriate boundary terms. Note that action (4) vanishes on-shell, and therefore we have to be careful when setting up the variational principle in order to get an interesting boundary QFT. This is analogous to the situation with the BF formulation of JT gravity [13; 16] (see also section IV).
We follow the procedure developed in [6]. The variation of the 4-dimensional CTG action (2.4) is given by
\[\kappa\int_{\mathcal{M}_{4}}\varepsilon_{\hat{A}\hat{B}\hat{C}\hat{D}\hat{E}}\big{(}\delta\hat{\Phi}^{\hat{A}}\hat{F}^{\hat{B}\hat{C}}\hat{F}^{\hat{D}\hat{E}}+2\hat{\Phi}^{\hat{A}}\delta\hat{F}^{\hat{B}\hat{C}}\hat{F}^{\hat{D}\hat{E}}\big{)}. \tag{3.1}\]
Decomposing indices, we get
\[\kappa\int_{\mathcal{M}_{4}}\varepsilon_{ABCD}\Big{(}\delta\hat{\varphi}\hat{F}^{AB}\hat{F}^{CD}+2\delta\hat{\phi}^{A}\hat{T}^{B}\hat{F}^{CD} \tag{3.2}\] \[+2\hat{\varphi}\delta\hat{F}^{AB}\hat{F}^{CD}+4\hat{\phi}^{A}\delta\hat{T}^{B}\hat{F}^{CD}+4\hat{\phi}^{A}\hat{T}^{B}\delta\hat{F}^{CD}\Big{)}.\]
After some partial integration, putting the variation on-shell yields
\[\delta S\mid_{\mathrm{on-shell}}=\kappa\int_{\partial\mathcal{M}_{4}}\varepsilon_{ABCD}\Big{(}2\hat{\varphi}\delta\hat{\omega}^{AB}\hat{F}^{CD}\] \[\qquad\qquad\qquad+4\hat{\phi}^{A}\delta\hat{e}^{B}\hat{F}^{CD}+4\hat{\phi}^{A}\hat{T}^{B}\delta\hat{\omega}^{CD}\Big{)}. \tag{3.3}\]
There is no variation of the fields \(\hat{\varphi}\) and \(\hat{\phi}^{A}\) since there are no derivatives of these fields in the action. We now use the asymptotic expansions (2.11)-(2.14) to organize the action in powers of \(\rho\). In general, we should care only about those terms that are of order \(\rho^{0}\). This is because the renormalization theorem [10] claims that we can rewrite terms that contain nonzero powers of \(\rho\) as \(\delta(\dots)\), and thus those terms can always be compensated by adding counterterms to the original action. However, in our case, we can even check that the terms containing nonzero powers of \(\rho\) actually vanish, which leaves us only with the finite terms. They are given by
\[\delta S|_{\mathrm{on-shell}}=4\kappa\int_{\partial\mathcal{M}_{4}}\varepsilon_{abc}\Big{(}\delta k^{a}\big{(}-2\varphi(R^{bc}+4e^{b}k^{c})-4\phi^{b}T^{c}\big{)} \tag{3.4}\] \[+\delta e^{a}\big{(}2\psi(R^{bc}+4e^{b}k^{c})+4\phi^{b}\mathrm{D}k^{c}\big{)}\] \[+\delta\omega^{ab}\big{(}-2\varphi\mathrm{D}k^{c}+2\psi T^{c}-2\phi^{c}e^{d}k_{d}\big{)}\Big{)},\]
where now D stands for the Lorentz covariant derivative.
Boundary fields \(e^{a}\) and \(\omega^{ab}\) couple to the field theory stress-energy tensor \(\tau_{a}\) and the spin-tensor \(\sigma_{ab}\), respectively, while the fields \(\varphi\) and \(\phi^{a}\), if their variations appeared in the above expression, would couple to certain operators, \(o_{\varphi}\) and \(o_{a}\), in the boundary QFT. Yet, the variations of \(\varphi\) and \(\phi^{a}\) do not appear in the last expression. Moreover, there is no obvious choice for what \(k^{a}\) should couple to. This motivates us to define the boundary conditions such that only the fields \(e^{a}\), \(\omega^{ab}\), \(\varphi\) and \(\phi^{a}\) are fixed at the boundary, as they are interpreted as boundary sources in AdS/CFT. This is different from the standard choice of boundary conditions where one fixes the full gauge connection at the boundary. Note also that in the case of asymptotically AdS spacetimes, due to divergences present in the asymptotic expansions at \(\rho=0\), it is hard to give a physical meaning to the standard boundary conditions.
The field \(k^{a}\) is not determined by the bulk equations of motion, and we can therefore add appropriate boundary terms to move the variation from \(k^{a}\) to other fields, thus deforming the original theory. The new, deformed theory has a nonzero on-shell action. We illustrate this for one of the terms, namely
\[\int_{\partial\mathcal{M}_{4}}\varepsilon_{abc}\delta k^{a}\varphi R ^{bc}=\int_{\partial\mathcal{M}_{4}}\varepsilon_{abc}\big{(}\delta(k^{a}\varphi R ^{bc})-\delta\varphi k^{a}R^{bc}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\mathrm{D }(k^{a}\varphi)\delta\omega^{bc}\big{)}. \tag{3.5}\]
The boundary term \(\mathrm{d}(\varepsilon_{abc}\varphi k^{a}\omega^{bc})\) is discarded as \(\partial^{2}\mathcal{M}_{4}=\emptyset\). In total, the boundary term that we have to add in order to respect the holographic boundary conditions is
\[S_{GHY}=8\kappa\int_{\partial\mathcal{M}_{4}}\varepsilon_{abc}\left(\varphi k ^{a}R^{bc}+2\varphi k^{a}k^{b}e^{c}+2k^{a}\phi^{b}T^{c}\right). \tag{3.6}\]
This term is finite and can be thought of as a generalised on-shell Gibbons-Hawking-York (GHY) term, as explained in [17]. For convenience, we should relate the constant \(\kappa\) to Newton's constant \(G\). Since the CTG action is not the standard Einstein-Hilbert action, it is not possible to directly see the relation. However, motivated by the fact that CTG action contains the Einstein-Hilbert term, multiplied by the field \(\hat{\varphi}\), we will introduce the constant \(G\) using the following relation
\[4\kappa=\frac{1}{16\pi G}.\]
Note that this fact is also used in the MacDowell-Mansouri-Chamseddine-Stelle-West (MMCSW) approach to 4-dimensional gravity with negative cosmological constant [18; 19], although this formalism is incompatible with our analysis that relies heavily on the local \(SO(3,2)\) symmetry, which is broken in the MMCSW.
The final expression for the variation of the modified CTG action is, therefore, given by
\[\delta S_{\mathrm{mod}}= \frac{1}{16\pi G}\int_{\partial\mathcal{M}_{4}}\Big{(}\varepsilon_{abc}\,\delta e^{a}\big{(}2\psi(R^{bc}+4e^{b}k^{c})\] \[-4k^{b}\mathrm{D}\phi^{c}+4k^{b}k^{c}\varphi\big{)}\] \[+\varepsilon_{abc}\,\delta\varphi\big{(}2k^{a}R^{bc}+4k^{a}k^{b}e^{c}\big{)}-4\varepsilon_{abc}\,\delta\phi^{a}k^{b}T^{c}\] \[+\delta\omega^{ab}\big{(}\varepsilon_{abc}(2\psi T^{c}-2\phi^{c}e^{d}k_{d}-2k^{c}\mathrm{d}\varphi)\] \[-4\varepsilon_{acd}k^{c}\phi^{d}e_{b}\big{)}\Big{)}. \tag{3.7}\]
On-shell, the modified action is different from zero, and using the holographic dictionary, we have
\[\delta S_{\mathrm{mod}}=\delta W=\int_{\partial\mathcal{M}_{4}} \Big{(}\delta e^{a}\tau_{a}+\frac{1}{2}\delta\omega^{ab}\sigma_{ab}\] \[+\delta\varphi o_{\varphi}+\delta\phi^{a}o_{a}\Big{)}, \tag{3.8}\]
where \(W\) is the generating functional of connected Green's functions in the dual QFT. From this, we can read out the one-point correlation functions. They are given by
\[\tau_{a}=\langle\mathcal{T}_{a}\rangle_{\mathrm{QFT}} =\frac{1}{16\pi G}\varepsilon_{abc}\big{(}2\psi(R^{bc}+4e^{b}k^{c})\] \[\qquad\qquad-4k^{b}\mathrm{D}\phi^{c}+4k^{b}k^{c}\varphi\big{)}, \tag{3.9}\] \[\sigma_{ab}=\langle\mathcal{S}_{ab}\rangle_{\mathrm{QFT}} =\frac{1}{16\pi G}\big{(}\varepsilon_{abc}(2\psi T^{c}-2\phi^{c}e^{d}k_{d}\] \[\qquad\qquad-2k^{c}\mathrm{d}\varphi)-4\varepsilon_{acd}k^{c}\phi^{d}e_{b}\big{)}, \tag{3.10}\] \[o_{\varphi}=\langle\mathcal{O}_{\varphi}\rangle_{\mathrm{QFT}} =\frac{1}{8\pi G}\varepsilon_{abc}(k^{a}R^{bc}+2k^{a}k^{b}e^{c}), \tag{3.11}\] \[o_{a}=\langle\mathcal{O}_{a}\rangle_{\mathrm{QFT}} =-\frac{1}{4\pi G}\varepsilon_{abc}k^{b}T^{c}. \tag{3.12}\]
The fields \(k^{a}\) and \(\psi\) are not fixed by the boundary sources, but have to satisfy certain constraints that we present in subsection III.1. These constraints are obtained either from the equations of motion of the bulk action in the radial direction or by dimensionally reducing the constraints found in [6]. They will be helpful in the discussion concerning the holographic Weyl anomaly.
Alternatively, we can deform the boundary theory by adding the boundary term originating from the dimensional reduction of 5-dimensional CS gravity. This term is given by
\[\frac{1}{64\pi G}\int_{\partial\mathcal{M}_{4}}\varepsilon_{ABCD}\Big{(}\frac{ 4}{3}\hat{e}^{A}\hat{e}^{B}\hat{e}^{C}\hat{\phi}^{D}+4\hat{e}^{A}\hat{R}^{BC} \hat{\phi}^{D}\Big{)}. \tag{3.13}\]
On-shell, this term is divergent, and therefore holographic renormalization is necessary. This means that the boundary is first moved to some finite \(\rho=\varepsilon\), and counterterms are added to cancel the divergences. As we are not interested in the nature of those terms, they will not be presented here. Additionally, we have to make sure that the variational principle is satisfied. This
is again done by adding a suitable GHY-like boundary term. If the finite boundary term originating from the 5-dimensional CS gravity contains some of the fields \(k^{a}\) or \(\psi\), adding it would not change the one-point functions of the dual operators, given that the suitable GHY-like term is also included. As all terms in (3.13), upon unpacking, involve one of those fields, we conclude that the structure of the one-point correlation functions remains the same. The full GHY-like term is now
\[S_{\rm GHY}= \frac{1}{16\pi G}\int_{\partial{\cal M}_{4}}\varepsilon_{abc} \Big{(}-4k^{a}k^{b}e^{c}\varphi+k^{a}(R^{bc}+2k^{b}e^{c})\varphi\] \[+e^{a}(R^{bc}+2k^{b}e^{c})\psi-2e^{a}k^{b}{\rm D}\phi^{c}\Big{)}. \tag{3.14}\]
### Semi-classical bulk geometries
Having computed the general form of the one-point correlation functions, we will provide some examples of solutions of the bulk equations of motion, thus identifying some semi-classical geometries that one could use to learn more about the dual QFT. Inserting the asymptotic FG expansions (2.11)-(2.14) into the equations of motion (2.7)-(2.10) we obtain the constraints that have to be satisfied by the boundary fields. Note that the first two equations (2.7)-(2.8) are identically satisfied, while the remaining two yield
\[\varepsilon_{abc}\big{(}{\rm D}\phi^{a}-2k^{a}\varphi-2e^{a}\psi\big{)}\big{(}R^{bc}+4e^{b}k^{c}\big{)} =0, \tag{3.15}\] \[\varepsilon_{abc}\big{[}({\rm d}\varphi-e_{d}\phi^{d}){\rm D}k^{c}-({\rm d}\psi-k_{d}\phi^{d})T^{c}\] \[\qquad+({\rm D}\phi^{c}-2e^{c}\psi-2k^{c}\varphi)e^{d}k_{d}\big{]} =0, \tag{3.16}\] \[\varepsilon_{abc}\big{[}({\rm d}\varphi-e_{d}\phi^{d})(R^{bc}+4e^{b}k^{c})\] \[\qquad+2({\rm D}\phi^{b}-2k^{b}\varphi-2e^{b}\psi)T^{c}\big{]} =0, \tag{3.17}\] \[\varepsilon_{abc}\big{[}({\rm d}\psi-k_{d}\phi^{d})(R^{bc}+4e^{b}k^{c})\] \[\qquad+2({\rm D}\phi^{b}-2k^{b}\varphi-2e^{b}\psi){\rm D}k^{c}\big{]} =0. \tag{3.18}\]
The most obvious solution to the bulk equations of motion is the AdS\({}_{4}\) spacetime with \(\hat{R}^{AB}+\hat{e}^{A}\hat{e}^{B}=0\) and vanishing torsion. However, this case is peculiar due to the fact that the scalar fields are completely arbitrary. In particular, this is an example of a bulk geometry with vanishing torsion for which the one-point function of the spin-current in the dual QFT can be non-vanishing due to the presence of bulk scalars. If we set the scalars to zero, we are left with the pure AdS\({}_{4}\) that corresponds to the vacuum state \(|0\rangle\) in the boundary theory and has a vanishing spin-current at the boundary.
In order to relate the discussed model to the physics of spin systems [11], one should place the boundary field theory at a finite temperature. This is usually done by placing a black hole in the bulk. We will analyse the black hole with a flat horizon discussed in [20; 21]. In the Schwarzschild form, the 5-dimensional metric for this black hole is given by
\[{\rm d}s^{2}=-(r^{2}-\mu){\rm d}t^{2}+\frac{1}{(r^{2}-\mu)}{\rm d }r^{2}+r^{2}\big{(}{\rm d}x^{2}+{\rm d}y^{2}+{\rm d}z^{2}\big{)}.\]
Analogous to the 3-dimensional case discussed in [22], we can rewrite this metric in the FG form as
\[{\rm d}s^{2}=\frac{{\rm d}\rho^{2}}{4\rho^{2}}+\frac{1}{\rho} \Big{(}(1+\frac{\mu}{2}\rho+\frac{\mu^{2}}{16}\rho^{2})({\rm d}x^{2}+{\rm d}y^ {2}+{\rm d}z^{2})\] \[\qquad\qquad-(1-\frac{\mu}{2}\rho+\frac{\mu^{2}}{16}\rho^{2}){ \rm d}t^{2}\Big{)}. \tag{3.19}\]
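For concreteness, the substitution that takes the Schwarzschild form into the FG form (3.19) can be checked directly (the explicit change of coordinates is our reconstruction of the standard one):

\[r=\frac{1}{\sqrt{\rho}}\Big{(}1+\frac{\mu}{4}\rho\Big{)},\qquad r^{2}-\mu=\frac{1}{\rho}\Big{(}1-\frac{\mu}{4}\rho\Big{)}^{2},\qquad\frac{\mathrm{d}r^{2}}{r^{2}-\mu}=\frac{\mathrm{d}\rho^{2}}{4\rho^{2}},\]

which reproduces both \(\big{(}1\pm\frac{\mu}{2}\rho+\frac{\mu^{2}}{16}\rho^{2}\big{)}\) factors above.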
This black hole can have a non-vanishing torsion. The corresponding solution of the action (2.4) is obtained by performing dimensional reduction. The resulting black hole has a similar metric but necessarily vanishing torsion (see Appendix B for the discussion on why it is legitimate to apply our formalism to this black hole). We get
\[e^{a}=\delta^{a}_{\mu}{\rm d}x^{\mu},\ \ \omega^{ab}=0, \tag{3.20}\] \[k^{a}=\epsilon\frac{\mu}{4}\delta^{a}_{\mu}{\rm d}x^{\mu},\ \ \varphi=1,\] (3.21) \[\psi=\frac{\mu}{4},\ \ \phi^{a}=0, \tag{3.22}\]
where \(\epsilon=\pm 1\) depends on the value of \(a\): \(\epsilon=-1\) for \(a=0\), and \(\epsilon=+1\) for \(a=2,3\). From (3.20), it is clear that the boundary is flat. This is appealing, considering the possible condensed matter applications. Also, it is clear that the fields \(e^{a}\) and \(k^{a}\) are independent, as only one of them is proportional to the parameter \(\mu\) of the black hole solution. The Hawking temperature of this black hole solution, which corresponds to the temperature of the dual QFT, is \(\frac{\sqrt{\mu}}{2\pi}\). One can readily check that the constraint equations (3.15)-(3.18) are satisfied. It is also easy to see that this solution has a vanishing one-point function for the spin-current and is therefore not useful when dealing with spin systems. The spacetime components of the one-point function of the stress-energy tensor are given by
\[\langle{\cal T}_{0}\rangle_{\rm QFT}= \frac{3\mu^{2}}{32\pi G}{\rm d}x^{2}{\rm d}x^{3}, \tag{3.23}\] \[\langle{\cal T}_{2}\rangle_{\rm QFT}= \frac{\mu^{2}}{32\pi G}{\rm d}x^{0}{\rm d}x^{3},\] (3.24) \[\langle{\cal T}_{3}\rangle_{\rm QFT}= -\frac{\mu^{2}}{32\pi G}{\rm d}x^{0}{\rm d}x^{2}, \tag{3.25}\]
and the one-point functions for operators \({\cal O}_{\varphi}\) and \({\cal O}_{a}\) are
\[\langle{\cal O}_{\varphi}\rangle_{\rm QFT}= -\frac{\mu^{2}}{32\pi G}{\rm d}x^{0}{\rm d}x^{2}{\rm d}x^{3}, \tag{3.26}\] \[\langle{\cal O}_{a}\rangle_{\rm QFT}= 0. \tag{3.27}\]
Holographic considerations of dilaton gravity theories, with an emphasis on hydrodynamics, can also be found in [23; 24].
Having in mind our goal to study the holographic properties of CTG, one can further introduce a scalar field \(f\) coupled to this background, neglecting backreaction. If the scalar is minimally coupled, the action is given by
\[\frac{1}{2}\int\mathrm{d}^{4}x\sqrt{-g}g^{\mu\nu}\partial_{\mu}f\partial_{\nu}f. \tag{3.28}\]
By solving the Klein-Gordon equation for the field \(f\), we can obtain the spectrum of quasi-normal modes of a given black hole. Quasinormal modes are holographically related to relaxation times in the thermal state of the dual field theory. This was done in [25], and it is important to note that this spectrum can be found exactly, without relying on approximate methods. However, in the spirit of dilaton gravity, one may consider a more general action describing a scalar field, of the form
\[\frac{1}{2}\int\mathrm{d}^{4}x\sqrt{-g}\,\hat{\varphi}^{N}g^{\mu\nu}\partial_{ \mu}f\partial_{\nu}f, \tag{3.29}\]
where \(N\) is some positive number. The case of \(N=1\) corresponds to the dimensional reduction of the minimally coupled scalar field in five dimensions. The modified Klein-Gordon equation in this case is given by (we can safely use partial integration with covariant derivatives, as the torsion for this geometry is zero)
\[\Box f-m^{2}f+N(\partial_{\mu}\ln\hat{\varphi})\partial^{\mu}f=0. \tag{3.30}\]
Using the FG expansion for the scalar field, \(\hat{\varphi}=\frac{1}{\sqrt{\rho}}\left(1+\frac{\mu}{4}\rho\right)\), it is not hard to check that equation (3.30), for \(N\in\mathbb{N}\), corresponds to the equation for the \((4+N)\)-dimensional black hole discussed in [25], and therefore shares the same spectrum of quasinormal modes. It is also interesting to note that equation (3.30) can be solved for non-integer values; for example, for \(N=\frac{1}{2}\) the solution is given in terms of hypergeometric functions.
Finally, there is a solution to the equations of motion with nonvanishing torsion but vanishing scalar fields. The vielbeins match those in the black hole solution (3.19), but the spin-connection is modified by the presence of the contorsion tensor \(K^{23}=K(r)\mathrm{d}r\). It is interesting to note that the exact profile of the function \(K(r)\) is not determined by the equations of motion.
### Generalised holographic Weyl anomaly
A CFT has a vanishing trace of the stress-energy tensor. However, when coupled to a curved background, an anomaly may appear, and the expectation value of the trace of the stress-energy tensor can be nonzero. Note, however, that in the present case, the scalar fields ruin the conformal symmetry in the usual sense, similar to the case of AdS/CFT with non-conformal branes [26]. Our theory has a generalised conformal structure (for this reason we insisted on calling the boundary theory QFT and not CFT), and the holographic Weyl anomaly vanishes,
\[e^{a}\langle\mathcal{T}_{a}\rangle_{\mathrm{QFT}}+\varphi\langle\mathcal{O}_{ \varphi}\rangle_{\mathrm{QFT}}=\mathcal{A}=0, \tag{3.31}\]
where \(\mathcal{A}\) is the anomaly. Note that the form of the conformal Ward identity (3.31) follows from the fact that the scaling dimensions of the operators dual to \(e^{a}\) and \(\varphi\) are the same, while bulk fields \(\omega^{ab}\) and \(\phi^{a}\) have no divergent parts, as can be seen from their asymptotic expansions. To derive (3.31), we use the constraints (3.15)-(3.18) following from the bulk equations of motion. We have neglected the total derivative \(\mathrm{d}(4\kappa\varepsilon_{abc}\phi^{a}R^{bc})\), as it can be removed by a suitable redefinition of the current. This is consistent with the fact that in three dimensions, for CFT, there should be no Weyl anomaly [27]. Our result is similar to the considerations in [28]. It is interesting to note that, in the case of 5-dimensional CS gravity, the nature of the holographic Weyl anomaly led authors to conclude that the boundary theory is a non-unitary CFT [29]. We have no reason to claim anything similar based on the derived result (3.31), as it is consistent with the expectations.
## IV Deformed 2D BF model and stress-energy tensor
The previous procedure can be applied in the case of two spacetime dimensions. The action is given by [30; 31]
\[\kappa\int_{\mathcal{M}_{2}}\varepsilon_{\hat{A}\hat{B}\hat{C}}\hat{\Phi}^{ \hat{A}}\hat{F}^{\hat{B}\hat{C}}. \tag{4.1}\]
For the \(SO(2,1)\) gauge group, and the usual decomposition of the connection components, equations of motion imply vanishing torsion. Therefore, on-shell, this action is equivalent to the JT gravity,
\[\frac{1}{16\pi G}\int_{\mathcal{M}_{2}}\mathrm{d}^{2}x\;\varphi\big{(}R-2 \Lambda\big{)}. \tag{4.2}\]
In the second-order formalism, this action has to be accompanied by a GHY term, given by \(\frac{1}{8\pi G}\int_{\partial\mathcal{M}_{2}}\varphi K\), where \(K\) is the trace of the extrinsic curvature. Through holography, JT gravity is closely related to the SYK model [4; 5].
On-shell variation of the action (4.1) is given by
\[\kappa\int_{\partial\mathcal{M}_{2}}\varepsilon_{AB}\big{(}\hat{\varphi}\delta \hat{\omega}^{AB}+2\hat{\phi}^{A}\delta\hat{e}^{B}\big{)}. \tag{4.3}\]
We can now plug in the expansions (2.11)-(2.14), and extract the finite piece (part of the variation that does
not contain \(\rho\)). As anticipated, terms that do contain powers of \(\rho\) cancel exactly. The result is
\[4\kappa\int_{\partial\mathcal{M}_{2}}(-\varphi\delta k+\psi\delta e). \tag{4.4}\]
At this point, we will follow the same logic as in the 4-dimensional theory and add a boundary term that will move the variation from the \(k\) field to the scalar field \(\varphi\). The boundary term is \(4\kappa\int_{\partial\mathcal{M}_{2}}\varphi k\) (it clearly resembles the GHY term for JT gravity), and thus we obtain the following variation
\[\delta W=4\kappa\int_{\partial\mathcal{M}_{2}}(k\delta\varphi+\psi\delta e). \tag{4.5}\]
Note that our choice of boundary conditions is in the spirit of the "JT-like" boundary conditions from [13], and therefore does not give rise to the boundary Schwarzian dynamics - a low-energy limit of the SYK model. Our boundary conditions, on the other hand, are in perfect analogy with the type of boundary conditions discussed in [6; 10], which are useful for holographic considerations.
The one-point functions are
\[\langle\mathcal{T}\rangle =4\kappa\psi, \tag{4.6}\] \[\langle\mathcal{O}_{\varphi}\rangle =4\kappa k. \tag{4.7}\]
Constraints that have to be satisfied due to the bulk equations of motion are
\[\mathrm{d}\varphi =e\phi,\ \ \ \ \mathrm{d}\psi=k\phi, \tag{4.8}\] \[\mathrm{d}\phi =2k\varphi+2e\psi. \tag{4.9}\]
Note that \(e\langle\mathcal{T}\rangle+\varphi\langle\mathcal{O}_{\varphi}\rangle=0\), up to boundary terms, confirming there is no Weyl anomaly. The boundary theory is a 1-dimensional QFT, which is just ordinary quantum mechanics. The one-point function \(\langle\mathcal{T}\rangle\) therefore corresponds to the expectation value of the Hamiltonian.
This theory has a black hole solution similar to the one discussed in subsection III.1, obtained by dimensional reduction of a spinless BTZ black hole,
\[\mathrm{d}s^{2}=-(r^{2}-\mu)\mathrm{d}t^{2}+\frac{1}{r^{2}-\mu}\mathrm{d}r^{2},\ \ \ \ \ \hat{\varphi}=r. \tag{4.10}\]
In the FG gauge, we have the following expressions: \(e=\mathrm{d}t\), \(k=-\frac{\mu}{4}\mathrm{d}t\), \(\varphi=1\), \(\psi=\frac{\mu}{4}\) and \(\phi=0\). It is easy to check that equations (4.8) and (4.9) are indeed satisfied. Moreover, we can use the one-point function (4.6) to obtain the thermodynamics of this black hole solution. In the case of three dimensions and Einstein-Hilbert gravity, this was done in [22], and for JT gravity, the entropy was computed in [32]. We first note that avoiding the conical singularity in the Euclidean signature results in the temperature \(T=\frac{\sqrt{\mu}}{2\pi}\), as before. Furthermore, we have
\[\frac{\delta W}{\delta e_{t}^{t}}=\frac{\delta W}{\delta g^{tt}}\frac{\partial(e_{t}^{t}e^{tt})}{\partial e_{t}^{t}}=-2\frac{\delta W}{\delta g^{tt}}=\langle\mathcal{T}\rangle=E, \tag{4.11}\]
and thus from (4.6) it follows that the energy of the black hole is \(\kappa\mu\). If we write \(\kappa=\frac{1}{16\pi G}\), we get
\[E=\frac{\mu}{16\pi G}. \tag{4.12}\]
Using the relation \(T\mathrm{d}S=\mathrm{d}E\), together with the fact that entropy is zero for zero temperature, we obtain
\[S=\frac{\sqrt{\mu}}{4G}. \tag{4.13}\]
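Spelling out the integration behind (4.13): with \(E=\frac{\mu}{16\pi G}\) and \(T=\frac{\sqrt{\mu}}{2\pi}\),

\[\mathrm{d}S=\frac{\mathrm{d}E}{T}=\frac{2\pi}{\sqrt{\mu}}\,\frac{\mathrm{d}\mu}{16\pi G}=\frac{\mathrm{d}\mu}{8G\sqrt{\mu}}\quad\Longrightarrow\quad S=\int_{0}^{\mu}\frac{\mathrm{d}\mu^{\prime}}{8G\sqrt{\mu^{\prime}}}=\frac{\sqrt{\mu}}{4G}.\]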
This result coincides with the black hole entropy calculated in the metric formulation of JT gravity [33; 34].
Finally, we can deform the theory by adding a boundary term originating from the CS gravity action in three dimensions,
\[2\kappa\int_{\partial\mathcal{M}_{2}}\varepsilon_{AB}\phi^{A}e^{B}. \tag{4.14}\]
The total action is now on-shell divergent, and the machinery of holographic renormalization has to be used. Putting the boundary at some finite \(\rho=\varepsilon\), we add a counterterm
\[\frac{2}{\varepsilon}\kappa\int_{\partial\mathcal{M}_{2}}\varphi e, \tag{4.15}\]
that removes the divergences. In addition, we have to modify the previously added GHY term, so that we still have well-defined boundary conditions. It is easy to check that, upon adding the relevant GHY term, the one-point functions in the dual theory remain the same. The total GHY term, in this case, is given by
\[2\kappa\int_{\partial\mathcal{M}_{2}}(\psi e+\varphi k). \tag{4.16}\]
## V Gravitational Wilson lines
It was argued in a series of papers that, in the case of 3-dimensional CS gravity, the bulk Wilson line observable corresponds to a bi-local operator in the dual QFT. The same is expected to be true in the 2-dimensional model of topological gravity (see, for example, [35; 36; 37; 13]). In the context of gauge theories of gravity, Wilson lines are closely related to heavy particles moving in the gravitational field [38]. We would like to understand the importance of Wilson lines in the holographic setting. Throughout this section, we will not treat dilaton fields, and therefore our analysis is applicable (and possibly better suited) to other gauge theories of gravity. We are not a priori claiming that this object will be a two-point correlation function in the boundary QFT, but as we will see, in some cases this might be true. We start with the particle action
\[S_{\mathrm{par}}=-\int\mathrm{Tr}(KA_{\tau}^{h})\mathrm{d}\tau, \tag{23}\]
where the trace is taken in the explicit representation of the \(SO(3,2)\) algebra using gamma matrices. \(K\) is a fixed algebra element, given by \(K=mP_{0}+\frac{1}{2}sJ_{23}\). The field \(h\) is a Lorentz-algebra-valued one-form, and is used as a gauge parameter to gauge transform the field \(A_{\tau}=A_{\mu}\frac{{\rm d}x^{\mu}}{{\rm d}\tau}\). As explained in [38], this can be interpreted as a gravitational Wilson line insertion in the bulk. The Wilson line depends on the choice of representation. We work in an infinite-dimensional irreducible representation labelled by two numbers \((m,s)\), representing the particle's mass and spin.
The (Euclidean) path integral is given by
\[\int{\cal D}{\cal P}{\cal D}K{\cal D}h\;e^{-\int{\rm d}\tau\left(-{\rm Tr}(KA_{\tau}^{h})+{\rm L.M.}\right)}. \tag{5.2}\]
The action terms denoted by L.M. are the constraint terms fixing the two Casimirs of the \(SO(3,2)\) algebra,
\[{\rm L.M.}=\lambda_{1}\Big{(}\frac{1}{2}J^{\hat{A}\hat{B}}J_{\hat{A}\hat{B}}-c_{2}\Big{)}\] \[+\lambda_{2}\Big{(}\frac{1}{16}\varepsilon_{\hat{A}\hat{B}\hat{C}\hat{D}}\varepsilon^{\hat{E}\hat{F}\hat{G}\hat{H}}J^{\hat{A}\hat{B}}J^{\hat{C}\hat{D}}J_{\hat{E}\hat{F}}J_{\hat{G}\hat{H}}-c_{4}\Big{)}, \tag{5.3}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) stand for the Lagrange multipliers, and generators \(J^{\hat{A}\hat{B}}\) are defined in [38].
To obtain the two-point function on the boundary, it was necessary to include the integration over paths (\({\cal P}\)) connecting the two given boundary points because the bulk gauge curvature is not zero. If we were to consider the topological BF action, then there would be no need to include this integration and one could assume that the field \(h\) is valued in the whole \(SO(3,2)\) algebra, not only in the Lorentz subalgebra. In the latter case, the interpretation of particles as Wilson lines is exact, as explained in [38].
In [37], it was important to restrict to a concrete representation of the gauge group in order to make the connection with the entanglement entropy. In four spacetime dimensions, it is not expected that Wilson lines should be able to reproduce the boundary entanglement entropy, but it is nevertheless possible to study extended operators in the dual field theory. One can formulate surface defects (strings) in the bulk, anchored to the boundary along a given curve [39], that might be related to the entanglement entropy, but we will not pursue this question here. What we discuss in this section is similar to the construction of the two-point function in [40].
The quadratic Casimir \(c_{2}\sim m^{2}\) should be assumed large, so that we can safely use the saddle point approximation of the path integral (5.2). Using the standard holographic dictionary, a scalar bulk field of mass \(m\) is dual to a primary operator of scaling dimension \(\Delta=\frac{D}{2}+\sqrt{\frac{D^{2}}{4}+m^{2}}\). In the limit of a large mass, the scaling dimension coincides with \(m\sim\sqrt{c_{2}}\). We will also assume that the spin of the particle is either zero or large, in order to safely apply the semi-classical approximation. However, we will work in a regime where \(s\ll m\), so that the Casimir operators of the \(SO(3,2)\) group, \(c_{2}\) and \(c_{4}\), reproduce the well-known Casimir operators in Minkowski spacetime (where the concept of a particle is well-defined). Of course, the difference between the entanglement entropy and our two-point correlation function is that the entanglement entropy is UV divergent in field theory, while the two-point function has to be renormalized. The saddle point approximation yields
\[\langle{\cal O}(x_{1}){\cal O}(x_{2})\rangle=\lim_{\varepsilon\to 0}\varepsilon^{2m}e^{-S_{\mathrm{par}}|_{\mathrm{on-shell}}}. \tag{5.4}\]
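For reference, the large-mass statement above can be made explicit by expanding the scaling dimension:

\[\Delta=\frac{D}{2}+\sqrt{\frac{D^{2}}{4}+m^{2}}=m+\frac{D}{2}+\frac{D^{2}}{8m}+O\Big{(}\frac{1}{m^{3}}\Big{)},\]

so for \(m\gg D\) indeed \(\Delta\approx m\sim\sqrt{c_{2}}\).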
One can then calculate, using the described prescription, the two-point function in different states with semi-classical bulk. For example, let us discuss the case of AdS\({}_{4}\) bulk spacetime and a spinless particle. Equation of motion obtained by varying \(h\) is [38]
\[\frac{{\rm d}z^{\mu}}{{\rm d}\tau}p^{\nu}=\frac{{\rm d}z^{\nu}}{{\rm d}\tau}p^ {\mu}, \tag{5.5}\]
which is solved by \(p^{\mu}=m\frac{{\rm d}z^{\mu}}{{\rm d}\tau}\) (\(\tau\) being the affine parameter of a geodesic). The equation following from the variation with respect to the path \({\cal P}\) then reproduces the standard geodesic equation. For simplicity, we use the coordinate \(z=\sqrt{\rho}\), such that the metric of AdS\({}_{4}\) reads
\[{\rm d}s^{2}=\frac{{\rm d}z^{2}-{\rm d}t^{2}+{\rm d}x^{2}+{\rm d}y^{2}}{z^{2}}. \tag{5.6}\]
Focusing on constant-time correlation functions between two boundary points at distance \(L\) (defined with respect to the flat boundary metric), the geodesics are semi-circles connecting the two boundary points [41]. We can then calculate the on-shell action and obtain the final result for the two-point correlation function. The result is given by
\[\langle{\cal O}(x_{1}){\cal O}(x_{2})\rangle\sim\frac{1}{L^{2m}}. \tag{5.7}\]
Finally, consider a pair of QFT states, \(|\psi_{g_{1}}\rangle\) and \(|\psi_{g_{2}}\rangle\), that are dual to a pair of very distinct semi-classical bulk geometries, \(g_{1}\) and \(g_{2}\), respectively. One could ask what
Figure 1: Probe particle world-line, interpreted as a gravitational Wilson line in the bulk.
should be the dual description of a superposition of these two QFT states, e.g. \(\frac{1}{\sqrt{2}}(|\psi_{g_{1}}\rangle+|\psi_{g_{2}}\rangle)\). Based on the considerations of the entanglement entropy in [42; 43] we believe that the answer to this question would be that it is a weighted sum of correlation functions corresponding to the states \(|\psi_{g_{1}}\rangle\) and \(|\psi_{g_{2}}\rangle\).
## VI Conclusion and outlook
The holographic analysis of Chamseddine's even-dimensional topological gravity is presented for all even dimensions. One-point correlation functions of the dual QFT are obtained and the generalised holographic Weyl anomaly is discussed. A method of computing two-point correlation functions in terms of gravitational Wilson lines is also proposed. We emphasized the role of boundary GHY-like terms in defining the bulk action with an appropriate holographic interpretation. In that respect, our results contribute to a better understanding of holography for Riemann-Cartan spaces [44]. We should also point out the similarity of the CTG action (2.4) to that of Brans-Dicke theory (a scalar-tensor modified gravity theory) [45]. This theory can be considered as a particular frame change of \(f(R)\) models. Some (but not complete) progress has been made in understanding holography in \(f(R)\) models, especially in three dimensions [46].
Although we started from a concrete gravity theory in the bulk and aimed at finding its holographic features, our motivation largely came from the considerations regarding the holographic description of 4-dimensional hydrodynamics of spin-systems [11] (see also [47]). The even-dimensional topological theory of gravity that we focused on also predicts the possibility of obtaining the nonzero spin-current for the boundary QFT, but now an odd-dimensional one. This is due to the first-order treatment of our gravity theory. In particular, it would be impossible to formulate the CTG action (2.4) with \(SO(3,2)\) gauge symmetry, assuming zero torsion. However, it is interesting to note that the boundary spin-current is nonvanishing even for some torsion-free bulk geometries, which can be relevant for the physics of spin systems. A potential further investigation could involve suitable modelling of the bulk scalar fields to capture some interesting features of the dual spin systems.
**Acknowledgments**
We thank Olivera Miskovic and Rodrigo Olea for useful discussions on similar topics, and Dusan Novicic for useful comments on boundary conditions. We also thank C. Brukner, A.C. de la Hamette and V. S. Kabel for the discussion on the superposition of semi-classical geometries. The work of D.D. and D.G. is supported by funding provided by the Faculty of Physics, University of Belgrade, through grant number 451-03-47/2023-01/200162 by the Ministry of Science, Technological Development and Innovations of the Republic of Serbia.
|
2306.14440 | Pattern Formation for Fat Robots with Lights | Given a set of $n\geq 1$ unit disk robots in the Euclidean plane, we consider
the Pattern Formation problem, i.e., the robots must reposition themselves to
form a given target pattern. This problem arises under obstructed visibility,
where a robot cannot see another robot if there is a third robot on the
straight line segment between the two robots. Recently, this problem was solved
for fat robots that agree on at least one axis in the robots with lights model
where each robot is equipped with an externally visible persistent light that
can assume colors from a fixed set of colors [K. Bose, R. Adhikary, M. K.
Kundu, and B. Sau. Arbitrary pattern formation by opaque fat robots with
lights. CALDAM, pages 347-359, 2020]. In this work, we reduce the number of
colors needed and remove the axis-agreement requirement. In particular, we
present an algorithm requiring 7 colors when scaling the target pattern is
allowed and an 8-color algorithm if scaling is not allowed. Our algorithms run
in $O(n)$ rounds plus the time needed for the robots to elect a leader. | Rusul J. Alsaedi, Joachim Gudmundsson, André van Renssen | 2023-06-26T06:10:05Z | http://arxiv.org/abs/2306.14440v1 | # Pattern Formation for Fat Robots with Lights
###### Abstract
Given a set of \(n\geq 1\) unit disk robots in the Euclidean plane, we consider the Pattern Formation problem, i.e., the robots must reposition themselves to form a given target pattern. This problem arises under obstructed visibility, where a robot cannot see another robot if there is a third robot on the straight line segment between the two robots. Recently, this problem was solved for fat robots that agree on at least one axis in the robots with lights model where each robot is equipped with an externally visible persistent light that can assume colors from a fixed set of colors [6]. In this work, we reduce the number of colors needed and remove the axis-agreement requirement. In particular, we present an algorithm requiring 7 colors when scaling the target pattern is allowed and an 8-color algorithm if scaling is not allowed. Our algorithms run in \(O(n)\) rounds plus the time needed for the robots to elect a leader.
Pattern formation, Robots with lights, Fat robots, Obstructed visibility, Collision avoidance

## 1 Introduction
One of the problems studied under obstructed visibility (and the focus of this paper) is the Pattern Formation problem: Starting from arbitrary distinct positions in the plane, determine a schedule to reposition the robots such that they form the given (target) pattern without collisions [6, 8, 33]. We say that two robots collide if at any time they share the same position. In previous work, the target pattern is allowed to be scaled, rotated, translated, and reflected.
To tackle this and other robot problems, a generalization of the classical model has recently become the focus of significant interest [2, 6, 8, 17, 25, 26, 28, 29, 30, 32, 33]. This variant, called the luminous robots model (or robots with lights model), equips robots with an externally visible light which can assume different colors from a fixed set. These lights are persistent, i.e., the color of the light is not erased at the end of the LCM cycle. Except the assumption of the availability of lights, the robots work similarly to the classical model. This model corresponds to the classical oblivious robots model when the number of colors in the set is \(1\)[17, 20]. One objective in this model is to solve the problem while minimizing the size of the color set.
### Related Work
There has been considerable work on the Pattern Formation problem for point robots [5, 7, 9, 22, 23, 24, 27] and some of these also considered the lights model while solving the problem [8, 33]. Other work in this area includes that by Cicerone _et al._[11], who presented an algorithm to solve the Pattern Formation problem for point robots with chirality (the robots agree on the orientation of the axes, i.e., on the meaning of clockwise), and Flocchini _et al._[21], who studied the problem for point robots in the asynchronous setting, but they required that the robots agree on their environment and observe the positions of the other robots. While these works provided techniques to overcome various difficulties, they did not take the physical extents of the robots into account. Unfortunately, the techniques developed for point robots cannot be applied directly to solve the Pattern Formation for fat robots, due to the effect these extends have on collision avoidance.
The work most closely related to our result is due to Bose _et al._[6]. They studied the Pattern Formation problem for fat robots in the robots with lights model and used 10 colors. Unfortunately, their solution assumes that all robots agree on an axis of the coordinate system. Our solutions remove this assumption and reduce the number of colors required.
Other related work for fat robots includes the work by Kundu _et al._[24], who studied the Pattern Formation problem for fat robots with lights on an infinite grid with one axis agreement. They solved the problem using 9 colors. Unfortunately, they did not bound the running time of their algorithm.
### Contributions
We first present two algorithms using at most 11 colors, which through careful analysis we improve to use only 7 colors when scaling of the pattern is allowed and 8 if this is not allowed. None of our algorithms require the pattern to be rotated, translated, or reflected. Our algorithms require \(O(n)+O(q\log n)\) rounds. Here \(q>0\) is related to leader election, which can be solved in \(O(q\log n)\) rounds with probability at least \(1-n^{-q}\)[33].
Our algorithms work under the fully synchronous model and are collision-free. Interestingly, unlike previous work, our algorithms do not require any additional assumptions on the capabilities of the robots or any shared information or coordinate system. The moves of the
robots are rigid, i.e., an adversary does not have the power to stop a moving robot before reaching its computed destination [20].
## 2 Preliminaries
Consider a set of \(n\geq 1\) anonymous robots \(R=\{r_{1},r_{2},\ldots,r_{n}\}\) operating in the Euclidean plane \(\mathbb{R}^{2}\). The number of robots \(n\) is not assumed to be known to the robots. We assume that each robot \(r_{i}\in R\) is a non-transparent disk with diameter \(1\). The center of the robot \(r_{i}\) is denoted \(c_{i}\), and the position of \(c_{i}\) is also said to be the position of \(r_{i}\). We denote by \(\mathrm{d}(r_{i},r_{j})\) the Euclidean distance between the two centers \(c_{i}\) and \(c_{j}\). For simplicity, we use \(r_{i}\) to denote both the robot \(r_{i}\) and the position of its center \(c_{i}\). Each robot \(r_{i}\) has its own coordinate system, and it knows its position with respect to this coordinate system. Robots may not agree on the orientation of their coordinate systems. However, since all the robots are of unit size, they implicitly agree on the notion of unit length.
Each robot has a camera to take a snapshot of the plane and the distance that is visible to this camera is infinite, provided that there are no obstacles blocking its view (i.e., another robot) [1]. Following the fat robot model [1, 15], we assume that a robot \(r_{i}\) can see another robot \(r_{j}\) if there is at least one point on the bounding circle of \(r_{j}\) that is visible to \(r_{i}\). Similarly, we say that a point \(p\) in the plane is visible to a robot \(r_{i}\) if there is a point \(p_{i}\) on the bounding circle of \(r_{i}\) such that the straight line segment \(\overline{p_{i}p}\) does not intersect any other robot.
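To make this concrete, the following Python sketch implements a conservative visibility test under the simpler point-visibility approximation: it checks only the segment between centers rather than all lines of sight to the bounding circles, so it may report some pairs as blocked that the fat-robot definition above would count as mutually visible (all names are ours):

```python
import math

def dist_point_segment(p, a, b):
    """Euclidean distance from point p to the segment ab."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Projection parameter of p onto the line through a and b, clamped to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def sees(i, j, centers, radius=0.5):
    """Conservative check: r_i sees r_j if no third robot's disk (radius 0.5
    for unit-diameter robots) intersects the segment between their centers."""
    for k, c in enumerate(centers):
        if k in (i, j):
            continue
        if dist_point_segment(c, centers[i], centers[j]) < radius:
            return False
    return True
```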
Each robot \(r_{i}\) is equipped with an externally visible light that can assume any color from a fixed set \(C\) of colors. The set \(C\) is the same for all robots. The color of the light of robot \(r\) at time \(t\) can be seen by all robots that are visible to \(r\) at time \(t\).
At any time \(t\), a robot \(r_{i}\in R\) is either active or inactive. When active, \(r_{i}\) performs a sequence of _Look-Compute-Move_ (LCM) operations:
* _Look:_ the robot takes a snapshot of the positions of the robots visible to it in its own coordinate system;
* _Compute:_ executes its algorithm using the snapshot which returns a destination point \(x\in\mathbb{R}^{2}\) and a color \(c\in C\); and
* _Move:_ moves to the computed destination \(x\in\mathbb{R}^{2}\) (if \(x\) is different than its current position) and sets its own light to color \(c\).
Each robot executes the same algorithm locally every time it is activated and a robot's movement cannot be interrupted by an adversary. Two robots \(r_{i}\) and \(r_{j}\) are said to _collide_ at time \(t\) if the bounding circles of \(r_{i}\) and \(r_{j}\) share a common point at time \(t\). To avoid collisions among robots, we thus have to ensure that at all times \(\mathrm{d}(r_{i},r_{j})\geq 1\) for any robots \(r_{i}\) and \(r_{j}\).
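As an illustration of the model rather than of any particular algorithm, one fully synchronous round can be sketched as follows; `sees` is the visibility test from the previous sketch, the `algorithm` callback is a hypothetical stand-in for the Compute step, and the final assertion encodes the collision constraint \(\mathrm{d}(r_{i},r_{j})\geq 1\):

```python
import math
from dataclasses import dataclass

@dataclass
class Robot:
    x: float
    y: float
    color: str = "off"  # externally visible persistent light

def fully_synchronous_round(robots, algorithm):
    """One LCM cycle: every robot Looks (snapshot of visible robots),
    Computes a destination and a color, then all Move simultaneously."""
    centers = [(r.x, r.y) for r in robots]
    decisions = []
    for i, r in enumerate(robots):
        snapshot = [(s.x, s.y, s.color) for j, s in enumerate(robots)
                    if j == i or sees(i, j, centers)]
        decisions.append(algorithm(r, snapshot))  # -> ((new_x, new_y), color)
    for r, ((nx, ny), c) in zip(robots, decisions):
        r.x, r.y, r.color = nx, ny, c
    # Collision constraint: centers of unit-diameter robots stay >= 1 apart.
    assert all(math.hypot(a.x - b.x, a.y - b.y) >= 1.0
               for i, a in enumerate(robots) for b in robots[i + 1:])
```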
We assume that the execution starts at time \(0\). At this time, the robots start in arbitrary positions with \(\mathrm{d}(r_{i},r_{j})\geq 1\) for any two robots \(r_{i}\) and \(r_{j}\), and the color of the light of each robot is set to _off_.
The Pattern Formation problem is now defined as follows: Given any initial positions of the robots, the robots must reposition themselves to form a given target pattern without having any collisions in the process. The target pattern is allowed to be scaled, rotated, translated, and reflected. An algorithm is said to solve the Pattern Formation problem if it always achieves the target pattern from any initial configuration. We measure the quality of the algorithm using the number of distinct colors in the set \(C\) and the number of rounds needed to solve the Pattern Formation problem.
## 3 Algorithm
In this section, we present an algorithm that solves the Pattern Formation problem for \(n\geq 1\) robots of unit disk size in the robots with lights model. Our algorithm assumes the fully synchronous setting. We first present two algorithms that use at most 11 colors: one algorithm requires the target pattern to be scaled, while the other does not. Neither algorithm requires the target pattern to be rotated or reflected. Our algorithms execute four phases: Mutual Visibility, Leader Election, Line Formation, and Pattern Formation.
### Mutual Visibility
Starting from any initial configuration, the goal of this phase is to move the robots to positions where every robot can see all other \(n-1\) robots. Previous work [2] achieves this by positioning all robots on a convex hull; the presented algorithm runs in \(O(n)\) rounds and uses only two colors (_off_ for robots that are not yet a corner of the convex hull, and _corner_ for robots that are). This algorithm runs in the fully synchronous setting with obstructed visibility in the robots with lights model and avoids collisions. While we use this algorithm as a black box, the one important thing to note is that, to ensure that there is always enough space for all robots on the boundary of the convex hull, the corner robots expand the convex hull in each round. Hence, until this phase is completed, the corner robots move each round. Figure 1 shows an initial configuration as well as the final situation of the Mutual Visibility phase.
### Leader Election
The goal of the Leader Election phase is for a single robot to be elected as a leader. After the Mutual Visibility phase, every robot sees all other \(n-1\) robots, so the robots know the value of \(n\) (while they remain visible to each other, as they have no memory). It is known that electing a leader is not possible using a deterministic algorithm in an anonymous distributed system [3]. Hence, we use the randomized algorithm by Vaidyanathan _et al._[33].
Initially, all robots are competing and use a color, say _competing_, to indicate this. The algorithm proceeds in iterations until it finishes with a single leader. Each iteration has a constant number of rounds. In an iteration with \(n\) competing robots, each robot flips a coin whose probability of success is \(1/n\). If a robot is successful, then the robot leaves its color as _competing_. Otherwise, it changes its color to _non-competing_. If there is exactly one competing robot left, the robots have successfully elected a leader and this robot changes its color to the _leader_ color. Otherwise, this iteration was unsuccessful and all robots change their color back to _competing_ and try again. Figure 2 shows an example of the Leader Election phase.
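A sketch of one iteration of this randomized procedure (reusing the `Robot` class from the sketch above; the coin bias \(1/n\) uses the value of \(n\) that every robot can deduce from its snapshot after the Mutual Visibility phase):

```python
import random

def leader_election_iteration(robots):
    """One iteration: each robot keeps the color 'competing' with
    probability 1/n; exactly one success elects the leader."""
    n = len(robots)
    for r in robots:
        r.color = "competing" if random.random() < 1.0 / n else "non-competing"
    competing = [r for r in robots if r.color == "competing"]
    if len(competing) == 1:
        competing[0].color = "leader"
        return competing[0]
    for r in robots:  # unsuccessful iteration: reset and try again
        r.color = "competing"
    return None
```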
Figure 1: An example of the Mutual Visibility phase: (a) an initial configuration and (b) the end configuration.
### Line Formation
The goal of the third phase is to move the robots from their convex hull positions to a line away from both their old positions and away from where the leader will build the target pattern in the next phase. In this phase the leader will consider its own coordinate system and move the other robots one by one to achieve this goal. Where the leader forms the line of robots depends on the current location of the convex hull of robots and the area required to build the pattern (we assume without loss of generality that the leader will use its origin \((0,0)\) as the top-right corner of the bounding box of the target pattern).
Since the leader is a robot on the convex hull, at least one of the quadrants originating from the leader's position points away from the convex hull and thus does not contain any robots. We assume without loss of generality that this is the lower-left quadrant in the leader's coordinate system. The leader will use this quadrant to build the line. Without loss of generality, we will explain the Line Formation and Pattern Formation phases using this quadrant. This assumption can easily be removed by mirroring the approach along the \(x\)- or \(y\)-axis.
The leader computes the topmost position of the line as follows. Let \(x_{hull}\) and \(y_{hull}\) denote the minimum \(x\)- and \(y\)-coordinate of any robot on the convex hull according to the leader's coordinate system. Similarly, let \(x_{pattern}\) and \(y_{pattern}\) denote the minimum \(x\)- and \(y\)-coordinate of the target pattern according to the leader's coordinate system. The leader now computes the topmost position on the line \((x_{line},y_{line})\) as \(x_{line}=\min(x_{hull},x_{pattern})-M\) and \(y_{line}=\min(y_{hull},y_{pattern})-M\), where \(M\) is some large constant. Picking this coordinate guarantees that the line will be built below and to the left of both the convex hull and the target pattern. We note that while the leader can compute this coordinate, it cannot store it for use in future rounds, so it needs to move directly from its current position to where it wants to build the line.
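As a concrete illustration of this computation, the following Python sketch (our own; the value of the constant \(M\) and all names are ours) derives the topmost line position from the hull and pattern extents:

```python
def topmost_line_position(hull_points, pattern_points, M=100.0):
    """Compute (x_line, y_line) in the leader's coordinate system:
    the topmost position of the line, placed M below and to the left
    of both the convex hull and the target pattern."""
    x_hull = min(x for x, _ in hull_points)
    y_hull = min(y for _, y in hull_points)
    x_pattern = min(x for x, _ in pattern_points)
    y_pattern = min(y for _, y in pattern_points)
    return (min(x_hull, x_pattern) - M, min(y_hull, y_pattern) - M)

# Example: a hull near the origin and a pattern in the lower-left quadrant.
print(topmost_line_position([(0, 0), (4, 2), (1, 5)], [(-3, -3), (0, 0)]))
```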
If there is only one leftmost robot on the convex hull, the leader now moves to this position \((x_{line},y_{line})\) and changes its color to _follow the leader_. As there are no robots on the line yet, this signals that the robot closest to the leader's position should move to this location. In the next round, the leader moves down one unit (and sets its color to _do not follow_) to avoid colliding with the approaching robot. Once this robot reaches its position, it sets its color to _on the line_. Figure 3 shows these movements as well as the remaining ones in this phase. If there is more than one leftmost robot on the convex hull, the leader moves to ensure that the leftmost bottommost robot is the unique closest one in the next round. It does this by computing an \(x_{temp}\) such that this is the case and then setting \(x^{\prime}_{line}=\min(x_{temp},x_{line})\) to get new coordinates \((x^{\prime}_{line},y_{line})\) ensuring that the moves described earlier activate a single robot to move to the leader's position. Once the first robot is in place, this robot will not
Figure 2: An example of the Leader Election phase: (a) initially all robots are competing (orange), (b) an unsuccessful iteration with competing and non-competing (gray) robots, and (c) a successful iteration, where a single robot is elected leader (purple).
move for the remainder of the phase.
Now that one robot is in place, the leader can observe the location of this robot to "remember" where the line should be built. The leader proceeds to move the remaining robots to the line one at a time. To move a robot, the leader moves one unit to the left of the leftmost bottommost robot \(r_{i}\) remaining on the convex hull and changes its color to _follow the leader_. Robot \(r_{i}\) changes its color to _following the leader_, but does not move yet.
Figure 3: An example of the Line Formation phase (numbers indicate order of operations): (a) the leader (purple) moves to the first position on the line and signals (yellow) the closest robot to move there in the next round before moving out of the way and setting its color to _do not follow_ (lightblue), (b) the first robot changes its color to _on the line_ (green) and the leader moves next to the next robot to be moved, (c) the leader moves the next robot to its position on the line while the following robot has its color set to _following the leader_ (brown), after which the leader moves to guide the next robot, and (d) the result of the Line Formation phase.
The leader then repeatedly moves while robot \(r_{i}\) follows the leader by moving to the leader's previous observed position until it reaches its position on the line. The movements of each robot are: (1) \(r_{i}\) moves one unit to the left while the leader moves down until it reaches the \(y\)-coordinate of robot \(r_{i}\)'s intended position on the line, (2) \(r_{i}\) moves down while the leader moves left to \(r_{i}\)'s intended position on the line, and (3) the leader moves two units down and changes its color to _do not follow_ while \(r_{i}\) moves to its position on the line and changes its color to _on the line_. The leader now moves right to observe the remaining robots to find the new leftmost bottommost remaining robot. This process is repeated until every robot is positioned on the line. To avoid collisions, the leader ensures that there are two units of vertical space between consecutive robots on the line. We note that a robot can determine whether the phase has ended by seeing whether it can see any colors other than _on the line_ and any of the leader's colors.
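The per-robot movement pattern above can be summarized by the follower's waypoints; the following small Python sketch (ours, for illustration only) lists them:

```python
def follower_waypoints(robot, target):
    """Waypoints of a robot r_i guided from its hull position `robot`
    to its line position `target`, mirroring the three movements in
    the text: one unit left, down to the target height, then onto the
    target position on the line."""
    x, y = robot
    return [(x - 1.0, y),          # (1) one unit to the left
            (x - 1.0, target[1]),  # (2) down to the target height
            target]                # (3) onto the line

print(follower_waypoints((5.0, 4.0), (-2.0, -6.0)))
```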
### Pattern Formation
The goal of the Pattern Formation phase is to relocate the robots to form the given target pattern. The leader can determine where the robots should be placed by placing the first robot at \((0,0)\) with respect to its own coordinate system and building the pattern from there. The leader will build the pattern from this first robot towards the line, so given our assumed positioning this will be done from right to left, top to bottom. As the placement of the robots can be uniquely determined based on the robots on the line and the origin of the leader's coordinate system, the leader can determine what the last placed robot is and thus which part of the pattern has already been completed. During this phase, unless instructed otherwise by the leader, the robots stay on the line and do not move.
We present two versions of the Pattern Formation phase: one that requires scaling of the pattern (requiring one new color) and one that does not (requiring two new colors).
#### The Algorithm with Scaling of the Target Pattern
In this version, we assume that the target pattern can be scaled. The first robot is placed at \((0,0)\) in the leader's coordinate system. Again, we explain our algorithm in the setting where the leader builds the pattern in the lower-left quadrant of the origin, but the algorithm can be mirrored to obtain the other quadrants. This phase uses one new color to allow robots to remember that they have reached their final position in the pattern.
After the Line Formation phase, all the robots are on a line with their light set to _on the line_, except for the leader. Similar to the previous phase, the idea is that the leader moves a single robot to the pattern by moving next to it and guiding the robot to the intended position in the pattern. The leader moves the robots from top to bottom as ordered along the line. The following process is illustrated in Figure 4. To activate a robot \(r_{i}\) on the line, the leader moves next to it and sets its color to _follow the leader_. The leader then moves such that its \(y\)-coordinate is equal to that of the intended position in the pattern and in the next round the leader moves to the intended position in the pattern. Robot \(r_{i}\) has set its color to indicate that it is following the leader and repeatedly moves to the leader's last observed position. To avoid collisions in the last step, the leader moves two units down and sets its color to _do not follow_ to indicate that the robot reached its final position. At this point, \(r_{i}\) sets its color to _at final position_. The leader now proceeds to pick up the next robot and repeats this process until the pattern is formed. If needed, the leader moves to fill the
final position itself, which happens when the pattern requires exactly \(n\) robots1.
Footnote 1: We note that if the pattern requires more than \(n\) robots, it cannot be built and all robots can detect this situation in the Leader Election phase and terminate at that point.
In order to ensure that after guiding a robot to its intended position the leader can indeed move down two units, we scale the pattern by a fixed constant factor, say \(10\). Since in the given target pattern robots can at most be touching each other (otherwise the pattern cannot be constructed, which can be detected by all robots before the algorithm even starts), scaling
Figure 4: An example of the Pattern Formation phase when scaling is allowed (numbers indicate order of operations): (a) the leader moves to the position above the topmost robot \(r_{1}\) on the line and signals that it should follow (yellow), which causes \(r_{1}\) to change its color to _following the leader_ (brown) and move accordingly, (b) the leader moves out of the way so robot \(r_{1}\) can reach its final position, which the leader indicates by changing its color to _do not follow_ (lightblue) and \(r_{1}\) sets its color to _at final position_ (blue), after which the leader moves to be immediately above the next robot on the line, (c) the leader guides the next robot to its final position, and (d) all robots have reached their final position.
the pattern this way ensures that there is ample space between the robots for the leader to move as described.
#### The Algorithm without Scaling of the Target Pattern
In this version, we assume that the target pattern is not allowed to be scaled. As in the previous case, the pattern is built with respect to the origin \((0,0)\) in the leader's coordinate system and the pattern will be built in the lower left quadrant. This version of the phase needs two new colors: one to "push" robots, and one to allow robots to remember when they have reached their final position in the pattern.
The high-level idea in this version of the Pattern Formation phase is that the leader first "pulls" a robot behind it to position it in such a way that there is a straight-line path from the robot to its intended position in the target pattern. Once the robot reaches this position, the leader moves away to indicate to the robot how far it needs to move in the direction _opposite_ to the direction the leader moved in, effectively "pushing" the robot to its final position.
In more detail (see also Figure 5), in order to move a robot \(r_{i}\), the leader starts by moving to the position that has the same \(x\)-coordinate as the line and same \(y\)-coordinate of the final position of \(r_{i}\) in the target pattern. Here it changes its color to _follow the leader_, signaling to the topmost robot \(r_{i}\) of the line that it should move to the leader's current position. Robot \(r_{i}\) observes this and changes its color to _following the leader_ to remember what it is supposed to do. Let \(d\) be the distance between the leader's current position and the position in the pattern where it wants to place \(r_{i}\). Note that since the leader is already at the \(y\)-coordinate of this intended location, this distance is just the difference in \(x\)-coordinate. After determining \(d\), the leader moves \(d\) to the left and changes its color to _push_. This indicates to robot \(r_{i}\) that in the next round it should move from its current position (where the leader used to be) to the position a distance of \(d\) away from its current position in the direction opposite to where it sees the leader. Once robot \(r_{i}\) reaches its final position, it changes its color to _at final position_ to remember this. Since every robot is moved using exactly one "pull" and one "push", they know that after the push they have arrived at their final position without the leader having to indicate this in any way.
The leader now moves to fetch the next robot and this process is repeated until the pattern is formed. As before, if needed, the leader moves to fill the final position itself.
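As a sanity check of the push arithmetic, here is a small Python sketch (our own illustration; the names and example coordinates are ours) of one pull-push cycle:

```python
import math

def pull_and_push(line_x, final_pos):
    """One pull-push cycle of the Pattern Formation phase without
    scaling: the leader pulls r_i to (line_x, f_y), then moves the
    distance d = |f_x - line_x| further left; r_i then moves d away
    from the leader (to the right), landing on its final position."""
    fx, fy = final_pos
    pull_target = (line_x, fy)        # r_i is pulled here first
    d = abs(fx - line_x)              # the distance the leader announces
    leader_pos = (line_x - d, fy)     # leader after moving d to the left
    push_target = (line_x + d, fy)    # r_i moves d away from the leader
    assert math.isclose(push_target[0], fx)
    return pull_target, leader_pos, push_target

print(pull_and_push(line_x=-10.0, final_pos=(-4.0, -7.0)))
```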
## 4 Analysis
We proceed to prove that our algorithms solve the Pattern Formation problem for fat robots and that there are no collisions among the robots. Analogous to our algorithm description, throughout this analysis we will assume without loss of generality that the leader uses the lower left quadrant in the Line Formation and Pattern Formation phases. We start by stating the result by Alsaedi _et al._[2] on the Mutual Visibility phase.
**Theorem 1** ([2]).: _The algorithm of [2] solves the Mutual Visibility problem for unit disk robots in \(O(n)\) rounds without collisions in the fully synchronous setting using two colors._
Next, we consider the Leader Election phase. This phase builds on the algorithm described and analyzed by Vaidyanathan _et al._[33], who bounded the expected number of rounds needed for this phase as well as the associated probability. The number of colors follows directly from needing three new colors to discern competing, non-competing, and leader robots.
**Theorem 2**.: _For any \(q>0\), Leader Election can be solved in the fully synchronous setting for \(n\) robots in \(O(q\log n)\) rounds with probability at least \(1-n^{-q}\) using three colors._
Next, we analyze the Line Formation phase.
**Lemma 3**.: _In every round of the Line Formation phase, only one robot that is not the leader moves from the convex hull to be positioned on the line, avoiding collisions._
Proof.: Robots only move when the leader activates them to do so. If there are no robots on
Figure 5: An example of the Pattern Formation phase when scaling is not allowed (numbers indicate order of operations): (a) the leader moves to the position on the line with \(y\)-coordinate equal to where the first robot \(r_{1}\) needs to be placed and signals that \(r_{1}\) should follow (yellow), which causes \(r_{1}\) to change its color to _following the leader_ (brown) and move accordingly while the leader moves away preparing to push \(r_{1}\), (b) the leader changes its color to _push_ (gold) and \(r_{1}\) moves distance equal to its current distance to the leader away from the leader to reach its final position and sets its color to _at final position_ (blue), after which the leader moves to pull the next robot on the line, (c) the leader first pulls and then pushes the next robot to its final position, and (d) all robots have reached their final position.
the line, all robots can see each other and thus they can determine whether they are the robot closest to the leader, resulting in only a single robot being activated. Once there are robots on the line, the leader moves next to a robot and sets its color to _follow the leader_ to activate it. As the leader activates the robots one at a time and completes moving a robot to the line before activating the next robot, only one robot moves from the convex hull to be positioned on the line. The "no collisions" part of the lemma then follows from the fact that we have only one moving robot, and that the robots are picked from left to right and start with a horizontal movement to move them away from the convex hull before moving vertically, thus ensuring there are no collisions.
**Lemma 4**.: _The Line Formation phase uses four new colors._
Proof.: The algorithm uses one color for the leader to indicate that a robot should follow it, one for a robot to store that it is following the leader, one for the leader to signal that the robot should stop following it, and one final color for the robot to store that it is done with this phase.
**Lemma 5**.: _The Line Formation phase takes \(O(n)\) rounds._
Proof.: In the Line Formation phase, the leader moves each robot from the convex hull to a line. The leader requires at most three rounds to move a robot to its position on the line. After each robot is moved, the leader uses at most three rounds to move next to the next robot. Therefore, the Line Formation phase takes \(O(n)\) rounds to move all the robots from the convex hull to the line.
Lemmas 3, 4, and 5 imply the following theorem.
**Theorem 6**.: _The Line Formation phase takes \(O(n)\) rounds, avoids collisions, and uses four new colors._
Next, we analyze the Pattern Formation phase. We start with the version where we can scale the pattern.
**Lemma 7**.: _In every round of the Pattern Formation phase with scaling, only one robot that is not the leader moves from the line to be positioned in the target pattern, avoiding collisions._
Proof.: By construction, a robot only moves after it is activated by the leader. Since the leader moves the robots one at a time, only one robot will move. By moving the topmost robot of the line and starting by moving this robot vertically up (away from the other robots on the line), there are no collisions with other robots on the line. Furthermore, scaling the pattern by a large enough factor (such as 10), we ensure that there are also no collisions with robots in the pattern either: the pattern is built from right to left, meaning that we can only collide with robots that were originally at most a distance of 1 from the current robot. However, because the pattern is scaled, this distance is now increased to 10, meaning that any overlap between the robot's paths into the pattern and any previously placed robots is removed.
**Lemma 8**.: _The Pattern Formation phase with scaling uses one new color._
Proof.: By construction, the algorithm uses only a single new color during this phase: for a robot to store that it has reached its final position.
**Lemma 9**.: _The Pattern Formation phase with scaling takes \(O(n)\) rounds._
Proof.: In the Pattern Formation phase, the robots move from the line to the target pattern one by one. The leader moves at most three times to move next to the robot it wants to move and it uses three rounds to move this robot to its final position in the pattern. Hence, it takes a constant number of rounds to move one robot from the line to the target pattern. As a result, in total we require at most \(O(n)\) rounds to move all the robots to the target pattern.
Using Lemmas 7, 8, and 9, we get the following theorem.
**Theorem 10**.: _The Pattern Formation phase with scaling takes \(O(n)\) rounds, avoids collisions, and uses one new color._
Using Theorems 1, 2, 6, and 10, we can now conclude the following.
**Theorem 11**.: _Our algorithm solves the Pattern Formation problem when scaling the target pattern is allowed for \(n\) unit disk robots in \(O(n)+O(q\log n)\) rounds without collisions in the fully synchronous setting using 10 colors, where the Leader Election phase takes \(O(q\log n)\) rounds with probability at least \(1-n^{-q}\)._
Finally, we analyze the Pattern Formation phase if scaling is not allowed.
**Lemma 12**.: _In every round of the Pattern Formation phase without scaling, only one robot that is not the leader moves from the line to be positioned in the target pattern, avoiding collisions._
Proof.: By construction, a robot only moves once it is activated by the leader. Hence, only one robot moves at a time, and since the leader always picks the topmost robot on the line and moves it vertically away from the line, no collisions with the other robots on the line can occur. Since the leader knows the pattern and can determine the last robot placed based on the placement order of the pattern and the visible robots, it can also compute how to push the robot into the pattern to avoid collisions (since the pattern is built right to left from top to bottom, a left to right push exists). By pushing the robot this way, there are thus no collisions.
**Lemma 13**.: _The Pattern Formation phase without scaling uses two new colors._
Proof.: By construction, the algorithm uses two new colors: one to signal that a robot is being pushed, and one for the robot to store it has reached its final position.
**Lemma 14**.: _The Pattern Formation phase without scaling takes \(O(n)\) rounds._
Proof.: In the Pattern Formation phase, the robots move from the line to the target pattern one by one. The leader uses at most three moves to position itself above the topmost robot on the line and then another three rounds to move this robot to its final position in the pattern. Therefore, it takes a constant number of rounds to move one robot from the line to the target pattern. As a result, the total number of rounds used to move all the robots to the target pattern is \(O(n)\).
Using Lemmas 12, 13, and 14, we get the following theorem.
**Theorem 15**.: _The Pattern Formation phase without scaling takes \(O(n)\) rounds, avoids collisions, and uses two new colors._
Using Theorems 1, 2, 6, and 15, we obtain our final result.
**Theorem 16**.: _Our algorithm solves the Pattern Formation problem when scaling the target pattern is not allowed for \(n\) unit disk robots in \(O(n)+O(q\log n)\) rounds without collisions in the fully synchronous setting using 11 colors, where the Leader Election phase takes \(O(q\log n)\) rounds with probability at least \(1-n^{-q}\)._
## 5 Improving the Number of Colors
In this section, we improve the number of colors used to solve the Pattern Formation problem by reusing some of the colors used in the different phases discussed in the previous sections.
The Mutual Visibility phase uses two colors: _off_ for non-corner robots and _corner_ for corner robots. The Leader Election phase uses three new colors to keep track of the status of the different robots as _competing_, _non-competing_, and _leader_. We start by arguing that instead of using a new color for non-competing robots, we can use the color _off_.
**Lemma 17**.: The _off_ color can be reused as the _non-competing_ color in the Leader Election phase.
Proof.: We need to argue that the Leader Election phase still works as intended and that reusing the color does not cause any problems in later phases.
When the robots activate in the Leader Election phase, they change their color to _competing_. During this phase, unsuccessful robots change their color to _off_ as non-competing robots. During this process all robots are mutually visible and thus the non-competing robots can always see either a competing robot or the leader robot, which have colors that help them identify the phase. Hence, they can conclude that they are in the Leader Election phase and thus act accordingly.
Since some robots have the _non-competing_ color during part of the Line Formation phase, we now argue that changing this to the _off_ color does not cause any issues. We note that any robot that can see a robot with the _leader_ color or a robot with the _on the line_ color can conclude that it should not do anything unless the leader activates it and thus these robots cannot cause issues in the execution of the algorithm. However, if a robot sees neither of these colors it can mistakenly think that it is in the Mutual Visibility phase. We note that the robot would conclude that it is a corner and thus set its color to _corner_. As a corner robot, it would move to expand the convex hull away from where the line is being built (as any robot that would expand the convex hull towards the line can see the line). As robots are removed from the convex hull by the leader, every robot will eventually see the line and at that point it can conclude the correct phase again and stop moving. We note that since the robots move to expand the convex hull away from the other robots, no collisions can occur. Hence, while the robots can mistake the phase they are in, the fact that they would stop the moment they see the leader or a robot on the line implies that this does not cause issues for our approach.
The Line Formation phase uses four new colors in order to allow the leader to activate a robot to follow it, for robots to store whether they are following the leader, for the leader to indicate that a robot should stop following it, and for a robot to store that it reached its position on the line. We argue that we can reuse the _competing_ color from the Leader Election phase as the color to indicate that a robot is following the leader.
**Lemma 18**.: The _competing_ color can be reused as the _following the leader_ color in the Line Formation and Pattern Formation phases.
Proof.: The _competing_ color was originally used to elect a single leader, so there was no leader before. Now since the robot with the _competing_ color sees the leader, it knows a leader has already been elected, and thus it can conclude that this is the Line Formation or Pattern Formation phase, where it has to follow the leader. Therefore, the _competing_ color can be reused as the _following the leader_ color in these phases.
Next, we argue that we can also reuse the _leader_ color as the _do not follow_ color in these phases.
**Lemma 19**.: The _leader_ color can be reused as the _do not follow_ color in the Line Formation and Pattern Formation phases.
Proof.: The function of both the _leader_ color in the Leader Election phase and the _do not follow_ color in the Line Formation and Pattern Formation phases is to indicate which robot is the leader, while ensuring that the other robots do not move. Hence, both colors allow the leader to move around without causing the other robots to move, allowing it to get into position to guide the robots one at a time to their new positions on the line or in the pattern.
The above reuse of colors implies the following theorems.
**Theorem 20**.: _Our algorithm solves the Pattern Formation problem when scaling the target pattern is allowed for \(n\) unit disk robots in \(O(n)+O(q\log n)\) rounds without collisions in the fully synchronous setting using 7 colors, where the Leader Election phase takes \(O(q\log n)\) rounds with probability at least \(1-n^{-q}\)._
**Theorem 21**.: _Our algorithm solves the Pattern Formation problem when scaling the target pattern is not allowed for \(n\) unit disk robots in \(O(n)+O(q\log n)\) rounds without collisions in the fully synchronous setting using 8 colors, where the Leader Election phase takes \(O(q\log n)\) rounds with probability at least \(1-n^{-q}\)._
## 6 Conclusion
We studied the Pattern Formation problem for unit disk robots in the robots with lights model under obstructed visibility. We described two algorithms for this problem, depending on the assumptions made to solve this problem. If the target pattern is allowed to be scaled, our initial algorithm used 10 colors, which we subsequently improved to 7 colors. If scaling is not allowed, our algorithm needs one additional color: initially 11, which we then improved to 8.
Our algorithms run in \(O(n)+O(q\log n)\) rounds, where \(q>0\) is a parameter of the Leader Election phase, which takes \(O(q\log n)\) rounds with probability at least \(1-n^{-q}\). Interestingly, unlike previous work, our algorithms do not require any additional assumptions on the capabilities of the robots or any shared information or coordinate system.
There are a number of interesting directions in which we could consider extending this work. For example, we have no lower bounds indicating that the number of colors we use is optimal, and thus the natural open problems are either to improve the number of colors further or to show via a lower bound that this is not possible.
Returning to the classical model (without lights) it would also be interesting to determine whether the Pattern Formation problem can be solved efficiently in this model as well or whether additional assumptions are required. |
2308.02956 | An equichordal characterization of the ellipsoid and the sphere | Let $K$ and $L$ be two convex bodies in $\mathbb R^n$, $n\geq 3$, with
$L\subset \text{int}\, K$. In this paper we prove the following result: if
every two parallel chords of $K$, supporting $L$ have the same length, then $K$
and $L$ are homothetic and concentric ellipsoids. We also prove a similar
theorem when instead of parallel chords we consider concurrent chords. We may
also replace, in both theorems, supporting chords of $L$ by supporting sections
of constant width. In the last section we also prove similar theorems where we
consider projections instead of sections. | Victor A. Aguilar-Arteaga, Rafael Iván Ayala-Figueroa, Jesús Jerónimo-Castro, Efrén Morales-Amaya | 2023-08-05T21:52:19Z | http://arxiv.org/abs/2308.02956v1 | # An equichordal characterization of the ellipsoid and the sphere
###### Abstract
Let \(K\) and \(L\) be two convex bodies in \(\mathbb{R}^{n}\), \(n\geq 3\), with \(L\subset\operatorname{int}K\). In this paper we prove the following result: if every two parallel chords of \(K\), supporting \(L\) have the same length, then \(K\) and \(L\) are homothetic and concentric ellipsoids. We also prove a similar theorem when instead of parallel chords we consider concurrent chords. We may also replace, in both theorems, supporting chords of \(L\) by supporting sections of constant width. In the last section we also prove similar theorems where we consider projections instead of sections.
## 1 Introduction
Let \(K\) be a convex body, i.e., a compact and convex set with non-empty interior. We say that a point \(x\) in the interior of \(K\) is an _equichordal point_ if all the chords of \(K\) through \(x\) have
the same length. The famous Equichordal Problem, due to W. Blaschke, W. Rothe, and R. Weitzenbock [2], and Fujiwara [4], asks about the existence of a convex planar body with two equichordal points. M. Rychlik [10] finally gave a complete proof of the non-existence of a body with two equichordal points. In [7], an extension of the notion of an equichordal point was introduced: Let \(K\) and \(L\) be two convex bodies in \(\mathbb{R}^{n}\), \(n\geq 2\), with \(L\subset\operatorname{int}K\); it is said that \(L\) is an _equichordal body_ for \(K\) if every chord of \(K\) tangent to \(L\) has length equal to a given number \(\lambda\). In [1], J. Barker and D. Larman proved that if \(K\) is a convex body that possesses an equichordal ball then it is also a ball. This result was extended in [7], where it is proved that only Euclidean balls possess an equichordal convex body.
Another interesting and classical result is due to W. Suss [11]: if all 2-dimensional sections of a convex body \(K\subset\mathbb{R}^{3}\) through a point \(p\in\operatorname{int}K\) are sets of constant width, then \(K\) is a ball. This result was later generalized by L. Montejano in [8]. As in the case of the equichordal point, we can also consider a pair of convex bodies, one in the interior of the other, such that the sections tangent to the inner body are of constant width.
In this paper we continue the study of characterizations of the Euclidean ball and the ellipsoid by means of the equality of lengths of chords, and of sections of constant width, supporting a convex body in the interior of another convex body.
## 2 A characterization of the ellipsoid
Let \(\mathcal{E}\subset\mathbb{R}^{n}\), \(n\geq 2\), be an ellipsoid and let \(\mathcal{E}^{\prime}\) be an ellipsoid in its interior, homothetic and concentric with \(\mathcal{E}\). It is not difficult to see that every pair of chords of \(\mathcal{E}\), parallel and tangent to \(\mathcal{E}^{\prime}\), have the same length (see Fig. 1). However, it is very unexpected that this condition characterizes the ellipsoid if the dimension of the space is \(n\geq 3\). In the plane, every pair of centrally symmetric and concentric convex bodies shares the same property.
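A quick numerical sanity check of this planar picture may be helpful; the following numpy sketch (our own, with arbitrarily chosen semi-axes, homothety ratio, and direction) computes the two chords of the outer ellipse tangent to the inner one in a given direction and confirms that they have equal length:

```python
import numpy as np

def chord_length(a, b, n, h):
    """Length of the chord of the ellipse x^2/a^2 + y^2/b^2 = 1 cut by
    the line {q : n.q = h}, where n is a unit normal vector."""
    u = np.array([-n[1], n[0]])            # unit direction of the line
    q = h * np.asarray(n)                  # foot point of the line
    D = np.diag([1 / a**2, 1 / b**2])
    A, B, C = u @ D @ u, 2 * (q @ D @ u), q @ D @ q - 1
    disc = B * B - 4 * A * C
    return abs(np.sqrt(disc) / A)          # |t2 - t1| for the two roots

a, b, c = 3.0, 2.0, 0.5                    # outer semi-axes, homothety ratio
theta = 0.7                                # an arbitrary chord direction
u = np.array([np.cos(theta), np.sin(theta)])
n = np.array([-u[1], u[0]])                # unit normal of the chords
h = np.hypot(c * a * n[0], c * b * n[1])   # support value of the inner ellipse
print(chord_length(a, b, n, h), chord_length(a, b, n, -h))  # equal lengths
```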
**Theorem 1**.: _Let \(K,L\subset\mathbb{R}^{n}\), \(n\geq 3\), be convex bodies with \(L\subset\mathrm{int}K\). Suppose for every \(u\in\mathbb{S}^{n-1}\) all the chords of \(K\) supporting \(L\) and parallel to \(u\), have the same length \(\lambda(u)\). Then \(K\) and \(L\) are homothetic and concentric ellipsoids._
Proof.: Let \(u\in\mathbb{S}^{n-1}\) be any unit vector. Let \([a,b]\) be any chord of \(K\) parallel to \(u\) touching \(L\) at \(x\). Consider a \(2\)-dimensional plane \(H\) supporting \(L\) at \(x\) and such that \([a,b]\subset H\). We will show that \([a,b]\) is an affine diameter of \(H\cap K\). Suppose this is not the case and let \([a^{\prime},b^{\prime}]\) be the affine diameter of \(H\cap K\) parallel to \([a,b]\). We have that \(|a^{\prime}b^{\prime}|>|ab|\). Let \(\Pi\) be a \(2\)-dimensional plane containing \([a^{\prime},b^{\prime}]\) such that \(\Pi\) intersects the interior of \(L\). Let \([c,d]\) and \([c^{\prime},d^{\prime}]\) be the two chords of \(K\), parallel to \([a^{\prime},b^{\prime}]\), supporting \(\Pi\cap L\). By hypothesis \(|cd|=|c^{\prime}d^{\prime}|<|a^{\prime}b^{\prime}|\), and by the convexity of \(\Pi\cap K\), this is not possible. Hence \(|ab|=|a^{\prime}b^{\prime}|\) and then \([a,b]\) is an affine diameter of \(H\cap K\). In the same way we can prove that any other chord of \(H\cap K\) through \(x\) is an affine diameter of \(H\cap K\). By a well known result of Hammer [5] we have that \(H\cap K\) has center of symmetry at the point \(x\). Since this is true for every \(2\)-dimensional plane supporting \(L\) at \(x\), it follows that the hypersection supporting \(L\) at \(x\) has center of symmetry at \(x\). Now, by a theorem due to Olovjanishnikov [9] we have that \(K\) and \(L\) are homothetic and concentric ellipsoids.
We believe the following conjectures are true.
**Conjecture 1**.: _Let \(K,L\subset\mathbb{R}^{2}\) be convex bodies with \(L\subset\mathrm{int}K\) and \(L\) centrally symmetric. Suppose every pair of parallel chords of \(K\) supporting \(L\) have the same length. Then \(K\) is also centrally symmetric._
**Conjecture 2**.: _Let \(K,L\subset\mathbb{R}^{3}\) be convex bodies with \(L\subset\mathrm{int}K\) a strictly convex body. Suppose every section of \(K\) supporting \(L\) has the contact point as an equichordal point. Then \(K\) and \(L\) are concentric balls._
Figure 1: Parallel chords tangent to \(\mathcal{E}^{\prime}\) have the same length
## 3 Characterization of the sphere by concurrent chords
The following lemma is interesting by itself and will be useful for the subsequent results.
**Lemma 1**.: _Let \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) be two concentric and homothetic ellipses with \(\mathcal{E}_{1}\subset\mathrm{int}\mathcal{E}_{2}\). Let \([a,a^{\prime}]\) be the chord of \(\mathcal{E}_{2}\), tangent to \(\mathcal{E}_{1}\), and orthogonal to the major axis of \(\mathcal{E}_{2}\). Then, any other chord \([b,b^{\prime}]\) tangent to \(\mathcal{E}_{1}\), not orthogonal to the major axis of \(\mathcal{E}_{2}\), has length bigger than the length of \([a,a^{\prime}]\)._
Proof.: Let \(y\) be the point of intersection between the lines \(aa^{\prime}\) and \(bb^{\prime}\), and let \(m\) and \(x\) be the points where the chords \([a,a^{\prime}]\) and \([b,b^{\prime}]\) touch \(\mathcal{E}_{1}\), respectively. We know that \([a,b]\parallel[m,x]\parallel[a^{\prime},b^{\prime}]\) (see Fig. 2); this can be seen easily if we apply an affine transformation which sends \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) to concentric circles. To see that \(|aa^{\prime}|<|bb^{\prime}|\) it is enough, by Thales' theorem, to prove that \(|ym|<|yx|\).
Consider the circle \(\Gamma\) with diameter \([p,q]\), the minor axis of \(\mathcal{E}_{1}\). Let \(z\) be the projection of \(x\) onto the minor axis of \(\mathcal{E}_{1}\) and let \(x^{\prime}\) be the point where the segment \([z,x]\) intersects the circle \(\Gamma\). By a well known property of the ellipse, we have that \(\frac{|zx|}{|zx^{\prime}|}=\lambda>1\), for a fixed number \(\lambda\). The linear transformation that sends \(\mathcal{E}_{1}\) to \(\Gamma\) also sends the tangent segments \([y,m]\) and \([y,x]\) to the segments \([y^{\prime},m^{\prime}]\) and \([y^{\prime},x^{\prime}]\), tangent to \(\Gamma\). Clearly, we have that \(|ym|=|y^{\prime}m^{\prime}|\) and \(|y^{\prime}x^{\prime}|<|yx|\), and since \(|y^{\prime}m^{\prime}|=|y^{\prime}x^{\prime}|\) we have that \(|ym|<|yx|\).
Figure 2: \([a,a^{\prime}]\) is the tangent chord with minimum length
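Lemma 1 is also easy to check numerically; the sketch below (ours, with arbitrary semi-axes and homothety ratio) samples the tangent chords of the outer ellipse by the tangency point on the inner one and confirms that the minimum length is attained by the chord orthogonal to the major axis:

```python
import numpy as np

a, b, c = 3.0, 2.0, 0.6   # outer semi-axes and homothety ratio (ours)
D = np.diag([1 / a**2, 1 / b**2])

def tangent_chord_length(phi):
    """Length of the chord of the outer ellipse tangent to the inner
    one at the point of eccentric angle phi."""
    m = np.array([c * a * np.cos(phi), c * b * np.sin(phi)])   # tangency point
    u = np.array([-c * a * np.sin(phi), c * b * np.cos(phi)])  # tangent direction
    u = u / np.linalg.norm(u)
    A, B, C = u @ D @ u, 2 * (m @ D @ u), m @ D @ m - 1
    return abs(np.sqrt(B * B - 4 * A * C) / A)

phis = np.linspace(0, 2 * np.pi, 1000)
lengths = [tangent_chord_length(p) for p in phis]
# phi = 0 gives the chord orthogonal to the major axis: it is minimal.
print(min(lengths), tangent_chord_length(0.0))
```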
**Theorem 2**.: _Let \(K,L,M\subset\mathbb{R}^{n}\), \(n\geq 3\), be convex bodies with \(L\subset\mathrm{int}K\subset\mathrm{int}M\). Suppose for every \(x\in\partial M\) all the lines supporting \(L\) and passing through \(x\) intersect \(K\) in chords of the same length \(\lambda(x)\). Then \(K\) and \(L\) are concentric balls._
Proof.: First we prove the theorem in dimension \(n=3\). Let \(x\in\partial M\) be any point and \(C(x,L)\) be the support cone of \(L\) with apex at \(x\). Define \(\gamma(x)=(C(x,L)\cap\partial M)\setminus\{x\}\), which is clearly a simple and closed curve in the boundary of \(M\). Notice that \(\gamma(x)\) divides the boundary of \(M\) into two open regions, the region \(R^{-}\) that contains \(x\) and the region \(R^{+}\) that does not contain \(x\). Now, consider a chord \([x,y]\) so that \(y\in\gamma(x)\) and any chord \([x^{\prime},y^{\prime}]\) of \(M\) parallel to \([x,y]\), supporting \(L\), and such that \(x^{\prime}\in R^{-}\) and \(y^{\prime}\in R^{+}\). Since the cone \(C(x^{\prime},L)\) intersects \(R^{-}\) we have that \(\gamma(x)\cap\gamma(x^{\prime})\neq\emptyset\). Choose a point \(z\in\gamma(x)\cap\gamma(x^{\prime})\). Since \(zx\) and \(zx^{\prime}\) are support lines of \(L\) that pass through \(z\in\partial M\), then \(\lambda(z)=|[z,x^{\prime}]\cap K|=|[z,x]\cap K|=\lambda(x)=\lambda(x^{\prime})\). We have proved that the intersection of \([x,y]\) with \(K\) and the intersection of \([x^{\prime},y^{\prime}]\) with \(K\) have the same length. That is, parallel chords of \(K\) supporting \(L\) have the same length. From Theorem 1 we can say that \(K\) and \(L\) are homothetic and concentric ellipsoids. But from Lemma 1 we can see that \(K\) and \(L\) must be, indeed, concentric balls.
For dimension \(n>3\) we just use the fact that if every \(3\)-dimensional section of a convex body is a \(3\)-dimensional ball, then it is an \(n\)-dimensional ball.
**Remark 1**.: _We may replace the set \(M\) in Theorem 2 by a pair of parallel hyperplanes \(H_{1}\) and \(H_{2}\) such that \(K\) and \(L\) are contained in the open region between those planes. The proof follows the same ideas._
## 4 Characterization of the sphere by sections of constant width
**Theorem 3**.: _Let \(K,L\subset\mathbb{R}^{3}\) be convex bodies with \(L\subset\mathrm{int}K\). Suppose for every \(u\in\mathbb{S}^{2}\) all sections of \(K\) supporting \(L\) and parallel to \(u\) have the same constant width \(h(u)\). Then \(K\) and \(L\) are concentric balls._
_Proof._ We will prove first that \(K\) is strictly convex. Suppose this is not the case and let \([p,q]\) be a segment in the boundary of \(K\). Let \(S_{1}\) be a supporting 2-dimensional plane of \(L\) containing \([p,q]\). Since \(S_{1}\cap K\) is of constant width, it must be strictly convex, so \([p,q]\) must be a point. Now, let \(u\in\mathbb{S}^{2}\) be any unit vector. Let \(H\) be any support plane of \(L\) parallel to \(u\). By hypothesis, the section \(H\cap K\) is of constant width \(h(u)\). Let \([a,b]\subset H\) be a chord of \(K\), parallel to \(u\), supporting \(L\). We will show that \([a,b]\) is a diameter of \(H\cap K\). Suppose this is not the case, and let \([a^{\prime},b^{\prime}]\) be a diameter of \(H\cap K\) parallel to \([a,b]\). Then we have \(|a^{\prime}b^{\prime}|>|ab|\). Let \(\Pi\) be a plane supporting \(L\) and parallel to \(u\) such that the 2-dimensional plane \(\Gamma\) containing \([a^{\prime},b^{\prime}]\) also contains the diameter \([x,y]\) of \(\Pi\cap K\) parallel to \(u\). Let \([c,d]\) and \([c^{\prime},d^{\prime}]\) be the chords of \(K\) in \(\Gamma\), parallel to \(u\) and supporting \(L\). Since \([c,d]\) and \([c^{\prime},d^{\prime}]\) each belong to supporting sections of \(L\) parallel to \(u\), we have that \(|a^{\prime}b^{\prime}|=|xy|\geq|cd|,|c^{\prime}d^{\prime}|\). This cannot happen since \(K\) is strictly convex and so every section of \(K\) is strictly convex as well. Hence \([a^{\prime},b^{\prime}]=[a,b]\), i.e., \([a,b]\) is a diameter of \(H\cap K\). With this, we have proved that all the chords of \(K\), parallel to \(u\) and supporting \(L\), have the same length \(h(u)\). Since this is true for every \(u\in\mathbb{S}^{2}\), we apply Theorem 1 and get that \(K\) and \(L\) are homothetic and concentric ellipsoids. However, only Euclidean balls have sections of constant width. Therefore, \(K\) and \(L\) are concentric balls. \(\Box\)
In a very similar way we can prove the following.
**Theorem 4**.: _Let \(K,L,M\subset\mathbb{R}^{3}\) be convex bodies with \(L\subset\mathrm{int}K\subset\mathrm{int}M\). Suppose for every \(x\in\partial M\) all sections of \(K\) supporting \(L\) and passing through \(x\), are of constant width \(h(x)\). Then \(K\) and \(L\) are concentric balls._
_Proof._ Following the same ideas as in 2, we show that any two parallel sections are of the same constant width. Hence using Theorem 3 and Lemma 1 the result follows directly. \(\Box\)
## 5 A particular case of Suss theorem
We start this section with a proof for a particular case of Suss theorem, i.e., we will prove that a convex body with concurrent sections of constant width \(1\) is a ball. Before giving the proof we introduce the following notation: for every \(\upsilon\in\mathbb{S}^{2}\) denote by \(S(\upsilon)\) the circle \(\mathbb{S}^{2}\cap\upsilon^{\perp}\), by \(H(\upsilon)\) and \(H(-\upsilon)\) the two supporting planes of \(K\) orthogonal to \(\upsilon\), and by \(\operatorname{Sb}(K,\upsilon)\) the shadow boundary of \(K\) in direction \(\upsilon\), i.e.,
\[\operatorname{Sb}(K,\upsilon)\equiv\{x\in\partial K:\text{there is a line parallel to $\upsilon$ and touching $K$ at $x$}\}.\]
It is a well known result of W. Blaschke [3] that a convex body \(K\) is an ellipsoid if for every \(\upsilon\in\mathbb{S}^{2}\) the shadow boundary \(\operatorname{Sb}(K,\upsilon)\) is a planar curve. Moreover, if we know that \(\operatorname{Sb}(K,\upsilon)\) is orthogonal to \(\upsilon\), for every \(\upsilon\in\mathbb{S}^{2}\), then the conclusion is even stronger: \(K\) is a Euclidean ball. However, if we restrict the directions for the shadow boundaries we have the following interesting conclusion.
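For an explicit ellipsoid \(\{x:x^{T}Dx=1\}\) the planarity of shadow boundaries is transparent: for \(x\) on the boundary, the line \(x+t\upsilon\) satisfies \((x+t\upsilon)^{T}D(x+t\upsilon)=1+2t\,\upsilon^{T}Dx+t^{2}\upsilon^{T}D\upsilon\), so it is tangent exactly when \((D\upsilon)\cdot x=0\), a plane through the center. The following numpy sketch (ours; the ellipsoid and direction are arbitrary choices) illustrates this easy direction of Blaschke's result:

```python
import numpy as np

D = np.diag([1 / 9.0, 1 / 4.0, 1.0])   # ellipsoid x^T D x = 1 (our example)
v = np.array([1.0, 2.0, 0.5])          # an arbitrary direction

def line_roots(x):
    """Roots t of (x + t v)^T D (x + t v) = 1 for x on the boundary."""
    a, b = v @ D @ v, 2 * (v @ D @ x)
    disc = b * b                        # constant term vanishes on the boundary
    return (-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)

def on_boundary(y):
    return y / np.sqrt(y @ D @ y)       # rescale onto the ellipsoid

n = D @ v
w = np.array([n[1], -n[0], 0.0])        # a vector with (D v) . w = 0
print(line_roots(on_boundary(w)))       # double root 0: tangent line
print(line_roots(on_boundary(np.array([1.0, 1.0, 1.0]))))  # crosses again
```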
**Lemma 2**.: _Let \(K\subset\mathbb{R}^{3}\) be a convex body and \(\upsilon\in\mathbb{S}^{2}\) such that for every \(w\in S(\upsilon)\) we have that \(\operatorname{Sb}(K,w)\) is a closed planar curve orthogonal to \(w\). Then \(K\) has an axis of revolution parallel to \(\upsilon\)._
Proof.: Let \(H(\upsilon)\) and \(H(-\upsilon)\) be the planes supporting \(K\), orthogonal to \(\upsilon\), and consider the points \(a\equiv H(\upsilon)\cap K\) and \(b\equiv H(-\upsilon)\cap K.\) Clearly, \(a\) and \(b\) belong to \(\operatorname{Sb}(K,w)\) for every \(w\in S(\upsilon)\), and since \(\operatorname{Sb}(K,w)\perp w\) for every such \(w\), we get that \([a,b]\) is parallel to \(\upsilon\). Now, let \(\Omega\) be a plane parallel to \(H(\upsilon)\) which intersects the interior of \(K\). Consider an arbitrary vector \(w\in S(\upsilon)\), let \(\Pi\) be the plane orthogonal to \(w\) through \([a,b]\), and let \(\{x,y\}\equiv\Omega\cap\partial K\cap\Pi\). We have that \(\operatorname{Sb}(K,w)=\Pi\cap\partial K\), so the tangent lines to \(\Omega\cap\partial K\) at the points \(x\) and \(y\) are parallel to \(w\). We have that \([x,y]\) is orthogonal to \(w\) and intersects the segment \([a,b]\) in a point \(m\). It is known that a planar convex curve is a circle if all its normal lines are concurrent; it follows that \(\Omega\cap\partial K\) is a circle with center at \(m\). Since the above is true for every plane \(\Omega\) orthogonal to \(\upsilon\) and intersecting \(K\), we conclude that the line \(ab\) is an axis of revolution for \(K\).
**Theorem 5**.: _Let \(K\subset\mathbb{R}^{3}\) be a convex body and let \(p\) be a point in its interior. If every \(2\)-dimensional section of \(K\) through \(p\) has constant width 1, then \(K\) is a ball with center at \(p\) and diameter 1._
Proof.: Let \([a,b]\) be any affine diameter of \(K\). The section containing \(p\), \(a\), and \(b\) has constant width 1, and since every affine diameter of a set of constant width is a diameter, \(|ab|=1.\) It follows that all the affine diameters of \(K\) are indeed diameters and hence \(K\) is a body of constant width 1. Let \(H\) be any 2-dimensional plane containing \(p\) with orthogonal unit vector \(u\). The orthogonal projection of \(K\) onto \(H\), \(\pi_{u}(K)\), is a 2-dimensional body of constant width 1 which contains \(H\cap K\); since a body of constant width 1 cannot properly contain another body of constant width 1, we have that \(\pi_{u}(K)=H\cap K\). It follows that \(\operatorname{Sb}(K,u)=\partial(H\cap K)\). We have proved that all the shadow boundaries of \(K\) are planar curves, so by Blaschke's theorem \(K\) must be an ellipsoid. However, since \(K\) is a body of constant width it must be a ball.
**Remark 2**.: _This theorem can also be proved using Lemma 2: it is not difficult to prove that there are two diameters of \(K\) passing through \(p\). By Lemma 2 we have that each one of these diameters is an axis of revolution of \(K\), and it is well known that the only convex body with two axes of revolution is the Euclidean ball._
## 6 Equichordal projections
We give a result where instead of sections tangent to the inner body we consider orthogonal projections.
Figure 4: The shadow boundaries orthogonal to \(\upsilon\) are planar curves
**Theorem 6**.: _Let \(K,L\subset\mathbb{R}^{3}\) be convex bodies with \(L\subset\mathrm{int}K\) and \(L\) strictly convex. Suppose for every \(u\in\mathbb{S}^{2}\) all the chords of \(\pi_{u}(K)\) that are tangent to \(\pi_{u}(L)\) have length equal to 1. Then \(K\) and \(L\) are concentric balls._
_Proof._ Let \(u\in\mathbb{S}^{2}\) be any direction and let \([a,b]\) be any chord of \(\pi_{u}(K)\) supporting \(\pi_{u}(L)\). Let \(\Pi(a,b)\) be the plane parallel to \(u\) and containing \([a,b]\). Clearly, \(\Pi(a,b)\) is a supporting plane of \(L\). Let \(a^{\prime},b^{\prime}\in\partial K\) be the points such that \(\pi_{u}(a^{\prime})=a\) and \(\pi_{u}(b^{\prime})=b.\) We have that the lines \(a^{\prime}a\) and \(b^{\prime}b\) are supporting lines of the section \(S(a,b)=\Pi(a,b)\cap K\) (see Fig. 5). If \([a^{\prime},b^{\prime}]\) is not parallel to \([a,b]\) then we can find a direction \(w\in\mathbb{S}^{2}\) where a chord of \(\pi_{w}(K)\) tangent to \(\pi_{w}(L)\) has length bigger than 1. To see this, consider \(w\), orthogonal to \([a^{\prime},b^{\prime}]\) and parallel to \(\Pi(a,b)\). We have that the segment \(\Pi(a,b)\cap\pi_{w}(K)\) has length bigger than or equal to \(|a^{\prime}b^{\prime}|>|ab|=1\), which is a contradiction to the hypothesis of the theorem. Hence we have that \([a^{\prime},b^{\prime}]\) is parallel to \([a,b]\), i.e., the width of \(S(a,b)\) in the direction of \([a,b]\) is equal to 1. In the same way we prove that the width of \(S(a,b)\) in every direction parallel to \(\Pi(a,b)\) is equal to 1. In other words, \(S(a,b)\) is a set of constant width 1. We have proved that all the sections of \(K\) supporting \(L\) are sets of constant width 1, we apply Theorem 3 and conclude that \(K\) and \(L\) are concentric balls. \(\Box\)
Figure 5: The section \(S(a,b)\) is a set of constant width 1
**Remark 3**.: _Following similar ideas, we can prove the case in Theorem 6 when \(L\) is a point. However, we suspect the following is also true._
**Conjecture 3**.: _Let \(K,L\subset\mathbb{R}^{3}\) be convex bodies with \(L\subset\mathrm{int}K\) and \(L\) strictly convex. Suppose for every \(w\in\mathbb{S}^{2}\) the chords of \(\pi_{u}(K)\) that are tangent to \(\pi_{u}(L)\) and are parallel to \(w\), with \(u\bot w\), have length equal to \(\lambda(w)\). Then \(K\) and \(L\) are homothetic and concentric ellipsoids._
Finally, we have the following conjecture, for which the last theorem provides an advance in the direction of finding a proof.
**Conjecture 4**.: _Let \(K\subset\mathbb{R}^{3}\) be a convex body and let \(p\) be a point in its interior. Suppose for every \(u\in\mathbb{S}^{2}\) the point \(\pi_{u}(p)\) is an equichordal point of \(\pi_{u}(K)\). Then \(K\) is a ball._
**Theorem 7**.: _Let \(K\subset\mathbb{R}^{3}\) be a strictly convex body and let \(p\) be an equichordal point of \(K\). Suppose that for all \(u\in\mathbb{S}^{2}\), \(\pi_{u}(p)\) is an equichordal point of \(\pi_{u}(K)\). Then \(K\) is a body of revolution._
_Proof._ Let \([a,b]\) be an affine diameter of \(K\) through \(p\). We will prove now that \([a,b]\) is a binormal of \(K\), i.e., there exist support planes of \(K\) through \(a\) and \(b\), respectively, which are orthogonal to the chord \([a,b]\). Let \(\Pi_{a}\) and \(\Pi_{b}\) be parallel support planes of \(K\) through \(a\) and \(b\), respectively, and let \(v\in\mathbb{S}^{2}\) be the unit vector orthogonal to them. If \(v\) is parallel to \([a,b]\), then \([a,b]\) is indeed a binormal of \(K\) and we are done. Let us assume that \(v\) is not parallel to \([a,b]\). Let \(w\in\mathbb{S}^{2}\) be parallel to \([a,b]\) and let \(u\) be a unit vector orthogonal to \(v\) and parallel to the subspace generated by \(v\) and \(w\). Let \(a^{\prime}=\pi_{u}(a)\) and \(b^{\prime}=\pi_{u}(b)\) be the orthogonal projections of \(a\) and \(b\) onto the plane \(u^{\bot}\). Note that, since \(u\) is orthogonal to \(v\), the planes \(\Pi_{a}\) and \(\Pi_{b}\) project onto parallel support lines of \(\pi_{u}(K)\) at \(a^{\prime}\) and \(b^{\prime}\); hence \([a^{\prime},b^{\prime}]\) is a chord of \(\pi_{u}(K)\) through \(p^{\prime}=\pi_{u}(p)\). Let \([x,y]\) be the chord of \(K\) through \(p\) which is parallel to \(u^{\bot}\) and consider the segment \([x^{\prime},y^{\prime}]\subset\pi_{u}(K)\), with \(x^{\prime}=\pi_{u}(x)\) and \(y^{\prime}=\pi_{u}(y)\). Since \([x^{\prime},y^{\prime}]\) passes through \(p^{\prime}\), which is an equichordal point of \(\pi_{u}(K)\), we have that
\[|a^{\prime}b^{\prime}|\geq|x^{\prime}y^{\prime}|=|xy|=|ab|,\]
however, since the orthogonal projection \(\pi_{u}\) does not increase lengths, this is only possible if \(|a^{\prime}b^{\prime}|=|ab|\), which implies that \([a,b]\) is parallel to \([a^{\prime},b^{\prime}]\) and hence orthogonal to \(u\); by the choice of \(u\), this is impossible unless \(v\) is parallel to \([a,b]\). We have proved that \([a,b]\) is orthogonal to \(\Pi_{a}\) and \(\Pi_{b}\), i.e., \([a,b]\) is a binormal of \(K\).
Now, let \(u\in\mathbb{S}^{2}\) be any vector orthogonal to \([a,b]\) and denote by \(K_{u}\) the 2-dimensional section of \(K\) orthogonal to \(u\) and passing through \(p\). We have that all chords of \(\pi_{u}(K)\) through \(\pi_{u}(p)\) have length equal to \(|\pi_{u}(a)\pi_{u}(b)|=|ab|\), and since \(\pi_{u}(K_{u})\subset\pi_{u}(K)\) we have that \(\pi_{u}(K_{u})=\pi_{u}(K)\). Since \(K\) is a strictly convex body, we obtain that the shadow boundary of \(K\) in direction \(u\), \(\mathrm{Sb}(K,u)\), coincides with \(\partial K_{u}\), in other words, \(\mathrm{Sb}(K,u)\) is a planar closed curve orthogonal to \(u\). Now we apply Lemma 2 and conclude that the line \(ab\) is an axis of revolution for \(K\). \(\Box\)
|
2310.09491 | The cokernel of a polynomial push-forward of a random integral matrix
with concentrated residue | We prove new statistical results about the distribution of the cokernel of a
random integral matrix with a concentrated residue. Given a prime $p$ and a
positive integer $n$, consider a random $n \times n$ matrix $X_n$ over the ring
$\mathbb{Z}_p$ of $p$-adic integers whose entries are independent. Previously,
Wood showed that regardless of the distribution of $X_n$, as long as each entry
of $X_n$ is not too concentrated on a single residue modulo $p$, the
distribution of the cokernel $\mathrm{cok}(X_n)$ of $X_n$, up to isomorphism,
weakly converges to the Cohen--Lenstra distribution, as $n \rightarrow \infty$.
In this paper, we consider the case when $X_n$ has a concentrated residue $A_n$
so that $X_n = A_n + pB_n$, where $B_n$ is a random $n \times n$ matrix over
$\mathbb{Z}_p$. We show that for every fixed $n$ and a non-constant monic
polynomial $P(t) \in \mathbb{Z}_p[t]$, we can explicitly compute the
distribution of $\mathrm{cok}(P(X_n))$ when $B_n$ is a Haar-random matrix.
Using this, we also show that for specific choices of $A_n$ a much wider class
of random matrices $B_n$ gives the same distribution of $\mathrm{cok}(P(X_n))$.
For the Haar-random $B_n$, we deduce our result from an interesting
equidistribution result for matrices over $\mathbb{Z}_p[t]/(P(t))$, which we
prove by establishing a version of the Weierstrass preparation theorem for the
noncommutative ring $\mathrm{M}_n(\mathbb{Z}_p)$ of $n \times n$ matrices over
$\mathbb{Z}_p$. | Gilyoung Cheong, Yifeng Huang | 2023-10-14T04:49:46Z | http://arxiv.org/abs/2310.09491v3 | # The cokernel of a polynomial push-forward of a random integral matrix with concentrated residue
###### Abstract.
We prove new statistical results about the distribution of the cokernel of a random integral matrix with a concentrated residue. Given a prime \(p\) and a positive integer \(n\), consider a random \(n\times n\) matrix \(X_{n}\) over the ring \(\mathbb{Z}_{p}\) of \(p\)-adic integers whose entries are independent. Previously, Wood showed that regardless of the distribution of \(X_{n}\), as long as each entry of \(X_{n}\) is not too concentrated on a single residue modulo \(p\), the distribution of the cokernel \(\operatorname{\mathrm{cok}}(X_{n})\) of \(X_{n}\), up to isomorphism, weakly converges to the Cohen-Lenstra distribution, as \(n\to\infty\). In this paper, we consider the case when \(X_{n}\) has a concentrated residue \(A_{n}\) so that \(X_{n}=A_{n}+pB_{n}\), where \(B_{n}\) is a random \(n\times n\) matrix over \(\mathbb{Z}_{p}\). We show that for every fixed \(n\) and a non-constant monic polynomial \(P(t)\in\mathbb{Z}_{p}[t]\), we can explicitly compute the distribution of \(\operatorname{\mathrm{cok}}(P(X_{n}))\) when \(B_{n}\) is a Haar-random matrix. Using this, we also show that for specific choices of \(A_{n}\) a much wider class of random matrices \(B_{n}\) gives the same distribution of \(\operatorname{\mathrm{cok}}(P(X_{n}))\). For the Haar-random \(B_{n}\), we deduce our result from an interesting equidistribution result for matrices over \(\mathbb{Z}_{p}[t]/(P(t))\), which we prove by establishing a version of the Weierstrass preparation theorem for the noncommutative ring \(\operatorname{M}_{n}(\mathbb{Z}_{p})\) of \(n\times n\) matrices over \(\mathbb{Z}_{p}\).
## 1. Introduction
Fix a prime \(p\) and consider the distribution of the cokernel \(\operatorname{\mathrm{cok}}(X)\) of a random \(n\times n\) matrix \(X\) over the ring \(\mathbb{Z}_{p}\) of \(p\)-adic integers, where \(n\in\mathbb{Z}_{\geqslant 1}\). We consider \(X\) with \(n^{2}\) independent entries \((X_{ij})_{1\leqslant i,j\leqslant n}\). Writing \(\operatorname{M}_{n}(R)\) to mean the set of \(n\times n\) matrices over a ring \(R\), we can identify \(\operatorname{M}_{n}(\mathbb{Z}_{p})=\mathbb{Z}_{p}^{n^{2}}\), and the probability measure on \(\operatorname{M}_{n}(\mathbb{Z}_{p})\) is given by the product measure of the probability measures on \(n^{2}\) copies of \(\mathbb{Z}_{p}\).
Each independent entry \(X_{ij}\) of a random matrix \(X\) can be written as
\[X_{ij}=X_{i,j,0}+X_{i,j,1}p+X_{i,j,2}p^{2}+\cdots \tag{1.1}\]
whose \(p\)-adic digits \(X_{i,j,0},X_{i,j,1},X_{i,j,2},\dots\) are randomly chosen from \(\{0,1,2,\dots,p-1\}\), which we may often identify as \(\mathbb{F}_{p}\), the finite field of \(p\) elements. The most natural example is when each \(X_{i,j,l}\) is distributed uniformly at random, which is equivalent to saying that \(X_{ij}\) is given by the Haar measure on \(\mathbb{Z}_{p}\). In [11], Friedman and Washington computed the distribution of \(\operatorname{\mathrm{cok}}(X)\) of a random matrix \(X\in\operatorname{M}_{n}(\mathbb{Z}_{p})\) whose \(n^{2}\) independent entries \((X_{ij})_{1\leqslant i,j\leqslant n}\) are Haar-random in \(\mathbb{Z}_{p}\). More specifically, [11, Proposition 1] says
\[\operatorname{Prob}_{X\in\operatorname{M}_{n}(\mathbb{Z}_{p})^{\mathrm{Haar}} }(\operatorname{\mathrm{cok}}(X)\simeq G)=\frac{1}{|\mathrm{Aut}(G)|}\prod_{i= 1}^{n}(1-p^{i})\prod_{j=n-r_{p}(G)+1}^{n}(1-p^{-j}), \tag{1.2}\]
as long as \(n\geqslant r_{p}(G):=\dim_{\mathbb{F}_{p}}(G/pG)\) (which otherwise gives \(0\) for the probability), where \(\mathrm{Aut}(G)\) is the automorphism group of \(G\).
**Remark 1.1**.: We shall always assume that \(\operatorname{M}_{n}(\mathbb{Z}_{p})\) has the Borel \(\sigma\)-algebra or the discrete \(\sigma\)-algebra. We have used the notation \(\operatorname{M}_{n}(\mathbb{Z}_{p})^{\mathrm{Haar}}\) above to indicate that each independent entry \(X_{ij}\) of a random matrix \(X\in\operatorname{M}_{n}(\mathbb{Z}_{p})^{\mathrm{Haar}}\) is Haar-random, which also assumes that we are using the Borel \(\sigma\)-algebra.
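The \(G=0\) case of (1.2) is easy to test empirically: the cokernel is trivial precisely when \(X\bmod p\) is invertible over \(\mathbb{F}_{p}\). The following Python sketch (ours; the parameters are arbitrary choices) compares the empirical frequency with \(\prod_{i=1}^{n}(1-p^{-i})\):

```python
import random

def rank_mod_p(M, p):
    """Rank of a matrix with entries in {0, ..., p-1} over F_p,
    computed by row reduction."""
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], -1, p)          # inverse mod p (Python 3.8+)
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                M[r] = [(x - M[r][c] * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def empirical_trivial_cokernel(p, n, trials=20000, seed=1):
    """Estimate Prob(cok(X) = 0) for Haar-random X in M_n(Z_p) by
    sampling X mod p uniformly and testing invertibility over F_p."""
    rng = random.Random(seed)
    hits = sum(rank_mod_p([[rng.randrange(p) for _ in range(n)]
                           for _ in range(n)], p) == n
               for _ in range(trials))
    return hits / trials

p, n = 3, 4
predicted = 1.0
for i in range(1, n + 1):
    predicted *= 1 - p ** (-i)
print(empirical_trivial_cokernel(p, n), "vs predicted", predicted)
```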
In [12], Wood showed that as long as the first digit \(X_{i,j,0}\) of each independent random variable \(X_{ij}\) is not too concentrated on a single value in (1.1), when \(n\to\infty\), the distribution of the cokernel in (1.2) is insensitive to which measure we choose on \(\operatorname{M}_{n}(\mathbb{Z}_{p})\). More specifically, [12, Theorem 1.2] says:
**Theorem 1.2** (Wood).: Let \(0<\epsilon<1\) be a real number, and fix a finite abelian \(p\)-group \(G\). For each \(n\in\mathbb{Z}_{\geqslant 1}\), suppose that \(\operatorname{M}_{n}(\mathbb{Z}_{p})=\mathbb{Z}_{p}^{n^{2}}\) is equipped with a probability measure, where each random \(X\in\operatorname{M}_{n}(\mathbb{Z}_{p})\) has independent entries \(X_{ij}\) that are \(\epsilon\)-balanced, i.e., \(\operatorname{Prob}(X_{ij}\equiv r\pmod{p})\leqslant 1-\epsilon\) for every \(r\in\mathbb{Z}/p\mathbb{Z}\). Then

\[\lim_{n\to\infty}\operatorname{Prob}(\operatorname{\mathrm{cok}}(X)\simeq G)=\frac{1}{|\operatorname{Aut}(G)|}\prod_{k=1}^{\infty}(1-p^{-k}).\]
**Theorem 1.6**.: Conjecture 1.5 is true.
Our main theorem is more general than the above statement. Namely, we are able to compute the probability in the conclusion of Conjecture 1.5 for any monic \(P(t)\in\mathbb{Z}_{p}[t]\) without any square-free condition on its reduction \(\bar{P}(t)\in\mathbb{F}_{p}[t]\) modulo \(p\). We fix a non-constant monic \(P(t)\in\mathbb{Z}_{p}[t]\) and consider the unique factorization
\[\bar{P}(t)=\bar{P}_{1}(t)^{m_{1}}\bar{P}_{2}(t)^{m_{2}}\cdots\bar{P}_{l}(t)^{m _{l}}, \tag{1.3}\]
where \(\bar{P}_{1}(t),\bar{P}_{2}(t),\ldots,\bar{P}_{l}(t)\) are distinct monic irreducible polynomials in \(\mathbb{F}_{p}[t]\) and \(m_{1},m_{2},\ldots,m_{l}\in\mathbb{Z}_{\geqslant 1}\). We shall also write \(d_{j}:=\deg(\bar{P}_{j}(t))\). Given an \(\mathbb{F}_{p}[t]/(P(t))\)-module \(M\), we write
\[u_{j}(M):=\dim_{\mathbb{F}_{p^{d_{j}}}}\left(\bar{P}_{j}(t)^{m_{j}-1}M_{j} \right),\]
where \(M_{j}:=M\otimes_{\mathbb{F}_{p}[t]/(\bar{P}(t))}\mathbb{F}_{p}[t]/(\bar{P}_{j }(t)^{m_{j}})\).
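To illustrate the invariant \(u_{j}\), suppose \(P(t)=t^{2}\), so that \(l=1\), \(\bar{P}_{1}(t)=t\), \(m_{1}=2\), and \(d_{1}=1\). For \(M=\mathbb{F}_{p}[t]/(t)\oplus\mathbb{F}_{p}[t]/(t^{2})\), we have \(M_{1}=M\) and

\[u_{1}(M)=\dim_{\mathbb{F}_{p}}(tM)=\dim_{\mathbb{F}_{p}}\bigl(t\cdot\mathbb{F}_{p}[t]/(t^{2})\bigr)=1,\]

since \(t\) annihilates the first summand. In general, \(u_{j}(M)\) counts the summands of \(M_{j}\) of the maximal possible length \(m_{j}\).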
We are now ready to state one of our main theorems:
**Theorem 1.7**.: Let \(n\in\mathbb{Z}_{\geqslant 1}\). Fix a finite-sized \(\mathbb{Z}_{p}[t]/(P(t))\)-module \(G\) and \(A_{n}\in\mathrm{M}_{n}(\mathbb{F}_{p})\) such that \(\mathrm{cok}(\bar{P}(A_{n}))\simeq_{\mathbb{F}_{p}[t]}G/pG\). If \(G\) satisfies
\[|\mathrm{Hom}_{\mathbb{Z}_{p}[t]}(G,\mathbb{F}_{p^{d_{j}}})|=|\mathrm{Ext}^{1} _{\mathbb{Z}_{p}[t]/(P(t))}(G,\mathbb{F}_{p^{d_{j}}})|\]
for \(1\leqslant j\leqslant l\), then
\[\underset{X\in\mathrm{M}_{n}(\mathbb{Z}_{p})^{\mathrm{Haar}}}{\mathrm{Prob}}(\mathrm{cok}(P(X))\simeq_{\mathbb{Z}_{p}[t]}G\mid X\equiv A_{n}\pmod{p})=\frac{|\mathrm{Aut}_{\mathbb{Z}_{p}[t]}(G/pG)|\prod_{j=1}^{l}\prod_{i=1}^{u_{j}(G/pG)}(1-p^{-id_{j}})}{|\mathrm{Aut}_{\mathbb{Z}_{p}[t]}(G)|}.\]
Otherwise, the probability is \(0\).
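For a concrete instance of Theorem 1.7, take \(P(t)=t\), so that \(l=1\), \(d_{1}=m_{1}=1\), and \(u_{1}(G/pG)=\dim_{\mathbb{F}_{p}}(G/pG)=r_{p}(G)\); the displayed condition on \(G\) holds automatically (cf. Remark 1.9). For \(G=\mathbb{Z}/p^{2}\mathbb{Z}\) and any \(A_{n}\in\operatorname{M}_{n}(\mathbb{F}_{p})\) with \(\operatorname{\mathrm{cok}}(A_{n})\simeq\mathbb{F}_{p}\), the formula reads

\[\underset{X\in\mathrm{M}_{n}(\mathbb{Z}_{p})^{\mathrm{Haar}}}{\mathrm{Prob}}(\mathrm{cok}(X)\simeq\mathbb{Z}/p^{2}\mathbb{Z}\mid X\equiv A_{n}\pmod{p})=\frac{(p-1)(1-p^{-1})}{p(p-1)}=\frac{p-1}{p^{2}},\]

using \(|\mathrm{Aut}(\mathbb{Z}/p\mathbb{Z})|=p-1\) and \(|\mathrm{Aut}(\mathbb{Z}/p^{2}\mathbb{Z})|=p(p-1)\).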
In Theorem 1.7, we note that having \(\mathrm{cok}(P(A_{n}))\simeq_{\mathbb{F}_{p}[t]}G/pG\) guarantees that there exists \(g\in\mathrm{GL}_{n}(\mathbb{F}_{p})\) such that
\[A_{n}=g\begin{bmatrix}J&*\\ 0&J^{\prime}\end{bmatrix}g^{-1}\]
in \(\mathrm{M}_{n}(\mathbb{F}_{p})\), where \(J\in\mathrm{M}_{n-r}(\mathbb{F}_{p})\) and \(J^{\prime}\in\mathrm{M}_{r}(\mathbb{F}_{p})\) with \(r=r_{p}(G)\) such that every eigenvalue of \(J\) in \(\overline{\mathbb{F}_{p}}\) is not a root of \(P(t)\), while every eigenvalue of \(J^{\prime}\) in \(\overline{\mathbb{F}_{p}}\) is a root of \(P(t)\). Moreover, we have
\[\mathrm{cok}(P(A_{n}))\simeq\mathrm{cok}\left(P\left(g\begin{bmatrix}J&*\\ 0&J^{\prime}\end{bmatrix}g^{-1}\right)\right)=\mathrm{cok}\left(gP\left( \begin{bmatrix}J&*\\ 0&J^{\prime}\end{bmatrix}\right)g^{-1}\right),\]
and for any lift \(\tilde{g}\in\mathrm{GL}_{n}(\mathbb{Z}_{p})\) of \(g\), the conjugation by \(\tilde{g}\) preserves the Haar measure on \(\mathrm{M}_{n}(\mathbb{Z}_{p})\). Thus, Theorem 1.7 is equally strong, even if we assume that
\[A_{n}=\begin{bmatrix}J&*\\ 0&J^{\prime}\end{bmatrix} \tag{1.4}\]
with \(J\) and \(J^{\prime}\) as above. (Most importantly, we recall that every eigenvalue of \(J\in\mathrm{M}_{n-r}(\mathbb{F}_{p})\) is not a root of \(P(t)\) and \(r=r_{p}(G)\).) For this specific form of \(A_{n}\), Theorem 1.7 holds in a more general setting, which can be seen as a universality result:
**Theorem 1.8**.: Let \(n\in\mathbb{Z}_{\geqslant 1}\). Fix a finite-sized \(\mathbb{Z}_{p}[t]/(P(t))\)-module \(G\) and \(A_{n}\in\mathrm{M}_{n}(\mathbb{F}_{p})\) such that \(\mathrm{cok}(\bar{P}(A_{n}))\simeq_{\mathbb{F}_{p}[t]}G/pG\). Suppose that \(A_{n}\) is of the form (1.4), and consider any probability measure on \(\mathrm{M}_{n}(\mathbb{Z}_{p})\) such that all entries of \(X\) are independent and the entries in the bottom-right \(r\times r\) submatrix of \(X\) follow the Haar measure. If \(G\) satisfies
\[|\mathrm{Hom}_{\mathbb{Z}_{p}[t]}(G,\mathbb{F}_{p^{d_{j}}})|=|\mathrm{Ext}^{1} _{\mathbb{Z}_{p}[t]/(P(t))}(G,\mathbb{F}_{p^{d_{j}}})|\]
for \(1\leqslant j\leqslant l\), then
\[\underset{X\in\mathrm{M}_{n}(\mathbb{Z}_{p})}{\mathrm{Prob}}(\mathrm{cok}(P(X) )\simeq_{\mathbb{Z}_{p}[t]}G\mid X\equiv A_{n}\pmod{p})=\frac{|\mathrm{Aut}_{ \mathbb{Z}_{p}[t]}(G/pG)|\prod_{j=1}^{l}\prod_{i=1}^{u_{j}(G/pG)}(1-p^{-id_{j}})} {|\mathrm{Aut}_{\mathbb{Z}_{p}[t]}(G)|}.\]
Otherwise, the probability is \(0\).
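The \(P(t)=t\) instance of Theorem 1.7 computed above is easy to test numerically. The following Monte Carlo sketch is not part of the paper's arguments; it assumes SymPy's `smith_normal_form` and approximates the Haar measure on \(\mathbb{Z}_{p}\) by sampling entries uniformly modulo \(p^{k}\). It estimates \(\mathrm{Prob}(\operatorname{\mathrm{cok}}(X)\simeq\mathbb{Z}/p^{2}\mathbb{Z}\mid X\equiv A_{n}\ (\mathrm{mod}\ p))\) for \(A_{n}=\mathrm{diag}(1,\ldots,1,0)\), where the predicted value is \((p-1)/p^{2}\).

```python
# A Monte Carlo sketch (not the paper's method): estimate
# Prob(cok(X) = Z/p^2 | X = A_n mod p) for P(t) = t and
# A_n = diag(1, ..., 1, 0), and compare it with (p - 1)/p^2.
# Assumes SymPy's smith_normal_form; sampling entries uniformly
# modulo p^k approximates the Haar measure up to precision k.
import random
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def cokernel_p_type(X, p, k):
    """Sorted p-adic valuations (capped at k) of the nonunit
    elementary divisors of the integer matrix X."""
    D = smith_normal_form(Matrix(X), domain=ZZ)
    vals = []
    for i in range(D.rows):
        d = abs(int(D[i, i]))
        v = 0
        while d != 0 and d % p == 0 and v < k:
            d //= p
            v += 1
        vals.append(k if d == 0 else v)  # d == 0: valuation beyond precision
    return tuple(sorted(v for v in vals if v > 0))

p, n, k, trials = 3, 4, 6, 4000
A = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
A[n - 1][n - 1] = 0  # concentrated residue A_n of corank 1

hits = 0
for _ in range(trials):
    # X = A_n + p*B with B uniform mod p^(k-1), so X = A_n (mod p)
    X = [[A[i][j] + p * random.randrange(p ** (k - 1)) for j in range(n)]
         for i in range(n)]
    if cokernel_p_type(X, p, k) == (2,):  # cok(X) isomorphic to Z/p^2
        hits += 1

print(hits / trials, "vs predicted", (p - 1) / p ** 2)
```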
**Remark 1.9**.: When \(P(t)\) is square-free modulo \(p\) (i.e., \(m_{1}=m_{2}=\cdots=m_{l}=1\) in (1.3)), the condition
\[|\mathrm{Hom}_{\mathbb{Z}_{p}[t]}(G,\mathbb{F}_{p^{d_{j}}})|=|\mathrm{Ext}^{1}_{ \mathbb{Z}_{p}[t]/(P(t))}(G,\mathbb{F}_{p^{d_{j}}})|,\]
is always satisfied for all \(1\leq j\leq l\) by [13, Lemma 2.2]. This is why in Conjecture 1.5 such conditions were not visible. The following proposition explains more about what happens in general:
**Proposition 1.10**.: Let \(n\in\mathbb{Z}_{\geqslant 1}\). Fix a finite-sized module \(G\) over \(\mathbb{Z}_{p}[t]/(P(t))\) and \(A_{n}\in\mathrm{M}_{n}(\mathbb{F}_{p})\) such that \(\mathrm{cok}(P(A_{n}))\simeq_{\mathbb{F}_{p}[t]}G/pG\). Then the following are equivalent:
1. There exists \(X\in\mathrm{M}_{n}(\mathbb{Z}_{p})\) such that \(\mathrm{cok}(P(X))\simeq_{\mathbb{Z}_{p}[t]}G\) and \(X\equiv A_{n}\,(\mathrm{mod}\ p)\).
2. We have \(|\mathrm{Hom}_{\mathbb{Z}_{p}[t]}(G,\mathbb{F}_{p^{d_{j}}})|=|\mathrm{Ext}^{1} _{\mathbb{Z}_{p}[t]/(P(t))}(G,\mathbb{F}_{p^{d_{j}}})|\) for \(1\leq j\leq l\).
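For example, Proposition 1.10 explains why no \(X\in\operatorname{M}_{n}(\mathbb{Z}_{p})\) has \(\operatorname{\mathrm{cok}}(X^{2})\) of order \(p\). Take \(P(t)=t^{2}\), \(A_{n}=\mathrm{diag}(1,\ldots,1,0)\), and \(G=\mathbb{F}_{p}\) with \(t\) acting as \(0\). Over \(R=\mathbb{Z}_{p}[t]/(t^{2})\), a minimal resolution of \(G\) begins with \(R\twoheadrightarrow G\), whose kernel \((p,\bar{t})\) requires two generators, so \(|\mathrm{Hom}_{\mathbb{Z}_{p}[t]}(G,\mathbb{F}_{p})|=p\) while \(|\mathrm{Ext}^{1}_{R}(G,\mathbb{F}_{p})|=p^{2}\), and condition (2) fails. Consistently, \(v_{p}(\det(X^{2}))=2v_{p}(\det(X))\) is even, so \(\operatorname{\mathrm{cok}}(X^{2})\) can indeed never have order \(p\).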
Theorem 1.7 implies the Haar measure case of the following theorem of the first author and Yu, whose special case (with Haar measure, assuming \(\bar{P}(t)\in\mathbb{F}_{p}[t]\) is square-free) was first proved by Lee [11]:
**Theorem 1.11** (Cheong-Yu).: Let \(0<\epsilon<1\) be a real number, and fix a finite-sized module \(G\) over \(\mathbb{Z}_{p}[t]/(P(t))\). For each \(n\in\mathbb{Z}_{\geqslant 1}\), suppose that \(\mathrm{M}_{n}(\mathbb{Z}_{p})=\mathbb{Z}_{p}^{n^{2}}\) is equipped with a probability measure, where each random \(X\in\mathrm{M}_{n}(\mathbb{Z}_{p})\) has \(n^{2}\) independent entries, each \(X_{ij}\) of which satisfies
\[\max_{a\in\mathbb{F}_{p}}\left(\underset{X_{ij}\in\mathbb{Z}_{p}}{\mathrm{Prob }}(X_{i,j,0}=a)\right)\leq 1-\epsilon,\]
in terms of the notation (1.1). If \(G\) satisfies
\[|\mathrm{Hom}_{\mathbb{Z}_{p}[t]}(G,\mathbb{F}_{p^{d_{j}}})|=|\mathrm{Ext}^{1 }_{\mathbb{Z}_{p}[t]/(P(t))}(G,\mathbb{F}_{p^{d_{j}}})|\]
for \(1\leq j\leq l\), then
\[\lim_{n\to\infty}\underset{X\in\mathrm{M}_{n}(\mathbb{Z}_{p})}{\mathrm{Prob} }(\mathrm{cok}(P(X))\simeq_{\mathbb{Z}_{p}[t]}G)=\frac{1}{|\mathrm{Aut}_{ \mathbb{Z}_{p}[t]}(G)|}\prod_{j=1}^{l}\prod_{i=1}^{\infty}\left(1-p^{-id_{j}} \right).\]
Otherwise the limit is \(0\).
**Remark 1.12**.: It turns out that a random matrix \(X\) with concentrated residue \(A_{n}\) is subject to many constraints on its entries, and essentially, Theorem 1.8 is the best possible universality result one may hope for. For example, consider the case \(P(t)=t\) and \(A_{n}=\mathrm{diag}(1,1,\ldots,1,0)\), the \(n\times n\) diagonal matrix whose diagonal entries are all \(1\) except for a single \(0\) entry. If we consider \(X=A_{n}+pB\) with \(B\in\mathrm{M}_{n}(\mathbb{Z}_{p})\), then for any odd \(p\), if the \((n,n)\)-entry of \(B\) never takes the value \(0\), then the conclusion of Theorem 1.8 does not hold. (More examples and counterexamples can be made from the arguments used in the proof of Theorem 1.8, which is at the end of this paper.)
### Relevance to past and future works
The first special case of Theorem 1.7 with \(P(t)=t\) was shown by Friedman and Washington, as stated in Theorem 1.4. When \(P(t)\) is square-free modulo \(p\), Theorem 1.7 was partially proven by the authors [1, Lemma 5.2], the first author and Kaplan [13, Theorem 1.6] for \(d_{1},\ldots,d_{l}\leq 2\), and the first author, Liang, and Strand [13, Theorem 1.3] for \(l=1\). Assuming that \(P(t)\) is square-free modulo \(p\) makes the problem more accessible because then the ring \(\mathbb{Z}_{p}[t]/(P(t))\) is a finite product of DVRs, and one of our contributions is to get around this difficulty for a general monic polynomial \(P(t)\in\mathbb{Z}_{p}[t]\), where the ring \(\mathbb{Z}_{p}[t]/(P(t))\) is much more complicated.
The first universality result for random integral matrices appears in Wood's breakthrough [20, Theorem 1.3] for symmetric \(\mathbb{Z}_{p}\)-matrices, which generalizes its Haar measure version proven by Clancy, Kaplan, Leake, Payne, and Wood [13, Theorem 2, summing over all the pairings]. Ever since, her techniques have been used to extend many results about Haar-random \(\mathbb{Z}_{p}\)-matrices to random \(\mathbb{Z}_{p}\)-matrices each of whose independent entries is not too concentrated on a single residue modulo \(p\) (i.e., \(X_{i,j,0}\) in (1.1) is not too concentrated on a single value). For example, universality results from [13, 14], [15], and [20] generalize Haar measure results from [11, 12], [13], [13], and [14], respectively.
Several authors [14, 15, 16, 17] have studied properties of random \(X\in\mathrm{M}_{n}(\mathbb{Z}_{p})\) when \(X_{i,j,0}\) is constant, but all the other \(p\)-adic digits \(X_{i,j,1},X_{i,j,2}\), and so on in (1.1) are given the uniform distribution. Theorem 1.8 provides the first universality result with \(X_{i,j,0}\) being constant as it allows us to choose any distributions for all the other \(p\)-adic digits, as long as \(A_{n}\) has a specific form in (1.4) and the
bottom-right \(r_{p}(G)\times r_{p}(G)\) submatrix of \(X\) follows the Haar measure. This seems to be the best universality result that we may hope for in this concentrated residue setting.
Our work opens up numerous questions about the behavior of random integral matrices with fixed residue. To begin with, we may ask about analogues of Theorems 1.7 and 1.8 for different random matrix models such as symmetric matrices or skew-symmetric matrices. We may ask about the concentrated residue version of [20], which deals with the cokernel of a product of \(\mathbb{Z}_{p}\)-matrices. We may ask about the concentrated residue version of [10], which deals with the cokernel of Hermitian matrices over a quadratic extension of \(\mathbb{Z}_{p}\).
### Methodology and brief outline of the paper
The majority of the work goes into proving Theorem 1.7. We go through a series of reductions from §2 to §5 for this. We shall see that behind this, there is an interesting equidistribution result (Theorem 2.4) for matrices over \(\mathbb{Z}_{p}[t]/(P(t))\), which we eventually prove by establishing a noncommutative version of the Weierstrass preparation theorem for the matrix ring \(\operatorname{M}_{n}(\mathbb{Z}_{p})\) (Theorems 5.5 and 5.7). Then to prove Theorem 1.8, we use the strategy of computing the moments (discussed in §6) of the distribution of \(\operatorname{cok}(P(X))\) to determine the distribution. One of the major difficulties in our work in comparison to previous works is that each moment of our distribution cannot be written explicitly. We deal with this difficulty by using Theorem 1.7 in §6.1 to obtain a candidate for the moment \(M_{H}\) depending only on a fixed module \(H\) over a suitable ring.
## 2. Proof of Theorem 1.7 from an equidistribution result
From this section to §5, we prove Proposition 1.10 and Theorem 1.7. Given any \(A_{n}\in\operatorname{M}_{n}(\mathbb{F}_{p})\), we shall write
\[\operatorname{M}_{n}(\mathbb{Z}_{p})_{A_{n}}:=\{X\in\operatorname{M}_{n}( \mathbb{Z}_{p}):X\equiv A_{n}\ \ (\text{mod }p)\}\]
so that
\[\operatorname*{Prob}_{X\in\operatorname{M}_{n}(\mathbb{Z}_{p})}(\operatorname {cok}(P(X))\simeq_{\mathbb{Z}_{p}[t]}G\mid X\equiv A_{n}\ \ (\text{mod }p))= \operatorname*{Prob}_{X\in\operatorname{M}_{n}(\mathbb{Z}_{p})_{A_{n}}}( \operatorname{cok}(P(X))\simeq_{\mathbb{Z}_{p}[t]}G).\]
That is, we consider \(\operatorname{M}_{n}(\mathbb{Z}_{p})_{A_{n}}\) as the sample space instead of mentioning conditional probabilities for the statement of Theorem 1.7. The **Haar measure** on \(\operatorname{M}_{n}(\mathbb{Z}_{p})_{A_{n}}\) is defined to be the probability measure induced by the Haar measure of \(\operatorname{M}_{n}(\mathbb{Z}_{p})\).
**Remark 2.1**.: In this section, all probability measures we deal with are the Haar measures. For example, we assume \(\operatorname{M}_{n}(\mathbb{Z}_{p})_{A_{n}}=\operatorname{M}_{n}(\mathbb{Z}_{p})_{A_{n}}^{\operatorname{Haar}}\). We shall keep this assumption till §5. Starting from §6, we shall drop this assumption.
### Linearization and equidistribution
For any \(X\in\operatorname{M}_{n}(\mathbb{Z}_{p})\), we note that
\[\operatorname{cok}(P(X))\simeq_{R}\operatorname{cok}_{R}(X-\bar{t}I_{n}):= \frac{R^{n}}{((X-\bar{t}I_{n})R^{n})}, \tag{2.1}\]
where
* \(I_{n}\) is the \(n\times n\) identity matrix,
* \(R:=\mathbb{Z}_{p}[t]/(P(t))\), and
* \(\bar{t}\in R\) is the image of \(t\).
We call this isomorphism **Lee's linearization trick**, first used in [10]. The isomorphism linearizes our problem by shifting the difficulty of taking the polynomial push-forward \(P(X)\) of \(X\) into dealing with a more complicated ring \(R\) instead of \(\mathbb{Z}_{p}\). This will be used not only for proving Theorem 1.7 but also for proving Theorem 1.8 by using the version of (2.1) with
* \(X\in\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})\) for a given \(k\in\mathbb{Z}_{\geqslant 1}\),
* \(P(t)\in(\mathbb{Z}/p^{k}\mathbb{Z})[t]\) monic, and
* \(R=(\mathbb{Z}/p^{k}\mathbb{Z})[t]/(P(t))\)
instead.
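As a minimal check of (2.1), take \(n=1\) and \(P(t)=t^{2}\), so that \(R=\mathbb{Z}_{p}[t]/(t^{2})\). For \(x\in\mathbb{Z}_{p}\),

\[\operatorname{cok}_{R}(x-\bar{t})=\frac{\mathbb{Z}_{p}[t]}{(t^{2},\,x-t)}\simeq\frac{\mathbb{Z}_{p}}{(x^{2})}=\operatorname{cok}(P(x)),\]

where the isomorphism substitutes \(t\mapsto x\); note that \(\bar{t}\) acts on both sides as multiplication by \(x\), so this is an isomorphism of \(R\)-modules.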
The following is the linearized version of Proposition 1.10.
**Proposition 2.2**.: Let \(n\in\mathbb{Z}_{\geqslant 1}\). Fix a finite size module \(G\) over \(R\) and \(J_{n}\in\operatorname{M}_{n}(R/pR)\) such that \(\operatorname{cok}(J_{n})\simeq_{\mathbb{F}_{p}[t]}G/pG\). Then the following are equivalent:
1. There exists \(Z\in\operatorname{M}_{n}(R)\) such that \(\operatorname{cok}(Z)\simeq_{R}G\) and \(Z\equiv J_{n}\,(\text{mod }p)\).
2. We have \(|\mathrm{Hom}_{\mathbb{Z}_{p}[t]}(G,\mathbb{F}_{p^{d_{j}}})|=|\mathrm{Ext}^{1}_{R} (G,\mathbb{F}_{p^{d_{j}}})|\) for \(1\leq j\leq l\).
The following is the linearized version of Theorem 1.7. Shortly, we show that this version together with an equidistribution theorem implies Theorem 1.7. We let \(R:=\mathbb{Z}_{p}[t]/(P(t))\) for the rest of this section.
**Theorem 2.3**.: Keeping the hypotheses and notation in Proposition 2.2, if \(G\) satisfies
\[|\mathrm{Hom}_{\mathbb{Z}_{p}[t]}(G,\mathbb{F}_{p^{d_{j}}})|=|\mathrm{Ext}^{1} _{R}(G,\mathbb{F}_{p^{d_{j}}})|\]
for \(1\leq j\leq l\), then
\[\underset{Z\in\mathrm{M}_{n}(R)}{\mathrm{Prob}}(\mathrm{cok}(Z)\simeq_{ \mathbb{Z}_{p}[t]}G|Z\equiv J_{n}\pmod{p})=\frac{|\mathrm{Aut}_{\mathbb{Z}_{p}[ t]}(G/pG)|\prod_{j=1}^{l}\prod_{i=1}^{u_{j}(G/pG)}(1-p^{-id_{j}})}{|\mathrm{Aut}_{ \mathbb{Z}_{p}[t]}(G)|}\]
for any \(n\in\mathbb{Z}_{\geq 1}\). Otherwise, the probability is \(0\).
The key in deducing Proposition 1.10 and Theorem 1.7 from Proposition 2.2 and Theorem 2.3 is to establish the following equidistribution result, which is surprising in its own right; a special case of it was first found by the first author, Liang, and Strand in [2, Lemma 3.7]. Write \(d:=\deg(P)\) for convenience from now on.
**Theorem 2.4**.: Let \(n\in\mathbb{Z}_{\geq 1}\) and let \(G\) be a finite-sized \(R\)-module. For any \(pY_{1},pY_{2},\ldots,pY_{d-1}\in p\mathrm{M}_{n}(\mathbb{Z}_{p})\), we have
\[\underset{X\in\mathrm{M}_{n}(\mathbb{Z}_{p})_{A_{n}}}{\mathrm{Prob}}(\mathrm{ cok}_{R}(X-\bar{t}I_{n})\simeq_{R}G)=\underset{X\in\mathrm{M}_{n}(\mathbb{Z}_{p})_{A_{n}}}{ \mathrm{Prob}}(\mathrm{cok}_{R}(X+\bar{t}(pY_{1}-I_{n})+\bar{t}^{2}pY_{2}+ \cdots+\bar{t}^{d-1}pY_{d-1})\simeq_{R}G).\]
We now assume Theorems 2.3 and 2.4 and then show the purported implications:
_Theorems 2.3 and 2.4 imply Theorem 1.7._ Assume the hypotheses of Theorem 1.7. Let
\[\mathrm{M}_{n}(R)_{A_{n}-\bar{t}I_{n}}:=\{Z\in\mathrm{M}_{n}(R):Z\equiv A_{n}- \bar{t}I_{n}\pmod{p}\}.\]
By Theorem 2.4 with \(J_{n}=A_{n}-\bar{t}I_{n}\), we have
\[\underset{Z\in\mathrm{M}_{n}(R)_{A_{n}-\bar{t}I_{n}}}{\mathrm{ Prob}}(\mathrm{cok}_{R}(Z)\simeq_{\mathbb{Z}_{p}[t]}G)\] \[=\int_{(X,pY_{1},\ldots,pY_{d-1})\in\mathrm{M}_{n}(\mathbb{Z}_{p} )_{A_{n}}\times(p\mathrm{M}_{n}(\mathbb{Z}_{p}))^{d-1}}\mathbb{1}(\mathrm{ cok}_{R}(X+\bar{t}(pY_{1}-I_{n})+\bar{t}^{2}pY_{2}+\cdots+\bar{t}^{d-1}pY_{d-1})\simeq_{ \mathbb{Z}_{p}[t]}G)d(\rho_{n}\times\mu_{n}^{d-1})\] \[=\int_{(pY_{1},\ldots,pY_{d-1})\in(p\mathrm{M}_{n}(\mathbb{Z}_{p }))^{d-1}}\underset{X\in\mathrm{M}_{n}(\mathbb{Z}_{p})_{A_{n}}}{\mathrm{Prob}}( \mathrm{cok}_{R}(X+\bar{t}(pY_{1}-I_{n})+\bar{t}^{2}pY_{2}+\cdots+\bar{t}^{d-1 }pY_{d-1})\simeq_{\mathbb{Z}_{p}[t]}G)d\mu_{n}^{d-1}\] \[=\int_{(pY_{1},\ldots,pY_{d-1})\in(p\mathrm{M}_{n}(\mathbb{Z}_{p }))^{d-1}}\underset{X\in\mathrm{M}_{n}(\mathbb{Z}_{p})_{A_{n}}}{\mathrm{Prob}}( \mathrm{cok}_{R}(X-\bar{t}I_{n})\simeq_{\mathbb{Z}_{p}[t]}G)d\mu_{n}^{d-1}\] \[=\underset{X\in\mathrm{M}_{n}(\mathbb{Z}_{p})_{A_{n}}}{\mathrm{ Prob}}(\mathrm{cok}_{R}(X-\bar{t}I_{n})\simeq_{\mathbb{Z}_{p}[t]}G)\] \[=\underset{X\in\mathrm{M}_{n}(\mathbb{Z}_{p})_{A_{n}}}{\mathrm{ Prob}}(\mathrm{cok}_{R}(X)\simeq_{\mathbb{Z}_{p}[t]}G),\]
where \(\mu_{n}\) is the Haar measure of \(p\mathrm{M}_{n}(\mathbb{Z}_{p})\) and \(\rho_{n}\) is the Haar measure of \(\mathrm{M}_{n}(\mathbb{Z}_{p})_{A_{n}}\), which is introduced right after Theorem 1.7. (We used Lee's linearization trick (2.1) at the end.) Hence, Theorems 2.3 and 2.4 imply Theorem 1.7.
_Proposition 2.2 implies Proposition 1.10 assuming Theorems 2.3 and 2.4._ Let \(G\) be a finite-sized \(R\)-module and \(A_{n}\in\mathrm{M}_{n}(\mathbb{F}_{p})\) such that
\[\mathrm{cok}(P(A_{n}))\simeq_{\mathbb{F}_{p}[t]}G/pG.\]
Let \(J_{n}:=A_{n}-\bar{t}I_{n}\in\mathrm{M}_{n}(R/pR)\). First, assume (1) of Proposition 1.10: there exists \(X\in\mathrm{M}_{n}(\mathbb{Z}_{p})\) such that \(\mathrm{cok}(P(X))\simeq_{R}G\) and \(X\equiv A_{n}\pmod{p}\). Then, we take \(Z:=X-\bar{t}I_{n}\in\mathrm{M}_{n}(R)\), which satisfies (1) of Proposition 2.2 due to Lee's linearization trick (2.1). This implies (2) of Propositions 2.2 and 1.10.
Conversely, assume (2) of Proposition 1.10 (which is identical to (2) of Proposition 2.2). Then by (1) of Proposition 2.2, we have \(Z\in\mathrm{M}_{n}(R)\) such that \(\mathrm{cok}(Z)\simeq_{R}G\) and \(Z\equiv A_{n}-\bar{t}I_{n}\pmod{p}\). This implies that \(Z=A_{n}+pY_{0}+\bar{t}(pY_{1}-I_{n})+\bar{t}^{2}pY_{2}+\cdots+\bar{t}^{d-1}pY_{d-1}\) for some \(pY_{0},pY_{1},pY_{2},\ldots,pY_{d-1}\in p\mathrm{M}_{n}(\mathbb{Z}_{p})\). Take \(X:=A_{n}+pY_{0}\in\mathrm{M}_{n}(\mathbb{Z}_{p})\), which satisfies \(X\equiv A_{n}\,(\text{mod }p)\). By Theorem 2.4, the same argument as in the previous proof gives us
\[\operatorname*{Prob}_{X\in \mathrm{M}_{n}(\mathbb{Z}_{p})_{A_{n}}}(\operatorname*{cok}(P(X))\simeq_{R}G)=\operatorname*{Prob}_{X\in \mathrm{M}_{n}(\mathbb{Z}_{p})_{A_{n}}}(\operatorname*{cok}_{R}(X-\bar{t}I_{n})\simeq_{R}G)=\operatorname*{Prob}_{Z\in \mathrm{M}_{n}(R)}(\operatorname*{cok}_{R}(Z)\simeq_{R}G\mid Z\equiv J_{n}\pmod{p}).\]
Then by Theorem 2.3, the last probability is not \(0\), so this implies (1) of Proposition 1.10, as desired.
## 3. Proofs of Proposition 2.2 and Theorem 2.3
Recall that our current goal (from §2 to §5) is to prove Proposition 1.10 and Theorem 1.7. From the previous section, we know that in order to prove the desired statements, it suffices to prove Proposition 2.2, Theorem 2.3, and Theorem 2.4. In this section, we prove the first two of these.
Since \(R=\mathbb{Z}_{p}[t]/(P(t))\) is not necessarily a PID, the proofs of Proposition 2.2 and Theorem 2.3 differ significantly from the proof given in Friedman and Washington [11] (which corresponds to the case \(P(t)=t\)) due to the lack of the Smith normal form over \(R\). Instead, we shall first develop a few formulas applicable to local Noetherian rings in general. They involve minimal resolutions, which we recall next.
### Minimal resolutions
Throughout this subsection, let \((R,\mathfrak{m},\kappa)\) be a Noetherian local ring with maximal ideal \(\mathfrak{m}\) and residue field \(\kappa\). A **minimal resolution** of a finitely generated \(R\)-module \(G\) is an exact sequence
\[\ldots\overset{A_{2}}{\to}R^{b_{1}}\overset{A_{1}}{\to}R^{b_{0}}\overset{A_ {0}}{\to}G\overset{A_{-1}}{\to}0 \tag{3.1}\]
such that the following equivalent1 conditions hold:
Footnote 1: This equivalence can be deduced from Nakayama’s lemma. (For example, it directly follows from [1, Lemma 19.4].)
1. Each matrix \(A_{i}\) with \(i\geq 1\) has entries in \(\mathfrak{m}\);
2. For each \(i\geq 0\), we have that \(b_{i}\) is the minimal number of generators for \(\ker(A_{i-1})=\operatorname{im}(A_{i})\).
By (1), we have
\[b_{i}=\dim_{\kappa}(\operatorname*{Tor}_{i}^{R}(G,\kappa))=\dim_{\kappa}( \operatorname*{Ext}_{R}^{i}(G,\kappa)). \tag{3.2}\]
In particular, \(b_{i}\) only depends on \(G\), but not on the resolution. Hence, we may write \(\beta_{i}^{R}(G):=b_{i}\) and call it the \(i\)-th **Betti number** of \(G\). We repeatedly use that \(\beta_{0}^{R}(G)=\dim_{\kappa}(G/\mathfrak{m}G)\) is the minimal number of generators of \(G\), which is called the **rank** of \(G\).
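For example, if \(R=\mathbb{Z}_{p}\) and \(G=\mathbb{Z}/p^{a_{1}}\oplus\cdots\oplus\mathbb{Z}/p^{a_{r}}\) with all \(a_{i}\geqslant 1\), then

\[0\to\mathbb{Z}_{p}^{r}\xrightarrow{\operatorname{diag}(p^{a_{1}},\ldots,p^{a_{r}})}\mathbb{Z}_{p}^{r}\to G\to 0\]

is a minimal resolution, so \(\beta_{0}^{\mathbb{Z}_{p}}(G)=\beta_{1}^{\mathbb{Z}_{p}}(G)=r=r_{p}(G)\) and \(\beta_{i}^{\mathbb{Z}_{p}}(G)=0\) for \(i\geqslant 2\).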
We are ready to state the key formula we need in the proofs of Proposition 2.2 and Theorem 2.3. For our purpose, we only need the square-matrix case \(u=0\) of the following theorem, but we present the general case because it does not appear to be in the literature. Given \(m,n\in\mathbb{Z}_{\geq 1}\), we denote by \(M_{n\times m}(A)\) the set of \(n\times m\) matrices over a given ring \(A\).
**Theorem 3.1**.: Let \((R,\mathfrak{m},\mathbb{F}_{q})\) be a complete Noetherian local ring with a finite residue field \(\mathbb{F}_{q}\) of \(q\) elements, and fix \(u\in\mathbb{Z}_{\geq 0}\). Let \(G\) be a finite-sized \(R\)-module with Betti numbers \(\beta_{i}^{R}(G)=b_{i}\). Then there exists \(X\in M_{n\times(n+u)}(R)\) with \(\operatorname*{cok}(X)\simeq_{R}G\) if and only if \(n\geq b_{0}\geq b_{1}-u\). Moreover, with respect to the Haar measure, we have
\[\operatorname*{Prob}_{X\in M_{n\times(n+u)}(R)}(\operatorname*{cok}(X)\simeq_ {R}G)=\frac{1}{|\operatorname*{Aut}_{R}(G)|\,|G|^{u}}\prod_{i=u+b_{0}-b_{1}+1}^{ n+u}(1-q^{-i})\prod_{j=n-b_{0}+1}^{n}(1-q^{-j}) \tag{3.3}\]
if \(n\geq b_{0}\geq b_{1}-u\), and zero otherwise.
We defer the proof of Theorem 3.1 to §3.6.
### Fixing a residue class
Proposition 2.2 and Theorem 2.3 concern Haar-random matrices with concentrated residue class, but Theorem 3.1 is just about Haar-random matrices. In order to apply Theorem 3.1, we need the following lemma, whose DVR case was implicitly noted in [11]:
**Lemma 3.2**.: Fix \(m,n\in\mathbb{Z}_{\geq 1}\). Let \((R,\mathfrak{m},\mathbb{F}_{q})\) be a complete Noetherian local ring with a finite residue field \(\mathbb{F}_{q}\) of \(q\) elements equipped with the Haar measure, and let \(\mathfrak{a}\subset\mathfrak{m}\) be an ideal of \(R\) with \(R/\mathfrak{a}\) of finite size.
Let \(G\) be a finite-length \(R\)-module. Consider any \(\bar{X}\in\mathrm{M}_{n\times m}(R/\mathfrak{a})\) satisfying \(\mathrm{cok}_{R/\mathfrak{a}}(\bar{X})\simeq_{R}G/\mathfrak{a}G\). Then the conditional probability
\[\underset{X\in\mathrm{M}_{n\times m}(R)}{\mathrm{Prob}}\bigg{(}\mathrm{cok}(X) \simeq_{R}G\bigg{|}X\equiv\bar{X}\pmod{\mathfrak{a}}\bigg{)} \tag{3.4}\]
does not depend on \(\bar{X}\).
We defer the proof of Lemma 3.2 to §3.7. Theorem 3.1 and Lemma 3.2 immediately imply the following theorem, which is used in the proofs of Proposition 2.2 and Theorem 2.3.
**Theorem 3.3**.: Let \((R,\mathfrak{m},\mathbb{F}_{q})\) be a complete Noetherian local ring with a finite residue field \(\mathbb{F}_{q}\) with \(q\) elements equipped with the Haar measure, and let \(\mathfrak{a}\subset\mathfrak{m}\) be an ideal of \(R\) with \(R/\mathfrak{a}\) of finite size. Fix \(u\in\mathbb{Z}_{\geqslant 0}\), let \(G\) be a finite-sized \(R\)-module, and let \(\bar{X}\in\mathrm{M}_{n\times(n+u)}(R/\mathfrak{a})\) be such that \(\mathrm{cok}_{R/\mathfrak{a}}(\bar{X})\simeq_{R}G/\mathfrak{a}G\). Then we have
\[\underset{X\in\mathrm{M}_{n\times(n+u)}(R)}{\mathrm{Prob}}\bigg{(}\mathrm{cok} (X)\simeq_{R}G\bigg{|}X\equiv\bar{X}\pmod{\mathfrak{a}}\bigg{)}=\begin{cases} \frac{|\mathrm{Aut}(G/\mathfrak{a}G)|}{|\mathrm{Aut}_{R}(G)||\mathfrak{a}G|^{ u}}\prod_{i=u+b_{0}-b_{1}+1}^{u+b_{0}-b_{1}^{\prime}}(1-q^{-i}),&b_{0}\geqslant b_{1}-u,\\ 0,&b_{0}<b_{1}-u,\end{cases} \tag{3.5}\]
where \(b_{i}=\beta_{i}^{R}(G)\) for \(i=0,1\) and \(b_{1}^{\prime}=\beta_{1}^{R/\mathfrak{a}}(G/\mathfrak{a}G)\). In particular, the conditional probability above does not depend on \(n\).
**Remark 3.4**.: In the above theorem, we always have \(b_{1}^{\prime}\leqslant b_{1}\) (by Lemma 3.5 (2)). It is possible to have an empty product, which we consider as \(1\) as usual. Furthermore, even though (3.5) does not depend on \(n\), the hypotheses of Theorem 3.3 force \(n\geqslant b_{0}\). Indeed, we have \(\beta_{0}^{R/\mathfrak{a}}(G/\mathfrak{a}G)=\beta_{0}^{R}(G)=b_{0}\) because both are equal to \(\dim_{\mathbb{F}_{q}}(G/\mathfrak{m}G)\). Therefore, the existence of \(\bar{X}\in\mathrm{M}_{n\times(n+u)}(R/\mathfrak{a})\) with \(\mathrm{cok}(\bar{X})\simeq_{R}G/\mathfrak{a}G\) implies \(n\geqslant b_{0}\).
We use (1) and (2) of the following, and (3) will be used later:
**Lemma 3.5**.: Let \((R,\mathfrak{m})\) be a Noetherian local ring. Suppose \(\mathfrak{a}\subset\mathfrak{m}\) is an ideal of \(R\) and \(G\) is a finitely generated \(R\)-module. Then we have
1. \(\beta_{0}^{R/\mathfrak{a}}(G/\mathfrak{a}G)=\beta_{0}^{R}(G)\);
2. \(\beta_{1}^{R/\mathfrak{a}}(G/\mathfrak{a}G)\leqslant\beta_{1}^{R}(G)\);
3. If we assume furthermore that \(\mathfrak{a}=\mathfrak{m}\mathfrak{b}\) for some ideal \(\mathfrak{b}\subset R\), and \(\mathfrak{b}G=0\), then \(\beta_{1}^{R/\mathfrak{a}}(G)=\beta_{1}^{R}(G)\).
Proof.: Let \(\kappa=R/\mathfrak{m}\), the residue field of \(R\). Write \(b_{i}=\beta_{i}^{R}(G)\) and \(b_{i}^{\prime}=\beta_{i}^{R/\mathfrak{a}}(G/\mathfrak{a}G)\) for \(i=0,1\).
1. This follows because both sides are equal to \(\dim_{\kappa}(G/\mathfrak{m}G)\).
2. Let \[\cdots\to R^{b_{1}}\to R^{b_{0}}\to G\to 0\] be a minimal resolution of \(G\) over \(R\). Tensoring with \(R/\mathfrak{a}\), we have an exact sequence \[(R/\mathfrak{a})^{b_{1}}\to(R/\mathfrak{a})^{b_{0}}\to G/\mathfrak{a}G\to 0.\] Since \(b_{0}=b_{0}^{\prime}\), by the definition of a minimal resolution of \(G/\mathfrak{a}G\) over \(R/\mathfrak{a}\), we have \(b_{1}\geqslant b_{1}^{\prime}\).
3. Under the given hypotheses, we want to show \(b_{1}^{\prime}=b_{1}\). Note that \(\mathfrak{b}G=0\) implies \(\mathfrak{a}G=0\), so \(G\) is a finitely generated \(R/\mathfrak{a}\)-module. Using a minimal resolution of \(G\) over \(R/\mathfrak{a}\), we get a matrix \(\bar{X}\in\mathrm{Mat}_{b_{0}\times b_{1}^{\prime}}(R/\mathfrak{a})\) such that \(\mathrm{cok}_{R/\mathfrak{a}}(\overline{X})\simeq_{R}G\). Pick any lift \(X\in\mathrm{Mat}_{b_{0}\times b_{1}^{\prime}}(R)\) of \(\bar{X}\) and let \(M:=\mathrm{cok}_{R}(X)\); then we have \(M/\mathfrak{a}M\simeq_{R}M\otimes_{R}(R/\mathfrak{a})\simeq_{R}G\). By Lemma 3.6 (proven below), we must have \(G\simeq_{R}M=\mathrm{cok}(X)\). In other words, there exists an exact sequence \[R^{b_{1}^{\prime}}\overset{X}{\to}R^{b_{0}}\to G\to 0.\] By the definition of a minimal resolution of \(G\) over \(R\), we have \(b_{1}^{\prime}\geqslant b_{1}\). Combined with part (2), we get \(b_{1}^{\prime}=b_{1}\).
**Lemma 3.6**.: Let \((R,\mathfrak{m})\) be a Noetherian local ring. Fix an ideal \(\mathfrak{b}\subset R\) and let \(\mathfrak{a}:=\mathfrak{m}\mathfrak{b}\). If \(G\) is a finitely generated \(R/\mathfrak{a}\)-module such that \(\mathfrak{b}G=0\), and \(M\) is a finitely generated \(R\)-module such that \(M/\mathfrak{a}M\simeq_{R}G\), then \(\mathfrak{a}M=0\) so that \(M\simeq_{R}G\).
Proof.: Since \(\mathfrak{b}G=0\), we have \(0=\mathfrak{b}(M/\mathfrak{a}M)=\mathfrak{b}M/\mathfrak{mb}M\). By Nakayama's lemma, \(\mathfrak{b}M=0\), so \(\mathfrak{a}M=0\). Therefore, \(G\simeq_{R}M/\mathfrak{a}M=M\).
Proof that Theorem 3.1 and Lemma 3.2 imply Theorem 3.3.: Consider the set
\[\mathrm{M}_{n,u,G}(R):=\{X\in\mathrm{M}_{n\times(n+u)}(R):\mathrm{cok}(X)\simeq _{R}G\}.\]
Similarly, consider the finite nonempty set
\[\mathrm{M}_{n,u,G/\mathfrak{a}G}(R/\mathfrak{a}):=\{X^{\prime}\in\mathrm{M}_{ n\times(n+u)}(R/\mathfrak{a}):\mathrm{cok}(X^{\prime})\simeq_{R}G/\mathfrak{a}G\}\]
We have a map \(\Phi:\mathrm{M}_{n,u,G}(R)\to\mathrm{M}_{n,u,G/\mathfrak{a}G}(R/\mathfrak{a})\) that sends \(X\) to (\(X\) mod \(\mathfrak{a}\)). By Lemma 3.2, the fibers of \(\Phi\) have constant measure. As a result, we have
\[\underset{X\in\mathrm{M}_{n\times(n+u)}(R)}{\mathrm{Prob}}\bigg{(}\mathrm{cok }_{R}(X)\simeq_{R}G\text{ and }X\equiv\bar{X}\pmod{\mathfrak{a}}\bigg{)}=\mu_{n \times(n+u)}(\Phi^{-1}(\bar{X}))=\frac{\mu_{n\times(n+u)}(\mathrm{M}_{n,u,G}( R))}{\#\mathrm{M}_{n,u,G/\mathfrak{a}G}(R/\mathfrak{a})},\]
where \(\mu_{n\times(n+u)}\) is the Haar measure of \(\mathrm{M}_{n\times(n+u)}(R)\).
On the right-hand side, we apply Theorem 3.1 for the \(R\)-module \(G\) to the numerator and Theorem 3.1 for the \(R/\mathfrak{a}\)-module \(G/\mathfrak{a}G\) to the denominator. (Note that the ring \(R/\mathfrak{a}\) and the module \(G/\mathfrak{a}G\) satisfy the assumptions of Theorem 3.1.) Dividing the resulting joint probability by \(\mathrm{Prob}(X\equiv\bar{X}\ (\mathrm{mod}\ \mathfrak{a}))=|\mathrm{M}_{n\times(n+u)}(R/\mathfrak{a})|^{-1}\) and using \(b_{1}^{\prime}\leqslant b_{1}\) from Lemma 3.5 (2), the desired conditional probability (3.5) follows immediately.
We shall first show that Theorems 3.1 and 3.3 imply Proposition 2.2 and Theorem 2.3. Then we shall prove Theorem 3.1 and Lemma 3.2.
### Some specifics about \(\mathbb{Z}_{p}[t]/(P(t))\)
Throughout this subsection, assume \(P(t)\in\mathbb{Z}_{p}[t]\) is monic and the reduction of \(P(t)\) modulo \(p\) is of the form \(\bar{Q}(t)^{m}\), where \(m\geqslant 1\) and \(\bar{Q}(t)\) is irreducible in \(\mathbb{F}_{p}[t]\). In other words, we assume \(l=1\) in (1.3). Then \(R=\mathbb{Z}_{p}[t]/(P(t))\) is a local ring2 with maximal ideal \(\mathfrak{m}=(p,Q(t))/(P(t))\), where \(Q(t)\in\mathbb{Z}_{p}[t]\) is any lift of \(\bar{Q}(t)\), with the residue field \(\mathbb{F}_{p}[t]/(\bar{Q}(t))\), a finite field of size \(q:=p^{\deg\bar{Q}(t)}\).
Footnote 2: Given any maximal ideal \(\mathfrak{m}\) of \(\mathbb{Z}_{p}[t]/(P(t))\), we can show that \(p\in\mathfrak{m}\) by observing that \(\mathfrak{m}\) is finite over \(\mathbb{Z}_{p}\) and applying Nakayama’s lemma. From here, it follows that the image of \(Q(t)^{m}\) is in \(\mathfrak{m}\), so the image of \(Q(t)\) must be in \(\mathfrak{m}\) so that \(\mathfrak{m}=(p,Q(t))/(P(t))\).
We shall apply Theorem 3.3 with \(\mathfrak{a}=pR\). The formula we get involves taking the first Betti number over the ring \(R/\mathfrak{a}\). To explicitly compute it, we observe that \(R/pR\) is a DVR quotient. Indeed, we may identify
\[\frac{R}{pR}=\frac{\mathbb{F}_{p}[t]}{(\bar{Q}(t)^{m})}=\frac{T}{(\pi^{m})},\]
where \(T\) is the \(\bar{Q}(t)\)-adic completion of \(\mathbb{F}_{p}[t]\) and \(\pi\) is the image of \(\bar{Q}(t)\) in \(T\). We note that \(T\) is a DVR with uniformizer \(\pi\) and residue field \(\mathbb{F}_{q}\).
**Lemma 3.7**.: Let \((T,(\pi),\kappa)\) be any DVR, and \(m\in\mathbb{Z}_{\geqslant 1}\). Let \(G\) be a finite-length module over \(T/(\pi^{m})\). Then
\[\beta_{0}^{T/(\pi^{m})}(G)-\beta_{1}^{T/(\pi^{m})}(G)=\dim_{\kappa}(\pi^{m-1}( G)).\]
Proof.: By the classification of finitely generated modules over \(T/(\pi^{m})\), it suffices to consider the case \(G=T/(\pi^{a})\) with \(1\leqslant a\leqslant m\). The zeroth step of the minimal resolution of \(G\) is given by the quotient map \(T/(\pi^{m})\twoheadrightarrow G\), so \(\beta_{0}^{T/(\pi^{m})}(G)=1\). If \(a=m\), the quotient map \(T/(\pi^{m})\twoheadrightarrow G\) is an isomorphism, so \(\beta_{1}^{T/(\pi^{m})}(G)=0\). In this case, we also have \(\dim_{\kappa}(\pi^{m-1}(G))=\dim_{\kappa}(\pi^{m-1}T/\pi^{m}T)=1\). Otherwise, we have \(a\leqslant m-1\). Then the kernel of the quotient map \(T/(\pi^{m})\twoheadrightarrow G\) is minimally generated by one generator, so \(\beta_{1}^{T/(\pi^{m})}(G)=1\). In this case, we have \(\dim_{\kappa}(\pi^{m-1}(G))=\dim_{\kappa}(\pi^{m-1}(T/\pi^{a}T))=0\), finishing the proof.
When we use Theorem 3.3, we also need to control \(\beta_{1}^{R}(G)\). To do so, we need the following property of \(R=\mathbb{Z}_{p}[t]/(P(t))\), first observed by the first author and Yu [CY2023+, Lemma 2.2]. We give a different proof; it is considerably shorter because it utilizes the theory of minimal resolutions.
**Lemma 3.8**.: Suppose that the reduction \(\bar{P}(t)\) of \(P(t)\) modulo \(p\) is given by \(\bar{P}(t)=\bar{Q}(t)^{m}\) for some monic irreducible \(\bar{Q}(t)\in\mathbb{F}_{p}[t]\) and \(m\in\mathbb{Z}_{\geqslant 1}\). Then any finite-length \(R\)-module \(G\) satisfies
\[\beta_{1}^{R}(G)\geqslant\beta_{0}^{R}(G). \tag{3.6}\]
**Remark 3.9**.: The above lemma no longer holds if \(\mathbb{Z}_{p}\) is replaced by \(\mathbb{Z}/p^{k}\mathbb{Z}\) with any \(k\in\mathbb{Z}_{\geqslant 1}\), even when \(P(t)=t\), as can be seen from Lemma 3.7.
Proof of Lemma 3.8.: We note that the hypotheses imply that \(R=\mathbb{Z}_{p}[t]/(P(t))\) is local. Let \(b_{i}=\beta_{i}^{R}(G)\) and fix a monic lift \(Q(t)\in\mathbb{Z}_{p}[t]\) of \(\bar{Q}(t)\). By choosing a minimal resolution of \(G\), there exists a matrix \(A\in\mathrm{M}_{b_{0}\times b_{1}}(R)\) such that \(\mathrm{cok}(A)\simeq_{R}G\). In particular, \(\mathrm{cok}(A)\) is of finite length. We shall find an \(R\)-algebra \(K\) that is a field such that \(\mathrm{cok}_{K}(A)=0\). If so, the existence of a \(b_{0}\times b_{1}\) matrix \(A\) over \(K\) that gives rise to a surjective \(K\)-linear map would imply \(b_{1}\geqslant b_{0}\).
Recall that \(\mathbb{Z}_{p}[t]\) is a unique factorization domain. In particular, the polynomial \(P(t)\) admits a factorization into monic irreducible polynomials in \(\mathbb{Z}_{p}[t]\). Let \(F(t)\) be a monic irreducible factor of \(P(t)\) in \(\mathbb{Z}_{p}[t]\), and consider the ring \(S:=\mathbb{Z}_{p}[t]/(F(t))\), which is a quotient of \(R\). More importantly, the ring \(S\) is a local domain that is not a field. (If \(S\) were a field, then \(F(t)R\) would be a maximal ideal of \(R\). On the other hand, the unique maximal ideal of \(R\) is \(\mathfrak{m}=(p,Q(t))R\), which is not \(F(t)R\) because \(p\notin F(t)R\).) Let \(K\) be the fraction field of \(S\) and view \(K\) as an \(R\)-algebra. We now claim that \(\mathrm{cok}_{K}(A)=0\).
It suffices to show that \(G\otimes_{R}K=0\). Let \(G^{\prime}:=G\otimes_{R}S\). Note that \(\mathfrak{m}S\) is the maximal ideal of \(S\) because \(S\) is a quotient of \(R\). Note that, as an \(R\)-module, \(G^{\prime}\) is of finite length because it is a quotient of \(G\). Thus, there exists \(N\geqslant 0\) such that \(\mathfrak{m}^{N}G=0\) so that \((\mathfrak{m}^{N}S)G^{\prime}=0\) as an \(S\)-module. Since \(S\) is a domain that is not a field, there exists \(x\in\mathfrak{m}^{N}S\smallsetminus\{0\}\). We have \(xG^{\prime}=0\), so that \(x\) annihilates \(G^{\prime}\otimes_{S}K\) as well. But \(x\) is invertible in \(K\), which implies \(G\otimes_{R}K\simeq_{K}G^{\prime}\otimes_{S}K=0\), and the proof is complete.
### Proofs of Proposition 2.2 and Theorem 2.3 assuming Theorems 3.1 and 3.3
We are now ready to prove Proposition 2.2 and Theorem 2.3 assuming Theorems 3.1 and 3.3.
Proofs of Proposition 2.2 and Theorem 2.3 assuming Theorems 3.1 and 3.3.: Recall the factorization of \(\bar{P}(t)\) in (1.3). By Hensel's lemma, there exist monic \(Q_{1}(t),\ldots,Q_{l}(t)\in\mathbb{Z}_{p}[t]\) such that \(P(t)=Q_{1}(t)\cdots Q_{l}(t)\) and \(Q_{j}(t)\equiv\bar{P}_{j}(t)^{m_{j}}\pmod{p}\). Let \(R_{j}:=\mathbb{Z}_{p}[t]/(Q_{j}(t))\). By the Chinese remainder theorem, we have \(R\simeq_{R}R_{1}\times\cdots\times R_{l}\) given by \(x\mapsto(x\mod(Q_{1}),\ldots,x\mod(Q_{l}))\). Applying this particular isomorphism, we have
\[\mathrm{M}_{n}(R)\simeq_{R}\mathrm{M}_{n}(R_{1})\times\cdots\times\mathrm{M}_ {n}(R_{l}),\]
and the Haar measure on \(\mathrm{M}_{n}(R)\) is the product measure of the Haar measures of \(\mathrm{M}_{n}(R_{j})\) because of the uniqueness of the Haar measure. Hence, to prove Proposition 2.2 and Theorem 2.3, it suffices to prove them for the case \(l=1\). (More details of this reduction can be found in [10, §2.1] by replacing \(P_{j}(t)\) in the citation with \(Q_{j}(t)\).) Therefore, we may assume from now on that \(\bar{P}(t)=\bar{Q}(t)^{m}\) for some monic irreducible \(\bar{Q}(t)\in\mathbb{F}_{p}[t]\) and \(m\in\mathbb{Z}_{\geqslant 1}\). In particular, the ring \(R=\mathbb{Z}_{p}[t]/(P(t))\) is local. Write \(d:=\deg(\bar{Q})\) and \(q:=p^{d}\).
We first assume (1) and then show (2) in Proposition 2.2. Lemma 3.8 implies that \(\beta_{0}^{R}(G)\leqslant\beta_{1}^{R}(G)\). Theorem 3.1 with \(u=0\) implies \(\beta_{0}^{R}(G)\geqslant\beta_{1}^{R}(G)\). Thus, we have \(\beta_{0}^{R}(G)=\beta_{1}^{R}(G)\), so that, by (3.2),
\[|\mathrm{Hom}_{\mathbb{Z}_{p}[t]}(G,\mathbb{F}_{q})|=q^{\beta_{0}^{R}(G)}=q^{\beta_{1}^{R}(G)}=|\mathrm{Ext}_{R}^{1}(G,\mathbb{F}_{q})|,\]
which is (2).
Next, we show that (2) of Proposition 2.2 implies the conclusion of Theorem 2.3. Taking \(u=0\) and \(\mathfrak{a}=pR\) in Theorem 3.3 (with \(J_{n}=\bar{X}\)), we have \(b_{0}=b_{0}^{\prime}\) and thus, applying Lemma 3.7 (and the discussion before that), we have
\[b_{0}-b_{1}^{\prime} =b_{0}^{\prime}-b_{1}^{\prime}\] \[=\beta_{0}^{R/pR}(G/pG)-\beta_{1}^{R/pR}(G/pG)\] \[=\dim_{\mathbb{F}_{q}}(\bar{Q}(t)^{m-1}G/pG)\] \[=u_{1}(G/pG).\]
Since (2) from Proposition 2.2 implies \(b_{0}=b_{1}\), we obtain Theorem 2.3.
Finally, we assume (2) and then show (1) in Proposition 2.2. We already know that (2) implies the conclusion of Theorem 2.3. Then
\[\operatorname*{Prob}_{Z\in\mathrm{M}_{n}(R)}(\mathrm{cok}(Z)\simeq_{\mathbb{Z}_{p}[t]}G\text{ and }Z\equiv J_{n}\pmod{p})\neq 0,\]
so we get the existence of such \(Z\). This finishes the proofs of Proposition 2.2 and Theorem 2.3 assuming Theorem 3.3.
For the rest of the section, we prove Theorem 3.1 and Lemma 3.2 (which imply Theorem 3.3) that we have deferred. Then by the previous subsection, we would establish Proposition 2.2 and Theorem 2.3, which would only leave Theorem 2.4 to finish the proof of Theorem 1.7. We collect some preliminaries in commutative algebra needed in the proofs.
### Preliminaries in commutative algebra for proofs of Theorem 3.1 and Lemma 3.2
The proofs of Theorem 3.1 and Lemma 3.2 in the DVR case rely on the classification of finitely generated modules, namely, the Smith normal form. In order to generalize the proof to a Noetherian local ring that is not a DVR, we need to show that some nice consequences of the Smith normal form persist even in its absence. The following lemma is the key ingredient in the proofs of Lemma 3.12 and Lemma 3.13. The former is used in the proof of Lemma 3.2, and the latter is part of Theorem 3.1 and is crucially used in Lemma 3.17, the last step of the proof of Theorem 3.1. Denote by \(\operatorname{Sur}_{R}(G,H)\) the set of \(R\)-linear surjections from \(G\) to \(H\), given \(R\)-modules \(G\) and \(H\).
**Lemma 3.10**.: Let \((R,\mathfrak{m},\kappa)\) be any Noetherian local ring, and \(G\) be a finitely generated \(R\)-module. Suppose that \(n\geqslant\beta_{0}^{R}(G)\). Then \(\operatorname{GL}_{n}(R)\) acts on \(\operatorname{Sur}_{R}(R^{n},G)\) transitively: for any \(F_{1},F_{2}\in\operatorname{Sur}_{R}(R^{n},G)\), there is \(g\in\operatorname{GL}_{n}(R)\) such that \(F_{2}=F_{1}\circ g\).
Proof.: Let \(r=\dim_{\kappa}(G/\mathfrak{m}G)=\beta_{0}^{R}(G)\), the minimal number of generators for \(G\). Fix an \(R\)-linear surjection \(\varphi:R^{r}\twoheadrightarrow G\). Recall that free modules are projective, so any \(R\)-linear map from a free module to \(G\) lifts along the surjection \(\varphi\). Therefore, we have \(R\)-linear maps \(F_{1}^{\prime},F_{2}^{\prime}:R^{n}\to R^{r}\) such that \(\varphi\circ F_{i}^{\prime}=F_{i}\) for \(i=1,2\).
Tensoring the diagram with \(\kappa=R/\mathfrak{m}\), the map \(\varphi\) becomes an isomorphism of \(\kappa\)-vector spaces by the assumption that the minimal number of generators of \(G\) is \(r\). For \(i=1,2\), since the mod-\(\mathfrak{m}\) reduction of \(F_{i}\) is surjective, so is the mod-\(\mathfrak{m}\) reduction \(\overline{F}_{i}^{\prime}\) of \(F_{i}^{\prime}\). By Nakayama's lemma, \(F_{i}^{\prime}\) is surjective. Hence, we may replace \(G\) by \(R^{r}\), and we have reduced to the case where \(G\) is a free module \(R^{r}\), and \(F_{1},F_{2}\) are surjective \(r\times n\) matrices.
We now claim that there exists \(g\in\operatorname{GL}_{n}(R)\) such that \(F_{2}=F_{1}g\). For \(i=1,2\), by right-multiplying \(F_{i}\) by a permutation matrix in \(\operatorname{GL}_{n}(R)\) if necessary, we may assume the first \(r\) columns of \(\overline{F}_{i}\) span \(\kappa^{r}\). Write
\[F_{1}=\begin{bmatrix}U&A\end{bmatrix}\text{ and }\ F_{2}=\begin{bmatrix}V&B\end{bmatrix},\text{ where }\ U,V\in\mathrm{Mat}_{r}(R)\text{ and }\ A,B\in\mathrm{Mat}_{r\times(n-r)}(R).\]
By our assumption, \(U,V\) are invertible mod \(\mathfrak{m}\), thus invertible over \(R\). Considering
\[g:=\begin{bmatrix}U^{-1}V&U^{-1}(B-A)\\ 0&I_{n-r}\end{bmatrix}\in\operatorname{GL}_{n}(R),\]
we have \(F_{2}=F_{1}g\) as desired.
**Remark 3.11**.: Lemma 3.10 can also be deduced from [E, Theorem 20.2].
Theorem 3.1 concerns all matrices with a fixed cokernel up to isomorphism. We now show that all such matrices are row-column-equivalent, as they are in the DVR case. More precisely, we have the following. (Technically, we do not need it for the proof of Theorem 3.1, but we use it in the proof of Lemma 3.2.)
**Lemma 3.12**.: Let \(R\) be a Noetherian local ring and \(m,n\in\mathbb{Z}_{\geqslant 1}\). Consider any two \(m\times n\) matrices over \(R\) or equivalently, \(R\)-linear maps \(A,B:R^{n}\to R^{m}\). Then
1. \(\operatorname{im}(A)=\operatorname{im}(B)\) as submodules of \(R^{m}\) if and only if \(A\) and \(B\) are **column-equivalent**, namely, \(Ag=B\) for some \(g\in\operatorname{GL}_{n}(R)\).
2. Let \(N_{1},N_{2}\) be submodules of \(R^{m}\). Then \(R^{m}/N_{1}\) and \(R^{m}/N_{2}\) are isomorphic as \(R\)-modules if and only if \(N_{1}\) and \(N_{2}\) are **row-equivalent**, namely, \(gN_{1}=N_{2}\) for some \(g\in\operatorname{GL}_{m}(R)\).
3. \(\operatorname{cok}(A)\) and \(\operatorname{cok}(B)\) are isomorphic as \(R\)-modules if and only if \(A\) and \(B\) are **row-column-equivalent**, namely, \(gAg^{\prime}=B\) for some \(g\in\operatorname{GL}_{m}(R)\) and \(g^{\prime}\in\operatorname{GL}_{n}(R)\).
Proof.:
1. The backward implication is trivial. For the forward implication, write \(M=\operatorname{im}(A)=\operatorname{im}(B)\subseteq R^{m}\) so that we can consider \(A,B\in\operatorname{Sur}_{R}(R^{n},M)\). By Lemma 3.10, there is \(g\in\operatorname{GL}_{n}(R)\) such that \(A\circ g=B\) as maps from \(R^{n}\) to \(M\). Composed with the inclusion map of \(M\) into \(R^{m}\), we have \(Ag=B\) as matrices.
2. The backward implication is evident, since \(g\in\operatorname{GL}_{m}(R)\) induces an isomorphism from \(R^{m}/N_{1}\) to \(R^{m}/gN_{1}\). For the forward implication, let \(M=R^{m}/N_{1}\simeq_{R}R^{m}/N_{2}\), and let \(\pi_{1},\pi_{2}\in\operatorname{Sur}_{R}(R^{m},M)\) be the quotient maps induced by \(N_{1}\) and \(N_{2}\), respectively. By Lemma 3.10 applied to \(\pi_{1}\) and \(\pi_{2}\), there is \(g\in\operatorname{GL}_{m}(R)\) such that \(\pi_{1}=\pi_{2}\circ g\), i.e., we have a commutative diagram of \(R\)-linear maps whose rows are exact: \[\begin{CD}0@>{}>{}>N_{1}@>{}>{}>R^{m}@>{}>{}>M@>{}>{}>0\\ @.@VVV@VV{g}V@|@.\\ 0@>{}>{}>N_{2}@>{}>{}>R^{m}@>{}>{}>M@>{}>{}>0.\end{CD}\] Therefore, we have \(gN_{1}=N_{2}\).
3. The backward implication is trivial. For the forward implication, if \(\operatorname{cok}(A)\simeq_{R}\operatorname{cok}(B)\), then \(N_{1}=\operatorname{im}(A)\) and \(N_{2}=\operatorname{im}(B)\) satisfy the assumption of (2), so \(\operatorname{im}(B)=g\cdot\operatorname{im}(A)\) for some \(g\in\operatorname{GL}_{m}(R)\). Thus \(\operatorname{im}(B)=\operatorname{im}(gA)\), so by (1), there is \(g^{\prime}\in\operatorname{GL}_{n}(R)\) such that \(B=gAg^{\prime}\).
This finishes the proof.
The following lemma is a part of Theorem 3.1.
**Lemma 3.13**.: Let \(R\) be a Noetherian local ring and \(G\) a finitely generated \(R\)-module. Write \(b_{i}:=\beta_{i}^{R}(G)\). For integers \(n\geqslant 1\) and \(u\geqslant 0\), if there exists \(X\in\operatorname{M}_{n\times(n+u)}(R)\) with \(\operatorname{cok}(X)\simeq_{R}G\), then \(n\geqslant b_{0}\geqslant b_{1}-u\).
Proof.: Consider the exact sequence
\[R^{n+u}\stackrel{{ X}}{{\to}}R^{n}\stackrel{{ A}}{{\to}}G\to 0,\]
where \(A\) is the \(R\)-linear map given by \(R^{n}\to R^{n}/XR^{n+u}=\operatorname{cok}(X)\simeq_{R}G\), and let \(M:=\ker(A)\subset R^{n}\). From the existence of the surjection \(A\), it follows that \(n\geqslant b_{0}\). From the existence of the \(R\)-linear surjection \(X:R^{n+u}\to M\), it follows that \(n+u\geqslant\beta_{0}^{R}(M)\). To prove \(b_{0}\geqslant b_{1}-u\), it suffices to show that
\[\beta_{0}^{R}(M)=n+b_{1}-b_{0}.\]
By Lemma 3.10, if \(A^{\prime}\) is any \(R\)-linear surjection from \(R^{n}\) onto \(G\), then \(\ker(A^{\prime})\) is isomorphic to \(M=\ker(A)\) and thus \(\beta_{0}^{R}(\ker(A^{\prime}))=\beta_{0}^{R}(M)\). We construct a convenient choice of \(A^{\prime}\) below. Pick a minimal resolution
\[\cdots\to R^{b_{1}}\to R^{b_{0}}\stackrel{{ A_{0}}}{{\to}}G\to 0\]
of \(G\), and write \(M_{0}:=\ker(A_{0})\). Then \(\beta_{0}^{R}(M_{0})=b_{1}\) by the definition of a minimal resolution. Now construct \(A^{\prime}:=A_{0}\oplus 0:R^{b_{0}}\oplus R^{n-b_{0}}\to G\), then \(\ker(A^{\prime})=M_{0}\oplus R^{n-b_{0}}\). It follows that
\[\beta_{0}^{R}(M)=\beta_{0}^{R}(M_{0}\oplus R^{n-b_{0}})=b_{1}+(n-b_{0})=n+b_{1} -b_{0},\]
as desired.
Similarly to the proof by Friedman and Washington [10] in the DVR case, we reduce the Haar-measure statement in Theorem 3.1 to a counting statement by passing to a sufficiently large finite quotient of \(R\). We need the following lemmas in the reduction step.
**Remark 3.14**.: In the reduction step in the proof of Theorem 3.1, we shall apply Lemma 3.5 (3) with \(\mathfrak{a}=\mathfrak{m}^{L}\) and \(\mathfrak{b}=\mathfrak{m}^{L-1}\), where \(L\) is large enough so that \(\mathfrak{b}M=0\).
### Proof of Theorem 3.1
The "only if" direction of the existence statement of Theorem 3.1 follows from Lemma 3.13. Once the probability formula (3.3) of Theorem 3.1 is proved, the "if" direction of the existence statement follows from the fact that the probability is nonzero. Hence it suffices to prove (3.3), under the assumption that \(n\geqslant b_{0}\geqslant b_{1}-u\), where \(b_{i}:=\beta_{i}^{R}(G)\). We carry this out in three steps.
**Lemma 3.15** (Step 1).: To prove (3.3), it suffices to prove the case when \(R\) is of finite size.
Proof.: Assume (3.3) with the hypothesis \(n\geqslant b_{0}\geqslant b_{1}-u\) holds for any finite-sized local ring \(R\). Now, let \(R\) and \(G\) be given as in Theorem 3.1, where \(R\) is not necessarily of finite size. Suppose that \(n\geqslant\beta_{0}(G)\geqslant\beta_{1}(G)-u\). Since \(G\) is of finite length, there exists \(L\in\mathbb{Z}_{\geqslant 2}\) such that \(\mathfrak{m}^{L-1}G=0\). For any \(X\in\operatorname{Mat}_{n\times(n+u)}(R)\), we denote by \(\bar{X}\) the residue class of \(X\) modulo \(\mathfrak{m}^{L}\). Since
\[\operatorname{cok}_{R/\mathfrak{m}^{L}}(\bar{X})\simeq_{R}\operatorname{cok}( X)\otimes_{R}R/\mathfrak{m}^{L}\simeq_{R}\operatorname{cok}(X)/\mathfrak{m}^{L} \operatorname{cok}(X),\]
by Lemma 3.6 with \(\mathfrak{b}=\mathfrak{m}^{L-1}\) and \(M=\operatorname{cok}(X)\), we have \(\operatorname{cok}(X)\simeq_{R}G\) if and only if \(\operatorname{cok}_{R/\mathfrak{m}^{L}}(\overline{X})\simeq_{R}G\). Moreover, by Lemma 3.5 (1) and (3) with \(\mathfrak{b}=\mathfrak{m}^{L-1}\), we have \(\beta_{i}^{R/\mathfrak{m}^{L}}(G)=\beta_{i}^{R}(G)\) for \(i=0,1\). Hence, both sides of (3.3) are unchanged if we replace \(R\) by \(R/\mathfrak{m}^{L}\) everywhere. Therefore, the equality in (3.3) holds by our assumption.
For the rest of the proof, we assume \(R\) is a finite-sized local ring. Our goal is to count the cardinality of \(\{X\in\operatorname{M}_{n\times(n+u)}(R):\operatorname{cok}(X)\simeq_{R}G\}\). We divide this into two steps: we first count all possible images of such \(X\), and then count the number of such \(X\) with a given image. We may immediately notice that the image of any such \(X\) must be a submodule \(M\subset R^{n}\) such that \(R^{n}/M\simeq_{R}G\), and any such matrix \(X\) with a given image \(M\) corresponds to an \(R\)-linear surjection from \(R^{n+u}\) onto \(M\). The following lemma is due to Cohen and Lenstra [10, Proposition 3.1 (iii)]:
**Lemma 3.16** (Step 2).: Let \((R,\mathfrak{m},\mathbb{F}_{q})\) be a local ring of finite size and \(G\) a finite-sized \(R\)-module. If \(n\geqslant\beta_{0}^{R}(G)=b_{0}\), then the number of submodules of \(R^{n}\) with quotient \(G\) is given by
\[\#\{M\leqslant R^{n}:R^{n}/M\simeq_{R}G\}=\frac{|G|^{n}}{|\operatorname{Aut}_{ R}(G)|}\prod_{i=n-b_{0}+1}^{n}(1-q^{-i}).\]
Proof.: We note that \(\{M\leqslant R^{n}:R^{n}/M\simeq_{R}G\}\) can be identified with the set of \(\operatorname{Aut}_{R}(G)\)-orbits of \(\operatorname{Sur}_{R}(R^{n},G)\), where \(\operatorname{Aut}_{R}(G)\) acts on \(\operatorname{Sur}_{R}(R^{n},G)\) by composition: that is, given any \(\phi_{1},\phi_{2}\in\operatorname{Sur}_{R}(R^{n},G)\), we have \(\ker(\phi_{1})=\ker(\phi_{2})\) if and only if \(\phi_{2}=\sigma\circ\phi_{1}\) for some \(\sigma\in\operatorname{Aut}_{R}(G)\). The action is free: if \(A\in\operatorname{Sur}_{R}(R^{n},G)\) and \(\sigma\in\operatorname{Aut}_{R}(G)\) satisfies \(\sigma\circ A=A\), then \(\sigma\) must be the identity because \(A\) is surjective. Therefore, the orbit-stabilizer theorem implies that every orbit has the size \(|\operatorname{Aut}_{R}(G)|\), so
\[\#\{M\leqslant R^{n}:R^{n}/M\simeq_{R}G\}=\frac{|\operatorname{Sur}_{R}(R^{n},G)|}{|\operatorname{Aut}_{R}(G)|}.\]
We now compute \(|\operatorname{Sur}_{R}(R^{n},G)|\). By Nakayama's lemma, an \(R\)-linear map \(A:R^{n}\to G\) is surjective if and only if its \(\operatorname{mod-}\mathfrak{m}\) reduction \(\bar{A}:\mathbb{F}_{q}^{\ n}\to G/\mathfrak{m}G\) is surjective. Therefore, the probability that a uniformly random \(A\in\operatorname{Hom}_{R}(R^{n},G)\) be surjective is
\[\frac{|\operatorname{Sur}_{\mathbb{F}_{q}}(\mathbb{F}_{q}^{\ n},\mathbb{F}_{q}^{\ b_{0}})|}{|\operatorname{Hom}_{\mathbb{F}_{q}}(\mathbb{F}_{q}^{\ n},\mathbb{F}_{q}^{\ b_{0}})|}=\prod_{i=n-b_{0}+1}^{n}(1-q^{-i}). \tag{3.7}\]
Since \(|\operatorname{Hom}_{R}(R^{n},G)|=|G|^{n}\), the result follows.
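As a sanity check of Lemma 3.16, take \(R=G=\mathbb{F}_{p}\) and \(n=2\), so that \(b_{0}=1\). The formula gives

\[\frac{|G|^{2}}{|\operatorname{Aut}(G)|}\prod_{i=2}^{2}(1-p^{-i})=\frac{p^{2}}{p-1}\cdot\frac{p^{2}-1}{p^{2}}=p+1,\]

which is indeed the number of lines \(M\leqslant\mathbb{F}_{p}^{2}\) with \(\mathbb{F}_{p}^{2}/M\simeq\mathbb{F}_{p}\).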
**Lemma 3.17** (Step 3).: Assume \((R,\mathfrak{m},\mathbb{F}_{q})\) is a local ring of finite size and \(M\subset R^{n}\) is a submodule. Let \(G:=R^{n}/M\). Then
\[|\operatorname{Sur}_{R}(R^{n+u},M)|=\frac{|R|^{n(n+u)}}{|G|^{n+u}}\prod_{i=u+b_{0}-b_{1}+1}^{n+u}(1-q^{-i}),\]
where \(b_{i}=\beta_{i}^{R}(G)\). In particular, the quantity depends only on the isomorphism class of \(G\), but not on \(M\).
Proof.: From the proof of Lemma 3.13, we have \(\beta_{0}^{R}(M)=n+b_{1}-b_{0}\) by taking \(A\) to be the quotient map \(R^{n}\twoheadrightarrow R^{n}/M\simeq_{R}G\) so that \(M=\ker(A)\). By the same argument involving (3.7), we have
\[|\mathrm{Sur}(R^{n+u},M)|=|M|^{n+u}\prod_{i=n+u-\beta_{0}^{R}(M)+1}^{n+u}(1-q^{-i}).\]
The desired formula then follows because \(|R|^{n(n+u)}=|R^{n}|^{n+u}=|G|^{n+u}|M|^{n+u}\).
We are now ready to show Theorem 3.1:
Proof of Theorem 3.1.: By Lemma 3.15, we may assume that \(R\) is of finite size. It remains to prove (3.3) under the assumption \(n\geqslant b_{0}\geqslant b_{1}-u\). By Lemma 3.16 and Lemma 3.17, we have
\[\operatorname*{Prob}_{X\in\mathrm{M}_{n\times(n+u)}(R)}(\mathrm{cok}(X)\simeq_{R}G) =\frac{1}{|R|^{n(n+u)}}\#\{X\in\mathrm{M}_{n\times(n+u)}(R):\mathrm{cok}(X)\simeq_{R}G\}\] \[=\frac{1}{|R|^{n(n+u)}}\Bigg{(}\frac{|G|^{n}}{|\mathrm{Aut}_{R}(G)|}\prod_{j=n-b_{0}+1}^{n}(1-q^{-j})\Bigg{)}\Bigg{(}\frac{|R|^{n(n+u)}}{|G|^{n+u}}\prod_{i=u+b_{0}-b_{1}+1}^{n+u}(1-q^{-i})\Bigg{)}\] \[=\frac{1}{|\mathrm{Aut}_{R}(G)||G|^{u}}\prod_{i=u+b_{0}-b_{1}+1}^{n+u}(1-q^{-i})\prod_{j=n-b_{0}+1}^{n}(1-q^{-j}),\]
which is (3.3).
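Specializing Theorem 3.1 to \(R=\mathbb{Z}_{p}\) and \(u=0\) recovers (1.2): for a finite abelian \(p\)-group \(G\), the diagonal presentation by \(\operatorname{diag}(p^{a_{1}},\ldots,p^{a_{r}})\) is a minimal resolution, so \(b_{0}=b_{1}=r_{p}(G)\) and (3.3) becomes

\[\frac{1}{|\operatorname{Aut}(G)|}\prod_{i=1}^{n}(1-p^{-i})\prod_{j=n-r_{p}(G)+1}^{n}(1-p^{-j}).\]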
### Proof of Lemma 3.2
We now prove Lemma 3.2:
Proof of Lemma 3.2.: Denote by \(P(G|\bar{X})\) the conditional probability in (3.4). Suppose that \(\overline{X_{1}},\overline{X_{2}}\in\mathrm{M}_{n\times m}(R/\mathfrak{a})\) satisfy \(\mathrm{cok}(\overline{X_{1}})\simeq_{R}G/\mathfrak{a}G\simeq_{R}\mathrm{cok}(\overline{X_{2}})\). We shall prove that \(P(G|\overline{X_{1}})=P(G|\overline{X_{2}})\).
By Lemma 3.12 (3) applied to the ring \(R/\mathfrak{a}\), there exist \(\bar{g}\in\mathrm{GL}_{n}(R/\mathfrak{a})\) and \(\bar{g}^{\prime}\in\mathrm{GL}_{m}(R/\mathfrak{a})\) such that \(\bar{g}\overline{X_{1}}\bar{g}^{\prime}=\overline{X_{2}}\). Pick any lifts \(g\in\mathrm{M}_{n}(R)\) and \(g^{\prime}\in\mathrm{M}_{m}(R)\) of \(\bar{g}\) and \(\bar{g}^{\prime}\), respectively. Since invertibility can be tested modulo \(\mathfrak{m}\), the matrices \(g,g^{\prime}\) must be invertible.
Consider the map
\[\{X_{1}\in\mathrm{M}_{n\times m}(R):X_{1}\equiv\overline{X_{1}}\pmod{\mathfrak{a}}\}\to\{X_{2}\in\mathrm{M}_{n\times m}(R):X_{2}\equiv\overline{X_{2}}\pmod{\mathfrak{a}}\}\text{ given by}\] \[X_{1}\mapsto gX_{1}g^{\prime},\]
which is well-defined since \(\bar{g}\overline{X_{1}}\bar{g}^{\prime}=\overline{X_{2}}\). This map is a measure-preserving bijection because it is a restriction of an \(R\)-linear automorphism of \(\mathrm{M}_{n\times m}(R)\) and the Haar measure on \(\mathrm{M}_{n\times m}(R)\) is unique. By its definition, this map preserves the cokernel up to \(R\)-linear isomorphism, so \(P(G|\overline{X_{1}})=P(G|\overline{X_{2}})\).
Hence, to show Theorem 1.7, it remains to show Theorem 2.4. In the next section, we shall reduce Theorem 2.4 to another lemma, which is proven in §5.
## 4. Reduction of proof of Theorem 2.4
The high-level idea of this section originates from [10] and [10]. Write \(\bar{X}:=A_{n}\) in Theorem 2.4 since our \(n\) is fixed throughout this section. Write \(R:=\mathbb{Z}_{p}[t]/(P(t))\) and \(d:=\deg(P)\). To prove Theorem 2.4, it suffices to construct a measure-preserving bijection
\[\{X\in\mathrm{M}_{n}(\mathbb{Z}_{p})_{\bar{X}}: \mathrm{cok}_{R}(X+\bar{t}(pY_{1}-I_{n})+\bar{t}^{2}pY_{2}+\cdots+ \bar{t}^{d-1}pY_{d-1})\simeq_{R}G\}\] \[\to\{X^{\prime}\in\mathrm{M}_{n}(\mathbb{Z}_{p})_{\bar{X}}: \mathrm{cok}_{R}(X^{\prime}-\overline{t}I_{n})\simeq_{R}G\},\]
given the hypotheses of Theorem 2.4.
To achieve this, we note that \(\mathrm{cok}(ZU)\simeq_{R}\mathrm{cok}(Z)\) for any \(Z\in\mathrm{M}_{n}(R)\) and \(U\in\mathrm{GL}_{n}(R)\), so it suffices to construct a measure-preserving bijection \(\Phi:\mathrm{M}_{n}(\mathbb{Z}_{p})_{\bar{X}}\to\mathrm{M}_{n}(\mathbb{Z}_{p}) _{\bar{X}}\) such that whenever \(X^{\prime}=\Phi(X)\), there exists \(U\in\mathrm{GL}_{n}(R)\) such that
\[\Big{(}X+\overline{t}(pY_{1}-I_{n})+\overline{t}^{2}pY_{2}+\cdots+\overline{t }^{d-1}pY_{d-1}\Big{)}U=X^{\prime}-\overline{t}I_{n}. \tag{4.1}\]
When \(d=2\), the first author and Kaplan [10, p.645] observed that we can take \(\Phi(X)=X(I_{n}-pY_{1})^{-1}\) with \(U=(I_{n}-pY_{1})^{-1}\). Note that the inverse of \(\Phi\) is given by \(\Phi^{-1}(X^{\prime})=X^{\prime}(I_{n}-pY_{1})\).
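Indeed, for \(d=2\) one checks directly that

\[\bigl(X+\bar{t}(pY_{1}-I_{n})\bigr)(I_{n}-pY_{1})^{-1}=X(I_{n}-pY_{1})^{-1}-\bar{t}I_{n},\]

since \((pY_{1}-I_{n})(I_{n}-pY_{1})^{-1}=-I_{n}\); moreover, this \(\Phi\) is a measure-preserving bijection of \(\mathrm{M}_{n}(\mathbb{Z}_{p})_{\bar{X}}\) because right multiplication by the fixed matrix \((I_{n}-pY_{1})^{-1}\in\mathrm{GL}_{n}(\mathbb{Z}_{p})\) preserves the Haar measure and \(X(I_{n}-pY_{1})^{-1}\equiv X\pmod{p}\).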
When \(d\geq 3\), as observed by the first author, Liang, and Strand in [12, Remark 3.8], a simple choice of \(\Phi\) is no longer available. Nevertheless, we show that such \(\Phi\) exists through an algorithmic approach. For clarity, we state our claim as a lemma, which slightly cleans up the hypotheses in Theorem 2.4 and (4.1).
**Lemma 4.1**.: Let \(P(t)\in\mathbb{Z}_{p}[t]\) be monic of degree \(d\geq 2\) and \(pY_{2},\ldots,pY_{d-1}\in p\mathrm{M}_{n}(\mathbb{Z}_{p})\). Let \(R=\mathbb{Z}_{p}[t]/(P(t))\). Then there exists a Haar measure-preserving bijection \(\Phi:\mathrm{M}_{n}(\mathbb{Z}_{p})\to\mathrm{M}_{n}(\mathbb{Z}_{p})\) such that whenever \(X^{\prime}=\Phi(X)\), we have \(X\equiv X^{\prime}\,(\mathrm{mod}\ p)\) and
\[\Big{(}X+\overline{t}I_{n}+\overline{t}^{2}pY_{2}+\cdots+\overline{t}^{d-1}pY_ {d-1}\Big{)}U=X^{\prime}+\overline{t}I_{n} \tag{4.2}\]
for some \(U\in\mathrm{GL}_{n}(R)\) potentially depending on \(X\).
Proof that Lemma 4.1 implies Theorem 2.4.: We assume Lemma 4.1 and then establish (4.1). Given the hypotheses of Theorem 2.4, we note
\[X+\bar{t}(pY_{1}-I_{n})+\bar{t}^{2}pY_{2}+\cdots+\bar{t}^{d-1}pY_ {d-1}\] \[=(X(pY_{1}-I_{n})^{-1}+\bar{t}I_{n}+\overline{t}^{2}pY_{2}(pY_{1}- I_{n})^{-1}+\cdots+\bar{t}^{d-1}pY_{d-1}(pY_{1}-I_{n})^{-1})(pY_{1}-I_{n}).\]
Applying Lemma 4.1 by replacing \(X\) with \(X(pY_{1}-I_{n})^{-1}\) and \(X^{\prime}\) with \(-X^{\prime}\), which makes sense because \(X(pY_{1}-I_{n})^{-1}\equiv-X^{\prime}\,(\mathrm{mod}\ p)\), we may find some \(V\in\mathrm{GL}_{n}(R)\) such that
\[(X(pY_{1}-I_{n})^{-1}+\bar{t}I_{n}+\bar{t}^{2}pY_{2}(pY_{1}-I_{n})^{-1}+\cdots+ \bar{t}^{d-1}pY_{d-1}(pY_{1}-I_{n})^{-1})V=-X^{\prime}+\bar{t}I_{n}.\]
Then taking \(U=-(pY_{1}-I_{n})^{-1}V\), we obtain (4.1).
Thus, to prove Theorem 1.7, it remains to prove Lemma 4.1. Before we start the proof of Lemma 4.1, we give the simplest nontrivial example to illustrate the idea and its apparent difficulties.
**Example 4.2**.: Let \(d=3\) and suppose we are given \(f=X+\overline{t}I_{n}+\overline{t}^{2}pY_{2}\), where \(X\in\mathrm{M}_{n}(\mathbb{Z}_{p})\) and \(pY_{2}\in p\mathrm{M}_{n}(\mathbb{Z}_{p})\). We say \(g\in\mathrm{M}_{n}(R)\) is **equivalent** to \(f\) if \(g=fU\) for some \(U\in\mathrm{GL}_{n}(R)\). We wish to find an element without \(\overline{t}^{2}\) or higher terms that is equivalent to \(f\). An obvious attempt is to keep updating \(f\) by an equivalent element, each step getting rid of some higher terms of \(f\), and see if this process eventually terminates. For example, an initial candidate could be
\[f(I_{n}-\overline{t}pY_{2})=X+\overline{t}(I_{n}-XpY_{2})-\overline{t}^{3}p^{ 2}Y_{2}^{2}.\]
Correcting the linear coefficient, we get
\[f(I_{n}-\overline{t}pY_{2})(I_{n}-XpY_{2})^{-1}=X(I_{n}-XpY_{2})^{-1}+\overline {t}I_{n}-\overline{t}^{3}p^{2}Y_{2}^{2}(I_{n}-XpY_{2})^{-1}.\]
We are making progress since the coefficient of \(\overline{t}^{3}\) is a multiple of \(p^{2}\), so the higher terms are more divisible by \(p\) than before. However, if we repeat this process again, we get
\[f(I_{n}-\overline{t}pY_{2})(I_{n}-XpY_{2})^{-1}(I_{n}+\overline{t }^{2}p^{2}Y_{2}^{2}(I_{n}-XpY_{2})^{-1})\] \[\quad=X(I_{n}-XpY_{2})^{-1}+\overline{t}I_{n}+\overline{t}^{2}X( I_{n}-XpY_{2})^{-1}p^{2}Y_{2}^{2}(I_{n}-XpY_{2})^{-1}-\overline{t}^{5}p^{2}Y_{2}^{2 }(I_{n}-XpY_{2})^{-1}p^{2}Y_{2}^{2}(I_{n}-XpY_{2})^{-1}.\]
Here, the higher terms (i.e., \(\bar{t}^{2}\) or higher) are still only known to be divisible by \(p^{2}\). The reader is encouraged to repeat the process again, and find that the higher terms are divisible by \(p^{3}\) after the process.
In fact, the process in Example 4.2 turns out to "converge," although it is not obvious how to prove this. When \(d>3\), the situation is even more convoluted. Our goal is to systematically describe an algorithm that establishes such a convergence. Furthermore, the construction of \(\Phi(X)\) is extremely complicated, which makes it almost impossible to show directly that \(\Phi\) is a bijection. In the next section, we deal with this complication by mimicking a common technique in commutative algebra, the _Weierstrass preparation theorem_, for our noncommutative ring \(\mathrm{M}_{n}(\mathbb{Z}_{p})\).
## 5. A noncommutative Weierstrass preparation theorem and proof of Lemma 4.1
### A noncommutative Weierstrass preparation theorem
In commutative algebra, the **Weierstrass preparation theorem** states that given a complete local ring \((A,\mathfrak{m})\), if \(f(t)=a_{0}+a_{1}t+a_{2}t^{2}+\cdots\in A[\![t]\!]\) with not all \(a_{i}\) in \(\mathfrak{m}\), then there are a unique unit \(u(t)\in A[\![t]\!]\) and a polynomial \(F(t)=t^{s}+b_{s-1}t^{s-1}+\cdots+b_{1}t+b_{0}\in A[t]\) with \(b_{i}\in\mathfrak{m}\) such that \(f(t)=u(t)F(t)\).
For our purpose, the ring is \(A:=\mathrm{M}_{n}(\mathbb{Z}_{p})\), which is noncommutative for any \(n\geq 2\). We fix \(n\in\mathbb{Z}_{\geq 1}\) throughout this section.
**Properties 5.1**.: We note that \(A=\mathrm{M}_{n}(\mathbb{Z}_{p})\) satisfies the following properties:
1. \(\bigcap_{N=1}^{\infty}p^{N}A=0\).
2. If \((a_{n})_{n\in\mathbb{Z}_{\geq 0}}\) is a sequence in \(A\) such that for any \(N\in\mathbb{Z}_{\geq 0}\), the sequence \((a_{n}\bmod p^{N})\) eventually stabilizes, then the sequence \((a_{n})_{n\in\mathbb{Z}_{\geq 0}}\) converges in \(A\). (That is, there exists \(a\in A\) such that for any \(N\in\mathbb{Z}_{\geq 0}\), there exists \(m\in\mathbb{Z}_{\geq 0}\) such that \(a_{n}\equiv a\,(\bmod\,p^{N})\) whenever \(n\geq m\).)
Our theorem will take place in the ring \(\widehat{A[t]}\) defined below.

**Definition 5.2**.: Let \(A[t]\) and \(A[\![t]\!]\) be the polynomial ring and the power series ring over \(A\) generated by a variable \(t\) that commutes with \(A\). Define \(\widehat{A[t]}\) to be the subring of \(A[\![t]\!]\) given by

\[\widehat{A[t]}:=\Bigg{\{}\sum_{l=0}^{\infty}C_{l}t^{l}:C_{l}\in A\text{ and }\lim_{l\to\infty}C_{l}=0\Bigg{\}}. \tag{5.1}\]

For \(A[t]\) and \(A[\![t]\!]\) we use the product topology induced from \(A\). Then \(\widehat{A[t]}\subset A[\![t]\!]\) gets the subspace topology.

**Lemma 5.3**.: With respect to the \(p\)-adic topology, the ring \(\widehat{A[t]}\) is complete.

Proof.: Let \((F_{j}(t))_{j\in\mathbb{Z}_{\geq 0}}\) be a Cauchy sequence in \(\widehat{A[t]}\). Write
\[F_{j}(t)=C_{j0}+C_{j1}t+C_{j2}t^{2}+\cdots.\]
Since \((F_{j}(t))_{j\in\mathbb{Z}_{\geq 0}}\) is Cauchy in \(\widehat{A[t]}\), for every \(l\in\mathbb{Z}_{\geq 0}\), the sequence \((C_{jl})_{j\in\mathbb{Z}_{\geq 0}}\) is Cauchy in \(A=\mathrm{M}_{n}(\mathbb{Z}_{p})\), which is complete with respect to its \(p\)-adic topology. Thus, we may define \(C_{l}:=\lim_{j\to\infty}C_{jl}\) in \(A\) for each \(l\in\mathbb{Z}_{\geq 0}\) and \(F(t):=C_{0}+C_{1}t+C_{2}t^{2}+\cdots\in A[\![t]\!]\). Given any \(k\in\mathbb{Z}_{\geq 0}\), since \((F_{j}(t))\) is Cauchy, there exists some \(m_{k}\in\mathbb{Z}_{\geq 0}\) such that \(C_{jl}-C_{j^{\prime}l}\in p^{k}A\) for all \(l\) whenever \(j,j^{\prime}>m_{k}\); letting \(j^{\prime}\to\infty\) gives \(C_{jl}-C_{l}\in p^{k}A\) for all \(l\) and all \(j>m_{k}\). Fixing any such \(j\), since \(\lim_{l\to\infty}C_{jl}=0\), there exists some \(n_{k}\in\mathbb{Z}_{\geq 0}\) such that \(C_{jl}\in p^{k}A\) whenever \(l>n_{k}\), and hence \(C_{l}\in p^{k}A\) for all \(l>n_{k}\). This implies that \(\lim_{l\to\infty}C_{l}=0\), so \(F(t)\in\widehat{A[t]}\). By the definition of the product topology on \(A[\![t]\!]\), it follows that \(\lim_{j\to\infty}F_{j}(t)=F(t)\) in \(A[\![t]\!]\). Hence, the convergence also happens in \(\widehat{A[t]}\). This finishes the proof.
**Example 5.4**.: We have \((I_{n}-pI_{n}t)^{-1}=I_{n}+pI_{n}t+p^{2}I_{n}t^{2}+\dots\), which is an element of \(\widehat{A[t]}\), while \((I_{n}-I_{n}t)^{-1}=I_{n}+I_{n}t+I_{n}t^{2}+\dots\) is not.
We are ready to state a main theorem of this section.
**Theorem 5.5** (Noncommutative Weierstrass preparation theorem).: Fix any \(M(t),N(t)\in\widehat{A[t]}\). For any \(X\in A\), there exist a unique \(U(t)\in\widehat{A[t]}\) and a unique \(X^{\prime}\in A\) such that
\[(X+I_{n}t+pI_{n}t^{2}M(t))\,U(t)=X^{\prime}+I_{n}t+pI_{n}t^{2}N(t). \tag{5.2}\]
Moreover, we have \(U(t)\in I_{n}+p\widehat{A[t]}\) and \(X^{\prime}\equiv X\,(\bmod\,p)\).
**Remark 5.6**.: Theorem 5.5 can be generalized to a more general class of noncommutative rings, but we choose not to do so in this paper for clarity. We also remark that any element in \(I_{n}+p\widehat{A[t]}\) has a multiplicative inverse in \(\widehat{A[t]}\), which can be seen by applying Lemma 5.3.
We shall also need the version of the above theorem with \(A_{k}:=\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})\) for arbitrary \(k\in\mathbb{Z}_{\geq 1}\) instead of \(A\). We similarly define
\[\widehat{A_{k}[t]}:=\Bigg{\{}\sum_{l=0}^{\infty}C_{l}t^{l}:C_{l}\in A_{k}\text{ and }\lim_{l\to\infty}C_{l}=0\Bigg{\}},\]

but we use the discrete topology on \(A_{k}\), so having \(\lim_{l\to\infty}C_{l}=0\) means that \(C_{l}=0\) for all large enough \(l\). This implies that \(\widehat{A_{k}[t]}=A_{k}[t]\).
**Theorem 5.7** (Finite noncommutative Weierstrass preparation theorem).: Fix any \(M(t),N(t)\in A_{k}[t]\) for a given \(k\in\mathbb{Z}_{\geqslant 1}\). For any \(X\in A_{k}\), there exist a unique \(U(t)\in A_{k}[t]\) and a unique \(X^{\prime}\in A_{k}\) such that
\[\left(X+I_{n}t+pI_{n}t^{2}M(t)\right)U(t)=X^{\prime}+I_{n}t+pI_{n}t^{2}N(t). \tag{5.3}\]
Moreover, we have \(U(t)\in I_{n}+pA_{k}[t]\) and \(X^{\prime}\equiv X\,(\mathrm{mod}\ p)\).
### Proof that Theorems 5.5 and 5.7 imply Lemma 4.1
Here we prove Lemma 4.1 assuming Theorems 5.5 and 5.7. Recall that, after this, the proof of Theorem 1.7 would be complete once we prove Theorems 5.5 and 5.7.
Proof that Theorems 5.5 and 5.7 imply Lemma 4.1.: Recall \(R:=\mathbb{Z}_{p}[t]/(P(t))\). We first note that we can identify \(A[t]=\mathrm{M}_{n}(\mathbb{Z}_{p}[t])\). Consider the surjective map given by reduction modulo \((P(t))\):
\[A[t]=\mathrm{M}_{n}(\mathbb{Z}_{p}[t])\twoheadrightarrow\mathrm{M}_{n}(R).\]
Explicitly, the map is given by
\[C_{0}+C_{1}t+C_{2}t^{2}+\cdots+C_{m}t^{m}\mapsto C_{0}+C_{1}\bar{t}+C_{2}\bar{ t}^{2}+\cdots+C_{m}\bar{t}^{m},\]
where \(\bar{t}\) is the image of \(t\) under the projection \(\mathbb{Z}_{p}[t]\twoheadrightarrow\mathbb{Z}_{p}[t]/(P(t))\). Now, consider any

\[F(t)=C_{0}+C_{1}t+C_{2}t^{2}+\cdots\in\widehat{A[t]}.\]
Using the fact that \(\lim_{l\to\infty}C_{l}=0\) in \(A\) with the \(p\)-adic topology, given any \(k\in\mathbb{Z}_{\geqslant 1}\), there exists a minimal \(m_{F,k}\in\mathbb{Z}_{\geqslant 1}\) such that if \(l>m_{F,k}\), then \(C_{l}\in p^{k}A\). This lets us define a map \(\widehat{A[t]}\to\mathrm{M}_{n}((\mathbb{Z}/p^{k}\mathbb{Z})[t])\) given by
\[F(t)=\sum_{l=0}^{\infty}C_{l}t^{l}\mapsto\sum_{l=0}^{m_{F,k}}\overline{C}_{l} t^{l},\]
where \(\overline{C}_{l}\) is \(C_{l}\) modulo \(p^{k}\). Hence, we get a map \(\widehat{A[t]}\to\mathrm{M}_{n}((\mathbb{Z}/p^{k}\mathbb{Z})[t]/(P(t)))\) given by
\[F(t)=\sum_{l=0}^{\infty}C_{l}t^{l}\mapsto\sum_{l=0}^{m_{F,k}}\overline{C}_{l}\bar{t}^{l}.\]
Since \(p^{k}A\supset p^{k+1}A\supset p^{k+2}A\supset\cdots\), we have \(m_{F,k}\leqslant m_{F,k+1}\leqslant m_{F,k+2}\leqslant\cdots\), so these maps are compatible with the projection maps \(\mathrm{M}_{n}((\mathbb{Z}/p^{k+1}\mathbb{Z})[t]/(P(t)))\to\mathrm{M}_{n}((\mathbb{Z}/p^{k}\mathbb{Z})[t]/(P(t)))\) for all \(k\geqslant 1\). Hence, they induce a map \(\widehat{A[t]}\to\mathrm{M}_{n}(\mathbb{Z}_{p}[t]/(P(t)))=\mathrm{M}_{n}(R)\), which sends \(\sum_{l=0}^{\infty}C_{l}t^{l}\in\widehat{A[t]}\) to \(\sum_{l=0}^{\infty}C_{l}\bar{t}^{l}\in\mathrm{M}_{n}(R)\). This map is surjective because the map \(A[t]\twoheadrightarrow\mathrm{M}_{n}(R)\) we described above is surjective.
Let \(M(t)\in\widehat{A[t]}\) be any lift of \(Y_{2}+\bar{t}Y_{3}+\cdots+\bar{t}^{d-3}Y_{d-1}\in\mathrm{M}_{n}(R)\) and fix \(M(t)\) from now on. Then for any \(X\in\mathrm{M}_{n}(\mathbb{Z}_{p})\), by Theorem 5.5 with \(N(t)=0\), there exists a unique \(U(t)\in I_{n}+p\widehat{A[t]}\) and \(X^{\prime}\in\mathrm{M}_{n}(\mathbb{Z}_{p})\) such that
\[\left(X+I_{n}t+pt^{2}M(t)\right)U(t)=X^{\prime}+I_{n}t\in\widehat{A[t]}. \tag{5.4}\]
Define the map \(\Phi:\mathrm{M}_{n}(\mathbb{Z}_{p})\to\mathrm{M}_{n}(\mathbb{Z}_{p})\) by \(\Phi(X):=X^{\prime}=\left(X+I_{n}t+pt^{2}M(t)\right)U(t)-I_{n}t\). Theorem 5.5 implies \(X\equiv X^{\prime}\,(\mathrm{mod}\ p)\). We claim \(\Phi\) is the desired bijection.
First, we show \(\Phi\) is a bijection by constructing an inverse. By switching the role of \(M(t)\) and \(N(t)\) in Theorem 5.5, for any \(X^{\prime}\in\mathrm{M}_{n}(\mathbb{Z}_{p})\), there exists a unique \(V(t)\in I_{n}+p\widehat{A[t]}\) and \(X^{\prime\prime}\in\mathrm{M}_{n}(\mathbb{Z}_{p})\) such that
\[\left(X^{\prime}+I_{n}t\right)V(t)=X^{\prime\prime}+I_{n}t+pt^{2}M(t)\in \widehat{A[t]}.\]
Define the map \(\Psi:\mathrm{M}_{n}(\mathbb{Z}_{p})\to\mathrm{M}_{n}(\mathbb{Z}_{p})\) by \(\Psi(X^{\prime}):=X^{\prime\prime}=\left(X^{\prime}+I_{n}t\right)V(t)-I_{n}t- pt^{2}M(t)\). By the uniqueness statement in Theorem 5.5, it follows that \(\Psi\) is the inverse of \(\Phi\).
Next, we note that (4.2) holds for some \(U\in\mathrm{GL}_{n}(R)\) in place of \(U(t)\). This is immediate by letting \(U\) be the image of \(U(t)\) under \(\widehat{A[t]}\twoheadrightarrow\mathrm{M}_{n}(R)\) and applying this surjection to (5.4); note that \(U\in I_{n}+p\mathrm{M}_{n}(R)\subset\mathrm{GL}_{n}(R)\) because \(U(t)\in I_{n}+p\widehat{A[t]}\).
Finally, we prove that \(\Phi\) is Haar measure-preserving. It suffices to prove that for \(k\geqslant 1\), the bijection \(\Phi\) is compatible with the mod-\(p^{k}\) reduction map. More precisely, we claim that if \(X_{1},X_{2}\in\mathrm{M}_{n}(\mathbb{Z}_{p})\) satisfy
\(X_{1}\equiv X_{2}\,(\text{mod }p^{k})\), then \(\Phi(X_{1})\equiv\Phi(X_{2})\,(\text{mod }p^{k})\). To prove the claim, write \(X_{i}^{\prime}=\Phi(X_{i})\) and \(\bar{X}=(X_{1}\ \text{mod }p^{k})\,=\,(X_{2}\ \text{mod }p^{k})\). Theorem 5.7 with \(N(t)=0\) implies that there exist unique \(\bar{X}^{\prime}\in A_{k}\) and \(U(t)\in I_{n}+pA_{k}[t]\) such that
\[(\bar{X}+I_{n}t+pt^{2}M(t))\,U(t)\equiv\bar{X}^{\prime}+I_{n}t\ \ (\text{mod }p^{k}).\]
If we replace \(\bar{X}^{\prime}\) in the above identity with \((X_{1}^{\prime}\ \text{mod }p^{k})\) and \((X_{2}^{\prime}\ \text{mod }p^{k})\), the new identity still holds by Theorem 5.5. Hence, it follows from the uniqueness statement of Theorem 5.7 that \(X_{1}^{\prime}\equiv X_{2}^{\prime}\,(\text{mod }p^{k})\).
For the rest of the section, we prove Theorems 5.5 and 5.7, which would finish the proof of Theorem 1.7. We start with some elementary observations.
### Elementary observations
The following observation is simple but crucial in the proofs of Theorems 5.5 and 5.7. Recall the notation \(A=\operatorname{M}_{n}(\mathbb{Z}_{p})\) and \(A_{k}=\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})\).
**Lemma 5.8**.: For any \(k\in\mathbb{Z}_{\geqslant 1}\), we can identify
\[\frac{\widehat{A[t]}}{p^{k}\widehat{A[t]}}=(A/p^{k}A)[t]=A_{k}[t].\]
In other words, every element in \(\widehat{A[t]}\) is a polynomial modulo \(p^{k}\).
Proof.: This is simply because for any element \(\sum_{l=0}^{\infty}C_{l}t^{l}\in\widehat{A[t]}\), we must have \(\lim_{l\to\infty}C_{l}=0\) with respect to the \(p\)-adic topology, so only finitely many \(C_{l}\) are nonzero mod \(p^{k}\).
### Uniqueness for Theorems 5.5 and 5.7
We now prove the uniqueness parts of Theorems 5.5 and 5.7:
Proofs of the uniqueness statements in Theorems 5.5 and 5.7.: We first prove the uniqueness statement in Theorem 5.5. Say
\[(X+I_{n}t+pt^{2}M(t))U_{1}(t) =X_{1}^{\prime}+I_{n}t+pt^{2}N(t)\ \text{and}\] \[(X+I_{n}t+pt^{2}M(t))U_{2}(t) =X_{2}^{\prime}+I_{n}t+pt^{2}N(t)\]
are two expressions with \(U_{1}(t),U_{2}(t)\in\widehat{A[t]}\) and \(X_{1}^{\prime},X_{2}^{\prime}\in A\). Then
\[(X+I_{n}t+pt^{2}M(t))f(t)=Y,\]
where \(f(t):=U_{1}(t)-U_{2}(t)\in\widehat{A[t]}\) and \(Y:=X_{1}^{\prime}-X_{2}^{\prime}\in A\).
We need to show that \(f(t)=0\). To do so, it suffices to show \(f(t)\equiv 0\,(\text{mod }p^{k})\) for every \(k\in\mathbb{Z}_{\geqslant 0}\). We proceed by induction on \(k\). The base case \(k=0\) is vacuously true, and we assume \(f(t)\equiv 0\ (\text{mod }p^{k})\) for arbitrary \(k\in\mathbb{Z}_{\geqslant 0}\). Reducing modulo \(p^{k+1}\), we have
\[Y=(X+I_{n}t+pt^{2}M(t))f(t)\equiv(X+I_{n}t)f(t)\ \ (\text{mod }p^{k+1}).\]
For contradiction, suppose \(f(t)\not\equiv 0\ (\text{mod }p^{k+1})\). By Lemma 5.8, the above identity can be considered in the _polynomial ring_ \((A/p^{k+1}A)[t]=A_{k+1}[t]\). In particular, \(\overline{f}(t):=f(t)\ \text{mod }p^{k+1}\) has a highest degree term because it is nonzero by assumption. Since the highest degree coefficient of \(X+I_{n}t\) is \(I_{n}=1_{A}\), which is not a zero divisor in \(A_{k+1}\), the product \((X+I_{n}t)\overline{f}(t)\) cannot be a constant modulo \(p^{k+1}\). This contradicts \((X+I_{n}t)f(t)\equiv Y\,(\text{mod }p^{k+1})\), which completes the proof of the uniqueness statement of Theorem 5.5.
The proof of the uniqueness statement of Theorem 5.7 is almost identical, so we omit it.
### Proofs of final assertions in Theorems 5.5 and 5.7
Here, we prove that in either the setting of Theorem 5.5 or that of Theorem 5.7, if \(U(t)\) and \(X^{\prime}\) in the statement exist, then they must satisfy \(U(t)\equiv I_{n}\)\((\text{mod }p)\) and \(X^{\prime}\equiv X\,(\text{mod }p)\).
Proofs of final assertions in Theorems 5.5 and 5.7.: We first prove the final assertion of Theorem 5.5. Reducing (5.2) modulo \(p\) and using Lemma 5.8, we have
\[(\bar{X}+I_{n}t)\bar{U}(t)=\bar{X}^{\prime}+I_{n}t\in(A/pA)[t],\]
where \(\bar{X}\) denotes the reduction of \(X\) modulo \(p\) and similarly for \(\bar{X}^{\prime}\) and \(\bar{U}(t)\). By comparing the highest degree terms of both sides, the only possibility for the above identity to hold in \((A/pA)[t]\) is when \(\bar{U}(t)=I_{n}\). It then follows that \(\bar{X}^{\prime}=\bar{X}\).
The proof of the final assertion in Theorem 5.7 is identical, so we omit it.
### Proof of existence statements in Theorems 5.5 and 5.7
Here, we prove the existence statements in Theorems 5.5 and 5.7. As is suggested by Example 4.2, our approach to constructing \(U(t)\) and \(X^{\prime}\) is to perform a recursive algorithm and take the limit of the process. To be more systematic than the computations given in Example 4.2, we utilize the following division algorithm by the series \(g(t):=X+I_{n}t+pt^{2}M(t)\).
**Lemma 5.9**.: Fix \(M(t)\in\widehat{A[t]}\). Define \(g(t):=X+I_{n}t+pt^{2}M(t)\) and let \(f(t)\) be any element of \(\widehat{A[t]}\). Then there exist \(q(t)\in\widehat{A[t]}\) and \(r\in A\) such that
\[f(t)=g(t)q(t)+r.\]
Before proving Lemma 5.9, we show why it would resolve Theorems 5.5 and 5.7, which would finish the proof of Theorem 1.7.
Proof of the existence statement of Theorems 5.5 and 5.7 assuming Lemma 5.9.: Construct \(q(t)\) and \(r\) using Lemma 5.9 with \(f(t):=I_{n}t+pt^{2}N(t)\). Letting \(U(t)=q(t)\) and \(X^{\prime}=-r\), this proves Theorem 5.5. For Theorem 5.7, we reduce the statement of Lemma 5.9 modulo \(p^{k}\) and then repeat the proof.
Hence, it remains to show Lemma 5.9 to prove Theorem 1.7. Given \(f(t)\) and \(g(t)\) as in Lemma 5.9, we describe an algorithm to construct sequences \((q_{j}(t))_{j\geqslant 1}\) and \((r_{j}(t))_{j\geqslant 1}\), and prove that they converge to the desired elements \(q(t)\) and \(r\), respectively. More precisely, we prove the following lemma, which is stronger than Lemma 5.9.
**Lemma 5.10**.: Assume the hypotheses of Lemma 5.9. Define \(q_{1}(t):=0\) and \(r_{1}(t):=f(t)\) and recursively construct \(q_{j}(t)\) and \(r_{j}(t)\) for \(j\geqslant 1\) by
\[\left\{\begin{array}{l}q_{j+1}(t)=q_{j}(t)+\frac{s_{j}(t)}{t},\\ r_{j+1}(t)=r_{j}(t)-g(t)\frac{s_{j}(t)}{t},\end{array}\right. \tag{5.5}\]
where \(s_{j}(t):=r_{j}(t)-r_{j}(0)\), which is the sum of all nonconstant terms of \(r_{j}(t)\). Then both \((q_{j}(t))_{j\in\mathbb{Z}_{\geqslant 1}}\) and \((r_{j}(t))_{j\in\mathbb{Z}_{\geqslant 1}}\) converge \(p\)-adically in \(\widehat{A[t]}\). Moreover, if \(q(t):=\lim_{j\to\infty}q_{j}(t)\) and \(r(t):=\lim_{j\to\infty}r_{j}(t)\), then \(r(t)=r\in A\) and \(f(t)=g(t)q(t)+r\).
Proof.: We note by the recursive construction (5.5) that we always have
\[f(t)=g(t)q_{j}(t)+r_{j}(t) \tag{5.6}\]
for all \(j\in\mathbb{Z}_{\geqslant 1}\). To prove the convergence of the sequences \((q_{j}(t))_{j\in\mathbb{Z}_{\geqslant 1}}\) and \((r_{j}(t))_{j\in\mathbb{Z}_{\geqslant 1}}\) in \(\widehat{A[t]}\), we work modulo \(p^{k}\) for any given \(k\geqslant 1\). We again use the notation \(A_{k}=A/p^{k}A\) and note that \(\widehat{A[t]}/p^{k}\widehat{A[t]}=A_{k}[t]\) by Lemma 5.8. We denote by \(\overline{q_{j}(t)}\) the image of \(q_{j}(t)\) in \(A_{k}[t]\), and similarly for \(\overline{r_{j}(t)}\) and \(\overline{s_{j}(t)}\). We claim that for any \(k\in\mathbb{Z}_{\geqslant 1}\), we have
\[\overline{s_{j}(t)}=0\in A_{k}[t]\text{ for large enough }j\geqslant 1. \tag{5.7}\]
Before we prove (5.7), we note that proving this claim suffices to prove the desired result. Indeed, if \(\overline{s_{j}(t)}\) is eventually zero, then \(\overline{q_{j}(t)}\) and \(\overline{r_{j}(t)}\) eventually stabilize by (5.5). Since this is true for arbitrary \(k\geqslant 1\), both \((q_{j}(t))_{j\in\mathbb{Z}_{\geqslant 1}}\) and \((r_{j}(t))_{j\in\mathbb{Z}_{\geqslant 1}}\) converge in \(\widehat{A[t]}\) because \(\widehat{A[t]}\) is \(p\)-adically complete by Lemma 5.3. We denote their limits by \(q(t)\) and \(r(t)\), and we have \(f(t)=g(t)q(t)+r(t)\) by taking the \(p\)-adic limit of (5.6) as \(j\to\infty\). Furthermore, it follows from the definition of \(s_{j}(t)\) that \(\overline{r_{j}(t)}=\overline{r_{j}(0)}\in A_{k}[t]\) for all large enough \(j\geqslant 1\) given arbitrary \(k\), so we must have \(\lim_{j\to\infty}(r_{j}(t)-r_{j}(0))=0\) in \(\widehat{A[t]}\), which implies that
\[\lim_{j\to\infty}r_{j}(0)=\lim_{j\to\infty}(r_{j}(t)-(r_{j}(t)-r_{j}(0)))=r(t)\]
in \(\widehat{A[t]}\). This implies that \(r(t)\in A\).
We now prove (5.7). As we work in \(A_{k}[t]\), we write \(M(t),f(t),g(t),q_{j}(t),r_{j}(t),s_{j}(t)\) to mean their reductions modulo \(p^{k}\). Let \(D\geqslant 1\) be the degree of \(g(t)=X+I_{n}t+pt^{2}M(t)\) as a _polynomial_ in \(A_{k}[t]\). Fix a real number \(\epsilon\) such that \(0<\epsilon\leqslant 1/D\). For a monomial \(at^{b}\) in \(A_{k}[t]\) with nonzero \(a\in A_{k}\) and \(b\geqslant 0\), we define
\[\delta(at^{b}):=v_{k}(a)-\epsilon b\in\mathbb{R},\]
where \(v_{k}(a):=\max\{m\in\mathbb{Z}_{\geq 0}:a\in p^{m}A_{k}\}\). Since \(a\neq 0\), we have \(v_{k}(a)\in\{0,1,\ldots,k-1\}\). For example, we have \(\delta(I_{n}t)=-\epsilon\) and \(\delta(a)=v_{k}(a)\geq 0\) for any nonzero \(a\in A_{k}\). We also define \(\delta(0):=\infty\). More generally, for any polynomial \(f(t)\in A_{k}[t]\), we define \(\delta(f)\) to be the minimal \(\delta\)-valuation of terms of \(f(t)\). Note that \(\delta(f)=\infty\) if and only if \(f(t)=0\) in \(A_{k}[t]\). Thus, our goal is to show that \(\delta(s_{j}(t))=\infty\) for large enough \(j\geq 1\).
Since \(pt^{2}M(t)=g(t)-X-I_{n}t\), we see that \(t^{2}M(t)\) has degree at most \(D\). Since there is no constant term for \(pt^{2}M(t)\), we have
\[\delta(pt^{2}M(t))\geq 1-\epsilon D\geq 0.\]
We claim \(\lim_{j\to\infty}\delta(s_{j}(t))=\infty\). This is the crux of the entire proof. Expand (5.5) to get
\[r_{j+1}(t)=r_{j}(t)-s_{j}(t)-X\frac{s_{j}(t)}{t}-pI_{n}tM(t)s_{j}(t) \tag{5.8}\]
and inspect the \(\delta\)-valuations of its terms. If \(at^{b}\) denotes a typical term of \(s_{j}(t)\) with nonzero \(a\in A_{k}\) and \(b\geq 1\), then the corresponding term of \(s_{j}(t)/t\) is \(at^{b-1}\). If \(Xa=0\), then \(\delta(Xat^{b-1})=\infty\). Otherwise, we have
\[\delta(Xat^{b-1})=v_{k}(Xa)-\epsilon(b-1)=v_{k}(Xa)-\epsilon b+\epsilon\geq v _{k}(a)-\epsilon b+\epsilon=\delta(at^{b})+\epsilon,\]
so we always have
\[\delta\left(X\frac{s_{j}(t)}{t}\right)\geq\delta(s_{j}(t))+\epsilon.\]
We note \(\delta(f_{1}(t)f_{2}(t))\geq\delta(f_{1}(t))+\delta(f_{2}(t))\) for any \(f_{1}(t),f_{2}(t)\in A_{k}[t]\) from the definition of \(\delta\). Since \(tM(t)\) has degree at most \(D-1\) and has no constant term, we have
\[\delta(pI_{n}tM(t)s_{j}(t))\geq\delta(pI_{n}tM(t))+\delta(s_{j}(t))\geq 1-(D-1 )\epsilon+\delta(s_{j}(t))\geq\delta(s_{j}(t))+\epsilon\]
by our assumption that \(\epsilon\leq 1/D\).
Since \(r_{j}(t)-s_{j}(t)=r_{j}(0)\) has only a constant term, every possible nonconstant term of \(r_{j+1}(t)\) in (5.8) must come from \(Xs_{j}(t)/t\) or \(pI_{n}tM(t)s_{j}(t)\). Since
\[s_{j+1}(t)=r_{j+1}(t)-r_{j+1}(0)=-r_{j+1}(0)+r_{j}(0)-X\frac{s_{j}(t)}{t}-pI_ {n}tM(t)s_{j}(t),\]
using the fact that \(s_{j+1}(t)\) has no constant term, we have
\[\delta(s_{j+1}(t)) \geq\min\left\{\delta\left(X\frac{s_{j}(t)}{t}\right),\delta\left( pI_{n}tM(t)s_{j}(t)\right)\right\}\] \[\geq\delta(s_{j}(t))+\epsilon.\]
In particular, we have \(\lim_{j\to\infty}\delta(s_{j}(t))=\infty\), but the largest possible finite \(\delta\)-value in \(A_{k}[t]\) is \(k-1\): since \(p^{k}A_{k}=0\), the largest possible finite \(v_{k}(a)\) is \(k-1\), so \(\delta(at^{b})=v_{k}(a)-\epsilon b\leq k-1\) for any nonzero monomial \(at^{b}\in A_{k}[t]\). Hence, \(\delta(s_{j}(t))=\infty\) for all large enough \(j\), which implies (5.7).
This completes the proof of Theorem 1.7. For the rest of the paper, we use Theorem 1.7 to prove the remaining parts of Theorem 1.8.
## 6. Reduction of Theorem 1.8 in terms of moments
By choosing any \(k\in\mathbb{Z}_{\geq 1}\) such that \(p^{k-1}G=0\), Theorem 1.8 can be proven by proving the analogous statement we get by replacing \(\mathbb{Z}_{p}\) with \(\mathbb{Z}/p^{k}\mathbb{Z}\). (The details can be found in [10, Lemmas 2.1 and 3.1].) Write \(R:=(\mathbb{Z}/p^{k}\mathbb{Z})[t]/(P(t))\) for the rest of the paper. Fix \(n\in\mathbb{Z}_{\geq 1}\), and we assume that \(A_{n}\in\mathrm{M}_{n}(\mathbb{F}_{p})\) is of the form (1.4):
\[A_{n}=\begin{bmatrix}J&*\\ 0&J^{\prime}\end{bmatrix},\]
where \(J\in\mathrm{M}_{n-r}(\mathbb{F}_{p})\) and \(J^{\prime}\in\mathrm{M}_{r}(\mathbb{F}_{p})\) with \(r=r_{p}(G)\) such that no eigenvalue of \(J\) in \(\overline{\mathbb{F}_{p}}\) is a root of \(\bar{P}(t)\). We fix a finite-size \(\mathbb{F}_{p}[t]/(\bar{P}(t))\)-module \(\mathfrak{r}\) so that \(\mathfrak{r}\simeq_{\mathbb{F}_{p}[t]}G/pG\simeq_{\mathbb{F}_{p}[t]}\mathrm{cok}(\bar{P}(A_{n}))\). We introduce this notation because we may vary \(G\), while the isomorphism class of \(G/pG\) is fixed (as an \(\mathbb{F}_{p}[t]/(\bar{P}(t))\)-module). We shall write
\[\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}:=\{X\in\mathrm{M}_{n}( \mathbb{Z}/p^{k}\mathbb{Z}):X\equiv A_{n}\pmod{p}\}\]
so that
\[\operatorname*{Prob}_{X\in\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})}( \operatorname{cok}(P(X))\simeq_{R}G\mid X\equiv A_{n}\ \ \ (\text{mod}\ p))=\operatorname*{Prob}_{X\in \operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}}(\operatorname{cok}(P(X ))\simeq_{R}G).\]
That is, we consider \(\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}\) as the sample space instead of mentioning conditional probabilities for the statement of Theorem 1.8 (after we replace \(\mathbb{Z}_{p}\) by \(\mathbb{Z}/p^{k}\mathbb{Z}\)). The **Haar measure** on \(\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}\) is defined to be the probability measure induced by the Haar measure of \(\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})\), which is equal to the uniform measure. If \(k=1\), the statement we get from replacing \(\mathbb{Z}_{p}\) with \(\mathbb{Z}/p^{k}\mathbb{Z}\) in Theorem 1.8 is immediate (as \(p^{k-1}G=0\) with \(k=1\) would imply \(G=0\)), so we may assume \(k\geqslant 2\) from now on. Given \(X\in\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})\), its \((i,j)\)-entry \(X_{ij}\) can be written as
\[X_{ij}=X_{i,j,0}+X_{i,j,1}p+X_{i,j,2}p^{2}+\cdots+X_{i,j,k-1}p^{k-1} \tag{6.1}\]
with \(X_{i,j,l}\in\{0,1,2,\ldots,p-1\}\). When \(X\in\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}\), we have \(X_{i,j,0}=A_{ij}^{(n)}\) fixed, where \(A_{ij}^{(n)}\) is the \((i,j)\)-entry of \(A_{n}\). Having \(X\in\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}\) follow the Haar measure is equivalent to having \(X_{i,j,0}=A_{ij}^{(n)}\) and \(X_{i,j,1},X_{i,j,2},\ldots,X_{i,j,k-1}\) uniformly and independently distributed in \(\{0,1,2,\ldots,p-1\}\). We work with the discrete \(\sigma\)-algebra on \(\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}\), and we assume that \(X\in\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})\) has \(n^{2}\) independent entries and that the entries of the bottom-right \(r\times r\) submatrix of \(X\) are uniformly distributed, where \(r=\dim_{\mathbb{F}_{p}}(\mathfrak{r})\).
Denote by \(\operatorname{\mathbf{Mod}}_{A}^{<\infty}\) the set of isomorphism classes of finite size \(A\)-modules for a given commutative ring \(A\). Given \(H\in\operatorname{\mathbf{Mod}}_{R}^{<\infty}\), the \(H\)**-moment** of the distribution \((\operatorname{cok}(P(X)))_{X\in\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})}\) is defined to be
\[\operatorname*{\mathbb{E}}_{X\in\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z })_{A_{n}}}(|\text{Sur}_{R}(\operatorname{cok}(P(X)),H)|),\]
where \(\text{Sur}_{R}(S,T)\) means the set of surjective \(R\)-linear maps from \(S\) to \(T\) given \(S,T\in\operatorname{\mathbf{Mod}}_{R}^{<\infty}\). Sawin and Wood [11, Lemma 6.1] noticed that the category of finite size \(R\)-modules is a **diamond category**, whose definition can be found in [11, Definition 1.3]. The point of working in a diamond category is that the \(H\)-moments of a distribution in such a category determines the distribution, where \(H\) varies in the category, as long as the \(H\)-moments do not "grow too fast" (i.e., the \(H\)-moments are **well-behaved** in the sense of [11, p.4]).
### The Haar moment is independent of \(n\)
By applying Theorem 1.7, when \(\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}\) is given the Haar measure, the \(H\)-moment of the distribution \((\operatorname{cok}(P(X)))_{X\in\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})}\) is
\[\begin{split}&\operatorname*{\mathbb{E}}_{X\in\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}^{\mathrm{Haar}}}(|\mathrm{Sur}_{R}(\mathrm{cok}(P(X)),H)|)\\ &=\sum_{M\in\mathbf{Mod}_{R}^{<\infty}}|\mathrm{Sur}_{R}(M,H)|\operatorname*{Prob}_{X\in\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}^{\mathrm{Haar}}}(\mathrm{cok}(P(X))\simeq_{R}M)\\ &=\sum_{M\in\mathbf{Mod}_{R}^{<\infty}}|\mathrm{Sur}_{R}(M,H)|\operatorname*{Prob}_{Y\in\mathrm{M}_{n}(\mathbb{Z}_{p})_{A_{n}}^{\mathrm{Haar}}}(\mathrm{cok}(P(Y))\otimes_{\mathbb{Z}_{p}}\mathbb{Z}/p^{k}\mathbb{Z}\simeq_{R}M)\\ &=\sum_{M\in\mathbf{Mod}_{R}^{<\infty}}\ \sum_{\begin{subarray}{c}W\in\mathbf{Mod}_{\mathbb{Z}_{p}[t]/(P(t))}^{<\infty}:\\ W\otimes_{\mathbb{Z}_{p}}\mathbb{Z}/p^{k}\mathbb{Z}\simeq_{R}M\end{subarray}}|\mathrm{Sur}_{R}(W\otimes_{\mathbb{Z}_{p}}\mathbb{Z}/p^{k}\mathbb{Z},H)|\operatorname*{Prob}_{Y\in\mathrm{M}_{n}(\mathbb{Z}_{p})_{A_{n}}^{\mathrm{Haar}}}(\mathrm{cok}(P(Y))\simeq W)\\ &=\sum_{\begin{subarray}{c}W\in\mathbf{Mod}_{\mathbb{Z}_{p}[t]/(P(t))}^{<\infty}:\\ W/pW\simeq_{\mathbb{F}_{p}[t]}\mathfrak{r}\end{subarray}}|\mathrm{Sur}_{R}(W\otimes_{\mathbb{Z}_{p}}\mathbb{Z}/p^{k}\mathbb{Z},H)|\operatorname*{Prob}_{Y\in\mathrm{M}_{n}(\mathbb{Z}_{p})_{A_{n}}^{\mathrm{Haar}}}(\mathrm{cok}(P(Y))\simeq W)\\ &=\sum_{\begin{subarray}{c}W\in\mathbf{Mod}_{\mathbb{Z}_{p}[t]/(P(t))}^{<\infty}:\\ W/pW\simeq_{\mathbb{F}_{p}[t]}\mathfrak{r}\end{subarray}}|\mathrm{Sur}_{R}(W\otimes_{\mathbb{Z}_{p}}\mathbb{Z}/p^{k}\mathbb{Z},H)|\frac{|\mathrm{Aut}_{\mathbb{Z}_{p}[t]}(W/pW)|\prod_{j=1}^{l}\prod_{i=1}^{u_{j}(\mathfrak{r})}(1-p^{-id_{j}})}{|\mathrm{Aut}_{\mathbb{Z}_{p}[t]}(W)|}.\end{split}\]
The last sum is a convoluted expression, but we can still observe that it depends only on \(p,k,P(t),\mathfrak{r}\), and \(H\), and not on \(A_{n}\) or \(n\). Since we fix \(p,k,P(t)\), and \(\mathfrak{r}\), this justifies the following notation:
\[M_{H}:=\operatorname*{\mathbb{E}}_{X\in\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}^{\mathrm{Haar}}}(|\mathrm{Sur}_{R}(\mathrm{cok}(P(X)),H)|).\]
### The Haar moment is well-behaved
We have
\[M_{H}=\operatorname*{\mathbb{E}}_{X\in\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}^{\mathrm{Haar}}}(|\mathrm{Sur}_{R}(\mathrm{cok}(P(X)),H)|)=\sum_{\begin{subarray}{c}M\in\mathbf{Mod}_{R}^{<\infty}:\\ M/pM\simeq_{\mathbb{F}_{p}[t]}\mathfrak{r}\end{subarray}}|\mathrm{Sur}_{R}(M,H)|\operatorname*{Prob}_{X\in\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}^{\mathrm{Haar}}}(\mathrm{cok}(P(X))\simeq_{R}M),\]
which is bounded above by
\[\sum_{\begin{subarray}{c}M\in\mathbf{Mod}_{R}^{<\infty}:\\ M/pM\simeq_{\mathbb{F}_{p}[t]}\mathfrak{r}\end{subarray}}|\mathrm{Hom}_{R}(M,H)|\leq C_{\mathfrak{r}}|H|^{N_{\mathfrak{r}}}\]
for some constants \(C_{\mathfrak{r}},N_{\mathfrak{r}}>0\) depending only on \(\mathfrak{r}\). We explain how the last inequality holds. First, note that by Hensel's lemma, we have a factorization
\[P(t)=Q_{1}(t)Q_{2}(t)\cdots Q_{l}(t)\in(\mathbb{Z}/p^{k}\mathbb{Z})[t]\]
such that each \(Q_{j}(t)\) is a monic polynomial whose reduction modulo \(p\) is \(\bar{Q}_{j}(t)=\bar{P}_{j}(t)^{m_{j}}\) in \(\mathbb{F}_{p}[t]\). These \(Q_{1}(t),Q_{2}(t),\ldots,Q_{l}(t)\) are pairwise comaximal in \((\mathbb{Z}/p^{k}\mathbb{Z})[t]\), so we have \(R\simeq R_{1}\times R_{2}\times\cdots\times R_{l}\) as rings with \(R_{j}:=(\mathbb{Z}/p^{k}\mathbb{Z})[t]/(Q_{j}(t))\) by the Chinese Remainder Theorem. If we consider any \(M\) appearing in the last sum, then necessarily \(M\simeq_{R}M_{1}\times M_{2}\times\cdots\times M_{l}\), where each \(M_{j}\) is an \(R_{j}\)-module, and this implies
\[\mathfrak{r}\simeq_{\mathbb{F}_{p}[t]}M/pM\simeq_{\mathbb{F}_{p}[t]}(M_{1}/pM _{1})\times(M_{2}/pM_{2})\times\cdots\times(M_{l}/pM_{l}).\]
Since each \(R_{j}\) is a local ring with the maximal ideal \((p,P_{j}(t))\) where \(P_{j}(t)\in(\mathbb{Z}/p^{k}\mathbb{Z})[t]\) is a lift of \(\bar{P}_{j}(t)\in\mathbb{F}_{p}[t]\), Nakayama's lemma implies that \(M_{j}\) can be generated by \(|\text{Hom}_{\mathbb{F}_{p}[t]}(\mathfrak{r},\mathbb{F}_{p^{d_{j}}})|\) elements. Thus, taking \(N_{\mathfrak{r}}:=\sum_{j=1}^{l}|\text{Hom}_{\mathbb{F}_{p}[t]}(\mathfrak{r}, \mathbb{F}_{p^{d_{j}}})|\) and \(C_{\mathfrak{r}}\) to be the number of \(M\in\mathbf{Mod}_{R}^{<\infty}\) such that \(M/pM\simeq_{\mathbb{F}_{p}[t]}\mathfrak{r}\), we establish the desired inequality.
### Reduction of Theorem 1.8 in terms of moments
By [11, Corollary 6.5], the previous subsection shows that \((M_{H})_{H\in\mathbf{Mod}_{R}^{<\infty}}\) are well-behaved, so we may apply [11, Theorem 1.6] to reduce the problem of showing the rest of Theorem 1.8 (in addition to Theorem 1.7, which we previously established) to the problem of showing that every \(H\)-moment of the distribution \((\text{cok}(P(X)))_{X\in\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}}\) is equal to \(M_{H}\). Thus, applying Lee's linearization trick (2.1), proving Theorem 1.8 reduces to proving the following:
**Theorem 6.1**.: Suppose that each \(\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}\), for \(n\in\mathbb{Z}_{\geqslant 1}\), is given a probability measure such that a random \(X\in\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}\) has \(n^{2}\) independent entries. If \(A_{n}\) is of the form (1.4) and the entries of the bottom-right \(r\times r\) submatrix of \(X\) are uniformly distributed, where \(r=\dim_{\mathbb{F}_{p}}(\mathfrak{r})\), then
\[\underset{X\in\operatorname{\mathbb{M}}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n} }}{\mathbb{E}}(|\text{Sur}_{R}(\text{cok}_{R}(X-\bar{t}I_{n}),H)|)=M_{H}\]
for every \(H\in\mathbf{Mod}_{R}^{<\infty}\).
## 7. Proof of Theorem 6.1
For the rest of the paper, we prove Theorem 6.1. Fix \(H\in\mathbf{Mod}_{R}^{<\infty}\). Denoting by \(\mu_{n}\) the given measure on \(\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}\) and by \(\mathbb{1}(\mathscr{P})\) the characteristic function of a property \(\mathscr{P}\), we have
\[\begin{split}\mathop{\mathbb{E}}_{X\in\mathrm{M}_{n}(\mathbb{Z}/ p^{k}\mathbb{Z})_{A_{n}}}(|\mathrm{Sur}_{R}(\mathrm{cok}_{R}(X-\bar{t}I_{n}),H)|)& =\int_{X\in\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}}| \mathrm{Sur}_{R}(\mathrm{cok}_{R}(X-\bar{t}I_{n}),H)|d\mu_{n}\\ &=\int_{X\in\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}} \mathop{\sum}_{F\in\mathrm{Sur}_{R}(\mathrm{cok}_{R}(X-\bar{t}I_{n}),H)}1d\mu _{n}\\ &=\int_{X\in\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}} \mathop{\sum}_{F\in\mathrm{Sur}_{R}(R^{n},H)}\mathbb{1}(F(X-\bar{t}I_{n})=0)d \mu_{n}\\ &=\mathop{\sum}_{F\in\mathrm{Sur}_{R}(R^{n},H)}\mathop{\mathrm{ Prob}}_{X\in\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}}(F(X-\bar{t}I_{n})=0). \end{split}\]
We first note that for many \(F\in\mathrm{Sur}_{R}(R^{n},H)\), the summand in the last sum is \(0\). We have
\[\begin{split}\mathop{\mathrm{Prob}}_{X\in\mathrm{M}_{n}(\mathbb{Z }/p^{k}\mathbb{Z})_{A_{n}}}(F(X-\bar{t}I_{n})=0)&=\mathop{ \mathrm{Prob}}_{B\in\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})}(F(A_{n}+pB- \bar{t}I_{n})=0)\\ &=\mathop{\mathrm{Prob}}_{B\in\mathrm{M}_{n}(\mathbb{Z}/p^{k} \mathbb{Z})}(pFB=-F(A_{n}-\bar{t}I_{n})),\end{split}\]
where the entries of \(B\in\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})\) are independent and the entries in the bottom-right \(r\times r\) submatrix of \(B\) are uniformly distributed, where \(r=\dim_{\mathbb{F}_{p}}(\mathfrak{r})\). We note that the above probability is \(0\) when the image of \(F(A_{n}-\bar{t}I_{n})\) is not in \(pH\). We shall identify
\[\mathrm{Hom}_{R}(R^{n},pH)=\{\phi\in\mathrm{Hom}_{R}(R^{n},H):\mathrm{im}(\phi )\subset pH\}.\]
**Notation 7.1**.: From now on, we write
* \(\mathrm{Hom}_{R}(R^{n},H)_{A_{n}}:=\{F\in\mathrm{Hom}_{R}(R^{n},H):F(A_{n}-\bar {t}I_{n})\in\mathrm{Hom}_{R}(R^{n},pH)\}\) and
* \(\mathrm{Sur}_{R}(R^{n},H)_{A_{n}}:=\{F\in\mathrm{Sur}_{R}(R^{n},H):F(A_{n}-\bar {t}I_{n})\in\mathrm{Hom}_{R}(R^{n},pH)\}\).
Moreover, we also note that the condition \(F(X-\bar{t}I_{n})=0\) implies that \(F(\bar{t}v)=F(Xv)\in F((\mathbb{Z}/p^{k}\mathbb{Z})^{n})\) for any \(v\in(\mathbb{Z}/p^{k}\mathbb{Z})^{n}\). In particular, for any such \(F\), we have \(F((\mathbb{Z}/p^{k}\mathbb{Z})^{n})=F(R^{n})\).
**Notation 7.2**.: We write
* \(\mathrm{Hom}_{R}(R^{n},H)_{A_{n}}^{\#}:=\{F\in\mathrm{Hom}_{R}(R^{n},H)_{A_{n }}:F((\mathbb{Z}/p^{k}\mathbb{Z})^{n})=F(R^{n})\}\) and
* \(\mathrm{Sur}_{R}(R^{n},H)_{A_{n}}^{\#}:=\{F\in\mathrm{Hom}_{R}(R^{n},H)_{A_{n }}^{\#}:F\text{ is surjective}\}\).
We note that to show Theorem 6.1, it suffices to show
\[\sum_{F\in\mathrm{Sur}_{R}(R^{n},H)_{A_{n}}^{\#}}\Big{(}\operatorname*{Prob}_{X\in\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}}(F(X-\bar{t}I_{n})=0)-\operatorname*{Prob}_{X\in\mathrm{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}^{\mathrm{Haar}}}(F(X-\bar{t}I_{n})=0)\Big{)}=0. \tag{7.1}\]
The following lemma counts \(\#\mathrm{Sur}_{R}(R^{n},H)_{A_{n}}\), which is an upper bound of \(\#\mathrm{Sur}_{R}(R^{n},H)_{A_{n}}^{\#}\).
**Lemma 7.3**.: We have
1. \(\#\mathrm{Hom}_{R}(R^{n},H)_{A_{n}}=\#\mathrm{Hom}_{R}(\mathfrak{r},H/pH)|pH|^ {n}\) and
2. \(\#\mathrm{Sur}_{R}(R^{n},H)_{A_{n}}=\#\mathrm{Sur}_{R}(\mathfrak{r},H/pH)|pH|^ {n}\).
Proof.: Write \(Y:=A_{n}-\bar{t}I_{n}\in\mathrm{M}_{n}(R)\) and denote by \(\bar{Y}\in\mathrm{M}_{n}(R/pR)\) the reduction of \(Y\) modulo \(p\). For any \(F\in\mathrm{Hom}_{R}(R^{n},H)\), denoting by \(\bar{F}\) its reduction modulo \(p\), we see that \(FY\in\mathrm{Hom}_{R}(R^{n},pH)\) if and only if \(\bar{F}\bar{Y}=0\in\mathrm{Hom}_{R/pR}((R/pR)^{n},H/pH)\). Since \(\mathfrak{r}\simeq_{\mathbb{F}_{p}[t]}\mathrm{cok}(\bar{P}(A_{n}))\simeq_{\mathbb{F}_{p}[t]}\mathrm{cok}(A_{n}-\bar{t}I_{n})=\mathrm{cok}(\bar{Y})\), the number of \(\bar{F}\) such that \(\bar{F}\bar{Y}=0\) is
\[\#\mathrm{Hom}_{R}(\mathrm{cok}(\bar{Y}),H/pH)=\#\mathrm{Hom}_{R/pR}(\mathfrak{r},H/pH).\]
Since the size of each fiber under the modulo \(p\) projection
\[\mathrm{Hom}_{R}(R^{n},H)\twoheadrightarrow\mathrm{Hom}_{R/pR}((R/pR)^{n},H/pH)\]
is \(\#\mathrm{Hom}_{R}(R^{n},pH)=|pH|^{n}\), this finishes the proof of (1). The same proof works for (2) because \(F\) is surjective if and only if \(\bar{F}\) is.
**Notation 7.4**.: From now on, we write \(V:=R^{n}\) and \(V^{\prime}:=(\mathbb{Z}/p^{k}\mathbb{Z})^{n}\) for convenience although both expressions do depend on \(n\). We write \(v_{1},\ldots,v_{n}\) to mean the standard \(R\)-basis for \(V\). The same notation also means the standard \(\mathbb{Z}/p^{k}\mathbb{Z}\)-basis for \(V^{\prime}\).
### Deterministic property of each \(F\) and proof of Theorem 1.8
We fix any \(F\in\operatorname{Sur}_{R}(R^{n},H)_{A_{n}}^{\#}\). Recall that \(F\) satisfies \(F(V^{\prime})=F(V)=H\). Denoting by \(\bar{F}:(R/pR)^{n}\to H/pH\) the surjective map induced by \(F\), we also note that its restriction \(\mathbb{F}_{p}^{n}\to H/pH\) is a surjective \(\mathbb{F}_{p}\)-linear map. We denote by \(h:=r_{p}(H)\) the \(\mathbb{F}_{p}\)-dimension of \(H/pH\). We may assume that \(r=\dim_{\mathbb{F}_{p}}(\mathfrak{r})\geq h\) because otherwise (7.1) holds trivially. Recall that \(A_{n}\) is of the form (1.4), and since \(J\in\operatorname{M}_{n-r}(\mathbb{F}_{p})\) does not have any eigenvalues that are roots of \(\bar{P}(t)\) over \(\overline{\mathbb{F}_{p}}\), we know that \(J-\bar{t}I_{n-r}\in\operatorname{M}_{n-r}(\mathbb{F}_{p}[t]/(\bar{P}(t)))\) is invertible because its image over \(\mathbb{F}_{p}[t]/(\bar{P}_{j}(t))\) is invertible for all \(1\leq j\leq l\). Since \(\bar{F}(A_{n}-\bar{t}I_{n})=0\), due to the form (1.4), we must have \(\bar{F}|_{(R/pR)^{n-r}}(J-\bar{t}I_{n-r})=0\), so the invertibility of \(J-\bar{t}I_{n-r}\) implies that \(\bar{F}|_{(R/pR)^{n-r}}=0\), which is equivalent to saying that \(F(v_{1}),\ldots,F(v_{n-r})\in pH\). Applying Nakayama's lemma, this implies that \(F(v_{n-r+1}),\ldots,F(v_{n})\) generate \(H\).
Proof of Theorem 1.8.: We may consider a random matrix \(X\in\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}\) by writing \(X=A_{n}+pB\), where \(B\) is a random matrix in \(\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})\). Having \(F(X-\bar{t}I_{n})=0\) is equivalent to \(F(A_{n}-\bar{t}I_{n})=-pFB\), which can be seen as a system of equations
\[F(A_{n}-\bar{t}I_{n})v_{j}=-\sum_{i=1}^{n}pB_{ij}F(v_{i}),\]
for \(1\leq j\leq n\), where \(B_{ij}\) is the \((i,j)\)-entry of \(B\). Due to the form (1.4), we know that \((A_{n}-\bar{t}I_{n})v_{1},\ldots,(A_{n}-\bar{t}I_{n})v_{n-r}\) form an \(R\)-basis for \(R^{n-r}\), so choosing values for \(F(v_{1}),\ldots,F(v_{n-r})\) is equivalent to choosing values of \(F(A_{n}-\bar{t}I_{n})v_{1},\ldots,F(A_{n}-\bar{t}I_{n})v_{n-r}\). We may rewrite each equation as
\[F(A_{n}-\bar{t}I_{n})v_{j}+\sum_{i=1}^{n-r}pB_{ij}F(v_{i})=-\sum_{i=n-r+1}^{n}pB_{ij}F(v_{i}),\]
so considering \(1\leq j\leq n-r\), we see that any choice of \(F(v_{n-r+1}),\ldots,F(v_{n})\in H\) and of the entries of \(B\) that are not in the bottom-right \(r\times r\) submatrix of \(B\) determines \(F(v_{1}),\ldots,F(v_{n-r})\in pH\). We also note that such choices of entries of \(B\) have no constraints. Hence, we see that the probability that \(F(X-\bar{t}I_{n})=0\) is completely determined by the values of \(F(v_{n-r+1}),\ldots,F(v_{n})\) and the entries of the bottom-right \(r\times r\) submatrix of \(B\). This implies that we have
\[\operatorname*{Prob}_{X\in\operatorname{M}_{n}(\mathbb{Z}/p^{k}\mathbb{Z})_{A_ {n}}}(F(X-\bar{t}I_{n})=0)=\operatorname*{Prob}_{X\in\operatorname{M}_{n}( \mathbb{Z}/p^{k}\mathbb{Z})_{A_{n}}^{\text{\tiny{Hassr}}}}(F(X-\bar{t}I_{n}) =0),\]
so we must have (7.1), which implies Theorem 1.8.
## Acknowledgments
We thank Nathan Kaplan for helpful discussions and comments on an earlier draft of this paper. We thank Rohan Das, Christopher Qiu, and Shiqiao Zhang for sharing some computer generated data relevant to the paper. We thank Melanie Matchett Wood for helpful advice for the last part of this paper. The first author received support from NSF grant DMS 2154223 for the project. The second author thanks the AMS-Simons Travel Grant for supporting his visit to the first author. The first author thanks Jungin Lee, Youn-Seo Choi, and the Korea Institute for Advanced Study for their hospitality during his visit to the institute, thanks Myungjun Yu and Yeonsei University for their hospitality during his visit to the university, and also thanks Peter Jaehyun Cho and Ulsan National Institute of Science and Technology for their hospitality during his visit to a workshop, where a part of this work was completed. |
2304.05334 | Animation Fidelity in Self-Avatars: Impact on User Performance and Sense
of Agency | The use of self-avatars is gaining popularity thanks to affordable VR
headsets. Unfortunately, mainstream VR devices often use a small number of
trackers and provide low-accuracy animations. Previous studies have shown that
the Sense of Embodiment, and in particular the Sense of Agency, depends on the
extent to which the avatar's movements mimic the user's movements. However, few
works study such effect for tasks requiring a precise interaction with the
environment, i.e., tasks that require accurate manipulation, precise foot
stepping, or correct body poses. In these cases, users are likely to notice
inconsistencies between their self-avatars and their actual pose. In this
paper, we study the impact of the animation fidelity of the user avatar on a
variety of tasks that focus on arm movement, leg movement and body posture. We
compare three different animation techniques: two of them using Inverse
Kinematics to reconstruct the pose from sparse input (6 trackers), and a third
one using a professional motion capture system with 17 inertial sensors. We
evaluate these animation techniques both quantitatively (completion time,
unintentional collisions, pose accuracy) and qualitatively (Sense of
Embodiment). Our results show that the animation quality affects the Sense of
Embodiment. Inertial-based MoCap performs significantly better in mimicking
body poses. Surprisingly, IK-based solutions using fewer sensors outperformed
MoCap in tasks requiring accurate positioning, which we attribute to the higher
latency and the positional drift that causes errors at the end-effectors, which
are more noticeable in contact areas such as the feet. | Haoran Yun, Jose Luis Ponton, Carlos Andujar, Nuria Pelechano | 2023-04-11T16:52:41Z | http://arxiv.org/abs/2304.05334v1 | # Animation Fidelity in Self-Avatars:
###### Abstract
The use of self-avatars is gaining popularity thanks to affordable VR headsets. Unfortunately, mainstream VR devices often use a small number of trackers and provide low-accuracy animations. Previous studies have shown that the Sense of Embodiment, and in particular the Sense of Agency, depends on the extent to which the avatar's movements mimic the user's movements. However, few works study such effect for tasks requiring a precise interaction with the environment, i.e., tasks that require accurate manipulation, precise foot stepping, or correct body poses. In these cases, users are likely to notice inconsistencies between their self-avatars and their actual pose. In this paper, we study the impact of the animation fidelity of the user avatar on a variety of tasks that focus on arm movement, leg movement and body posture. We compare three different animation techniques: two of them using Inverse Kinematics to reconstruct the pose from sparse input (6 trackers), and a third one using a professional motion capture system with 17 inertial sensors. We evaluate these animation techniques both quantitatively (completion time, unintentional collisions, pose accuracy) and qualitatively (Sense of Embodiment). Our results show that the animation quality affects the Sense of Embodiment. Inertial-based MoCap performs significantly better in mimicking body poses. Surprisingly, IK-based solutions using fewer sensors outperformed MoCap in tasks requiring accurate positioning, which we attribute to the higher latency and the positional drift that causes errors at the end-effectors, which are more noticeable in contact areas such as the feet.
## 1 Introduction
Virtual reality headsets allow us to immerse ourselves in highly realistic digital worlds. A fundamental aspect of feeling present in these virtual environments is to have a virtual representation of our own body, known as the user's self-avatar. Ideally, avatars should be animated in a way that allows users to achieve natural behaviors and interactions in the virtual environment, as well as to use non-verbal communication with others. A self-avatar is the virtual representation of oneself and should be distinguished from other people's avatars, as the two have different requirements. Aspects such as latency or end-effector and pose accuracy are more crucial for perceiving one's own avatar than for perceiving others.
Unfortunately, the limited number of trackers in consumer-grade devices severely restricts the quality of the self-avatar's movements. Most applications limit the representation to floating upper bodies (no legs) with floating hands/gloves/tools, sometimes with the arms animated with Inverse Kinematics (IK) using the tracking data from the HMD and the hand-held controllers. Only a few applications offer a full-body representation. However, due to the lack of trackers, legs are typically animated by playing cyclic time-warped animations. With these solutions, users may notice inconsistencies between their movements perceived via proprioception and those of the self-avatar.
Previous work has demonstrated the importance of having a self-avatar that moves in sync with the user [9, 10, 33]. If we focus on the overall movement without further interaction with the virtual world, current animation techniques based on IK from sparse tracking data may suffice. However, if accurate body poses and positioning of end-effectors matter, then artifacts that affect user performance and the Sense of Agency may appear. For example, consider the task of assembling a structure by holding pieces and placing them in specific locations. In that case, hand-eye coordination is crucial, as is the accuracy of the overall pose, to prevent parts of the arm or body from colliding with other pieces. Another example is moving through a room full of obstacles, where accurate foot positioning is also crucial. Finally, correct body poses also matter in the case of learning to dance or practicing yoga by mimicking an instructor [9].
Given that high-quality motion capture is difficult to achieve with sparse input data, we are interested in studying how animation
fidelity affects user performance and embodiment. By animation fidelity, we refer to the quality of the animations in terms of accurately following the user poses as well as the correct absolute positioning of the body parts. More specifically, we evaluate interactions with the virtual world that need pose and/or positional accuracy. We evaluate embodiment with a perceptual study, in which our main focus is on the Sense of Agency due to its relationship with animation fidelity. Furthermore, we study the effect of the quality of the interaction with the virtual world on user performance by measuring completion time and unintentional collisions. We focus on two popular methods based on Inverse Kinematics from sparse input data (6 trackers): UnityIK1 and FinalIK2, and one motion capture system based on a large number (17) of inertial sensors: Xsens Awinda3.
Footnote 1: [https://docs.unity3d.com/Manual/InverseKinematics.html](https://docs.unity3d.com/Manual/InverseKinematics.html)
Footnote 2: [http://root-motion.com/](http://root-motion.com/)
Footnote 3: [https://www.xsens.com/products/mtw-awinda](https://www.xsens.com/products/mtw-awinda)
Our results suggest that animation fidelity affects the Sense of Embodiment and user performance. We found that a straightforward IK solution, such as Unity IK, decreases the Sense of Embodiment when compared to high-quality IK and MoCap solutions. However, when interacting with the environment, having lower latency and minimal end-effector positional error may be more important than synthesizing high-quality poses suffering from positional drift.
The main contributions of this paper are:
* To the best of our knowledge, this is the first study to compare an IMU-based full-body motion capture system to IK solutions for animating self-avatars in VR during tasks that require accurate positioning of end-effectors and body postures.
* We study the relationship between animation fidelity on user task performance and the Sense of Agency to improve future research on VR avatar animation from sparse data.
## 2 Related Work
### Self-avatars and animation fidelity
A self-avatar is a virtual representation of one's own body from a first-person view of the virtual environment (VE). Previous studies have shown that full-body self-avatars are beneficial in various tasks, such as egocentric distance estimation, spatial reasoning tasks, and collision avoidance [22, 23, 28]. For instance, compared to not having an avatar, users with a full-body realistic avatar collide less frequently with the VE [23]. Similarly, Ogawa et al. [20] demonstrated that users would be less likely to walk through the virtual walls if equipped with a full-body representation compared to a hands-only representation. In social settings, using full-body self-avatars would enhance social presence and communication efficiency [1, 41].
Animation fidelity is a crucial component of self-avatars. Unlike visual fidelity, which addresses the appearance of avatars and has been extensively studied [4, 11, 12, 20], animation fidelity focuses on how accurately and synchronized the self-avatar mimics users' real-world movements. We could use avatars with the highest visual fidelity (with a realistic full-body self-avatar), but low animation fidelity if the body poses are not well mimicked, not in sync with the user, or have errors in the positioning of end-effectors. These avatars are unlikely to induce the user's feeling of owning or being in control of the virtual body [17, 36]. These kinds of problems may be introduced by the tracking system or by the methods used to capture and animate the self-avatar.
Inverse Kinematics (IK) solvers can be used with sparse input from VR devices to calculate the joint angles of an articulated human model. Some frameworks are available to animate full-body avatars from six trackers (HMD, two hand-held controllers and three Vive trackers) [21, 42, 26]. Parger et al. [24] proposed an intuitive IK solution for the self-avatar's upper-body animation with one HMD and two controllers. Their IK solver outperformed an optical MoCap system, achieving lower latency and more accurate pose reconstruction. The reduced Jacobian IK solver proposed by Caserman et al. [3] can smoothly and rapidly animate full-body self-avatars with HTC Vive trackers.
Recently, researchers have shown an increasing interest in data-driven methods to reconstruct full-body animation for avatars from VR devices. For instance, Winkler et al. [37] proposed a reinforcement learning framework with physics-based simulation to achieve real-time full-body animation. Ponton et al. [27] combined body orientation prediction, motion matching and IK to synthesize plausible full-body motion with accurate hand placement. Jiang et al. [14] used a transformer model to estimate full-body motion. Other researchers have looked at using a sparse set of wearable IMUs to estimate full-body motion. These methods could be integrated into self-avatars in VR because of the built-in IMUs in VR headsets, controllers and trackers. For example, Huang et al. [13] used a bi-directional RNN to reconstruct a full-body human pose from six wearable IMUs attached to the head, arms, pelvis, and knees. Yi et al. [40] took the same input, but generated both accurate poses and precise global translation. More recently, Jiang et al. [15] not only accurately estimated full-body motion but also handled the joint and global position drift that most IMU systems suffer from.
While there is an extensive body of research proposing new animation methods to improve animation fidelity for avatars, little attention has been given to how the animation fidelity of self-avatars impacts user performance, perception and behavior in a VE. Fribourg et al. [9] showed that users preferred to improve animation features when asked to choose among appearance, control (animation) and point of view to improve the Sense of Embodiment (SoE). In their work, participants preferred motion capture based on Xsens over FinalIK. However, their input to the IK system was the joint positions from the MoCap system, and thus the problems with incorrect end-effector positioning and latency were carried over to the IK condition.
Galvan et al. [10] adapted the same methodology to examine the effect of the animation fidelity of different body parts. Participants were first exposed to optimal animation fidelity (53-marker optical motion capture). Then, they started with minimal animation fidelity and repeatedly chose one body part to improve until they felt the same level of SoE as with the optimal configuration. They found users felt the same level of SoE with an IK solution with eight trackers as with the 53-marker optical motion capture system. Their work also found that unnatural full-body animation was disturbing to users when the upper body and lower body were animated with different fidelity. Thus, our work focuses on full-body animation instead of body parts to avoid breaking the user's presence. Eubanks et al. [8] explored the impact of the tracking fidelity (number of trackers) on a full-body avatar animated by an IK solver. They found that a high number of trackers could improve the SoE. However, animation fidelity is not only about tracking fidelity, but also about the animation techniques underneath. Our study thus compares not only systems with different numbers of trackers, but also different animation techniques: IK and IMU-based motion capture.
### Sense of Agency
The Sense of Agency (SoA) has been characterized in various ways in different contexts because of its interdisciplinary nature. From the point of view of psychology, the SoA refers to the feeling that _I am the one causing or generating an action_ [6]. In the field of VR, the SoA is the feeling of being the agent who conducts the motions of an avatar. It results from synchronizing one's real-world movements with the virtual body's motions.
The Sense of Agency is a crucial subcomponent of the Sense of Embodiment. According to Kilteni et al. [16], the SoE consists of three subcomponents: the Sense of Agency, the Sense of Self-Location (SoSL), and the Sense of Body Ownership (SoBO). Many studies have examined the impact of single or multiple factors, including the avatar's appearance, visibility and tracking fidelity, on the SoE. Fribourg et al. [9] explored the relative contributions of the control factor (i.e. animation fidelity), appearance and point of view to the SoE. Results showed that control and the point of view were preferred when people had to choose among the three factors to improve their SoE. Recent studies showed that low-quality tracking, which directly impacts the animation of the self-avatar, can decrease embodiment [33, 8]. These findings analyzed effects on the SoE that are directly or implicitly related to animation. However, there is still a gap in understanding how animation fidelity directly impacts the SoE, and specifically its subcomponent, the SoA.
The synchronicity of visuomotor correlation can induce the SoA, while discrepancies can decrease it. Kollias et al. [19] simulated different kinds of motion artifacts that may occur in a real-time motion capture system. They examined the effect of these artifacts on the SoE, specifically on the SoA. Results showed that the artifacts negatively affected the SoA, but not the SoBO.
Studies regarding the SoA mainly focus on subjective perception with questionnaires and on objective brain activity measurements such as fMRI and EEG. As suggested by Kilteni et al. [16], the motor performance of VR users should be positively correlated with the SoA, under the assumption that a finely-controlled virtual body performs motor tasks more successfully. Therefore, the users' motor performance in VR could be used to measure the SoA objectively. Our study measured task performance in terms of unintentional collisions between the self-avatar and the virtual obstacles. We believe that the number of collisions and their duration can bring insights into human motor performance in 3D space. High animation fidelity means precise control of the self-avatar, which should translate into better performance in motor tasks. Therefore, we expected to observe an impact of animation fidelity on collisions, completion time, and the correctness of the body poses.
## 3 Animation fidelity study
This study aims to assess the impact of animation fidelity on users' performance and the SoE when performing a set of tasks that require careful positioning and/or accurate poses. We want to study the importance of the virtual body correctly mimicking the user's movements as well as the impact of accurate end-effector positioning.
### Experimental conditions
In this study, we adopted a within-subject experimental design with one independent variable: the animation fidelity of the virtual avatar. We designed three conditions for the animation fidelity variable: Unity Inverse Kinematics (UIK), FinalIK (FIK) and motion capture with Xsens (MoCap). These techniques provide different levels of animation quality in terms of end-effector positioning (more accurate in UIK and FIK, since hand-held controllers and trackers provide accurate absolute positioning), pose angles (more accurate in MoCap thanks to a larger number of sensors), and latency (higher for MoCap). The first two conditions differ in the IK solver, while both use sparse tracking data from consumer-level VR devices. The last condition, MoCap, uses tracking data from a professional motion capture system with 17 IMU sensors. Fig. 2 illustrates the equipment used for tracking in the three conditions. The three conditions were implemented as follows (see accompanying video):
**UIK**: This condition uses Unity 2020.3 game engine's built-in IK solver for animating the avatar's limbs (2-segment kinematic chains). It is important to note that it does not consider the full-body pose when solving the IK. Instead, it independently computes each limb's joints based on one target end-effector. To further improve the overall body pose, forward kinematics (FK) is included to animate two joints: head and spine, so that the self-avatar can lean forwards and sideways. IK and FK together generate a full-body animation for the avatar from the tracking data in the HMD, the hand-held controllers and three additional trackers located on the pelvis and the feet.
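To make the contrast with the other conditions concrete, below is a minimal sketch of the core step of such a per-limb solver (Python, our own illustration; function and variable names are hypothetical and this is not Unity's API). It computes the bend of the middle joint (elbow/knee) of a 2-segment chain from the root-to-target distance via the law of cosines; a production solver would additionally orient the joints and choose the bend plane.

```python
import numpy as np

def two_bone_bend(root, target, l1, l2):
    """Core step of a two-bone analytic IK solver: the bend angle of the
    middle joint (elbow/knee) so that a chain with segment lengths l1, l2
    rooted at `root` reaches `target`. 0 means fully extended."""
    d = np.linalg.norm(np.asarray(target, float) - np.asarray(root, float))
    # clamp to the reachable range to avoid arccos domain errors
    d = np.clip(d, abs(l1 - l2) + 1e-6, l1 + l2 - 1e-6)
    # law of cosines: d^2 = l1^2 + l2^2 - 2*l1*l2*cos(inner_angle)
    cos_inner = (l1**2 + l2**2 - d**2) / (2.0 * l1 * l2)
    return np.pi - np.arccos(np.clip(cos_inner, -1.0, 1.0))
```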
**FIK**: This condition uses the VRIK solver from RootMotion's FinalIK package, which combines analytic and heuristic IK solvers to generate the full-body avatar animation. With the same input, FIK produces higher-quality results than UIK because each limb is not solved independently from one end-effector, but rather from an analysis of the user's pose from several end-effectors [35]. For instance, the spine is solved considering the position of the HMD and the two hand-held controllers, and the elbows use the position of the hands relative to the chest joint to determine their orientation. The only exceptions are the legs, which are solved independently, but using a 3-joint dual-pass trigonometric solver (first solving the knee and then the ankle).
**MoCap**: The Xsens Awinda system receives acceleration, angular velocity and magnetic field data from 17 body-worn IMUs, processes the data with Strap-down Integration and Kalman filtering, and then outputs the rotations of the joints of the avatar, which are streamed to Unity via UDP; these processing steps increase the latency with respect to the previous conditions. IMUs suffer from a positional drift over time, which might break the Sense of Self-Location. To enforce the correct location of the avatar with respect to the user, we use the pelvis tracker to position the avatar in the VE. However, this does not guarantee accurate positioning of the end-effectors and can suffer from foot sliding. A foot lock is applied to reduce the foot sliding of the Xsens pose when the foot touches the floor. Once the foot is locked, we store the position of the HTC tracker, which we will use as a reference to detect whether the user is moving the foot. In the following frames, if the distance between the current HTC tracker position and its initial position is larger than a threshold, we unlock the foot; the threshold must be relatively small (1 cm), as a larger one would noticeably modify the leg pose. Note that we are locking the foot at the position given by Xsens (thus, it may contain positional error); we only use the HTC tracker to detect whether the user's foot remains on the ground.
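A minimal sketch of this foot-lock logic (our own reconstruction in Python; names are hypothetical, not the study's code): the foot is frozen at its current Xsens position when it touches the floor, and released as soon as the HTC tracker drifts more than 1 cm from the position it had at lock time.

```python
import numpy as np

class FootLock:
    THRESHOLD = 0.01  # 1 cm, as in the study

    def __init__(self):
        self.locked = False
        self.frozen_foot = None   # Xsens foot position frozen while locked
        self.tracker_ref = None   # HTC tracker position stored at lock time

    def update(self, xsens_foot, htc_tracker, foot_on_floor):
        if not self.locked and foot_on_floor:
            self.locked = True
            self.frozen_foot = np.asarray(xsens_foot, float)
            self.tracker_ref = np.asarray(htc_tracker, float)
        elif self.locked:
            moved = np.linalg.norm(np.asarray(htc_tracker, float) - self.tracker_ref)
            if moved > self.THRESHOLD or not foot_on_floor:
                self.locked = False
        # while locked, render the frozen Xsens position (which may still
        # carry Xsens positional error); otherwise pass the pose through
        return self.frozen_foot if self.locked else xsens_foot
```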
Each participant performed the same three tasks for each condition. Conditions were counterbalanced between participants using a Balanced Latin Square, which ensures each condition precedes and follows every other condition an equal number of times [7].
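For illustration, such an order can be generated with the standard balanced Latin square construction; the sketch below (ours, not part of the study software) returns, for an odd number of conditions like ours, 2n participant orders in which every condition precedes and follows every other condition equally often.

```python
def balanced_latin_square(n):
    """Balanced Latin square: row i is the condition order for participant i.
    For odd n, the mirrored rows are appended to restore the balance of
    ordered condition pairs."""
    rows = []
    for i in range(n):
        row, lo, hi = [], 0, n - 1
        for col in range(n):
            # interleave low and high condition indices, shifted per participant
            if col % 2 == 0:
                row.append((lo + i) % n); lo += 1
            else:
                row.append((hi + i) % n); hi -= 1
        rows.append(row)
    if n % 2 == 1:
        rows += [list(reversed(r)) for r in rows]
    return rows

# e.g. with conditions [UIK, FIK, MoCap]:
print(balanced_latin_square(3))  # 6 orders for n = 3 conditions
```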
Figure 2: Equipment and conditions. For the experiment, participants were simultaneously equipped with two sets of tracking devices: VR devices (HMD, controllers and three trackers) and the trackers from the Xsens Awinda mocap system. The tracked body parts are shown in the figure. Different IK solvers were applied to animate the avatar using the VR tracking devices.
### Tasks
Prior studies have shown that the type of actions users perform in a VE influences users' perception [34, 9]. For instance, when picking up nearby objects, people would pay more attention to the upper body while ignoring their surroundings [5]. Similarly, walking in a room with obstacles on the floor would draw people's attention to objects and lower body parts to plan the future path and avoid collisions [32]. We thus designed three tasks that cover a wide range of common actions in VR games and applications, while each task focused on a different interaction pattern between the virtual body and the VE (see Fig. 3 and accompanying video).
* _Step-over-spikes task_ focuses on direct interaction between the lower body and the VE. It consists of walking and lifting the knees to avoid colliding with spike-like obstacles while stepping on small platforms.
* _Pick-and-place task_ focuses on direct interaction between the upper body and the VE. It consists of picking up objects and then placing them at specific target locations while avoiding collisions between the arm and nearby obstacles.
* _Copy-pose task_ involves only non-direct interactions between the virtual body and the VE. More specifically, we focus on the overall pose of the self-avatar without caring about the exact global positioning of the avatar. For this task, we show a 2D projection of an avatar in a certain pose on a virtual screen, and then the user is asked to mimic the pose as accurately as possible. The design is inspired by OhShape ([https://ohshapev.com/](https://ohshapev.com/)).
One task block consisted of the following three sequential tasks, which were presented in the following order: (1) step-over-spikes task; (2) pick-and-place task; (3) copy-pose task. Each task consisted of ten trials separated by five seconds of rest. We decided to use this task order to guarantee that the last task before each SoE questionnaire (see below) equally involved the upper and lower limbs. Participants completed the entire task block once for each of the three conditions.
### Apparatus
The experiments were conducted in an acoustically-isolated laboratory room with a 3 m x 6 m space. The VE was developed with Unity 2020.3 LTS and run on a PC equipped with an Intel Core i7-10700K CPU, an Nvidia GeForce RTX 3070 GPU and 32 GB of RAM. We used an HTC Vive Pro HMD with 1440 x 1600 pixels per eye, a 110\({}^{\circ}\) field of view and a 90 Hz refresh rate. Three 6-DoF HTC Vive trackers 3.0 were used for tracking the pelvis and feet. Two HTC Vive controllers were held in both hands of the participants. We installed four SteamVR Base Stations 2.0, one in each corner of the room, to minimize line-of-sight occlusions.
We employed the well-established frame counting approach [3, 31] to determine the latency of the tracking system and the animation techniques used in our experiment. One person was equipped with all tracking devices and repeatedly moved one arm up and down. We used a high-speed 240 fps camera to record both the person and a monitor showing the VE. The mean latency from the physical controller to the animated virtual hand was 32 ms for UIK and 33 ms for FIK. These latencies include the SteamVR tracking system, IK solver computation and rendering. For MoCap, the mean latency was 91 ms, which was notably higher than in the other two conditions. The MoCap latency includes the IMU-to-Xsens software latency (\(\sim\)30 ms, see footnote 5) [25], motion processing in the Xsens software, network communication, motion data unpacking in Unity (\(\sim\)5 ms), and rendering.
Footnote 5: [https://base.xsens.com/s/article/MVN-Hardware-Overview](https://base.xsens.com/s/article/MVN-Hardware-Overview)
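The arithmetic behind the frame-counting estimate is straightforward: with a 240 fps recording, latency is the number of frames between a physical motion event and the corresponding virtual event, divided by the capture rate. A small sketch (ours, with hypothetical frame indices):

```python
def frame_counting_latency_ms(physical_frame, virtual_frame, fps=240):
    """Latency from the frame gap between a physical motion event and its
    virtual counterpart in a high-speed recording."""
    return (virtual_frame - physical_frame) / fps * 1000.0

# e.g. a gap of 8 frames at 240 fps is ~33 ms, comparable to UIK/FIK
print(frame_counting_latency_ms(1200, 1208))  # -> 33.3...
```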
### Procedure
A total of 26 participants took part in the experiment (22 male, 4 female, aged 19-40, M = 22.4, SD = 5.5), but one participant's data was discarded due to a calibration failure.
Upon arriving at the experiment room, participants were instructed to read the information sheet and complete the consent form and a demographic survey detailing their age, gaming and VR experience. We entered their body measurements into the Xsens software in order to obtain a scaled avatar matching the user's dimensions. Then we placed the 17 wireless trackers on the body of the participant, with the help of a t-shirt and a set of straps. Participants were asked to walk a few meters to calibrate the IMU-based motion capture suit. The calibration was repeated until the Xsens software (MVN Animate Pro) rated it as "Good" (among "Good", "Acceptable" and "Poor"). The experimenter also validated visually that the subject's pose closely matched that of the avatar. Next, participants were equipped with an HTC Vive HMD, two hand-held controllers and three Vive trackers placed on the pelvis and both feet. They were asked to stand in a T-pose to complete the calibration of the HTC trackers for the IK solvers. During the experiment, participants were equipped with both tracking systems at all times. This ensured that they could not guess which system was being used for each condition. Before each task, participants watched a tutorial video (two minutes in total) that demonstrated how to perform the task.
### Measurements
The step-over-spikes task challenges the participants' lower-body motion so that we can quantitatively assess the effect of animation
Figure 3: Timeline for one participant (top) with details on the tasks (bottom). The participants were asked to complete three consecutive tasks followed by the questionnaire for the first condition, and then repeat the procedure with the other two conditions. During the first two tasks, the volume of the small colliders was recorded (green), and users had visual feedback from the obstacles (in red) every time a collision occurred. For the copy-pose task, the pose-related metrics were calculated. Questions and buttons for answering were displayed on a whiteboard in VR.
fidelity on the interaction between the lower body and the VE. Similarly, the pick-and-place task is intended to assess the impact of animation fidelity on the interaction between the upper body and the VE. To evaluate these two tasks, we took measurements regarding collisions and completion time. More specifically, we recorded: the total collision volume (\(V_{c}\)), the collision duration (\(T_{c}\)), the number of collisions (\(N_{c}\)), as well as the task completion time (\(T_{task}\)). This data was converted into more intuitive metrics as follows:
* Volume per collision \(v=V_{c}/N_{c}\). It reflects how deep the avatar penetrated the obstacle during each collision, on average.
* Duration per collision \(t=T_{c}/N_{c}\). It measures the average penetration time of the avatar into obstacles and how quickly participants corrected the collision when it occurred.
* Collision frequency \(f=N_{c}/T_{task}\). It reflects how often the avatar collides with obstacles while performing the task. It is specified as the number of collisions per second.
With these metrics, we investigated the relationship between the animation fidelity and the virtual body-obstacle collisions. To accurately capture the volume and duration of collisions, a set of invisible small cubic colliders (\(V_{collider}=8\,\mathrm{cm}^{3}\)) was used to match the shape of each obstacle.
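For clarity, a sketch of the conversion from the raw per-trial logs to the three derived metrics defined above (ours; field names are hypothetical):

```python
def derived_collision_metrics(V_c, T_c, N_c, T_task):
    """Derived metrics from raw per-trial logs: volume per collision v,
    duration per collision t, and collision frequency f (collisions per
    second). Guard against trials without any collision."""
    v = V_c / N_c if N_c > 0 else 0.0   # average penetration volume
    t = T_c / N_c if N_c > 0 else 0.0   # average penetration time
    f = N_c / T_task                    # collisions per second
    return v, t, f
```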
The goal of the copy-pose task is different from the other two. It evaluates the correctness of the static pose of the avatar when there are no hard constraints on the avatar's end-effector positions (i.e. no contact points between the avatar and the VE). Thus, three pose-related metrics were used to assess the accuracy of users' poses:
* Jaccard Distance \(JD=1-\frac{|G\cap U|}{|G\cup U|}\) (see Fig. 3). It measures the proportion of non-overlap between the 2D projection \(G\) of an example avatar over a plane and the 2D projection \(U\) of the avatar controlled by the user: one minus the area of their intersection divided by the area of their union.
* Mean per segment angle error (\(MPSAE\)) is defined as: \(MPSAE=\frac{1}{|\mathbf{S}|}\sum_{s\in\mathbf{S}}\arccos\left(\hat{\mathbf{s}}^{*}\cdot\hat{\mathbf{s}}\right)\), where \(\mathbf{S}\) is the set of segments of the skeleton, \(\hat{\mathbf{s}}\) is the unit vector representing the direction of a segment \(s\), and \(\hat{\mathbf{s}}^{*}\) is the direction of the segment in the given pose.
* Mean per part angle error \(MPPAE\) is like \(MPSAE\) but only considers one part of the body such as the spine or the limbs corresponding to arms and legs.
Participants could only observe the target poses as a 2D projection on a virtual wall that was located in front of them. Therefore, the metrics used in this task were all calculated based on the same 2D projection in the XY plane. For the Jaccard Distance, the lack of overlap between the two projections must not be a result of the user position being slightly offset with respect to the observed shape. Consequently, we iteratively applied translations in the 2D space to maximize the overlap between the two shapes before computing \(JD\). For \(MPPAE\), body segments of the avatar were grouped into three body parts: arms, legs and spine. This separation allowed us to study animation fidelity's impact individually on different body parts.
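The two main pose metrics can be sketched as follows (Python, our own illustration; the study's alignment search may differ in its details). The Jaccard distance operates on binary 2D projection masks and is minimized over small integer translations, and the MPSAE averages the angle between unit segment direction vectors:

```python
import numpy as np

def jaccard_distance(G, U, max_shift=20):
    """JD = 1 - |G ∩ U| / |G ∪ U| over binary masks, minimized over small
    translations of U so a global offset does not dominate the score."""
    best = 1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(U, dy, axis=0), dx, axis=1)
            union = np.logical_or(G, shifted).sum()
            if union > 0:
                inter = np.logical_and(G, shifted).sum()
                best = min(best, 1.0 - inter / union)
    return best

def mpsae_degrees(user_segments, target_segments):
    """Mean per-segment angle error: mean arccos of the dot product of the
    unit direction vectors of corresponding skeleton segments."""
    errors = []
    for s, s_ref in zip(user_segments, target_segments):
        s = s / np.linalg.norm(s)
        s_ref = s_ref / np.linalg.norm(s_ref)
        errors.append(np.degrees(np.arccos(np.clip(np.dot(s, s_ref), -1.0, 1.0))))
    return float(np.mean(errors))
```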
At the end of each block of tasks, participants completed a questionnaire (Table 1) adapted from the standard Virtual Embodiment Questionnaire (VEQ) [29]. Embodiment was measured through three main aspects: _agency_, _ownership_ and _change_. _Agency_ measures the sense of control, _ownership_ measures the sense of owning the virtual body as if it were one's own real body, and _change_ measures to what extent one feels the virtual body scheme differs in size from one's own real body.
The VEQ does not assess self-location discrepancies since it is not the goal of typical VR applications to produce such effects [29]. In our experiment, the use of the pelvis tracker guaranteed a correct placement of the self-avatar. The appearance and size of the avatar were kept the same through all conditions to guarantee that the only perceived differences would come from the changes in animation fidelity. Questions about _change_ in VEQ are typically studied in the context of body swap experiments that manipulate avatars' body size, sex, race, etc. [18, 38, 39]. However, with the avatar's height and body proportions consistent with the user's physical body, _change_ is not expected to be an influencing factor in our study.
The goal of the embodiment questionnaire was to gather the global experience after running the three tasks, so that it would capture both the importance of correct end-effector positioning and the accuracy of the body pose. We decided against asking the 15 questions after each task to avoid making the experiment too long, which could itself have introduced bias.
### _Hypotheses_
We hypothesize that better animation fidelity would lead to better performance in terms of reducing the number of collisions, as well as their volume and duration. Although our conditions had varied trade-offs in terms of the different components of animation fidelity (pose accuracy vs. end-effector accuracy, as well as latency), we expected the highest performance for the full-body IMU-based motion capture system, followed by the IK methods with VR devices as input. Similarly, we would expect the full-body IMU-based motion capture system to outperform the IK solutions when copying body poses, given that its higher number of sensors allows a more accurate capture of the user's pose. Finally, we expected animation fidelity to affect the SoE of the user. Therefore, our hypotheses are:
* **[H1]** Animation fidelity impacts the performance of the user in the step-over-spikes and pick-and-place tasks (tasks that require precise interaction with the environment), in terms of unintended collisions and completion time.
* **[H2]** Animation fidelity impacts performance in the copy-pose task, which requires accuracy in the body pose.
* **[H3]** Animation fidelity affects the SoE.
## 4 Results
In this section we summarize the results of our experiment. The complete list of statistical significance and post-hoc test values can be found in Table 2.
_Agency - Scoring: (AG1 + AG2 + AG3 + AG4 + AG5 + AG6 + AG7) / 7_

**AG1** The movements of the virtual body felt like they were my movements.
**AG2** I felt the virtual arms moved as my own arms.
**AG3** I felt the virtual elbows were in the same position as my own elbows.
**AG4** I felt the virtual hands were in the same position as my own hands.
**AG5** I felt the virtual legs moved as my own legs.
**AG6** I felt the virtual knees were in the same position as my own knees.
**AG7** I found it easy to control the virtual body pose to complete the exercises.

Table 1: Questionnaire content. The scores are on a 7-point Likert scale (1 = completely disagree, 7 = completely agree).
### _User performance on interaction tasks_
We first present the results of user performance on the tasks that involved a direct interaction with the VE. Table 3 shows the mean (M) and standard deviation (SD) of all the metrics of the step-over-spikes and pick-and-place tasks. Fig. 4 shows the violin plots of the metrics of both tasks.
Shapiro-Wilk tests showed significant departures from normality for all three measures of the step-over-spikes task. Therefore, non-parametric within-subjects Friedman tests were used: animation fidelity significantly affected volume per collision and collision frequency, but not duration per collision. Table 2 includes a summary of \(\chi^{2}(2)\), p-values and effect sizes calculated for these metrics. Pairwise post-hoc tests (Wilcoxon signed-rank tests) showed that MoCap had significantly higher values than FIK for all metrics except duration per collision, and a significantly higher value than UIK for collision frequency. They also showed that UIK had a significantly higher collision frequency than FIK.
For the pick-and-place task, Shapiro-Wilk tests showed that volume per collision and completion time data violated the normality assumption (p\(<\).05), while the other two metrics did not. Therefore, Friedman tests and post-hoc Wilcoxon signed-rank tests were conducted for volume per collision and completion time, while one-way within-subject ANOVAs and pairwise t-tests were conducted for the others. The results revealed a significant effect of animation fidelity on duration per collision and collision frequency. Post-hoc tests showed that UIK had a significantly higher collision frequency than FIK and MoCap, and a longer completion time than FIK.

Table 3: Mean and standard deviation for the metrics \(v\), \(t\), \(f\) and \(T\) of the step-over-spikes and pick-and-place tasks.
Therefore, hypothesis **[H1]** was validated by these results of the interaction tasks involving both the upper body and the lower body. We further analyze these results in Section 5.
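The test battery used throughout this section can be sketched as follows (Python/SciPy, our own illustration; the study additionally ran the repeated-measures ANOVA omnibus test for normally distributed metrics, e.g. via statsmodels' AnovaRM):

```python
from itertools import combinations
from scipy import stats

def analyze_metric(samples):
    """samples: dict mapping condition name -> per-participant values.
    Shapiro-Wilk checks normality; non-normal metrics go to a Friedman
    test with Bonferroni-adjusted Wilcoxon post-hocs, normal metrics to
    pairwise paired t-tests (after a repeated-measures ANOVA omnibus)."""
    conds = list(samples)
    pairs = list(combinations(conds, 2))
    normal = all(stats.shapiro(samples[c]).pvalue >= .05 for c in conds)
    if normal:
        # omnibus one-way within-subject ANOVA omitted in this sketch
        return {p: stats.ttest_rel(samples[p[0]], samples[p[1]]).pvalue
                for p in pairs}
    print("Friedman p =", stats.friedmanchisquare(*(samples[c] for c in conds)).pvalue)
    return {p: min(1.0, len(pairs) * stats.wilcoxon(samples[p[0]], samples[p[1]]).pvalue)
            for p in pairs}
```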
### _User performance on pose-related tasks_
We summarize the M and SD for all metrics of the copy-pose task in Table 4 and present the corresponding violin plots in Fig. 5. Shapiro-Wilk tests showed that both JD and MPSAE data had significant departures from normality (p \(<\).05). Friedman tests were thus conducted for both metrics and revealed significant differences among the three animation fidelity conditions with medium to large effect sizes. Pairwise Wilcoxon tests with Bonferroni p-value adjustment demonstrated significant differences in all pairs of conditions. For both metrics, UIK had significantly higher error values than FIK and MoCap, and FIK had significantly higher errors than MoCap.
For MPPAE, we used a two-way repeated measures Aligned Rank Transform (ART) ANOVA after a Shapiro-Wilk test indicated a significant departure from normality (p \(<\).05). The result revealed a significant main effect of animation fidelity and body part on MPPAE. It also showed a significant interaction effect between animation fidelity and body part. First, the post-hoc Tukey's tests demonstrated that, for all animation fidelity conditions, MPPAE was significantly higher for arms than for legs and spine. Next, when comparing the MPPAE for each body part, Tukey's tests showed that, for arms, the MPPAE was significantly higher for UIK than for the other conditions. For legs, UIK had significantly higher MPPAE than FIK and MoCap, and FIK had significantly higher MPPAE than MoCap. For the spine, FIK had significantly higher MPPAE than the other conditions.
To summarize, these results validated our hypothesis **[H2]** in the sense that the pose errors were significantly lower when using MoCap than IK solutions.
### _Sense of Embodiment_
Table 5 shows the M and SD of the overall score of the SoE and the subcomponent scores for _agency_, _ownership_ and _change_. The violin plots for these scores can be found in Fig. 6. A one-way within-subject ANOVA showed a significant effect of animation fidelity on the overall score of the SoE. The post-hoc tests (pairwise t-tests) showed that the SoE score for UIK was lower than for both FIK and MoCap.
We analyzed the average score of the _agency_ questions, Q1 - Q7, with a one-way within-subject ANOVA (see Table 2 for test values). The result showed a significant effect of animation fidelity on the _agency_ score. The post-hoc tests (pairwise t-tests) showed that users reported a significantly lower SoA for UIK than for FIK and MoCap.
Since a Shapiro-Wilk test showed a significant departure from normality, a Friedman test was conducted for the average score of the _ownership_ questions, Q8 - Q10. The result showed a significant effect of the animation conditions on ownership. The post-hoc test (Wilcoxon test with Bonferroni p-value adjustment) showed that UIK had a significantly lower _ownership_ score than FIK.
The same set of tests as for _ownership_ was conducted for the average score of the _change_ questions, Q11 and Q12. A Friedman test showed no significant effect of the animation conditions on _change_. Post-hoc tests showed no significant difference on _change_ in any condition pair. Overall, these results validated our hypothesis **[H3]**.
| | _JD_ | _MPSAE_ | _MPPAE_ (Arms) | _MPPAE_ (Legs) | _MPPAE_ (Spine) |
| --- | --- | --- | --- | --- | --- |
| **UIK** | 0.539 (0.035) | 13.90 (1.51) | 28.6 (3.45) | 9.09 (0.98) | 5.90 (1.43) |
| **FIK** | 0.512 (0.040) | 11.10 (2.10) | 16.1 (3.45) | 7.22 (1.67) | 10.1 (3.26) |
| **MoCap** | **0.476 (0.038)** | **8.03 (1.19)** | **13.9 (2.23)** | **5.85 (1.33)** | **5.08 (1.38)** |

Table 4: Mean and standard deviation for metrics of the copy-pose task.
Figure 4: Violin plots for metrics of the step-over-spikes and pick-and-place tasks, showing results for collisions and task completion time. Asterisks represent the significance level: * (p \(<\).05), ** (p \(<\).01), *** (p \(<\).001), **** (p \(<\).0001).
## 5 Discussion
### Accuracy of body pose vs. end-effector positioning
As expected, a motion capture suit is able to capture most of the human motion and accurately represent poses, as opposed to applying IK using only a few trackers as end-effectors. We quantitatively assessed this with the copy-pose task and found that the MoCap method performed significantly better than UIK and FIK for all metrics. Poses with MoCap were best aligned with the given poses, both overall and when analyzing each body segment independently.
Therefore, we would expect MoCap to perform better in other tasks due to the high-quality tracking of poses. However, we found that high-quality poses do not improve task performance when tasks are not directly related to the pose, and instead require direct interactions with the VE. One possible explanation is that the positional drift from inertial systems results in the end-effectors moving away from their actual position. When this happens, the user's hands and feet are no longer co-located with their virtual representations, thus introducing inconsistencies (see Fig. 7). The higher latency of MoCap may have also contributed to these performance differences.
In the step-over-spikes task, MoCap was significantly worse than FIK in volume per collision, collision frequency and completion time, and significantly worse than UIK in volume per collision. We believe that for this task, having an accurate positioning of the feet (no drift) made users feel more confident when positioning the feet on the ground to avoid spikes. Both FIK and UIK achieved good foot positioning because the IK solvers enforced the position of the feet to be the same as the trackers. In contrast, since MoCap is an IMU-based motion capture system, it does not have precise global positioning of the joints.
To lessen the positional drift issue, we moved the MoCap avatar to match the position of the VR pelvis tracker. This improves the overall co-location between the user and its avatar, but it may increase foot sliding. For instance, when one leg is acting as a supporting leg on the ground as the user's pelvis moves, if the pelvis of the MoCap animated avatar is forced to follow the HTC pelvis tracker, it makes the foot slide on the ground and increases the risk of collision with obstacles. To minimize this problem, we implemented a foot lock algorithm. This alleviated foot sliding but not the global position accuracy of the feet.
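The re-anchoring step can be sketched as a rigid root translation (our own reconstruction; names are hypothetical): every frame, the whole MoCap skeleton is offset so that its pelvis coincides with the HTC pelvis tracker, before the foot lock is applied.

```python
import numpy as np

def reanchor_to_pelvis(joint_positions, pelvis_index, htc_pelvis):
    """Rigidly translate the whole MoCap pose so its pelvis matches the
    HTC pelvis tracker. Removes global drift, but can introduce foot
    sliding, which the foot-lock step then mitigates."""
    joint_positions = np.asarray(joint_positions, float)
    offset = np.asarray(htc_pelvis, float) - joint_positions[pelvis_index]
    return joint_positions + offset
```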
Overall, if the task requires accurate foot placement, it may be necessary to include foot trackers to position them accurately in the VE, while correctly posing all joints may not be necessary.
### Upper body animation for accurate interactions with the environment
In the pick-and-place task, UIK performed significantly worse than MoCap and FIK in terms of collision frequency. However, we found MoCap and FIK to perform similarly. This is consistent with the results of the MPPAE in the copy-pose task, for which UIK also performed worse than MoCap and FIK due to incorrect elbow positioning. For the pick-and-place task, users had to correctly position the arm to reach the goal without touching the obstacles. The incorrect elbow positioning in UIK made the task more complicated, and thus more prone to collisions. We also found that users took significantly longer to finish the task with UIK than with FIK.
Figure 5: Violin plots of metrics obtained for the copy-pose task. Asterisks represent the significance level: * (p \(<\).05), ** (p \(<\).01), *** (p \(<\).001), **** (p \(<\).0001).
| | _Overall_ | _Agency_ | _Ownership_ | _Change_ |
| --- | --- | --- | --- | --- |
| **UIK** | 4.03 (1.18) | 4.22 (1.51) | 4.19 (1.54) | 3.1 (1.34) |
| **FIK** | **4.91 (0.774)** | 5.29 (0.97) | **5.32 (0.92)** | **2.98 (1.35)** |
| **MoCap** | 4.87 (0.919) | **5.37 (0.99)** | 4.91 (1.20) | 3.08 (1.79) |

Table 5: Mean and standard deviation for the overall score of the SoE and scores of subcomponents.
Figure 6: Violin plots for the overall score of the SoE and scores for agency, ownership and change individually. Asterisks represent the significance level: * (p \(<\).05), ** (p \(<\).01), *** (p \(<\).001), **** (p \(<\).0001).
When comparing FIK and MoCap, our results suggest that the additional tracking data for the elbows in MoCap did not help the participants achieve better performance in terms of collision avoidance in the pick-and-place task, nor in the arm part of pose replication in the copy-pose task. Even though MoCap provides a more accurate elbow position, we believe that the inaccurate end-effector positions led to more collisions with the obstacles. Another explanation may be the latency of MoCap. A few participants commented that their virtual arms were less responsive when using MoCap while performing the pick-and-place task. As Waltemate et al. [36] stated, when latency increases above \(75\,ms\), users' motor performance in VR tends to decline.
Even if FIK provides less accurate poses for the elbows, its responsiveness and end-effector accuracy compensate for this. Participants can quickly avoid obstacles by adjusting the controllers' position. The result is consistent with the work by Parger et al. [24].
### Performance differences between arms and legs
The results of the MPPAE in the copy-pose task suggest that the arm poses were less precise than the leg poses. The angle error was larger in the arms than in the legs for all conditions. One possible explanation is that the range of movements a person can perform with their upper body is wider than with the lower body. We also studied whether users noticed the tracking inaccuracy by comparing the scores given in questions related to arms (Q2-Q4) and legs (Q5-Q6). The score for arms (\(M=4.60\), \(SD=1.19\)) was significantly (\(p<0.0001\)) lower than for legs (\(M=5.41\), \(SD=1.05\)) in a t-test. When performing a two-way ANOVA, adding the animation fidelity as a condition, we found no statistical difference between the scores given to the arms questions between FIK and MoCap.
The participant-reported differences in responsiveness between FIK and MoCap for arm movement were not observed for the legs during the step-over-spikes task.
Based on the results above, we recommend focusing on the upper body when animating a self-avatar, since it seems necessary to have higher-quality animations for the arms, while lower-quality animation may be enough for the legs. Therefore, as some works have suggested [27], it may not be necessary to include all tracker devices for the lower body when the task does not require accurate foot placement.
### High Sense of Agency can be achieved with a small set of tracking devices
The questionnaire data showed no statistically significant differences between FIK and MoCap. However, as mentioned before, MoCap achieved better results (\(JD\) and \(MPSAE\)) than FIK and UIK in the copy-pose task. This suggests that the SoA is not only related to the pose, but also to the interaction with the VE; e.g., we found that in the pick-and-place task MoCap did not achieve the best results.
In other words, our results suggest that one can feel the same level of control over self-avatars animated by a high-end motion capture suit with 17 IMUs or by a small set of tracking devices (one HMD, two controllers, and three trackers) with a high-quality IK solution. This finding is consistent with Galvan Debarba et al. [10], who suggested that a total of 8 trackers was enough to achieve the same plausibility illusion as an optical motion capture system with 53 retro-reflective markers. Goncalves et al. [12] suggested that increasing tracking points from 3 to 6 does not significantly improve the SoA.
More research is needed to understand how to improve the SoA, given that a higher number of trackers (MoCap) did not always improve the agency scores when compared to a full-body IK such as FIK. Other factors such as end-effector position accuracy, latency or animation smoothness may affect the users' perception.
It would also have been interesting to randomize the task order so that we could have analyzed whether the results of the SoE were affected by which task was experienced last by the participant. However, looking at the results, we observe that in the step-over-spikes task (the first task) FIK gave better quantitative results, in the pick-and-place task (the second task) FIK and MoCap performed similarly, and in the copy-pose task (the last task) MoCap had the best results. Even though the last task had better performance for MoCap, the embodiment questionnaires showed similar results for FIK and MoCap (not statistically significant), which may indicate that the questionnaire did capture the overall experience.
## 6 Conclusions and Future Work
We conducted a user study to examine the impact of the avatar's animation fidelity on user performance and the SoA. Our results suggest that the IMU-based motion capture system performed better than the IK solutions for applications that require pose accuracy. However, IK solutions outperform IMU-based motion capture systems when directly interacting with the VE. In these cases, accurate end-effector placement and low latency may be more critical than exact pose matching due to proprioception. Our study also suggests that a high-end IK solution with sparse input (6 trackers) can achieve similar levels of the SoA as an IMU-based motion capture with dense input (17 trackers). We believe these results give insight into how animation fidelity affects user performance and perception, providing future research directions toward improving self-avatar animation fidelity in VR. Our work also highlights the limitations of current technology for achieving correct self-avatar animation (such as latency and end-effector and body pose inaccuracy), and thus motivates future research to overcome these issues.
A limitation of our experiment is that the robotic avatar did not accurately match the shape of the participant. Since the avatar's limbs were much thinner than the participants' ones, and because they used hand-held controllers, self-contacts suggested by some copy-pose targets were not reproduced by the avatar (regardless of the condition). In fact, no participant referred to this issue. Further studies are required to examine the role of animation fidelity and self-contact [2] when the avatar accurately matches the user's shape.
For future research, we would like to investigate whether participants could perform better using an optical motion capture system, providing both accurate pose and global position. This new condition will allow the decoupling of the positional drift issue from the accuracy of the body pose, allowing for a more in-depth study of the perceptual results. We believe future studies that integrate hand tracking like RotoWrist [30] or data-driven methods for self-avatar animation would be valuable to provide more insight into how animation fidelity impacts the SoE and user performance in VR.
## Acknowledgments
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 860768 (CLIPE project) and from MCIN/AEI/10.13039/501100011033/FEDER, UE (PID2021-122136OB-C21). Jose Luis Ponton was also funded by the Spanish Ministry of Universities (FPU21/01927).
Figure 7: End-effectors positioning with respect to controllers for the different animation conditions. |
2301.00492 | A weighted $L_q(L_p)$-theory for fully degenerate second-order evolution
equations with unbounded time-measurable coefficients | We study the fully degenerate second-order evolution equation
$u_t=a^{ij}(t)u_{x^ix^j} +b^i(t) u_{x^i} + c(t)u+f, \quad t>0, x\in
\mathbb{R}^d$ given with the zero initial data. Here $a^{ij}(t)$, $b^i(t)$,
$c(t)$ are merely locally integrable functions, and $(a^{ij}(t))_{d \times d}$
is a nonnegative symmetric matrix with the smallest eigenvalue $\delta(t)\geq
0$. We show that there is a positive constant $N$ such that
$\int_0^{T} \left(\int_{\mathbb{R}^d} \left(|u|+|u_{xx} |\right)^{p} dx
\right)^{q/p} e^{-q\int_0^t c(s)ds} w(\alpha(t)) \delta(t) dt \leq N \int_0^{T}
\left(\int_{\mathbb{R}^d} \left|f\left(t,x\right)\right|^{p} dx \right)^{q/p}
e^{-q\int_0^t c(s)ds} w(\alpha(t)) (\delta(t))^{1-q} dt,$ where $p,q \in
(1,\infty)$, $\alpha(t)=\int_0^t \delta(s)ds$, and $w$ is a Muckenhoupt's
weight. | Ildoo Kim | 2023-01-02T00:32:43Z | http://arxiv.org/abs/2301.00492v1 | A weighted \(L_{q}(L_{p})\)-theory for fully degenerate second-order evolution equations with unbounded time-measurable coefficients
###### Abstract.
We study the fully degenerate second-order evolution equation
\[u_{t}=a^{ij}(t)u_{x^{i}x^{j}}+b^{i}(t)u_{x^{i}}+c(t)u+f,\quad t>0,x\in\mathbb{R} ^{d} \tag{0.1}\]
given with the zero initial data. Here \(a^{ij}(t)\), \(b^{i}(t)\), \(c(t)\) are merely locally integrable functions, and \((a^{ij}(t))_{d\times d}\) is a nonnegative symmetric matrix with the smallest eigenvalue \(\delta(t)\geq 0\). We show that there is a positive constant \(N\) such that
\[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}\left(|u|+|u_{xx}|\right)^{p} dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t))\delta(t)dt\] \[\leq N\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f\left(t,x\right)|^{ p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t))(\delta(t))^{1-q}dt, \tag{0.2}\]
where \(p,q\in(1,\infty)\), \(\alpha(t)=\int_{0}^{t}\delta(s)ds\), and \(w\) is a Muckenhoupt's weight.
Key words and phrases: Degenerate second-order parabolic equations, weighted \(L_{p}\)-estimates, zero initial-value problem.

2010 Mathematics Subject Classification: 35K65, 35B65, 35K15.

I. Kim has been supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2020R1A2C1A01003959).
## 1. Introduction
Needless to say, second-order partial differential equations with degenerate or unbounded coefficients have been extensively studied for a long time. To the best of our knowledge, the starting point of this study was Keldysh, Fichera, and Oleinik's work (see e.g. [23, 10, 37, 38, 39]). Moreover, after Calderon and Zygmund's work, it has been very popular to study (maximal regularity) \(L_{p}\)-theories and their generalizations to \(L_{q}(L_{p})\)-theories in harmonic analysis, Fourier analysis, and partial differential equations. For the historical works and backgrounds of \(L_{p}\)-theories and their generalizations, we refer the reader to the outstanding books [31, 32, 42, 18, 19, 21, 22]. These days, there are tons of papers handling degenerate and unbounded coefficients from various perspectives. Among recent works, we only refer the reader to [26, 11, 9, 35, 12, 17, 33, 16, 15, 27, 34, 40, 2, 36, 1, 6, 14, 20, 7, 8, 13, 28, 41, 43]. These results handle equations having degenerate or unbounded coefficients in Sobolev spaces.
In the presence of degeneracy in the equation, it is hard to expect full regularity estimates of solutions unless weights are involved in the estimates. For instance, by taking the leading coefficients \(a^{ij}(t)=0\) for all \(i,j,t\), we see that it is not possible
to obtain the unweighted maximal \(L_{p}\)-regularity
\[\int_{0}^{T}\int_{\mathbb{R}^{d}}|u_{xx}(t,x)|^{p}dtdx\leq N\int_{0}^{T}\int_{ \mathbb{R}^{d}}|f(t,x)|^{p}dtdx. \tag{1.1}\]
Hence, weights have been commonly used to control the degeneracy or unboundedness (singularity) of the coefficients. However, most results in the literature focus on degeneracy or singularity near the boundary of a domain. If we consider the whole space, a regularity gain of a solution is naturally not expected in general, due to extreme cases such as \(u_{t}=f\), which could be understood as an instance of equation (0.1) with coefficients \(a^{ij}(t)=0\) for all \(t\). Hence, when it comes to the solvability of second-order equations with degeneracy in the whole space, one typically only proves the existence and uniqueness of a weak solution without considering any regularity gain from the equation.
Nonetheless, there is a way to express an \(L_{p}\)-norm of the second derivatives of a solution \(u\) with a weight which could be singular, even in the whole space. For instance, assume that the degeneracy happens on a time interval \((a,b)\), so that \(\delta(t)=0\) for all \(t\in(a,b)\). Then we cannot expect the smoothing gain from the diffusion, and the second-order Sobolev derivatives \(u_{xx}\) may fail to exist. However, since there is the weight \(\delta(t)\) in the first line of (0.2), the inequality is still true if we understand the second line of (0.2) as an improper integral. To the best of our knowledge, this type of estimate was first introduced by the author and a collaborator in [24, 25]. In this paper, we add Muckenhoupt's weights to the estimates and extend the \(L_{p}\)-estimates to \(L_{q}(L_{p})\)-estimates with lower-order terms.
It is well-known that probabilistic methods work very powerfully for leading coefficients which are unbounded and degenerate (cf. [29, 4]). We remark that probabilistic tools play a very important role in obtaining our results. In particular, to obtain (0.2), it is necessary to understand the relation among the constant \(N\), the degeneracy, and the unboundedness of the coefficients \(a^{ij}(t)\). Maximal \(L_{p}\)-regularity estimates such as (1.1) originally came from the \(L_{p}\)-boundedness of singular integral operators. However, the exact relation among the parameters related to the coefficients is hard to obtain from singular integral theories, since all parameters are combined in a complicated way to control the singularities of the operators. We found that this relation becomes clearer by applying probabilistic representations of solutions (see Theorem 4.3).
We believe that our result could initiate various interesting weighted estimates for degenerate second-order equations with space dependent coefficients or domain problems.
This paper is organized as follows. In Section 2, we introduce our main results. A probabilistic solution representation and its application to estimating a solution \(u\) with general weights are given in Section 3. Weighted estimates for non-degenerate equations are shown in Section 4. Finally, the proof of the main theorem is given in Section 5.
We finish the introduction with notation used in the article.
* We use Einstein's summation convention throughout this paper.
* \(\mathbb{N}\) and \(\mathbb{Z}\) denote the natural number system and the integer number system, respectively. As usual \(\mathbf{R}^{d}\) stands for the Euclidean space of points \[x=\begin{pmatrix}x^{1}\\ x^{2}\\ \vdots\\ x^{d}\end{pmatrix}.\] Frequently, the coordinates of the vector \(x\) is denoted in a row form, i.e. \(x=(x^{1},\ldots,x^{d})\). We use the notation \((a^{ij})_{d\times d}\) to denote the \(d\) by \(d\) matrix whose entry in \(i\)-th row and \(j\)-th column is \(a^{ij}\). For \(i=1,...,d\), multi-indices \(\alpha=(\alpha_{1},...,\alpha_{d})\), \(\alpha_{i}\in\{0,1,2,...\}\), and functions \(u(x)\) we set \[u_{x^{i}}=\frac{\partial u}{\partial x^{i}}=D_{i}u,\quad D^{\alpha}u=D_{1}^{ \alpha_{1}}\cdot...\cdot D_{d}^{\alpha_{d}}u.\]
* \(C^{\infty}(\mathbb{R}^{d})\) denotes the space of infinitely differentiable functions on \(\mathbb{R}^{d}\). \(\mathcal{S}(\mathbb{R}^{d})\) is the Schwartz space consisting of infinitely differentiable and rapidly decreasing functions on \(\mathbb{R}^{d}\). By \(C_{c}^{\infty}(\mathbb{R}^{d})\), we denote the subspace of \(C^{\infty}(\mathbb{R}^{d})\) with the compact support.
* For \(n\in\mathbb{N}\) and \(\mathcal{O}\subset\mathbb{R}^{d}\) and a normed space \(F\), by \(C(\mathcal{O};F)\), we denote the space of all \(F\)-valued continuous functions \(u\) on \(\mathcal{O}\) having \(|u|_{C}:=\sup_{x\in O}|u(x)|_{F}<\infty\).
* For \(p\in[1,\infty)\), a normed space \(F\), and a measure space \((X,\mathcal{M},\mu)\), by \(L_{p}(X,\mathcal{M},\mu;F)\), we denote the space of all \(F\)-valued \(\mathcal{M}^{\mu}\)-measurable functions \(u\) so that \[\left\|u\right\|_{L_{p}(X,\mathcal{M},\mu;F)}:=\left(\int_{X}\left\|u(x) \right\|_{F}^{p}\mu(dx)\right)^{1/p}<\infty,\] where \(\mathcal{M}^{\mu}\) denotes the completion of \(\mathcal{M}\) with respect to the measure \(\mu\). If there is no confusion for the given measure and \(\sigma\)-algebra, we usually omit them.
* For measurable set \(\mathcal{O}\subset\mathbb{R}^{d}\), \(|\mathcal{O}|\) denotes the Lebesgue measure of \(\mathcal{O}\).
* By \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) we denote the \(d\)-dimensional Fourier transform and the inverse Fourier transform, respectively. That is, \(\mathcal{F}[f](\xi):=\int_{\mathbb{R}^{d}}e^{-ix\cdot\xi}f(x)dx\) and \(\mathcal{F}^{-1}[f](x):=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}e^{i\xi\cdot x}f(\xi)d\xi\).
* We write \(a\lesssim b\) if there is a positive constant \(N\) such that \(a\leq Nb\). The constant \(N\) may change from a location to a location, even within a line. If we write \(N=N(a,b,\cdots)\), this means that the constant \(N\) depends only on \(a,b,\cdots\). The dependence of the constant \(N\) is usually specified in the statements of theorems, lemmas, and corollaries.
## 2. Setting and main result
Throughout the paper, we fix \(d\in\mathbb{N}\) to denote the dimension of the space variable and all functions are real-valued if there is no special comment. We study the following degenerate second-order evolution equation
\[u_{t}(t,x)=a^{ij}(t)u_{x^{i}x^{j}}(t,x)+b^{i}(t)u_{x^{i}}(t,x)+c (t)u(t,x)+f(t,x),\] \[u(0,x)=0, (t,x)\in(0,T)\times\mathbb{R}^{d}. \tag{2.1}\]
We emphasize that our coefficients \(a^{ij}(t)\), \(b^{i}(t)\), and \(c(t)\) do not satisfy any regularity conditions. More importantly, our coefficients \(a^{ij}(t)\), \(b^{i}(t)\), and \(c(t)\) can be unbounded and degenerate. Here are more concrete conditions on the coefficients \(a^{ij}(t)\), \(b^{i}(t)\), and \(c(t)\).
**Assumption 2.1**.:
1. Assume that there exists a measurable mapping \(\delta(t)\) from \((0,\infty)\) to \([0,\infty)\) such that \[a^{ij}(t)\xi^{i}\xi^{j}\geq\delta(t)|\xi|^{2}\quad\forall t\in[0,\infty)\text{ and }\xi\in\mathbb{R}^{d}.\]
2. Assume that the coefficients \(a^{ij}(t)\), \(b^{i}(t)\), and \(c(t)\) are locally integrable, i.e. \[\int_{0}^{T}\left(|a^{ij}(t)|+|b^{i}(t)|+|c(t)|\right)dt<\infty\qquad\forall T \in(0,\infty)\text{ and }\forall i,j.\] (2.2)
For \(T\in(0,\infty)\) and a measurable function \(u\) on \((0,T)\times\mathbb{R}^{d}\), we say that \(u\) is locally integrable if
\[\int_{0}^{t}\int_{|x|<c}|u(s,x)|dxds<\infty\quad\forall t\in(0,T)\text{ and }\forall c>0.\]
**Definition 2.2** (Solution).: Let \(T\in(0,\infty)\) and \(f\) be a locally integrable function on \((0,T)\times\mathbb{R}^{d}\). We say that a locally integrable function \(u\) is a solution to (2.1) if for any \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{d})\),
\[(u(t,\cdot),\varphi) =\int_{0}^{t}\left(u(s,\cdot),a^{ij}(s)\varphi_{x^{i}x^{j}}+b^{i }(s)\varphi_{x^{i}}+c(s)\varphi\right)ds\] \[\quad+\int_{0}^{t}\left(f(s,\cdot),\varphi\right)ds\quad\forall t \in(0,T), \tag{2.3}\]
where \((u(t,\cdot),\varphi)\) denotes the \(L_{2}(\mathbb{R}^{d})\)-inner product, i.e.
\[(u(t,\cdot),\varphi):=\int_{\mathbb{R}^{d}}u(t,x)\varphi(x)dx.\]
_Remark 2.3_.: Due to the definition of a solution, it is obvious that
\[a^{ij}(t)u_{x^{i}x^{j}}=\frac{a^{ij}(t)+a^{ji}(t)}{2}u_{x^{i}x^{j}}.\]
Thus without loss of generality, we may assume that our coefficient matrix \((a^{ij}(t))_{d\times d}\) is nonnegative symmetric for all \(t\). Additionally, \(\delta(t)\) in Assumption 2.1(i) can be chosen by the smallest eigenvalue of \((a^{ij}(t))_{d\times d}\).
We recall the definition of Muckenhoupt's weights.
**Definition 2.4** (Muckenhoupt's weight).: For \(q\in(1,\infty)\), let \(A_{q}(\mathbb{R})\) be the class of all nonnegative and locally integrable functions \(w\) on \(\mathbb{R}\) satisfying
\[[w]_{A_{q}(\mathbb{R})}:=\sup_{-\infty<a<b<\infty}\left(\fint_{(a,b)}w(t)dt \right)\left(\fint_{(a,b)}w(t)^{-1/(q-1)}dt\right)^{q-1}<\infty,\]
where
\[\fint_{(a,b)}w(t)dt=\frac{\int_{a}^{b}w(t)dt}{b-a}.\]
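For orientation, the model case used later in Theorem 2.5 is the power weight \(w(t)=|t|^{\beta}\); a standard computation (sketched here only for intervals of the form \((0,b)\), \(b>0\)) shows that \(|t|^{\beta}\in A_{q}(\mathbb{R})\) precisely when \(-1<\beta<q-1\). Indeed,

\[\left(\fint_{(0,b)}t^{\beta}dt\right)\left(\fint_{(0,b)}t^{-\frac{\beta}{q-1}}dt\right)^{q-1}=\frac{b^{\beta}}{\beta+1}\cdot\left(\frac{b^{-\frac{\beta}{q-1}}}{1-\frac{\beta}{q-1}}\right)^{q-1}=\frac{1}{\beta+1}\left(\frac{q-1}{q-1-\beta}\right)^{q-1},\]

which is finite and independent of \(b\) exactly when \(\beta>-1\) and \(\beta<q-1\); intervals away from, or containing, the origin are handled similarly.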
Finally, we introduce our main result.
**Theorem 2.5**.: _Let \(T\in(0,\infty)\), \(p,q\in(1,\infty)\), and \(w\in A_{q}(\mathbb{R})\). Suppose that Assumption 2.1 holds. Then for any locally integrable function \(f\) on \((0,T)\times\mathbb{R}^{d}\), there is a unique solution \(u\) to equation (2.1) such that_
\[\sup_{t\in[0,T]}\left[\left(\int_{\mathbb{R}^{d}}\left|u\left(t,x\right)\right|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}\right]\leq\left[\int_{0}^{\alpha(T)}w(t)^{-\frac{1}{q-1}}dt\right]^{q-1}\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}\left|f\left(t,x\right)\right|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t))|\delta(t)|^{1-q}dt, \tag{2.4}\]
\[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}\left|u\left(t,x\right)\right|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t))\delta(t)dt\leq[w]_{A_{q}(\mathbb{R})}[\alpha(T)]^{q}\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}\left|f\left(t,x\right)\right|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t))|\delta(t)|^{1-q}dt, \tag{2.5}\]
_and_
\[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}\left|u_{xx}\left(t,x\right)\right|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t))\delta(t)dt\leq N\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}\left|f\left(t,x\right)\right|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t))|\delta(t)|^{1-q}dt, \tag{2.6}\]
_where \(\alpha(t)=\int_{0}^{t}\delta(s)ds\) and \(N\) is a positive constant depending only on \(d\), \(p\), \(q\), and \([w]_{A_{q}(\mathbb{R})}\). In particular, for any \(-1<\beta<q-1\),_
\[\sup_{t\in[0,T]}\left[\left(\int_{\mathbb{R}^{d}}\left|u\left(t,x\right)\right|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}\right]\leq\left[\frac{q-1}{q-1-\beta}\right]^{q-1}\left[\int_{0}^{T}\delta(t)dt\right]^{q-1-\beta}\times\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}\left|f\left(t,x\right)\right|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}\left|\int_{0}^{t}\delta(s)ds\right|^{\beta}(\delta(t))^{1-q}dt, \tag{2.7}\] \[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}\left|u\left(t,x\right)\right|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}\left|\int_{0}^{t}\delta(s)ds\right|^{\beta}\delta(t)dt\leq\left[|t|^{\beta}\right]_{A_{q}(\mathbb{R})}\left[\int_{0}^{T}\delta(t)dt\right]^{q}\times\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}\left|f\left(t,x\right)\right|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}\left|\int_{0}^{t}\delta(s)ds\right|^{\beta}(\delta(t))^{1-q}dt, \tag{2.8}\]
_and_
\[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}\left|u_{xx}\left(t,x\right)\right|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}\left|\int_{0}^{t}\delta(s)ds\right|^{\beta}\delta(t)dt\leq N\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}\left|f\left(t,x\right)\right|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}\left|\int_{0}^{t}\delta(s)ds\right|^{\beta}(\delta(t))^{1-q}dt, \tag{2.9}\]
_where \(N\) depends only on \(d\), \(p\), \(q\), and \(\beta\)._
A proof of Theorem 2.5 is given in Section 5.
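For the reader's convenience, we sketch how the constant in (2.7) arises from (2.4) with the choice \(w(t)=|t|^{\beta}\): since \(\alpha(T)=\int_{0}^{T}\delta(t)dt\), an elementary computation gives

\[\left[\int_{0}^{\alpha(T)}t^{-\frac{\beta}{q-1}}dt\right]^{q-1}=\left[\frac{q-1}{q-1-\beta}\,\alpha(T)^{\frac{q-1-\beta}{q-1}}\right]^{q-1}=\left[\frac{q-1}{q-1-\beta}\right]^{q-1}\left[\int_{0}^{T}\delta(t)dt\right]^{q-1-\beta},\]

valid for \(-1<\beta<q-1\), which is exactly the factor on the right-hand side of (2.7); (2.8) and (2.9) follow in the same way from (2.5) and (2.6).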
_Remark 2.6_.:
1. For \(t\in[0,T]\) such that \(\delta(t)=0\), the existence of the Sobolev derivatives \(u_{xx}(t,x)\) is not guaranteed by (2.6). Moreover, \(\delta(t)\) can be zero on a set with a positive Lebesgue measure, which is far from being a Muckenhoupt weight.
2. Since \(\delta(t)\) can be zero on a set with a positive measure, the integral \[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f\left(t,x\right)|^{p}dx\right)^{q/p} e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t))(\delta(t))^{1-q}dt\] is understood in an improper sense, i.e. \[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f\left(t,x\right)|^{p}dx \right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t))(\delta(t))^{1-q}dt\\ =\lim_{\varepsilon\downarrow 0}\int_{0}^{T}\left(\int_{ \mathbb{R}^{d}}|f\left(t,x\right)|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w (\alpha(t)+\varepsilon t)(\delta(t)+\varepsilon)^{1-q}dt.\] (2.10)
3. If \[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f(s,x)|^{p}dx\right)^{q/p}e^{-q\int_{0}^{s}c(\rho)d\rho}w(\alpha(s))|\delta(s)|^{1-q}ds<\infty,\] then the local integrability condition on \(f\) is not necessary in Theorem 2.5. In other words, the finiteness condition implies the local integrability of \(f\). To investigate this fact, let \(t\in(0,T)\) and \(c>0\). Then for any \(\varepsilon\in(0,1)\), applying Hölder's inequality and the change of variable \(\alpha(t)+\varepsilon t\to t\), we have \[\int_{0}^{t}\int_{|x|<c}|f(s,x)|dxds=\int_{0}^{t}\int_{\mathbb{R}^{d}}|f(s,x)|1_{|x|<c}dx1_{0<s<t}ds\leq N\int_{0}^{t}\left(\int_{\mathbb{R}^{d}}|f(s,x)|^{p}dx\right)^{1/p}1_{0<s<t}ds\] \[\leq N\left[\int_{0}^{t}\left(\int_{\mathbb{R}^{d}}|f(s,x)|^{p}dx\right)^{q/p}e^{-q\int_{0}^{s}c(\rho)d\rho}w(\alpha(s)+\varepsilon s)|\delta(s)+\varepsilon|^{1-q}ds\right]^{1/q}\times\left[\int_{0}^{t}e^{\frac{q}{q-1}\int_{0}^{s}c(\rho)d\rho}w^{-\frac{1}{q-1}}(\alpha(s)+\varepsilon s)(\delta(s)+\varepsilon)ds\right]^{(q-1)/q}\] \[\leq N\left[\int_{0}^{t}\left(\int_{\mathbb{R}^{d}}|f(s,x)|^{p}dx\right)^{q/p}w(\alpha(s)+\varepsilon s)|\delta(s)+\varepsilon|^{1-q}ds\right]^{1/q}\times e^{\frac{q}{q-1}\int_{0}^{t}|c(s)|ds}\left[\int_{0}^{\alpha(t)+t}w^{-\frac{1}{q-1}}(s)ds\right]^{(q-1)/q}. \tag{2.11}\] It is obvious that \(e^{\frac{q}{q-1}\int_{0}^{t}|c(s)|ds}<\infty\) since the function \(c(t)\) is locally integrable. Moreover, since \(w\in A_{q}(\mathbb{R})\), \(\int_{0}^{\alpha(t)+t}w^{-\frac{1}{q-1}}(s)ds\) is finite. Therefore, taking \(\varepsilon\to 0\) in (2.11), (formally) we obtain the local integrability of \(f\).
4. (2.4) and (2.5) hold even for \(p=1\) or \(p=\infty\) (see Corollary 3.3). Moreover, it is easy to check that (2.4) is a stronger estimate than (2.5) with the help of the definition of Muckenhoupt's weight, i.e. (2.4) implies (2.5). However, (2.5) can be slightly improved by using the probabilistic representation of a solution, in the sense that the improvement cannot be obtained from (2.4) directly in general. Indeed, formally using (3.12) with \[h_{1}(t)=w(\alpha(t))\delta(t)\] and \[h_{2}(t)=w(\alpha(t))|\delta(t)|^{1-q},\] we have \[\int_{0}^{T}\|u(t,\cdot)\|_{L_{p}}^{q}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t))\delta(t)dt\leq\int_{0}^{T}\Bigg{[}w(\alpha(t))\delta(t)\left[\int_{0}^{t}\left[w(\alpha(s))|\delta(s)|^{1-q}\right]^{-\frac{1}{q-1}}ds\right]^{q-1}\times\int_{0}^{t}\|f(s,\cdot)\|_{L_{p}}^{q}e^{-q\int_{0}^{s}c(\rho)d\rho}w(\alpha(s))|\delta(s)|^{1-q}ds\Bigg{]}dt.\]
5. Obviously, we can obtain \[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|u_{xx}\left(t,x\right)( t,x)|^{p}\,w_{0}(x)dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t)) \delta(t)dt\] \[\leq N\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f\left(t,x\right)|^ {p}\,w_{0}(x)dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t))|\delta(t)|^{1 -q}dt\] if \(w_{0}(x)\) is bounded both below and above. However, if \(w(x)\) has a degeneracy or a singularity (unboundedness), then we believe that it is impossible to add \(w_{0}\in A_{p}(\mathbb{R}^{d})\) in the estimates. In other words, generally, it is not expected to find a positive constant \(N\) such that \[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|u_{xx}\left(t,x\right)(t, x)|^{p}\,w_{0}(x)dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t)) \delta(t)dt\] \[\leq N\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f\left(t,x\right)|^ {p}\,w_{0}(x)dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t))|\delta(t)|^{1 -q}dt.\] (2.12) To claim it, assume that (2.12) holds with \(b(t)=c(t)=0\) for all \(t\). Then we get \[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|u_{xx}\left(t,x\right)(t, x)|^{p}\,w_{0}(x)dx\right)^{q/p}w(\alpha(t))\delta(t)dt\] \[\leq N\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f\left(t,x\right)|^ {p}\,w_{0}(x)dx\right)^{q/p}w(\alpha(t))|\delta(t)|^{1-q}dt.\] (2.13) Then the function \(v(t,x)=u\left(t,x+\int_{0}^{t}b(s)ds\right)\) becomes a solution to \[v_{t}(t,x)=a^{ij}(t)v_{x^{i}x^{j}}(t,x)+b^{i}(t)v_{x^{i}}(t,x)+ f\left(t,x+\int_{0}^{t}b(s)ds\right),\] \[v(0,x)=0,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\
Thus by (2.13) with \(p=q=2\) and \(\delta(t)=1\), we obtain
\[\int_{0}^{T}\int_{\mathbb{R}^{d}}\left|v_{xx}\left(t,x\right)(t,x) \right|^{2}w_{0}\left(x+\int_{0}^{t}b(s)ds\right)w(t)dxdt\] \[\leq N\int_{0}^{T}\int_{\mathbb{R}^{d}}\left|f\left(t,x\right) \right|^{2}w_{0}\left(x+\int_{0}^{t}b(s)ds\right)w(t)dxdt.\]
Observe that
\[(t,x)\mapsto w(t)w_{0}\left(x+\int_{0}^{t}b(s)ds\right)\notin A_{2}(\mathbb{R} ^{d+1})\]
unless \(\int_{0}^{t}b(s)ds\) is a constant vector uniformly for all \(t\) since \(w\) has a singularity or a degeneracy in general. Therefore we cannot expect (2.12) if there is a non-trivial coefficient \(b(t)\) in the equation. Moreover, our main tool is the probabilistic solution representation such as (3.2). We use the translation invariant property of \(L_{p}\)-norms with this representation in many parts of proofs of the main theorem. Thus (2.12) is impossible to obtain by our method even for the case \(b(t)=0\) for all \(t\) (see Remark 4.4).
4. All constants in estimates in Theorem 2.5 do not depend on the integrals of coefficients \(a^{ij}\), \(b^{i}\), and \(c\). Thus for a fixed time \(T\in(0,\infty)\), the integrability condition on the coefficients (2.2) can be relaxed to \[\int_{0}^{t}\left(|a^{ij}(t)|+|b^{i}(t)|+|c(t)|\right)dt<\infty \qquad\forall t\in(0,T)\text{ and }\forall i,j.\]
## 3. Probabilistic solution representations
In this section, we consider equations without lower-order terms first, i.e.
\[u_{t}(t,x)=a^{ij}(t)u_{x^{i}x^{j}}(t,x)+f(t,x)\qquad(t,x)\in(0,T )\times\mathbb{R}^{d}\] \[u(0,x)=0. \tag{3.1}\]
Consider a Brownian motion \(B_{t}\) in a filtered probability space \((\Omega,\mathcal{F}_{t},\mathbb{P})\) with the usual condition. It is well-known that any predictable function \(\sigma(t):\Omega\times(0,T)\rightarrow\mathbb{R}\) satisfying
\[\int_{0}^{t}|\sigma(s)|^{2}ds<\infty\quad(a.s.)\quad\forall t\in[0,t],\]
the Ito integral
\[X_{t}=\int_{0}^{t}\sigma(s)dB_{s}\]
is well-defined and Ito's formula works for the stochastic process \(f(X_{t})\) with a smooth function \(f\) (cf. [30, Chapter 5]). Moreover, our solution \(u\) to equation (3.1) can be derived from the expectation of a composition of a function \(f\) and the stochastic process \(X_{t}\). Here is a more explicit statement.
**Theorem 3.1**.: _Let \(T\in(0,\infty)\) and \(f\) be a locally integrable function on \((0,T)\times\mathbb{R}^{d}\). Assume that the function \(t\in(0,\infty)\mapsto A(t):=\left(a^{ij}(t)\right)_{d\times d}\) is locally integrable, i.e. for each \(i\) and \(j\),_
\[\int_{0}^{T}a^{ij}(t)dt<\infty\]
_and the coefficients \(\left(a^{ij}(t)\right)_{d\times d}\) are nonnegative, i.e._
\[a^{ij}(t)\xi^{i}\xi^{j}\geq 0\quad\forall\xi\in\mathbb{R}^{d}\text{ and }\forall t\in(0,T].\]
_Then there exists a unique solution \(u\) to (3.1) and this solution \(u\) is given by_
\[u(t,x)=\int_{0}^{t}\mathbb{E}\left[f(s,x+X_{t}-X_{s})\right]ds, \tag{3.2}\]
_where_
\[X_{t}:=\sqrt{2}\int_{0}^{t}\sqrt{A}^{ij}(s)dB_{s}^{j},\]
_and \(B_{t}=(B_{t}^{1},\ldots,B_{t}^{d})\) is a \(d\)-dimensional Brownian motion (Wiener process) and the integral is Ito's stochastic integral. Moreover, for any \(p\in[1,\infty]\), we have_
\[\|u(t,\cdot)\|_{L_{p}}\leq\int_{0}^{t}\|f(s,\cdot)\|_{L_{p}}ds\quad\forall t \in[0,T] \tag{3.3}\]
_and for any functions \(h_{1}\) and \(h_{2}\) on \([0,T]\) which are positive (a.e.), we have_
\[\int_{0}^{T}\|u(t,\cdot)\|_{L_{p}}^{q}h_{1}(t)dt\] \[\leq\int_{0}^{T}\left[h_{1}(t)\left[\int_{0}^{t}|h_{2}(s)|^{- \frac{1}{q-1}}ds\right]^{q-1}\int_{0}^{t}\|f(s,\cdot)\|_{L_{p}}^{q}h_{2}(s)ds \right]dt. \tag{3.4}\]
Proof.: **Part I.** (Uniqueness)
Even though the coefficients can be unbounded or degenerate, the uniqueness of a solution can be easily obtained from a classical Fourier transform method with Gronwall's inequality. To give a rigorous detail, choose a \(\varphi\) which is a nonnegative function in \(C_{c}^{\infty}(\mathbb{R}^{d})\) with a unit integral. For \(\varepsilon\in(0,1)\), denote
\[\varphi^{\varepsilon}(x):=\frac{1}{\varepsilon^{d}}\varphi\left(\frac{x}{ \varepsilon}\right).\]
Let \(u\) be a solution to
\[u_{t}(t,x)=a^{ij}(t)u_{x^{i}x^{j}}(t,x)+f(t,x)\] \[u(0,x)=0\]
and define
\[u^{\varepsilon}(t,x)=\int_{\mathbb{R}^{d}}u(t,y)\varphi^{\varepsilon}(x-y)dy.\]
It is sufficient to show that for any \(\varepsilon\in(0,1)\) and \(t\in(0,T)\), \(u(t,x)=0\) for almost every \(x\ in\mathbb{R}^{d}\). Fix \(\varepsilon\in(0,1)\), \(t\in(0,T)\), and \(x\in\mathbb{R}^{d}\). Recalling the definition of a solution and putting \(\varphi^{\varepsilon}(x-\cdot)\) in (2.3), we have
\[u^{\varepsilon}(t,x)=\int_{0}^{t}a^{ij}(s)\left(u^{\varepsilon}\right)_{x^{i} x^{j}}(s,x)ds. \tag{3.5}\]
For each \(\varepsilon\in(0,1)\) and \(t\in(0,T)\), (3.5) holds for all \(x\in\mathbb{R}^{d}\). Take the \(d\)-dimensional Fourier transform with respect to \(x\) in (3.5) and absolute value. Then we have
\[|\mathcal{F}[u^{\varepsilon}(t,\cdot)](\xi)|\leq\int_{0}^{t}\left|a^{ij}(s) \xi^{i}\xi^{j}\right|\left|\left[\mathcal{F}[u^{\varepsilon}(s,\cdot)]\left( \xi\right)\right|ds \tag{3.6}\]
for all \(\xi\in\mathbb{R}^{d}\). Note that for each \(\varepsilon\in(0,1)\) and \(\xi\in\mathbb{R}^{d}\), (3.6) holds for all \(t\in(0,T)\). Thus finally applying Gronwall's inequality, we have
\[|\mathcal{F}[u^{\varepsilon}(t,\cdot)](\xi)|=0\]
for all \(\varepsilon\in(0,1)\), \(t\in(0,T)\), and \(\xi\in\mathbb{R}^{d}\), which completes the uniqueness of a solution \(u\).
**Part II.** (Existence)
The existence of a solution \(u\) cannot be shown based on a classical Fourier transform method since the coefficients can be degenerate. Thus we choose a probabilistic method to show the existence of a solution. Since it is a well-known fact if the inhomogeneous term \(f\) is smooth (even for more general \(f\) in a \(L_{p}\)-class). However, it is not easy to find an appropriate reference which exactly fit to our setting (cf. [25, Section 3]), we give a proof with a detail. Our main tools are Ito's formula and a smooth approximation. We divide the proof into three steps.
**Step 1.** (Smooth case) In this step, we assume that for each \(t\in(0,T)\), \(f(t,x)\) is twice continuously differentiable with respect to \(x\).
Recall that for each \(t\), \(\left(a^{ij}(t)\right)_{d\times d}\) is a nonnegative symmetric matrix. Then there exists a \(d\times d\) matrix \(\sqrt{A}(t)\) such that
\[A(t)=\sqrt{A}(t)\times\sqrt{A}^{*}(t),\]
where \(\sqrt{A}^{*}\) denotes the transpose matrix of \(\sqrt{A}\). Recall
\[X_{t}=\sqrt{2}\int_{0}^{t}\sqrt{A}(s)dB_{s}.\]
We claim that the function
\[u(t,x):=\int_{0}^{t}\mathbb{E}\left[f(s,x+X_{t}-X_{s})\right]ds \tag{3.7}\]
becomes a solution to (2.1). As mentioned before, it is not easy to show that \(u\) defined in (3.2) becomes a solution to (2.1) based on an analytic method such as the Fourier transform since the degeneracy of \(a^{ij}(t)\) makes the Fourier transform of \(u\) lose the integrability. However, it is still possible to apply Ito's formula. Fix \(s\in(0,T)\) and \(x\in\mathbb{R}^{d}\). Apply Ito's formula to
\[f\left(s,x+\sqrt{2}\int_{s}^{t}\sqrt{A}(r)dB_{r}\right).\]
Then we have
\[f\left(s,x+\sqrt{2}\int_{s}^{t}\sqrt{A}(r)dB_{r}\right) =f(s,x)+\int_{s}^{t}f_{x^{i}}\left(s,\int_{s}^{\rho}\sqrt{A}(r) dB_{r}\right)dB_{\rho}\] \[\quad+\int_{s}^{t}a^{ij}(\rho)f_{x^{i}x^{j}}\left(s,x+\sqrt{2} \int_{s}^{\rho}\sqrt{A}(r)dB_{r}\right)d\rho \tag{3.8}\]
for all \(s\leq t<T\)\((a.s)\). Taking the expectations in (3.8), using the property of the Ito integral that
\[\mathbb{E}\left[\int_{s}^{t}f_{x^{i}}\left(s,\int_{s}^{\rho}\sqrt{A}(r)dB_{r} \right)dB_{\rho}\right]=0,\]
and recalling the definition of \(X_{t}\), we have
\[\begin{split}&\mathbb{E}\left[f\left(s,x+X_{t}-X_{s}\right)\right] \\ &=f(s,x)+\mathbb{E}\left[\int_{s}^{t}a^{ij}(\rho)f_{x^{i}x^{j}} \left(s,x+X_{\rho}-X_{s}\right)d\rho\right]\end{split} \tag{3.9}\]
for all \(0<s\leq t<T\). Taking the integration \(\int_{0}^{t}\cdot\,ds\) to both sides of (3.9) and applying the Fubini Theorem, we have
\[\begin{split}&\int_{0}^{t}\mathbb{E}\left[f\left(s,x+X_{t}-X_{s} \right)\right]ds\\ &=\int_{0}^{t}f(s,x)ds+\int_{0}^{t}\mathbb{E}\left[\int_{s}^{t}a^ {ij}(\rho)f_{x^{i}x^{j}}\left(s,x+X_{\rho}-X_{s}\right)d\rho\right]ds\\ &=\int_{0}^{t}f(s,x)ds+\int_{0}^{t}a^{ij}(\rho)\int_{0}^{\rho} \mathbb{E}\left[f_{x^{i}x^{j}}\left(s,x+X_{\rho}-X_{s}\right)\right]dsd\rho. \end{split}\]
Finally due to the definition of \(u\) in (3.7), we have
\[u_{t}(t,x)=a^{ij}(t)u_{x^{i}x^{j}}(t,x)+f(t,x)\]
for all \(t\in(0,T)\) and \(x\in\mathbb{R}^{d}\).
**Step 2.** (Bounded case) In this step, we assume that \(f\) is bounded.
We use Sobolev's mollifiers. For \(\varepsilon\in(0,1)\), denote
\[f^{\varepsilon}(t,x)=\int_{\mathbb{R}^{d}}f(x-\varepsilon y) \varphi(y)dy,\]
and
\[u^{\varepsilon}(t,x)=\int_{0}^{t}\mathbb{E}\left[f^{\varepsilon} (s,x+X_{t}-X_{s})\right]ds\]
for all \(t\in(0,T)\) and \(x\in\mathbb{R}^{d}\). Then by the result in Step 1, we have
\[u^{\varepsilon}_{t}(t,x)=a^{ij}(t)u^{\varepsilon}_{x^{i}x^{j}}(t,x)+f^{ \varepsilon}(t,x).\]
In particular, applying the integration by parts, for any \(\phi\in C^{\infty}_{c}(\mathbb{R}^{d})\), we have
\[(u^{\varepsilon}(t,\cdot),\phi)=\int_{0}^{t}\left(u^{\varepsilon} (s,\cdot),a^{ij}(s)\phi_{x^{i}x^{j}}\right)ds+\int_{0}^{t}\left(f^{ \varepsilon}(s,\cdot),\phi\right)ds\quad\forall t\in(0,T).\]
Since \(f\) is bounded, applying the dominate convergence theorem one can easily check that
\[u(t,x):=\int_{0}^{t}\mathbb{E}\left[f(s,x+X_{t}-X_{s})\right]ds =\limsup_{\varepsilon\downarrow 0}u^{\varepsilon}(t,x)\ (a.e.)\]
and it becomes a solution to (3.1).
**Step 3.** (General case)
It suffices to remove the condition that \(f\) is bounded. Due to the linearity of equation (3.1) and the trivial decomposition \(f(t,x)=f^{+}(t,x)-f^{-}(t,x)\), we may assume that \(f\) is nonnegative, where \(f^{+}(t,x)=\frac{|f(t,x)|+f(t,x)}{2}\) and
\(\frac{|f(t,x)|-f(t,x)}{2}\). For \(M>0\), define \(f^{M}(t,x):=f(t,x)\wedge M:=\min\{f(t,x),M\}\) and denote
\[u^{M}(t,x)=\int_{0}^{t}\mathbb{E}\left[f^{M}(s,x+X_{t}-X_{s}) \right]ds.\]
Then by the result of step 2, for any \(M>0\), we have
\[(u^{M}(t,\cdot),\phi)=\int_{0}^{t}\left(u^{M}(s,\cdot),a^{ij}(s) \phi_{x^{i}x^{j}}\right)ds+\int_{0}^{t}\left(f^{M}(s,\cdot),\phi\right)ds\quad \forall t\in(0,T). \tag{3.10}\]
It is obvious that \(u^{M}(t,x)\to u(t,x)\) for all \(t\in(0,T)\) and \(x\in\mathbb{R}^{d}\) as \(M\to\infty\). Finally, taking \(M\to\infty\) and applying the monotone and dominate convergence theorems in (3.10), we show that \(u\) is a solution to (3.1).
**Part III.** (Estimate)
We prove (3.3) and (3.4). By (3.2), the generalized Minkowski inequality, and the translation invariant property of the \(L_{p}\)-space,
\[\|u(t,\cdot)\|_{L_{p}}\leq\int_{0}^{t}\|f(s,\cdot)\|_{L_{p}}ds.\]
Moreover, applying Holder's inequality, we have
\[\int_{0}^{T}\|u(t,\cdot)\|_{L_{p}}^{q}h_{1}(t)dt\leq\int_{0}^{T}h _{1}(t)\int_{0}^{t}\|f(s,\cdot)\|_{L_{p}}^{q}h_{2}(s)ds\left[\int_{0}^{t}|h_{ 2}(s)|^{-\frac{1}{q-1}}ds\right]^{q-1}dt.\]
_Remark 3.2_.: Assume that
\[\int_{0}^{T}\|f(s,\cdot)\|_{L_{p}}dt<\infty.\]
Then due to (3.3) and the linearity of (3.1), one can easily find a continuous modification of \(u\) so that
\[\sup_{t\in[0,T]}\|u(t,\cdot)\|_{L_{p}}\leq\int_{0}^{T}\|f(s,\cdot) \|_{L_{p}}ds\quad\forall t\in[0,T].\]
**Corollary 3.3**.: _Let \(T\in(0,\infty)\), \(p\in[1,\infty]\), and \(q\in(1,\infty)\). Suppose that Assumption 2.1 holds. Additionally, assume that \(h_{1}\) and \(h_{2}\) are functions on \([0,T]\) which are positive (a.e.). Then for any locally integrable function \(f\) on \((0,T)\times\mathbb{R}^{d}\), there is a unique solution \(u\) to equation (2.1) such that_
\[\sup_{t\in[0,T]}\left[\|u(t,\cdot)\|_{L_{p}}^{q}e^{-q\int_{0}^{t} c(s)ds}\right]\] \[\leq\left[\int_{0}^{T}|h_{2}(t)|^{-\frac{1}{q-1}}dt\right]^{q-1} \int_{0}^{T}e^{-q\int_{0}^{t}c(s)ds}\|f(t,\cdot)\|_{L_{p}}^{q}h_{2}(t)dt. \tag{3.11}\]
_and_
\[\int_{0}^{T}\|u(t,\cdot)\|_{L_{p}}^{q}e^{-q\int_{0}^{t}c(s)ds}h_{1}(t)dt\] \[\leq\int_{0}^{T}\left[h_{1}(t)\left[\int_{0}^{t}|h_{2}(s)|^{-\frac{ 1}{q-1}}ds\right]^{q-1}\int_{0}^{t}e^{-q\int_{0}^{s}c(\rho)d\rho}\|f(s,\cdot)\|_ {L_{p}}^{q}h_{2}(s)ds\right]dt. \tag{3.12}\]
Proof.: Let \(v\) be a solution to the equation
\[v_{t}(t,x) =a^{ij}(t)v_{x^{i}x^{j}}(t,x)+e^{-\int_{0}^{t}c(s)ds}f\left(t,x- \int_{0}^{t}b(s)ds\right),\] \[v(0,x) =0,\hskip 113.811024pt(t,x)\in(0,T)\times\mathbb{R}^{d}.\]
Define \(U(t,x)=e^{\int_{0}^{t}c(s)ds}v\left(t,x+\int_{0}^{t}b(s)ds\right)\), where \(b(t)=(b^{1}(t),\ldots,b^{d}(t))\). Then
\[U_{t}(t,x)\] \[=c(t)U(t,x)+e^{\int_{0}^{t}c(s)ds}\left(v_{t}\left(t,x+\int_{0}^{ t}b(s)ds\right)+b^{i}(t)v_{x^{i}}\left(t,x+\int_{0}^{t}b(s)ds\right)\right)\] \[=c(t)U(t,x)\] \[\quad+e^{\int_{0}^{t}c(s)ds}\left(a^{ij}(t)v_{x^{i}x^{j}}\left(t,x+\int_{0}^{t}b(s)ds\right)+e^{-\int_{0}^{t}c(s)ds}f(t,x)\right)\] \[\quad+e^{\int_{0}^{t}c(s)ds}\left(b^{i}(t)v_{x^{i}}\left(t,x+\int _{0}^{t}b(s)ds\right)\right)\] \[=a^{ij}(t)U_{x^{i}x^{j}}(t,x)+b^{i}(t)U_{x^{i}}(t,x)+c(t)U(t,x)+f (t,x)\]
and
\[U(0,x)=0.\]
Thus by the uniqueness of a solution, the solution \(u\) to (2.1) is given by
\[u(t,x)=e^{\int_{0}^{t}c(s)ds}v\left(t,x+\int_{0}^{t}b(s)ds\right)\]
and obviously
\[v(t,x)=e^{-\int_{0}^{t}c(s)ds}u\left(t,x-\int_{0}^{t}b(s)ds\right).\]
Applying (3.4) to \(v\) and using the translation invariant property of \(L_{p}\)-norms, we obtain (3.12). Moreover, by (3.3) and Holder's inequality, for any \(0\leq t\leq T\), we have
\[e^{-q\int_{0}^{t}c(s)ds}\|u(t,\cdot)\|_{L_{p}}^{q}\] \[=\|v(t,\cdot)\|_{L_{p}}^{q}\] \[\leq\int_{0}^{t}e^{-q\int_{0}^{s}c(\rho)d\rho}\|f(s,\cdot)\|_{L_{ p}}^{q}h_{2}(s)ds\left[\int_{0}^{t}|h_{2}(s)|^{-\frac{1}{q-1}}ds\right]^{q-1}\] \[\leq\int_{0}^{T}e^{-q\int_{0}^{t}c(s)ds}\|f(t,\cdot)\|_{L_{p}}^{q} h_{2}(t)dt\left[\int_{0}^{T}|h_{2}(s)|^{-\frac{1}{q-1}}ds\right]^{q-1},\]
which obviously implies (3.11).
## 4. Estimates for non-degenerate equations
We start the section by reviewing previous weighted estimates with uniform elliptic and bounded coefficients and apply these estimates to our model equation (3.1). We denote
\[\|f\|_{L_{p,q}(T,w)}=\left(\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f(t,x)|^{p}dx \right)^{q/p}w(t)dt\right)^{1/q}.\]
As usual, \(L_{p,q}(T,w)\) denote the spaces of all locally integrable functions \(f\) on \((0,T)\times\mathbb{R}^{d}\) such that \(\|f\|_{L_{p,q}(T,w)}<\infty\).
**Theorem 4.1**.: _Let \(T\in(0,\infty)\), \(p,q\in(1,\infty)\), and \(w\in A_{q}(\mathbb{R})\). Assume that the coefficients \(a^{ij}(t)\) are uniformly bounded and elliptic, i.e. there exist positive constants \(M\) and \(\delta\) such that_
\[M|\xi|^{2}\geq a^{ij}(t)\xi^{i}\xi^{j}\geq\delta|\xi|^{2}\quad\forall\xi\in \mathbb{R}^{d}. \tag{4.1}\]
_Then for any \(f\in L_{p,q}(T,w)\), there exists a unique solution \(u\) to (3.1) such that_
\[\left(\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|u_{xx}(t,x)|^{p}dx \right)^{q/p}w(t)dt\right)^{1/q}\] \[\qquad\leq N\left(\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f(t,x)| ^{p}dx\right)^{q/p}w(t)dt\right)^{1/q}, \tag{4.2}\]
_where_
\[N=N\left(p,q,M,\delta,[w]_{A_{q}(\mathbb{R})}\right).\]
Proof.: It is a well-known result which could be easily obtained by combining some classical results. However, it is not easy to find a paper covering the result directly. Thus we refer two recent papers [5, Theorem 2.2] handling more general coefficients and [3, Theorem 2.14] studying time measurable pseudo-differential operators.
_Remark 4.2_.: Theorem 4.1 is enough for our application. However, as shown in [5, Theorem 2.2] and [3, Theorem 2.14], \(w_{0}(x)\in A_{p}(\mathbb{R}^{d})\) can be inside (4.2) if (4.1) holds. In other words, we can find a positive constant \(N\) such that such that
\[\left(\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|u_{xx}(t,x)|^{p}w_{ 0}(x)dx\right)^{q/p}w(t)dt\right)^{1/q}\] \[\qquad\leq N\left(\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f(t,x) |^{p}w_{0}(x)dx\right)^{q/p}w(t)dt\right)^{1/q},\]
where
\[N=N\left(p,q,M,\delta,[w]_{A_{q}(\mathbb{R})},[w_{0}]_{A_{p}(\mathbb{R}^{d})} \right).\]
Next we want to enhance Theorem 4.1. Specifically, we show the constant \(N\) in (4.2) is independent of the upper bound \(M\) of the coefficients \(a^{ij}(t)\) and more precise relation between the constant \(N\) and the elliptic constant \(\delta\). However, it seems to be almost impossible to prove it with only analytic tools. Thus we recall probabilistic representations of solutions to upgrade Theorem 4.1.
**Theorem 4.3**.: _Let \(T\in(0,\infty)\), \(p,q\in(1,\infty)\), and \(w\in A_{q}(\mathbb{R})\). Assume that the coefficients \(a^{ij}(t)\) are uniformly elliptic, i.e. there exists a positive constant \(\delta\) such that_
\[a^{ij}(t)\xi^{i}\xi^{j}\geq\delta|\xi|^{2}\quad\forall\xi\in\mathbb{R}^{d}. \tag{4.3}\]
_Additionally, we assume that the coefficients \(a^{ij}(t)\) are locally integrable, i.e._
\[\int_{0}^{t}a^{ij}(s)ds<\infty\qquad\forall t\in(0,T).\]
_Then for any \(f\in L_{p,q}(T,w)\), there exists a unique solution \(u\) to (3.1) such that_
\[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|u_{xx}(t,x)|^{p}dx\right)^{q/p}w(t)dt \leq\frac{N}{\delta^{q}}\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f(t,x)|^{p}dx \right)^{q/p}w(t)dt, \tag{4.4}\]
_where_
\[N=N\left(p,q,[w]_{A_{q}(\mathbb{R})}\right).\]
Proof.: **(Step 1)**\(a^{ij}(t)u_{x^{i}x^{j}}=\delta\Delta u\).
For this simple case, we use a basic scaling property of the equation. Put \(v(t,x)=u(t,\sqrt{\delta}x)\). Since \(u\) is the solution to
\[u_{t}(t,x)=\delta\Delta u(t,x)+f(t,x)\] \[u(0,x)=0,\]
we have
\[v_{t}(t,x)=\Delta v(t,x)+f(t,\sqrt{\delta}x)\] \[v(0,x)=0.\]
Thus applying (4.2), we have
\[\left(\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|v_{xx}(t,x)|^{p}dx \right)^{q/p}w(t)dt\right)^{1/q}\] \[\leq N\left(\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f(t,\sqrt{ \delta}x)|^{p}dx\right)^{q/p}w(t)dt\right)^{1/q},\]
where
\[N=N\left(p,q,[w]_{A_{q}(\mathbb{R})}\right).\]
Finally, we obtain (4.4) by the simple change of the variable \(\sqrt{\delta}x\to x\).
**(Step 2)** General \(a^{ij}(t)u_{x^{i}x^{j}}\).
To prove a general case, we use probabilistic solution representations. We may assume that
\[\int_{0}^{T}a^{ij}(t)dt<\infty\]
since the constant \(N\) in (4.4) is independent of \(T\). Additionally, due to the trivial constant extension \(a^{ij}(t)1_{t\in(0,T)}+a^{ij}(T)1_{t\geq T}\), we may assume that \(a^{ij}(t)\) is defined on \((0,\infty)\). Consider two independent \(d\)-dimensional Brownian motions \(B_{t}\) and \(W_{t}\) in a probability space \((\Omega,\mathcal{F}_{t},\mathbb{P})\). Set
\[\left(a^{ij}(t)\right)_{d\times d}=A(t)=\sqrt{A}(t)\times\sqrt{A}^{*}(t),\]
\[X_{t}:=\sqrt{2}\int_{0}^{t}\sqrt{A}^{ij}(s)dB_{s}^{j},\]
\[X_{t}^{2}:=\sqrt{2}\int_{0}^{t}\left(\sqrt{A(s)-\delta I}^{ij}\right)dB_{s}^{j},\]
\[X_{t}^{1}:=\sqrt{2}\sqrt{\delta}I^{ij}W_{t}^{j},\]
where \(I=(I^{ij})_{d\times d}\) denotes the \(d\) by \(d\) identity matrix whose diagonal entries are \(1\) and the other entries are zero and \(\sqrt{A(s)-\delta I}\) is a matrix so that
\[\sqrt{A(s)-\delta I}\sqrt{A(s)-\delta I}=A(s)-\delta I,\]
which exists due to (4.3), i.e. \(A(s)-\delta I\) is a nonnegative symmetric matrix. Then due to (3.2), the solution \(u\) is given by
\[u(t,x) =\int_{0}^{t}\mathbb{E}\left[f(s,x+X_{t}-X_{s})\right]ds\] \[=\int_{0}^{t}\mathbb{E}\left[f(s,x+X_{t}^{1}-X_{s}^{1}+X_{t}^{2}- X_{s}^{2})\right]ds, \tag{4.5}\]
where the last equality is due to the fact that two probabilistic distributions of \(X_{t}-X_{s}\) and \(X_{t}^{1}-X_{s}^{1}+X_{t}^{2}-X_{s}^{2}\) are equal for all \(0<s<t\). Moreover, due to the independence of two Brownian motions \(B_{t}\) and \(W_{t}\), we can split the random parameters in (4.5). Additionally, applying Fubini's theorem we have
\[u(t,x) =\int_{0}^{t}\mathbb{E}\left[f(s,x+X_{t}^{1}-X_{s}^{1}+X_{t}^{2}- X_{s}^{2})\right]ds\] \[=\int_{0}^{t}\mathbb{E}^{\prime}\left[\mathbb{E}\left[f(s,x+X_{t} ^{1}(\omega)-X_{s}^{1}(\omega)+X_{t}^{2}(\omega^{\prime})-X_{s}^{2}(\omega^{ \prime}))\right]\right]ds\] \[=\mathbb{E}^{\prime}\left[\int_{0}^{t}\mathbb{E}\left[f(s,x+X_{t} ^{1}(\omega)-X_{s}^{1}(\omega)+X_{t}^{2}(\omega^{\prime})-X_{s}^{2}(\omega^{ \prime}))\right]ds\right]. \tag{4.6}\]
For each fixing \(\omega^{\prime}\), the function
\[v^{\omega^{\prime}}(t,x):=\int_{0}^{t}\mathbb{E}\left[f(s,x+X_{t}^{1}(\omega) -X_{s}^{1}(\omega)-X_{s}^{2}(\omega^{\prime}))\right]ds\]
becomes a solution to the equation
\[v_{t}^{\omega^{\prime}}(t,x) =\delta\Delta v^{\omega^{\prime}}(t,x)+f(t,x-X_{t}^{2}(\omega^{ \prime}))\] \[v^{\omega^{\prime}}(0,x) =0.\]
Thus by the result in **Step 1**,
\[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|v_{xx}^{\omega^{\prime}}(t,x)|^{p}dx \right)^{q/p}w(t)dt\leq\frac{N}{\delta^{q}}\int_{0}^{T}\left(\int_{\mathbb{R}^ {d}}|f(t,x-X_{t}^{2}(\omega^{\prime}))|^{p}dx\right)^{q/p}w(t)dt, \tag{4.7}\]
where \(N\) depends only on \(p\), \(q\), \([w]_{A_{q}(\mathbb{R})}\), and \(\kappa\). Moreover, by (4.6),
\[u_{xx}(t,x)=\mathbb{E}^{\prime}\left[v_{xx}^{\omega^{\prime}}\left(t,x+X_{t} ^{2}(\omega^{\prime})\right)\right]. \tag{4.8}\]
Finally applying (4.8), (4.7), the generalized Minkowski's inequality, and Jensen's inequality, we have
\[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|u_{xx}(t,x)|^{p}dx\right)^{q /p}w(t)dt\] \[\leq N\mathbb{E}^{\prime}\left[\int_{0}^{T}\left(\int_{\mathbb{R} ^{d}}|v_{xx}^{\omega^{\prime}}(t,x+X_{t}^{2}(\omega^{\prime}))|^{p}dx\right)^{ q/p}w(t)dt\right]\] \[\leq\frac{N}{\delta^{q}}\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f (t,x)|^{p}w\left(x+k(t)\right)dx\right)^{q/p}w(t)dt.\]
_Remark 4.4_.: We hope that there is a positive constant \(N\) such that such that
\[\left(\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|u_{xx}(t,x)|^{p}dx \right)^{q/p}w(t)dt\right)^{1/q}\] \[\qquad\leq\frac{N}{\delta^{q}}\left(\int_{0}^{T}\left(\int_{ \mathbb{R}^{d}}|f(t,x)|^{p}dx\right)^{q/p}w(t)dt\right)^{1/q},\]
where
\[N=N\left(p,q,[w]_{A_{q}(\mathbb{R})},[w_{0}]_{A_{p}(\mathbb{R}^{d})}\right).\]
However, it cannot be obtained by following the proof of Theorem 4.3 since
\[\int_{\mathbb{R}^{d}}|f(t,x-X_{t}^{2}(\omega^{\prime}))|^{p}dx=\int_{\mathbb{R }^{d}}|f(t,x)|^{p}dx\quad\forall\omega^{\prime}\text{ and }\forall t\]
is used in the proof.
## 5. Proof of the main theorem
**Proof of Theorem 2.5**
Due to Theorem 3.1, the existence and uniqueness of a solution \(u\) is obvious. Moreover, (2.7), (2.8), and (2.9) can be easily obtained from (2.4), (2.5), and (2.6) since \(|t|^{\beta}\in A_{q}(\mathbb{R})\) for any \(-1<\beta_{1}<q-1\) (see [18, Example 7.1.7]). Thus it suffices to show (2.4), (2.5) and (2.6). Let \(u\) be the solution to (2.1). First we show (2.4) and (2.5). For each \(\varepsilon\in(0,1)\), we denote
\[h_{1,\varepsilon}(t)=w(\alpha(t)+\varepsilon t)\left(\delta(t)+\varepsilon\right)\]
and
\[h_{2,\varepsilon}(t)=w(\alpha(t)+\varepsilon t)|\delta(t)+\varepsilon|^{1-q}.\]
Then by (3.11) and (3.12) with a simple change of variable,
\[\sup_{t\in[0,T]}\left[\|u(t,\cdot)\|_{L_{p}}^{q}e^{-q\int_{0}^{t}c(s) ds}\right]\] \[\leq\left[\int_{0}^{T}\left|w(\alpha(t)+\varepsilon t)|\delta(t)+ \varepsilon|^{1-q}\right|^{-\frac{1}{q-1}}dt\right]^{q-1}\] \[\quad\times\int_{0}^{T}\|f(t,\cdot)\|_{L_{p}}^{q}e^{-q\int_{0}^{ t}c(s)ds}w(\alpha(t)+\varepsilon t)|\delta(t)+\varepsilon|^{1-q}(t)dt\] \[\leq\left[\int_{0}^{\alpha(T)+\varepsilon T}|w(t)|^{-\frac{1}{q- 1}}dt\right]^{q-1}\] \[\quad\times\int_{0}^{T}\|f(t,\cdot)\|_{L_{p}}^{q}e^{-q\int_{0}^{ t}c(s)ds}w(\alpha(t)+\varepsilon t)|\delta(t)+\varepsilon|^{1-q}(t)dt\]
and
\[\int_{0}^{T}\|u(t,\cdot)\|_{L_{p}}^{q}e^{-q\int_{0}^{t}c(s)ds}w( \alpha(t)+\varepsilon t)\left(\delta(t)+\varepsilon\right)dt\] \[\leq\left[\int_{0}^{T}w(\alpha(t)+\varepsilon t)\left(\delta(t)+ \varepsilon\right)\left[\int_{0}^{t}|w(\alpha(s)+\varepsilon s)|\delta(s+ \varepsilon)|^{1-q}|^{-\frac{1}{q-1}}ds\right]^{q-1}dt\right]\] \[\quad\times\int_{0}^{T}\|f(t,\cdot)\|_{L_{p}}^{q}e^{-q\int_{0}^{ t}c(s)ds}w(\alpha(t)+\varepsilon t)|\delta(t+\varepsilon)|^{1-q}dt.\]
Moreover, by taking \(\varepsilon\to 0\), we have
\[\sup_{t\in[0,T]}\left[\|u(t,\cdot)\|_{L_{p}}^{q}e^{-q\int_{0}^{ t}c(s)ds}\right]\] \[\leq\left[\int_{0}^{\alpha(T)}|w(t)|^{-\frac{1}{q-1}}dt\right]^{q -1}\int_{0}^{T}\|f(t,\cdot)\|_{L_{p}}^{q}e^{-q\int_{0}^{t}c(s)ds}w(\alpha(t)) |\delta(t)|^{1-q}(t)dt\]
and
\[\int_{0}^{T}\|u(t,\cdot)\|_{L_{p}}^{q}e^{-q\int_{0}^{t}c(s)ds}w( \alpha(t))\left(\delta(t)\right)dt\] \[\leq\left[\int_{0}^{T}w(\alpha(t))\left(\delta(t)\right)\left[ \int_{0}^{t}|w(\alpha(s))|\delta(s)|^{1-q}|^{-\frac{1}{q-1}}ds\right]^{q-1}dt\right]\] \[\quad\times\int_{0}^{T}\|f(t,\cdot)\|_{L_{p}}^{q}e^{-q\int_{0}^{ t}c(s)ds}w(\alpha(t))|\delta(t)|^{1-q}dt. \tag{5.1}\]
One may think that this limit procedure does not seem to be clear. However, it is clear if our weight \(w\) is continuous. Moreover, if \(w\) is bounded, then \(w\) can be approximated by a sequence of continuous functions with a uniform upper bound. Finally, considering \(w\wedge M\) for any positive constant \(M>0\), we can complete the limit procedure due to the monotone convergence theorem as \(M\to\infty\).
We keep going to estimate the term in the middle of (5.1). Recalling the definition of \([w]_{A_{p}(\mathbb{R})}\) and applying the change of variable \(\alpha(t):=\int_{0}^{t}\delta(s)ds\to t\), we
have
\[\int_{0}^{T}w(\alpha(t))\delta(t)\left[\int_{0}^{t}|w(\alpha(s))| \delta(s)|^{1-q}|^{-\frac{1}{q-1}}ds\right]^{q-1}dt\] \[\leq\int_{0}^{T}w(\alpha(t))\delta(t)dt\left[\int_{0}^{T}|w(\alpha (t))|^{-\frac{1}{q-1}}\delta(t)ds\right]^{q-1}\] \[\leq\int_{0}^{\alpha(T)}w(t)dt\left[\int_{0}^{\alpha(T)}|w(t)|^{- \frac{1}{q-1}}ds\right]^{q-1}\] \[\leq[w]_{A_{p}(\mathbb{R})}\left[\alpha(T)\right]^{q}.\]
By putting the above computations in (5.1), we obtain (2.5).
Next we prove (2.6). We may assume that \(f\) has a compact support in \([0,T]\times\mathbb{R}^{d}\). We divide the proof into several steps.
**(Step 1)**\(\delta(t)\geq\varepsilon\) and \(b^{i}(t)=c(t)=0\) for all \(i\) and \(t\).
We first assume that there exists a positive constant \(\varepsilon\in(0,1)\) such that \(\delta(t)\geq\varepsilon\) for all \(t\). Additionally, suppose that \(b^{i}(t)=0\) and \(c(t)=0\) for all \(t\) and \(i\) in this first step. Denote
\[\alpha(t)=\int_{0}^{t}\delta(s)ds.\]
Then \(\beta(t)\) becomes a strictly increasing function and it has the inverse \(\beta(t):[0,\infty)\to[0,\infty)\) such that
\[\beta^{\prime}(t)=\frac{1}{\alpha^{\prime}(\beta(t))}=\frac{1}{ \delta(\beta(t))}\quad\forall t\in[0,\infty). \tag{5.2}\]
Define \(v(t,x)=u(\beta(t),x)\). Then since \(u\) is a solution to (2.1),
\[v_{t}(t,x)=u_{t}(\beta(t),x)\beta^{\prime}(t)=\frac{a^{ij}(\beta(t))}{\delta( \beta(t))}v_{x^{i}x^{j}}(t,x)+\frac{f(\beta(t),x)}{\delta(\beta(t))}\]
and \(v(0,x)=0\). Note that
\[\frac{a^{ij}(\beta(t))}{\delta(\beta(t))}\xi^{i}\xi^{j}\geq|\xi|^{2}\quad \forall\xi\in\mathbb{R}^{d}.\]
In other words, \(v\) becomes the solution to
\[v_{t}(t,x)=\tilde{a}^{ij}(t)v_{x^{i}x^{j}}(t,x)+\frac{f(\beta(t),x)}{\delta(\beta(t))}\qquad(t,x)\in(0,T)\times\mathbb{R}^{d},\] \[u(0,x)=0, \tag{5.3}\]
with the coefficients \(\tilde{a}^{ij}(t)=\frac{a^{ij}(\beta(t))}{\delta(\beta(t))}\) whose elliptic constant is \(1\). Moreover, it is obvious that \(\tilde{a}^{ij}(t)\) is locally integrable. Indeed, by the change of the variable \(\beta(t)\to t\) and (5.2),
\[\int_{0}^{T}\tilde{a}^{ij}(t)dt=\int_{0}^{\beta(T)}a^{ij}(t)dt<\infty.\]
Thus applying (4.4), we have
\[\left(\int_{0}^{T_{0}}\left(\int_{\mathbb{R}^{d}}|v_{xx}(t,x)|^{p}dx \right)^{q/p}w(t)dt\right)^{1/q}\] \[\qquad\leq N\left(\int_{0}^{T_{0}}\left(\int_{\mathbb{R}^{d}}\left| \frac{f(\beta(t),x)}{\delta(\beta(t))}\right|^{p}dx\right)^{q/p}w(t)dt\right)^ {1/q}, \tag{5.4}\]
where
\[N=N\left(p,q,[w_{0}]_{A_{q}(\mathbb{R})},\kappa\right)\]
and \(T_{0}\) is a constant so that \(\beta(T_{0})=T\). By considering the change of variables \(\beta(t)\to t\) in (5.4), we finally obtain
\[\left(\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|u_{xx}(t,x)|^{p}dx \right)^{q/p}w(\alpha(t))\delta(t)dt\right)^{1/q}\] \[\qquad\lesssim\left(\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f(t,x )|^{p}dx\right)^{q/p}w(\alpha(t))(\delta(t))^{1-q}dt\right)^{1/q}. \tag{5.5}\]
**(Step 2) \(b^{i}(t)=c(t)=0\)** for all \(i\) and \(t\).
In this step, we remove the condition \(\delta(t)\geq\varepsilon\). For any \(\varepsilon\in(0,1)\), we can rewrite (2.1) as
\[u_{t}(t,x)=(a^{ij}(t)+\varepsilon I_{d\times d})u_{x^{i}x^{j}}(t,x)+f(t,x)-\varepsilon\Delta u,\] \[u(0,x)=0,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(t,x) \in(0,T)\times\mathbb{R}^{d},\]
where \(I_{d\times d}\) denotes the \(d\) by \(d\) identity matrix whose diagonal entries are \(1\) and the other entries are zero. Thus applying (5.5), we have
\[\left(\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|u_{xx}(t,x)|^{p}dx \right)^{q/p}w(\alpha_{\varepsilon}(t))(\delta(t)+\varepsilon)dt\right)^{1/q}\] \[\qquad\lesssim\left(\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f(t,x )|^{p}dx\right)^{q/p}w(\alpha_{\varepsilon}(t))(\delta(t)+\varepsilon)^{1-q} dt\right)^{1/q}\] \[\qquad\qquad+\left(\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}| \varepsilon\Delta u(t,x)|^{p}dx\right)^{q/p}w(\alpha_{\varepsilon}(t))(\delta (t)+\varepsilon)^{1-q}dt\right)^{1/q}, \tag{5.6}\]
where \(\alpha_{\varepsilon}(t)=\int_{0}^{t}(\delta(s)+\varepsilon)ds\). Observe that
\[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|\varepsilon\Delta u(t,x) |^{p}w(x+k(t))dx\right)^{q/p}w_{0}(\alpha_{\varepsilon}(t))(\delta(t)+ \varepsilon)^{1-q}dt\] \[=\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|\Delta u(t,x)|^{p}w(x+k (t))dx\right)^{q/p}w_{0}(\alpha_{\varepsilon}(t))(\delta(t)+\varepsilon) \left(\frac{\varepsilon}{\delta(t)+\varepsilon}\right)^{q}dt,\] \[\qquad\qquad\qquad\qquad\qquad\qquad(\delta(t)+\varepsilon) \left(\frac{\varepsilon}{\delta(t)+\varepsilon}\right)^{q}\leq(\delta(t))^{1-q}\]
and
\[(\delta(t)+\varepsilon)\left(\frac{\varepsilon}{\delta(t)+\varepsilon} \right)^{q}\to 0\text{ as }\varepsilon\to 0,\]
where \(0^{1-q}:=\infty\). Thus due to the dominate convergence theorem and the definition of the integral in (2.10), taking \(\varepsilon\to 0\) in (5.6), we have
\[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|u_{xx}(t,x)|^{p}dx\right)^ {q/p}w_{0}(\alpha(t))\delta(t)dt\] \[\lesssim\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}|f(t,x)|^{p}dx \right)^{q/p}w_{0}(\alpha(t))(\delta(t))^{1-q}dt. \tag{5.7}\]
**(Step 3)** (General case).
Let \(v\) be a solution to the equation
\[v_{t}(t,x)=a^{ij}(t)v_{x^{i}x^{j}}(t,x)+e^{-\int_{0}^{t}c(s)ds}f \left(t,x-\int_{0}^{t}b(s)ds\right),\] \[v(0,x)=0,\hskip 113.811024pt(t,x)\in(0,T)\times\mathbb{R}^{d}.\]
The as shown in the proof of Corollary 3.3, the solution \(u\) is given by
\[u(t,x)=e^{\int_{0}^{t}c(s)ds}v\left(t,x+\int_{0}^{t}b(s)ds\right)\]
and obviously
\[v(t,x)=e^{-\int_{0}^{t}c(s)ds}u\left(t,x-\int_{0}^{t}b(s)ds\right).\]
Applying (5.7) to \(v\), we have
\[\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}\left|e^{-\int_{0}^{t}c(s) ds}u_{xx}\left(t,x-\int_{0}^{t}b(s)ds\right)(t,x)\right|^{p}dx\right)^{q/p}\] \[\quad\times w(\alpha(t))\delta(t)dt\] \[\lesssim\int_{0}^{T}\left(\int_{\mathbb{R}^{d}}\left|f\left(t,x- \int_{0}^{t}b(s)ds\right)\right|^{p}dx\right)^{q/p}e^{-q\int_{0}^{t}c(s)ds}w( \alpha(t))(\delta(t))^{1-q}dt.\]
Finally, the translation \(x\to x+\int_{0}^{t}b(s)ds\) leads us to (2.6).
## 6. Acknowledgement
I would like to thank prof. Kyeong-Hun Kim for careful reading and suggesting valuable comments. |
2310.19682 | Planar parallel phonon Hall effect and local symmetry breaking | Y-kapellasite [Y3Cu9(OH)19Cl8] is a frustrated antiferromagnetic insulator
which remains paramagnetic down to a remarkably low N\'eel temperature of about
2 K. Having studied this material in the paramagnetic regime, in which phonons
are the only possible heat carriers, we report the observation of a planar
parallel thermal Hall effect coming unambiguously from phonons. This is an
advantage over the Kitaev quantum spin liquid candidates {\alpha}-RuCl3 and
Na2Co2TeO6 where in principle other heat carriers can be involved [1-4]. As it
happens, Y-kapellasite undergoes a structural transition attributed to the
positional freezing of a hydrogen atom below about 33 K. Above this transition,
the global crystal symmetry forbids the existence of a planar parallel signal -
the same situation as in Na2Co2TeO6 and cuprates [3-5]. This points to the
notion of a local symmetry breaking at the root of the phonon Hall effect. In
this context, the advantage of Y-kapellasite over Na2Co2TeO6 (with high levels
of Na disorder and stacking faults) and cuprates (with high levels of disorder
coming from dopants and oxygen vacancies) is its clean structure, where the
only degree of freedom available for local symmetry breaking is this hydrogen
atom randomly distributed over six equivalent positions above 33 K. This
provides a specific and concrete case for the general idea of local symmetry
breaking leading to the phonon Hall effect in a wide range of insulators. | Quentin Barthélemy, Étienne Lefrançois, Lu Chen, Ashvini Vallipuram, Katharina M. Zoch, Cornelius Krellner, Pascal Puphal, Louis Taillefer | 2023-10-30T16:02:14Z | http://arxiv.org/abs/2310.19682v1 | # Planar parallel phonon Hall effect and local symmetry breaking
###### Abstract
Y-kapellasite [Y\({}_{3}\)Cu\({}_{9}\)(OH)\({}_{19}\)Cl\({}_{8}\)] is a frustrated antiferromagnetic insulator which remains paramagnetic down to a remarkably low Neel temperature of about \(2\) K. Having studied this material in the paramagnetic regime, in which phonons are the only possible heat carriers, we report the observation of a planar parallel thermal Hall effect coming unambiguously from phonons. This is an advantage over the Kitaev quantum spin liquid candidates \(\alpha\)-RuCl\({}_{3}\) and Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\) where in principle other heat carriers can be involved [1, 2, 3, 4]. As it happens, Y-kapellasite undergoes a structural transition attributed to the positional freezing of a hydrogen atom below about \(33\) K. Above this transition, the global crystal symmetry forbids the existence of a planar parallel signal - the same situation as in Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\) and cuprates [3, 4, 5]. This points to the notion of a local symmetry breaking at the root of the phonon Hall effect. In this context, the advantage of Y-kapellasite over Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\) (with high levels of Na disorder and stacking faults) and cuprates (with high levels of disorder coming from dopants and oxygen vacancies) is its clean structure, where the only degree of freedom available for local symmetry breaking is this hydrogen atom randomly distributed over six equivalent positions above \(33\) K. This provides a specific and concrete case for the general idea of local symmetry breaking leading to the phonon Hall effect in a wide range of insulators.
Rather unexpectedly, C. Strohm and colleagues discovered the phonon Hall effect (PHE) through conventional thermal Hall effect measurements in a paramagnetic dielectric garnet, in which phonons are the only possible heat carriers [6]. Given a longitudinal thermal gradient \(\Delta T_{\rm i}\) produced by the heat flux \(q_{\rm i}\) set at one end of the sample along the direction \(i\), an orthogonal temperature gradient \(\Delta T_{\rm j}\) develops along the direction \(j\) when a magnetic field \(B_{\rm k}\) is applied _fully normal_ to the \((ij)\) plane along the direction \(k\), originating in a finite \(Q_{\rm ijk}\) Righi-Leduc tensor component: \(q_{\rm i}=Q_{\rm ijk}\Delta T_{\rm j}B_{\rm k}\), see Fig. 1**a**. In a conventional metal, where electrons carry heat in tandem with phonons, the thermal Hall conductivity \(\kappa_{\rm ij}\) naturally includes a sizeable Lorentz force-like contribution, directly related to the electrical Hall effect through the Wiedemann-Franz law. Conversely, observing a finite \(\kappa_{\rm ij}\) in an insulator, where all possible heat carriers are necessarily neutral, is quite counter-intuitive.
Over the past few years, a number of theoretical and experimental studies focusing on magnetic insulators highlighted that phonons are not the only possible neutral heat carriers and that collective spin excitations such as magnons - conventional spin waves - may also generate a thermal Hall effect [7; 8; 9; 10; 11; 12; 13]. Naturally, the coupling of these magnetic quasiparticles to the magnetic field appears less cryptic. A tantalising aspect is that most studies conducted on frustrated quantum magnets reported a sizeable thermal Hall signal presented as compelling evidence for the emergence of long-sought exotic spin excitations while assuming a marginal or null PHE.
The most recent and salient examples are those of two Kitaev quantum spin liquid candidates, namely \(\alpha\)-RuCl\({}_{3}\) and Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\), in which a startling planar parallel thermal Hall effect was detected when a magnetic field \(B_{\rm i}\) is applied _fully parallel_ to the heat flux [1; 2; 3; 4], see Fig. 1**a**. Until now, this finite \(Q_{\rm iji}\) was systematically attributed to unconventional magnetic edge states rather than phonons. In particular, regarding the highly scrutinised \(\alpha\)-RuCl\({}_{3}\), there is a heated debate to decide between the two scenarios proposed so far: chiral Majorana fermions from the gapped Kitaev quantum spin liquid [1], expected to yield a quantized temperature dependence of \(\kappa_{\rm ij}\), versus topological magnons [2], expected to yield a steeper temperature dependence, typical of bosons.
Yet, given that phonons contribute to the conventional thermal Hall effect in these two materials [14; 15], one wonders if they also contribute to the planar parallel thermal Hall effect. If so, this would cast doubt on the putative evidence for exotic spin excitations.
In our recent studies of Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\)[4] and of the Mott insulating (antiferromagnetic) cuprate Nd\({}_{2-\rm x}\)Ce\({}_{\rm x}\)CuO\({}_{4}\) (with \(\rm x=0.04\)) [5], we argued that the similar temperature dependence of the phonon-dominated longitudinal thermal conductivity and the planar parallel thermal Hall conductivity is a strong indication that phonons do contribute to the planar parallel thermal Hall effect. To support our interpretations, it is now crucial to provide an unambiguous observation of the planar parallel phonon Hall effect in a material where phonons are the only possible heat carriers.
Beyond the nature of the involved heat carriers, the symmetry requirements for the planar parallel thermal Hall effect constitute a key issue. In Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\) for instance, it is forbidden by the global crystal symmetries of the \(P6_{3}22\) space group even though finite signals were reported by two independent groups, using samples
from different sources [3; 4]. There is nonetheless a striking difference in magnitude and temperature dependence between the two sets of results, which suggests that mechanisms related to the sample quality and history, e.g., defects and domains, whether structural or magnetic, are responsible for the planar parallel thermal Hall effect. In Nd\({}_{2-\mathrm{x}}\)Ce\({}_{\mathrm{x}}\)CuO\({}_{4}\) (with \(\mathrm{x}=0.04\)), it is also forbidden by the global crystal symmetries of the \(I4/mmm\) space group [5]. We thus wonder if local symmetry breaking, around positional disorder or extrinsic defects, is at the root of the planar parallel thermal Hall effect.
In the present study, we investigate the thermal Hall effect in clean, phase pure single crystals of Y-kapellasite [\(\mathrm{Y}_{3}\)Cu\({}_{9}\)(OH)\({}_{19}\)Cl\({}_{8}\)], a kagome-based insulating frustrated antiferromagnet which does not display quantum spin liquid physics nor exotic spin excitations down to the lowest temperatures reached experimentally [16]. While it undergoes two structural transitions at \(T_{\mathrm{S1}}\simeq 33\) K and \(T_{\mathrm{S2}}\simeq 13\) K, it remains a simple paramagnet down to a very low Neel temperature \(T_{\mathrm{N}}\simeq 2\) K above which phonons are the only possible heat carriers. In the high-temperature crystal structure (above \(T_{\mathrm{S1}}\)) defined by the space group \(R\overline{3}\) (\(148\)), the inter-plane hydrogen is randomly distributed over six equivalent positions, see Fig. 1**b**. The transition at \(T_{\mathrm{S2}}\) is still to be clarified but the transition at \(T_{\mathrm{S1}}\) is attributed to the positional freezing of this atom and potentially leads to a crystal structure defined by the space group \(P1\) with three different twin domains. This system thus provides a unique platform in which one can investigate the PHE and correlate symmetry considerations with the temperature dependence of specific thermal transport coefficients.
We performed thermal transport measurements on three high-quality single crystals of Y-kapellasite, labelled S1, S2 and S3. We focused on five distinct configurations listed in Table 1 so as to assess the \(Q_{123}\) (conventional \(\kappa_{12}\), in S1), \(Q_{213}\) (conventional \(\kappa_{21}\), in S2), \(Q_{212}\) (planar parallel \(\kappa_{21}\), in S2), \(Q_{211}\) (planar orthogonal \(\kappa_{21}\), in S2) and \(Q_{321}\) (conventional \(\kappa_{32}\), in S3) Righi-Leduc tensor components, where \(1\), \(2\) and \(3\) respectively denote the \(a\), \(b^{\star}\) and \(c\) orthonormal lattice vectors of the \(R\overline{3}\) structure, see Fig. 1**b**.
First examining the longitudinal thermal conductivities associated with the three corresponding heat flux directions, \(\kappa_{11}\) (in S1), \(\kappa_{22}\) (in S2) and \(\kappa_{33}\) (in S3) as displayed in Fig. 2, we obtain confirmation that there are no other heat carriers than phonons at all considered temperatures from \(80\) down to \(2\) K. The temperature dependence of the three \(\kappa_{\mathrm{ii}}\) is similar in shape, with clear anomalies at \(T_{\mathrm{S1}}\) and \(T_{\mathrm{S2}}\) reflecting the two structural transitions, see Fig. 2**a**. Actually, the three curves are almost identical up to a multiplicative factor which includes potential variations in crystalline quality and geometric factor uncertainties on top of any real anisotropy, with \(\kappa_{22}:\kappa_{11}:\kappa_{33}\simeq 1.00:1.43:5.30\) for a perfect match at \(25\) K, see Fig. 2**b**. Here, the fact that \(\kappa_{33}\) is about five times smaller than \(\kappa_{11}\) and \(\kappa_{22}\) reflects the quasi two-dimensional nature of the structure. Note that any contribution from mobile spin excitations, necessarily contained within the kagome planes and thus unable to carry heat along \(c\), would preclude such scaling and would be substantially affected by the magnetic field. On the contrary, \(\kappa_{22}\) and \(\kappa_{33}\) are found to be field independent up to \(15\) T while \(\kappa_{11}\) displays a minute field dependence below \(T_{\mathrm{S1}}\), see Fig. 2**a**. Most probably, the latter is related to the scattering of phonons by the paramagnetic spin fluctuations
on a temperature range where short-range spin correlations (or paramagnons) start to develop [17]. Increasing the magnetic field (here applied along \(c\)) gaps out some of these spin fluctuations, which in turn slightly enhances \(\kappa_{11}\).
It should also be pointed out that the three \(\kappa_{\rm ii}\) are of remarkably modest magnitude, e.g., far below the low-temperature boundary scattering limit, and similar to the universal thermal conductivity of amorphous solids, see Fig. 2**b**. In particular, when considering the evolution of \(\kappa_{\rm ii}\) with increasing temperature, a key feature is the slow and almost linear rise observed after a plateau-like regime ending at \(T_{\rm S1}\). Such behaviour was explained at the theoretical level in terms of anharmonic interactions between phonons and fractons, which are short-scale vibrational excitations [18]. Here, it seems logical to attribute the glass-like thermal conductivity to a strong scattering of phonons by the randomly distributed inter-plane hydrogen. While the latter freezes below \(T_{\rm S1}\), \(\kappa_{\rm ii}\) drops less rapidly with decreasing temperature: it remains more or less constant down to \(T_{\rm S2}\), at which it experiences a minor improvement (most noticeable in \(\kappa_{33}\)) before vanishing.
Now turning to the thermal Hall effect results presented in Fig. 3, we obtain small but finite \(Q_{123}\), \(Q_{213}\) and \(Q_{212}\) components, while the \(Q_{211}\) component is found to be virtually null over the whole temperature range. As for the \(Q_{321}\) component, we were not able to obtain satisfactory data owing to the poor \(\kappa_{33}\) which prevented us from generating any detectable thermal Hall gradient \(\Delta T_{2}\) given our experimental sensitivity. In other words, a clear PHE is observed in two of the conventional configurations and in the planar parallel configuration.
We first concentrate on the \(Q_{123}\) component, which corresponds to the conventional thermal Hall conductivity \(\kappa_{12}\), see Fig. 3**a**. It has a positive sign and a visible response to both structural transitions with a well-defined maximum position at \(T_{\rm S1}\) and a fuzzier peak centred around \(T_{\rm S2}\). Within reproducibility tolerance, it scales linearly with the magnetic field up to \(17\) T, in stark contrast to expectations for the magnon Hall effect which declines with increasing magnetic field [9]. This weak conventional PHE translates into a thermal Hall angle \(\kappa_{12}/\kappa_{22}/B_{3}\simeq 7\times 10^{-5}\) T\({}^{-1}\) at \(T_{\rm S1}\) in \(15\) T. It is instructive to note that \(\kappa_{12}\) and \(\kappa_{22}\) have a different temperature dependence. While a shape similarity between \(\kappa_{\rm ij}\) and \(\kappa_{\rm jj}\) is often considered as an evidence that a single type of heat carriers (hence phonons) is involved [4, 14], we demonstrate here that this criterion is a condition that may be sufficient but by no means necessary. For instance, when considering the evolution with decreasing temperature, \(\kappa_{12}\) increases down to \(T_{\rm S1}\) whereas \(\kappa_{22}\) decreases. Then, down to \(T_{\rm S2}\), \(\kappa_{12}\) decreases more significantly than \(\kappa_{22}\), which remains more or less constant. The latter decrease reveals that \(\kappa_{12}\) is not enhanced by the positional freezing of the inter-plane hydrogen at low temperatures.
We now consider the \(Q_{213}\) component, which corresponds to the conventional thermal Hall conductivity \(\kappa_{21}\). According to the symmetry-adapted form of the Righi-Leduc tensor for the space groups \(R\overline{3}\) and \(P1\), \(Q_{213}\) and \(Q_{123}\) are Onsager-Casimir reciprocal, with \(Q_{213}=-Q_{123}\), see Ext. Data Tables 1, 2. As depicted in Fig. 3**b**, we confirm that \(Q_{213}\) mirrors \(Q_{123}\) with a negative sign and no significant magnitude difference. Note that this beautiful verification was carried out using two distinct samples, which underlines the reliability and reproducibility of our measurements.
Finally, we discuss the two most exciting components \(Q_{212}\) and \(Q_{211}\), which correspond to the planar parallel and planar orthogonal thermal Hall conductivities \(\kappa_{21}\) when the magnetic field is applied either parallel or orthogonal to the heat flux within the \((ab)\) plane. We immediately rule out a spurious occurrence of \(Q_{212}\) arising from a contamination by \(Q_{213}\). On the one hand, \(Q_{212}\) is found to have the opposite sign, positive, and displays a different temperature dependence which may in turn suggest a different origin, see Fig. 3**b**. In particular, we notice that \(Q_{212}\) increases with decreasing temperature from \(T_{\rm S1}\) to \(T_{\rm S2}\) whereas \(|Q_{213}|\) decreases like \(Q_{123}\). On the other hand, we took great care to check the alignment of the magnetic field and estimate that it deviates from the \((ab)\) plane towards \(c\) by \(\pm 5\)\({}^{\circ}\) at most. This corresponds to a maximal contamination of about \(9\) % of \(|Q_{213}|\), well below the observed \(Q_{212}\), with for instance \(0.09\times 0.81\simeq 0.07\) mW.K\({}^{-1}\).m\({}^{-1}\) versus \(0.61\) mW.K\({}^{-1}\).m\({}^{-1}\) (more than eight times larger) at \(T_{\rm S2}\).
Detecting a finite \(Q_{212}\) at temperatures above \(T_{\rm S1}\) comes as a real surprise because it is forbidden in the symmetry-adapted form of the Righi-Leduc tensor for the space group \(R\overline{3}\), see Ext. Data Tables 1. Its existence necessarily implies the occurrence of special symmetry breakings. We therefore listed all the space groups resulting from lowering the symmetries of \(R\overline{3}\) down to \(P1\), see Ext. Data Fig. 1. Among all possible subgroups, only \(P\overline{1}\) and \(P1\) have symmetries compatible with a finite \(Q_{212}\) (note that a finite \(Q_{211}\) is also allowed), see Ext. Data Tables 1, 2. Therefore, in contrast to the cases of \(Q_{123}\) and \(Q_{213}\), the inter-plane hydrogen may play a crucial role in the establishment of \(Q_{212}\). We propose that a local symmetry breaking from \(R\overline{3}\) to \(P\overline{1}\) or \(P1\) resulting from the random distribution of this atom is sufficient for the planar parallel PHE to emerge above \(T_{\rm S1}\). It is then only slightly enhanced when some if not all of the broken symmetries become global below \(T_{\rm S1}\) and peaks around \(T_{\rm S2}\), see Fig. 3**b**.
For now, it is also not clear why \(Q_{211}\) remains vanishingly small but this observation, in line with other planar thermal Hall effect studies [1; 2; 4], may prove useful in subsequent theoretical developments to clarify the precise mechanisms responsible for the planar PHE. For comparison, \(Q_{\rm jj}\) and \(Q_{\rm jj}\) are correspondingly reported to be finite and null in both \(\alpha\)-RuCl\({}_{3}\) (in agreement with the symmetries of the space group \(C2/m\)) and Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\) (although a finite \(Q_{\rm jj}\) is there forbidden by the symmetries of the space group \(P6_{3}22\)).
In summary, our thermal transport measurements on Y-kapellasite yield a paradigm shift in the study of the thermal Hall effect in insulators. First, we report the first unambiguous detection of a planar parallel PHE. Our findings thus prompt a second look at the interpretations put forward in previous studies on quantum spin liquid candidates and open up a wider range of scenarios in which the phonon contribution cannot be neglected. Second, we now have a specific and concrete case for the general idea of local symmetry breaking at the root of the PHE in a wide range of insulators.
## Methods
Structure and magnetic model of Y-kapellasite.Y-kapellasite is a derivative of the emblematic quantum spin liquid candidate herbertsmithite ZnCu\({}_{3}\)(OH)\({}_{6}\)Cl\({}_{2}\), discovered as an interesting by-product of unsuccessful doping attempts when substituting divalent zinc for trivalent yttrium [19, 20]. In this material, copper spins \(S=1/2\) decorate a slightly distorted kagome lattice with yttrium located close to the centre of hexagons, see Fig. 1**b**, hence the kapellasite denomination (Zn-kapellasite is a polymorph of herbertsmithite, the difference between the two structures being the position of zinc, either within or in between the kagome planes). Contrary to zinc, yttrium has a significantly larger ionic radius than copper, which precludes any intersite mixing and renders the system immune to the troublesome magnetic defects typical of herbertsmithite, Zn-kapellasite or Zn-barlowite. Hydroxyl groups and chlorine constitute thick diamagnetic layers separating the kagome planes and recent _ab initio_ density functional theory combined with inelastic neutron scattering results confirmed the quasi two-dimensional nature of the magnetic lattice, demonstrating that three intra-plane antiferromagnetic Heisenberg couplings between nearest neighbours dominate over all other possible intra- or inter-plane couplings between further neighbours [16, 21]. These main three couplings \(J\simeq J_{\bigcirc}\simeq 140\) K and \(J^{{}^{\prime}}\simeq 63\) K result in an original anisotropic variant of the standard nearest neighbour Heisenberg model (recovered for \(J=J_{\bigcirc}=J^{{}^{\prime}}\)), breaking translational symmetry of the kagome lattice but retaining six-fold rotational symmetry around hexagons, see Fig. 1**b**. Owing to the considerable frustration produced by the lattice geometry and the competition between the latter three antiferromagnetic terms, the material remains paramagnetic down to \(T_{\rm N}\simeq 2\) K, below which a coplanar long-range order with propagation vector \(Q=(1/3,1/3)\) sets in, as predicted theoretically for the ground state [21]. This magnetic transition, resulting in a remarkably weak ordered moment of about \(1/30\)\(\mu_{\rm B}\), initially remained elusive when focusing on polycrystalline samples [20] until it was later demonstrated in the case of large phase-pure single crystals prepared via optimal synthesis [16]. For that matter, every improvement in the synthesis procedure triggered a thorough reappraisal of the exact stoichiometry and crystallographic structure which were eventually settled through inductively coupled plasma mass spectroscopy, gas extraction and detailed neutron diffraction measurements from \(40\) K down to \(65\) mK [16]. Y-kapellasite crystallises in a trigonal rhombohedral structure defined by the space group \(R\overline{3}\) (\(148\)), in which the inter-plane hydrogen randomly occupies six equivalent positions, thus locally breaking the global crystal symmetry, see Fig. 1**b**. Upon cooling, two structural transitions occuring at \(T_{\rm S1}\simeq 33\) K and \(T_{\rm S2}\simeq 13\) K were detected through specific heat, thermal expansion and \({}^{35}\)Cl NMR measurements on single crystals while they remained unnoticed in prior studies of polycrystalline samples [20]. Strikingly, these transitions only reflect in the neutron diffraction data through a clear intensity increase for some Bragg peaks. 
Preserving the same space group and lattice parameters down to the lowest temperatures does not affect the refinement quality. The transition at \(T_{\rm S1}\) is attributed to the positional freezing of the inter-plane hydrogen, thereby potentially leading to a global crystal symmetry breaking from \(R\overline{3}\) to \(P1\) (1) with three different twin domains. This is compatible with the complex \({}^{35}\)Cl quadrupolar line splitting reported below \(T_{\rm S1}\). Further terahertz magnetometry measurements revealed that these structural transitions are accompanied by the building up of short-range spin correlations (or paramagnons) although long-range magnons only emerge below \(T_{\rm N}\) as highlighted with inelastic neutron scattering [16, 17].
Optimal synthesis of Y-kapellasite.The crystal growth of Y-kapellasite was originally reported in Reference [19]. Subsequently, as described in Reference [17], the synthesis was improved to obtain inclusion-free, large, bulk single crystals
by means of a horizontal external gradient method in thick-walled quartz ampoules with a wall thickness of \(2.5\)-\(3\) mm. Growth is achieved by slowly dissolving CuO in a YCl\({}_{3}\)-H\({}_{2}\)O solution and transporting it to the cold end. This is realised in a three-zone furnace with a gradient of \(25\)\({}^{\circ}\)C and a temperature of \(240\)\({}^{\circ}\)C at the hot end, over a length of \(20\) cm. The gradient was optimised because too low temperatures yielded a phase mixture of Y-kapellasite and clinoatacamite. The phase-pure, optically transparent single crystals have an average size of \(3\times 3\times 1\) mm\({}^{3}\) up to \(3\times 3\times 3\) mm\({}^{3}\) when grown over several weeks. Their orientation is facilitated by their hexagonal plaquette shape, with \(c\) perpendicular to the hexagonal faces and \(a\) (respectively \(b^{\star}\)) perpendicular (respectively parallel) to the hexagonal edge. The samples S1, S2 and S3 examined here are from the same batch as the single crystals investigated in References [16, 17]. S1 and S2 were selected among the thinnest and measured as grown, while S3 was cut along \(c\) in one of the thickest.
Thermal transport measurements.Thermal transport measurements were performed using a standard steady-state method. A constant heat flux \(q_{i}\) is injected at one end of the sample along the direction \(i\) while the other end is thermally sunk to a heat bath at temperature \(T_{0}\) (either a copper or lithium fluoride block), see Fig. 1**a**. The heat flux is generated by applying an electric current through a strain gauge whose resistance (of about \(5\) k\(\Omega\)) marginally depends on the temperature and magnetic field. Assuming a one-dimensional heat flow and an isotropic medium, the longitudinal thermal gradient \(\Delta T_{\rm i}\) is measured between two contacts separated by a distance \(l\) along the direction \(i\), see Fig. 1**a**. This gradient is assessed using either two Cernox sensors (calibrated _in situ_ against a reference Cernox) or two type-E thermocouples. The longitudinal thermal conductivity \(\kappa_{\rm ii}\) is given by \(\kappa_{\rm ii}=q_{i}/(\alpha\Delta T_{\rm i})\), where \(\alpha\) is a geometric factor determined by the cross section \(wt\) (\(w\): width, \(t\): thickness) divided by \(l\). In presence of an applied magnetic field \(B_{\rm x}\), with \(x\in\{i,j,k\}\), the orthogonal thermal gradient \(\Delta T_{\rm j}\) is measured between two contacts separated by a distance \(w\) along the direction \(j\), see Fig. 1**a**. This gradient is assessed using a differential type-E thermocouple and antisymmetrised between the two field polarities to remove any contamination by the longitudinal gradient: \(\Delta T_{\rm j}(B_{\rm x})=[\Delta T_{\rm j}(+B_{\rm x})-\Delta T_{\rm j}(-B _{\rm x})]/2\). The thermal Hall conductivity \(\kappa_{\rm ij}\) is given by \(\kappa_{\rm ij}=l\kappa_{\rm ij}\Delta T_{\rm j}/(w\Delta T_{\rm i})\), which implies a two-step computation because \(\kappa_{\rm ij}\) and \(\Delta T_{\rm j}\) are not measured simultaneously. Error bars in the figures represent one standard deviation. In our mountings, all connections between the samples and the heat baths, temperature sensors and strain gauges consisted of gold and silver wires with a \(17\) to \(100\)\(\mu\)m diameter attached using silver paste. The contacts had geometries (\(l\times w\times t\)) \(1859(60)\times 2239(78)\times 334(10)\)\(\mu\)m\({}^{3}\) on S1, \(558(108)\times 1040(60)\times 85(4)\)\(\mu\)m\({}^{3}\) on S2 and \(418(70)\times 600(99)\times 137(38)\)\(\mu\)m\({}^{3}\) on S3. Note that a quantitative determination of \(\kappa_{\rm ij}\) requires measuring both \(\kappa_{\rm ij}\) and \(\Delta T_{\rm j}\) in the same sample. Here, the longitudinal thermal conductivity \(\kappa_{11}\) and the thermal Hall gradient \(\Delta T_{2}\) were measured in sample S1 while the longitudinal thermal conductivity \(\kappa_{22}\) and the thermal Hall gradient \(\Delta T_{1}\) were measured in sample S2. Owing to the modest anisotropy between \(\kappa_{11}\) and \(\kappa_{22}\) (if any beyond potential variations in crystalline quality and geometric factor uncertainties), see Fig. 2, we assumed that \(\kappa_{11}\simeq\kappa_{22}\) to compute \(\kappa_{21}\) so as to compare \(\kappa_{12}\) and \(\kappa_{21}\) in a meaningful way.
AcknowledgementsWe are grateful to S. Fortier for extensive technical support and acknowledge valuable discussions with N. Gauthier and J.A. Quilliam. C.K. acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG) through TRR \(288\)-\(422213477\) (project A03). L.T. acknowledges support from the Canadian Institute for Advanced Research (CIFAR) as a Fellow and funding from the Institut Quantique, the Natural Sciences and Engineering Research Council of Canada (NSERC, PIN \(123817\)), the Fonds de Recherche du Quebec - Nature et Technologies (FRQNT), the Canada Foundation for Innovation (CFI), and a Canada research chair. This research was undertaken thanks in part to funding from the Canada First Research Excellence Fund.
Q.B., P.P. and L.T. conceived and led the project. P.P., K.M.Z. and C.K. grew the single crystals. Q.B., E.L., L.C., and A.V. carried out the thermal transport measurements and analysis. Q.B. wrote the manuscript with feedback from all the authors.
The authors declare no competing interests.
Correspondence and requests for materials should be addressed to Q.B., P.P. or L.T. |
2306.15986 | Some results concerning the valences of (super) edge-magic graphs | A graph $G$ is called edge-magic if there exists a bijective function
$f:V\left(G\right) \cup E\left(G\right)\rightarrow \left\{1, 2, \ldots ,
\left\vert V\left( G\right) \right\vert +\left\vert E\left( G\right)
\right\vert \right\}$ such that $f\left(u\right) + f\left(v\right) +
f\left(uv\right)$ is a constant (called the valence of $f$) for each $uv\in
E\left( G\right) $. If $f\left(V \left(G\right)\right) =\left\{1, 2, \ldots ,
\left\vert V\left( G\right) \right\vert \right\}$, then $G$ is called a super
edge-magic graph. A stronger version of edge-magic and super edge-magic graphs
appeared when the concepts of perfect edge-magic and perfect super edge-magic
graphs were introduced. The super edge-magic deficiency $
\mu_{s}\left(G\right)$ of a graph $G$ is defined to be either the smallest
nonnegative integer $n$ with the property that $G \cup nK_{1}$ is super
edge-magic or $+ \infty$ if there exists no such integer $n$. On the other
hand, the edge-magic deficiency $ \mu\left(G\right)$ of a graph $G$ is the
smallest nonnegative integer $n$ for which $G\cup nK_{1}$ is edge-magic, being
$ \mu\left(G\right)$ always finite. In this paper, the concepts of (super)
edge-magic deficiency are generalized using the concepts of perfect (super)
edge-magic graphs. This naturally leads to the study of the valences of
edge-magic and super edge-magic labelings. We present some general results in
this direction and study the perfect (super) edge-magic deficiency of the star
$K_{1,n}$. | Yukio Takahashi, Francesc A. Muntaner-Batle, Rikio Ichishima | 2023-06-28T07:54:49Z | http://arxiv.org/abs/2306.15986v1 | # Some new results concerning the valences of (super) edge-magic graphs
###### Abstract.
A graph \(G\) is called edge-magic if there exists a bijective function \(f:V\left(G\right)\cup E\left(G\right)\rightarrow\left\{1,2,\ldots,\left|V \left(G\right)\right|+\left|E\left(G\right)\right|\right\}\) such that \(f\left(u\right)+f\left(v\right)+f\left(uv\right)\) is a constant (called the valence of \(f\)) for each \(uv\in E\left(G\right)\). If \(f\left(V\left(G\right)\right)=\left\{1,2,\ldots,\left|V\left(G\right)\right|\right\}\), then \(G\) is called a super edge-magic graph. A stronger version of edge-magic and super edge-magic graphs appeared when the concepts of perfect edge-magic and perfect super edge-magic graphs were introduced. The super edge-magic deficiency \(\mu_{s}\left(G\right)\) of a graph \(G\) is defined to be either the smallest nonnegative integer \(n\) with the property that \(G\cup nK_{1}\) is super edge-magic or \(+\infty\) if there exists no such integer \(n\). On the other hand, the edge-magic deficiency \(\mu\left(G\right)\) of a graph \(G\) is the smallest nonnegative integer \(n\) for which \(G\cup nK_{1}\) is edge-magic, being \(\mu\left(G\right)\) always finite. In this paper, the concepts of (super) edge-magic deficiency are generalized using the concepts of perfect (super) edge-magic graphs. This naturally leads to the study of the valences of edge-magic and super edge-magic labelings. We present some general results in this direction and study the perfect (super) edge-magic deficiency of the star \(K_{1,n}\).
Key words and phrases:perfect (super) edge-magic labeling, perfect (super) edge-magic deficiency, valence, graph labeling 2020 Mathematics Subject Classification: Primary 05C78
## 1. Introduction
Unless stated otherwise, the graph-theoretical notation and terminology used here will follow Chartrand and Lesniak [2]. In particular, the _vertex set_ of a graph \(G\) is denoted by \(V\left(G\right)\), while the _edge set_ of \(G\) is denoted by \(E\left(G\right)\).
For the sake of brevity, we will use the notation \(\left[a,b\right]\) for the interval of integers \(x\) such that \(a\leq x\leq b\). Kotzig and Rosa [14] initiated the study of what they called magic valuations. This concept was later named edge-magic labelings by Ringel and Llado [22], and this has become the popular term. A graph \(G\) is called _edge-magic_ if there exists a bijective function \(f:V\left(G\right)\cup E\left(G\right)\rightarrow\left[1,\left|V\left(G\right) \right|+\left|E\left(G\right)\right|\right]\) such that \(f\left(u\right)+f\left(v\right)+f\left(uv\right)\) is a constant (called the _valence_\(\operatorname{val}\left(f\right)\) of \(f\)) for each \(uv\in E\left(G\right)\). Such a function is called an _edge-magic labeling_. More recently, they have also been referred to as edge-magic total labelings by Wallis [24].
Enomoto et al. [3] introduced a particular type of edge-magic labelings, namely, super edge-magic labelings. They defined an edge-magic labeling of a graph \(G\) with the additional property that \(f\left(V\left(G\right)\right)=\left[1,\left|V\left(G\right)\right|\right]\) to be a _super edge-magic labeling_. Thus, a _super edge-magic graph_ is a graph that admits a super edge-magic labeling.
## 1. Introduction
Let \(G\) be a graph and \(G\) be a graph. A _graph_\(G\) is a graph \(G\) if and only if there exists a bijective function \(f:V\left(G\right)\rightarrow\left[1,\;\left|V\left(G\right)\right|\right]\) such that the set
\[S=\left\{f\left(u\right)+f\left(v\right):uv\in E\left(G\right)\right\}\]
consists of \(\left|E\left(G\right)\right|\) consecutive integers. In such a case, \(f\) extends to a super edge-magic labeling of \(G\) with valence \(k=\left|V\left(G\right)\right|+\left|E\left(G\right)\right|+s\), where \(s=\min\left(S\right)\) and
\[S=\left[k-\left(\left|V\left(G\right)\right|+\left|E\left(G\right)\right| \right),k-\left(\left|V\left(G\right)\right|+1\right)\right].\]
The _low characteristic_ and _high characteristic_ of a super edge-magic graph \(G\) are defined by \(\gamma\left(G\right)=\min\left(S\right)\) and \(\Gamma\left(G\right)=\max\left(S\right)\), respectively, where \(S\) is the set as in Lemma 1.1. These concepts will prove to be useful in this paper later.
For every graph \(G\), Kotzig and Rosa [14] proved that there exists an edge-magic graph \(H\) such that \(H=G\cup nK_{1}\) for some nonnegative integer \(n\). This motivated them to define the edge-magic deficiency of a graph. The _edge-magic deficiency_\(\mu\left(G\right)\) of a graph \(G\) is the smallest nonnegative integer \(n\) for which \(G\cup nK_{1}\) is edge-magic. Inspired by Kotzig and Rosa's notion, the concept of _super edge-magic deficiency_\(\mu_{s}\left(G\right)\) of a graph \(G\) was analogously defined in [6] as either the smallest nonnegative integer \(n\) with the property that \(G\cup nK_{1}\) is super edge-magic or \(+\infty\) if there exists no such integer \(n\). Thus, the super edge-magic deficiency of a graph \(G\) is a measure of how "close" (" far ") \(G\) is to (from) being super edge-magic.
An alternative term exists for the super edge-magic deficiency, namely, the vertex dependent characteristic. This term was coined by Hedge and Shetty [10]. In [10], they gave a construction of polygons having the same angles and distinct sides using the result on the super edge-magic deficiency of cycles provided in [7].
Noting that for a super edge-magic labeling \(f\) of a graph \(G\) with order \(p\) and size \(q\), the valence \(k\) is given by the formula:
\[k=\frac{\sum_{u\in V\left(G\right)}\deg(u)f(u)+\sum_{i=p+1}^{p+q}i}{q}.\]
Lopez et al. [16] defined the set
\[S_{G}=\Bigg{\{}\frac{\sum_{u\in V\left(G\right)}\deg(u)g(u)+\sum_{i=p+1}^{p+q} i}{q}\text{:}\]
\[\text{the function }g:V\left(G\right)\rightarrow\left[1,p\right]\text{ is bijective}\Bigg{\}}.\]
If \(\left[\min S_{G}\right]\leq\left\lfloor\max S_{G}\right\rfloor\), then the _super edge-magic interval_ of \(G\) is the set
\[I_{G}=\left[\left\lceil\min S_{G}\right\rceil,\left\lfloor\max S_{G}\right\rfloor\right]\text{.}\]
The _super edge-magic set_ of \(G\) is
\[\sigma_{G}=\left\{k\in I_{G}:\text{ there exists a super edge-magic labeling of }G\text{ with valence }k\text{ }\right\}.\]
Lopez et al. called a graph \(G\)_perfect super edge-magic_ if \(I_{G}=\sigma_{G}\). They showed that the family of paths \(P_{n}\) is a family of perfect super edge-magic graphs with \(|I_{P_{n}}|=1\) if \(n\) is even and \(|I_{P_{n}}|=2\) if \(n\) is odd, and raise the question of whether there is an infinite family of graphs \(\left(F_{1},F_{2},\ldots\right)\) such that each member of the family is perfect super edge-magic and \(\lim_{i\to+\infty}\)\(|I_{F_{i}}|=+\infty\). They showed that graphs \(G\cong C_{p^{k}}\odot\overline{K_{n}}\), where \(p>2\) is a prime number and \(\odot\) denotes the corona product, is such a family. For more detailed information on this matter, see [16].
For an edge-magic labeling \(f\) of a graph \(G\), the valence \(k\) is given by the formula:
\[k=\frac{\sum_{u\in V\left(G\right)}\deg(u)f(u)+\sum_{e\in E\left(G\right)}f \left(e\right)}{|E\left(G\right)|}.\]
In [18] Lopez et al. introduced, in a similar way of what we have seen so far for the case of super edge-magic labelings, the concepts of edge-magic interval, edge-magic set and perfect edge-magic graph.
For a graph \(G\), define the set
\[T_{G}=\Bigg{\{}\frac{\sum_{u\in V\left(G\right)}\deg(u)g(u)+\sum_{e\in E\left( G\right)}g\left(e\right)}{|E\left(G\right)|}\text{:}\]
\[\text{the function }g:V\left(G\right)\cup E\left(G\right)\to\left[1,|V\left(G \right)|+|E\left(G\right)|\right]\text{ is bijective}\Bigg{\}}.\]
If \(\left\lceil\min T_{G}\right\rceil\leq\left\lfloor\max T_{G}\right\rfloor\), then the _edge-magic interval_ of \(G\) is the set
\[\lambda_{G}=\left[\left\lceil\min T_{G}\right\rceil,\left\lfloor\max T_{G} \right\rfloor\right].\]
The _edge-magic set_ of \(G\) is
\[\tau_{G}=\left\{k\in\lambda_{G}:\text{ there exists an edge-magic labeling of }G\text{ with valence }k\text{ }\right\}.\]
Lopez et al. called a graph \(G\)_perfect edge-magic_ if \(\lambda_{G}=\tau_{G}\).
Motivated by the concepts of edge-magic and super edge-magic deficiencies together with the concepts of perfect edge-magic and perfect super edge-magic graphs, we introduce the concepts of perfect edge-magic deficiency and perfect super edge-magic deficiency next.
The _perfect edge-magic deficiency_\(\mu_{p}\left(G\right)\) of a graph \(G\) is defined to be the smallest nonnegative integer \(n\) with the property that \(G\cup nK_{1}\) is perfect edge-magic or \(+\infty\) if there exists no such integer \(n\). On the other hand, the _perfect super edge-magic deficiency_\(\mu_{p}^{s}\left(G\right)\) of a graph \(G\) is defined to be the smallest nonnegative integer \(n\) with the property that \(G\cup nK_{1}\) is perfect super edge-magic or \(+\infty\) if there exists no such integer \(n\).
In [18] Lopez et al. defined the _irregular crown_\(C(n;j_{1},j_{2},\ldots,j_{n})=(V,E)\), where \(n>2\) and \(j_{i}\geq 0\) for all \(i\in[1,n]\) as follows: \(V=\left\{v_{i}:i\in[1,n]\right\}\cup V_{1}\cup V_{2}\cup\cdots\cup V_{n}\), where \(V_{k}=\left\{v_{k}^{i}:i\in[1,j_{k}]\right\}\), if \(j_{k}\neq 0\) and \(V_{k}=\emptyset\) if \(j_{k}=0\) for each \(k\in[1,n]\) and \(E=\left\{v_{i}v_{i+1}:i\in[1,n-1]\right\}\cup\left\{v_{1}v_{n}\right\}\cup \left(\cup_{k=1,j_{k}\neq 0}^{n}\{v_{k}v_{k}^{l}:k\in[1,j_{k}]\}\right)\). In particular, they denoted the graph \(C_{m}^{n}\cong C(m;j_{1},j_{2},\ldots,j_{m})\), where \(j_{2i-1}=n\) for each \(i\in[1,(m+1)/2]\), and \(j_{2i}=0\) for each \(i\in[1,(m-1)/2]\) using the notation \(C_{m}^{n}\). They proved that the graphs \(C_{3}^{n}\) and \(C_{5}^{n}\) are perfect edge-magic for all integers \(n>1\). In the same paper, they also proved that if \(m=p^{k}\) when \(p\) is a prime number
and \(k\in\mathbb{N}\), then the graph \(C_{m}\odot\overline{K_{n}}\) is perfect edge-magic for all positive integers \(n\).
Lopez et al. [17] defined the concepts of \(\mathfrak{F}^{k}\)-family and \(\mathfrak{E}^{k}\)-family of graphs as follows. The infinite family of graphs \(\left(F_{1},F_{2},\ldots\right)\) is an \(\mathfrak{F}^{k}\)_-family_ if each element \(F_{n}\) admits exactly \(k\) different valences for super edge-magic labelings, and \(\lim_{n\rightarrow+\infty}\lvert I(F_{n})\rvert=+\infty\). The infinite family of graphs \(\left(F_{1},F_{2},\ldots\right)\) is an \(\mathfrak{E}^{k}\)_-family_ if each element \(F_{n}\) admits exactly \(k\) different valences for edge-magic labelings, and \(\lim_{n\rightarrow+\infty}\lvert J(F_{n})\rvert=+\infty\).
An easy observation from the results found in [5] and independently in [24], is that \(\left(K_{1,2},K_{1,3},\ldots\right)\) is an \(\mathfrak{F}^{2}\)-family and \(\mathfrak{E}^{3}\)-family. They posed the following two problems: for which positive integers \(k\) is it possible to find \(\mathfrak{F}^{k}\)-families and \(\mathfrak{E}^{k}\)-families? Their main results in [17] are that an \(\mathfrak{F}^{k}\)-family exits for each \(k=1\), \(2\) and \(3\), and an \(\mathfrak{E}^{k}\)-family exits for each \(k=3\), \(4\) and \(7\).
The following inequality was found by Enomoto et al. [3].
**Theorem 1.1**.: _If \(G\) is a super edge-magic graph, then_
\[\left\lvert E\left(G\right)\right\rvert\leq 2\left\lvert V\left(G\right) \right\rvert-3.\]
The following result found in [13] provides a sufficient condition for a super edge-magic graph to contain a triangle.
**Theorem 1.2**.: _If \(G\) is a super edge-magic graph with_
\[\left\lvert E\left(G\right)\right\rvert=2\left\lvert V\left(G\right)\right\rvert -3\text{ or }2\left\lvert V\left(G\right)\right\rvert-4\text{,}\]
_then \(G\) contains a triangle._
## 2. General results
In this section, we will establish some general results on super edge-magic graphs. We begin with the following result, which relates the order, girth and the cardinality of \(\sigma_{G}\).
**Theorem 2.1**.: _Let \(G\) be a super edge-magic graph of size \(2\left\lvert V\left(G\right)\right\rvert-3\) and girth \(g\left(G\right)=3\). Then \(\left\lvert\sigma_{G}\right\rvert=1\)._
Proof.: Consider such a graph \(G\). If \(f\) is a super edge-magic labeling of \(G\), then
\[\left\{f\left(u\right)+f\left(v\right):uv\in E\left(G\right)\right\}=\left[3, 2\left\lvert V\left(G\right)\right\rvert-1\right]\]
and
\[\operatorname{val}\left(f\right)=\left\lvert V\left(G\right)\right\rvert+ \left\lvert E\left(G\right)\right\rvert+\min\left\{f\left(u\right)+f\left(v \right):uv\in E\left(G\right)\right\}=3\left\lvert V\left(G\right)\right\rvert\]
by Lemma 1.1. Therefore, \(\sigma_{G}=\left\{3\left\lvert V\left(G\right)\right\rvert\right\}\), completing the proof.
The next concept will prove to be useful throughout this paper. Let \(f\) be a super edge-magic labeling of a graph \(G\). Then the _complementary labeling_\(\overline{f}\) of \(f\) is defined as
\[\overline{f}\left(x\right)=\left\{\begin{array}{ll}\left(\left\lvert V \left(G\right)\right\rvert+1\right)-f\left(x\right)&\text{if }x\in V\left(G\right)\\ \left(2\left\lvert V\left(G\right)\right\rvert+\left\lvert E\left(G\right) \right\rvert+1\right)-f\left(x\right)&\text{if }x\in E\left(G\right)\text{.}\end{array}\right.\]
It is true that if \(f\) is a super edge-magic labeling of a graph \(G\), then \(\overline{f}\) is also a super edge-magic labeling of \(G\).
Next, we show an example of a super edge-magic labeling of a graph \(G\) with \(\left\lvert E\left(G\right)\right\rvert=2\left\lvert V\left(G\right)\right\rvert -4\) and labeled with two different super edge-magic labelings
and \(\overline{f}\) (see Figure 1 and Figure 2, respectively), producing two different valences. On the other hand, if \(f\) is an edge-magic labeling of a graph \(G\), then the complementary labeling \(\overline{f}\) of \(f\) is the labeling \(\overline{f}:V\left(G\right)\cup E\left(G\right)\rightarrow\left[1,\left|V\left(G \right)\right|+\left|E\left(G\right)\right|\right]\) defined by
\[\overline{f}\left(x\right)=\left(\left|V\left(G\right)\right|+\left|E\left(G \right)\right|+1\right)-f\left(x\right)\]
if \(x\in V\left(G\right)\cup E\left(G\right)\). Observe that the complementary labeling of an edge-magic labeling is also an edge-magic labeling.
**Theorem 2.2**.: _Let \(G\) be a super edge-magic graph of size \(2\left|V\left(G\right)\right|-4\). Then \(\left|\sigma_{G}\right|=2\)._
Proof.: Let \(G\) be a super edge-magic graph with \(\left|E\left(G\right)\right|=2\left|V\left(G\right)\right|-4\) and \(f\) be a super edge-magic labeling of \(G\). Then it is clear that
\[\left\{f\left(u\right)+f\left(v\right):uv\in E\left(G\right)\right\}=\left[3,2\left|V\left(G\right)\right|-2\right]\]
or
\[\left\{f\left(u\right)+f\left(v\right):uv\in E\left(G\right)\right\}=\left[4, 2\left|V\left(G\right)\right|-1\right].\]
Since each one of these two possibilities will provide a different valence, it follows that \(\left|\sigma_{G}\right|\leq 2\). However, we can guarantee that \(\left|\sigma_{G}\right|=2\). To see this, notice that the only way to get \(3\) as an induced edge sum is by joining vertices, which have been labeled \(1\) and \(2\) by a super edge-magic labeling \(f\) of \(G\). Now, if we consider the super edge-magic labeling \(\overline{f}\), the vertices to which \(f\) assignes labels \(1\) and \(2\) become labeled \(\left|V\left(G\right)\right|\) and \(\left|V\left(G\right)\right|-1\) by \(\overline{f}\), respectively. Moreover, since they are adjacent, it follows that there is an edge with induced sum \(2\left|V\left(G\right)\right|-1\) when we consider \(\overline{f}\). Thus, \(f\) and \(\overline{f}\) have valences
\[\operatorname{val}\left(f\right)=3+\left|V\left(G\right)\right|+2\left|V\left( G\right)\right|-4=3\left|V\left(G\right)\right|-1\]
Figure 1. A super edge-magic labeling \(f\) with valence \(11\)
Figure 2. A super edge-magic labeling \(\overline{f}\) with valence \(12\)
and
\[\operatorname{val}\left(\overline{f}\right)=\left(2\left|V\left(G\right)\right|-1 \right)+\left|V\left(G\right)\right|+1=3\left|V\left(G\right)\right|,\]
respectively. In a similar way, if \(f\) is a labeling that has an induced edge sum \(2\left|V\left(G\right)\right|-1\), then vertices labeled \(\left|V\left(G\right)\right|\) and \(\left|V\left(G\right)\right|-1\) must be adjacent. Therefore, taking the complement of this labeling, we obtain the desired result.
The following result was shown in [13] (see also [15] for a different approach to this problem using the product \(\otimes_{h}\) defined on digraphs in [8]).
**Theorem 2.3**.: _There exists an infinite family of super edge-magic graphs \(G\) of size \(2\left|V\left(G\right)\right|-5\) and girth \(g\left(G\right)=5\)._
We next show the following result.
**Theorem 2.4**.: _Let \(G\) be a super edge-magic graph with \(\left|E\left(G\right)\right|=2\left|V\left(G\right)\right|-5\) and girth \(g\left(G\right)\geq 5\). Then \(\left|\sigma_{G}\right|=1\)._
Proof.: Let \(G\) be a graph of girth \(g\left(G\right)\geq 5\) and with a super edge-magic labeling \(f\). Consider the set \(S=\left\{f\left(u\right)+f\left(v\right):uv\in E\left(G\right)\right\}\). Then there are three possibilities for \(S\), namely,
\[S=\left[3,2\left|V\left(G\right)\right|-3\right],S=\left[4,2\left|V\left(G \right)\right|-2\right]\text{ or }S=\left[5,2\left|V\left(G\right)\right|-1\right].\]
Assume, to the contrary, that \(S=\left[3,2\left|V\left(G\right)\right|-3\right]\). Now, the only way to get the induced edge sum \(3\) is by joining vertices labeled \(1\) and \(2\). The only way to get the induced edge sum \(4\) is by joining vertices labeled \(1\) and \(3\). Hence, \(G\) has edges joining those vertices. To get the induced edge sum \(5\), we cannot use vertices labeled \(2\) and \(3\), since this choice would force a triangle and would have girth \(3\). Thus, we need to include an edge joining vertices labeled by \(1\) and \(4\). For a similar reason, to get the induced edge sum \(6\), we need to include an edge joining vertices labeled by \(1\) and \(5\). Any other choice would produce a triangle. We proceed in this manner until we get the induced edge sum \(\left|V\left(G\right)\right|+1\). At this point, we have a star of order \(\left|V\left(G\right)\right|\) (see Figure 3). Thus, we cannot add any new edge to get the induced edge sum \(\left|V\left(G\right)\right|+2\), since any new edge would produce a triangle. In a similar way, we can see that the set \(S=\left[5,2\left|V\left(G\right)\right|-1\right]\) cannot take place. Therefore, the only possibility left is \(S=\left[4,2\left|V\left(G\right)\right|-2\right]\), and the result follows.
To conclude this section, we present the following result.
**Theorem 2.5**.: _Let \(G\) be a graph with \(\left|E\left(G\right)\right|\geq\left|V\left(G\right)\right|\). If \(G\) is a super edge-magic graph of girth \(g\left(G\right)\geq 4\), then \(3+\left|E\left(G\right)\right|+\left|V\left(G\right)\right|\) is not a valence for any super edge-magic labeling of \(G\)._
Figure 3. A star \(G\) of order \(\left|V\left(G\right)\right|=p\) with induced edge sums
Proof.: Let \(G\) be a super edge-magic graph with \(\left|E\left(G\right)\right|\geq\left|V\left(G\right)\right|\), girth \(g\left(G\right)\geq 4\) and with a super edge-magic labeling \(f\). Assume, to the contrary, that \(\operatorname{val}\left(f\right)=3+\left|V\left(G\right)\right|+\left|E\left(G \right)\right|\). Then
\[3+\left|V\left(G\right)\right|+\left|E\left(G\right)\right|=\min\left\{f\left(u \right)+f\left(v\right):uv\in E\left(G\right)\right\}+\left|V\left(G\right) \right|+\left|E\left(G\right)\right|,\]
implying that
\[\min\left\{f\left(u\right)+f\left(v\right):uv\in E\left(G\right)\right\}=3.\]
Now, the only way to obtain an edge with induced sum \(3\) is by joining vertices labeled \(1\) and \(2\). Also, the only way to obtain an edge with induced sum \(4\) is by joining vertices labeled \(1\) and \(3\). Hence, if we want to avoid triangles, we need to induce the edge sums with labels \(\left\{1,4\right\},\left\{1,5\right\},\ldots,\left\{1,\left|V\left(G\right) \right|\right\}\) to obtain the induced edge sums \(5,6,\ldots,\left|V\left(G\right)\right|+1\), since any other choice would produce a triangle. Moreover, notice that any choice for the edge with induced sum \(\left|V\left(G\right)\right|+2\) will produce a triangle, which contradicts the fact that \(g\left(G\right)\geq 4\).
## 3. New results involving stars
This section is devoted to study the valences for the edge-magic and super edge-magic labelings of the unions of stars and isolated vertices. We begin by presenting a result concerning with isomorphic labelings. To do this, we introduce the concept of isomorphic (super) edge-magic labelings. For a graph \(G\), assume that \(f\) and \(g\) are two (super) edge-magic labelings of \(G\). We denote by \(G_{f}\) and \(G_{g}\) the graph \(G\), where the vertices of \(G\) take the name of the labels assigned by the labelings \(f\) and \(g\), respectively. Then we compute the adjacency matrices of \(G_{f}\) and \(G_{g}\), respectively, and we denote them by \(A\left(G_{f}\right)\) and \(A\left(G_{g}\right)\), where the rows and columns are placed in increasing order from left to right and top to bottom, respectively. Notice that the rows and columns are not necessarily labeled with consecutive integers, since the vertex labels of an edge-magic labeling are not necessarily consecutive integers. Then labelings \(f\) and \(g\) are _isomorphic labelings_, written \(f\cong g\), if \(A\left(G_{f}\right)=A\left(G_{g}\right)\).
For example, consider the graph \(G=C_{4}\cup K_{1}\) with two edge-magic labelings \(f\) and \(g\) illustrated in Figure 4.
Then
\[A\left(G_{f}\right)=A\left(G_{g}\right)=\begin{array}{c}1\begin{array}{ cccc}1&2&3&5&8\\ 2\end{array}\\ \begin{array}{c}1\\ 2\end{array}\\ \begin{array}{c}1\\ 3\end{array}\\ \begin{array}{c}0\\ 1\end{array}\\ 0\end{array}\\ \begin{array}{c}1\\ 0\end{array}\\ \begin{array}{c}1\\ 0\end{array}\\ \begin{array}{c}1\\ 0\end{array}\\ \begin{array}{c}1\\ 0\end{array}\\ \begin{array}{c}1\\ 0\end{array}\\ \begin{array}{c}0\\ 0\end{array}\\ \begin{array}{c}1\\ 0\end{array}\\ \begin{array}{c}0\end{array}\\ \begin{array}{c}0\\ 0\end{array}\\ \begin{array}{c}1\\ 0\end{array}\\ \begin{array}{c}0\end{array}\\ \begin{array}{c}1\\ 0\end{array}\\ \begin{array}{c}0\end{array}\\ \begin{array}{c}0\end{array}\\ \begin{array}{c}0\end{array}\\ \begin{array}{c}1\\ 0\end{array}\\ \begin{array}{c}0\end{array}\\ \begin
and hence \(f\cong g\).
Next, consider the graph \(G=C_{4}\cup K_{1}\) labeled by \(\overline{f}\) as illustrated in Figure 5.
Then it is clear that \(\overline{f}\not\cong f\) and \(\overline{f}\not\cong g\), since
\[A\left(G_{\overline{f}}\right)=\begin{array}{c}2\begin{array}{ccccc}5&7&8&9 \\ 5\\ 7\\ 8\\ 9\end{array}\begin{bmatrix}0&0&0&0&0\\ 0&0&1&0&1\\ 0&1&0&1&0\\ 0&0&1&0&1\\ 0&1&0&1&0\end{bmatrix},\end{array}\]
\(A\left(G_{\overline{f}}\right)\neq A\left(G_{f}\right)\) and \(A\left(G_{\overline{f}}\right)\neq A\left(G_{g}\right)\).
**Theorem 3.1**.: _For every two positive integers \(n\) and \(l\), there exist exactly \(\left(l+1\right)\left(l+2\right)\) non-isomorphic super edge-magic labelings of \(K_{1,n}\cup lK_{1}\)._
Proof.: Let \(G=K_{1,n}\cup lK_{1}\), and define the graph \(G\) with
\[V\left(G\right)=\{x\}\cup\{y_{i}:i\in\left[1,n\right]\}\cup\{z_{i}:i\in\left[ 1,l\right]\}\]
and \(E\left(G\right)=\{xy_{i}:i\in\left[1,n\right]\}\). By Lemma 1.1, the set \(\{f\left(xy_{i}\right):i\in\left[1,n\right]\}\) is a set of \(n\) consecutive integers. Hence, if \(S=\{f\left(y_{i}\right):i\in\left[1,n\right]\}\) is a set of \(n\) consecutive integers, then \(S\) is \(\left[1,n\right]\text{or}\left[2,n+1\right]\) or \(\dots\) or \(\left[l+2,n+l+1\right]\). This gives us \(\left(l+2\right)\) possibilities for the labels of \(\{y_{i}:i\in\left[1,n\right]\}\) up to reordering. For each one of these possibilities, there are exactly \(\left(l+1\right)\) possible labels that have not been used. This means that these labels must be assigned to the vertices \(\{x\}\cup\{z_{i}:i\in\left[1,l\right]\}\). However, since \(\deg z_{i}=0\) for each \(i\in\left[1,l\right]\), we only need to concern about the label assigned to \(x\). Therefore, there are exactly \(\left(l+1\right)\) possible choices for \(f\left(x\right)\). This produces exactly \(\left(l+1\right)\left(l+2\right)\) non-isomorphic super edge-magic labelings of \(G\).
**Fact 1**.: _By the proof of the previous result, we know that if \(f\) is a super edge-magic labeling of \(G\), then the set \(\{f\left(y_{i}\right):i\in\left[1,n\right]\}\) is a set of \(n\) consecutive integers._
Let \(f\) be any super edge-magic labeling of \(G\), and assume, without loss of generality, that
\[f\left(y_{1}\right)<f\left(y_{2}\right)<\dots<f\left(y_{n}\right)\]
and
\[f\left(z_{1}\right)<f\left(z_{2}\right)<\dots<f\left(z_{n}\right).\]
Figure 5. An edge-magic labeling \(\overline{f}\) of \(G\)
For the complementary labeling of \(f\), we have
\[\overline{f}\left(y_{1}\right)>\overline{f}\left(y_{2}\right)>\cdots>\overline{f} \left(y_{n}\right)\]
and
\[\overline{f}\left(z_{1}\right)>\overline{f}\left(z_{2}\right)>\cdots>\overline{f }\left(z_{n}\right).\]
Let \(f\) be a super edge-magic labeling of \(K_{1,n}\cup lK_{1}\). By Lemma 1.1, we know that
\[\operatorname{val}\left(f\right) = f\left(x\right)+f\left(y_{1}\right)+2n+l+1\] \[= f\left(x\right)+f\left(y_{n}\right)+n+2.\]
Thus, both sums \(f\left(x\right)+f\left(y_{1}\right)\) and \(f\left(x\right)+f\left(y_{n}\right)\) perfectly determine the valence of the labeling \(f\).
From this, the following fact is clear.
**Fact 2**.: _Let \(f_{1}\) and \(f_{2}\) be two super edge-magic labelings of \(G\). Then_
\[\operatorname{val}\left(f_{1}\right)+1=\operatorname{val}\left(f_{2}\right)\]
_if and only if_
\[f_{1}\left(x\right)+f_{1}\left(y_{1}\right)+1=f_{2}\left(x\right)+f_{2}\left( y_{1}\right)\]
_if and only if_
\[f_{1}\left(x\right)+f_{1}\left(y_{n}\right)+1=f_{2}\left(x\right)+f_{2}\left( y_{n}\right).\]
Let \(f\) be any super edge-magic labeling of \(G\). Then \(\gamma\left(G\right)=f\left(x\right)+f\left(y_{1}\right)\) and \(\Gamma\left(G\right)=f\left(x\right)+f\left(y_{n}\right)\).
The fact that the set \(\left\{f\left(y_{i}\right):i\in\left[1,n\right]\right\}\) is a set of \(n\) consecutive integers for any super edge-magic labeling of \(G\) suggests the following definitions. Consider a super edge-magic labeling \(f\) of \(G\) and define the set \(S_{2}^{f}=\left\{f\left(y_{i}\right):i\in\left[1,n\right]\right\}\). Then
\[S_{1}^{f}=\left[1,f\left(y_{1}\right)\right]\text{ and }S_{3}^{f}=\left[1,n+l+1 \right]\setminus\left(S_{1}^{f}\cup S_{1}^{f}\right).\]
A super edge-magic labeling \(f\) of \(G\) is of type \(1\) and we write it as \(f\in T_{1}\) if \(f\left(x\right)\in S_{1}^{f}\), and it is of type \(2\) and we write it as \(f\in T_{2}\) if \(f\left(x\right)\in S_{3}^{f}\). From Fact 1, it is easy to deduce the following fact.
**Fact 3**.: _The set \(T_{1}\cup T_{2}\) is a partition of the set_
\[F\left(G\right)=\left\{\text{ $f$ : $f$ is a super edge-magic labeling of $G$ }\right\}.\]
We are now ready to state and prove the following theorem.
**Theorem 3.2**.: _There exists a bijective function between \(T_{1}\) and \(T_{2}\)._
Proof.: Consider the function \(\phi:T_{1}\to T_{2}\) defined by \(\phi\left(f\right)=\overline{f}\) for all \(f\in T_{1}\). First, we show that if \(f\in T_{1}\), then \(\overline{f}\in T_{2}\). If \(f\in T_{1}\), then
\[f\left(x\right)<f\left(y_{1}\right)<f\left(y_{2}\right)<\cdots<f\left(y_{n} \right),\]
implying that
\[\left(n+l+2\right)-f\left(x\right) >\left(n+l+2\right)-f\left(y_{1}\right)\] \[>\left(n+l+2\right)-f\left(y_{2}\right)>\cdots>\left(n+l+2\right) -f\left(y_{n}\right).\]
Thus,
\[\overline{f}\left(x\right)>\overline{f}\left(y_{1}\right)>\overline{f}\left(y _{2}\right)>\cdots>\overline{f}\left(y_{n}\right)\]
so that \(\overline{f}\in T_{2}\).
Next, assume that \(\left|\left\{f_{a},f_{b}\right\}\cap T_{1}\right|=2\), and we will show that \(\phi\left(f_{a}\right)\neq\phi\left(f_{b}\right)\). Assume that \(f_{a}\neq f_{b}\). Then we have two possibilities.
**Case 1.** If \(f_{a}\neq f_{b}\), then \(\left(n+l+2\right)-\phi\left(f_{a}\right)\neq\left(n+l+2\right)-\phi\left(f_{b}\right)\). Thus, \(\overline{f_{a}}\neq\overline{f_{b}}\) so that \(\phi\left(f_{a}\right)\neq\phi\left(f_{b}\right)\).
**Case 2.** If \(\left\{f_{a}\left(y_{i}\right):i\in\left[1,n\right]\right\}\neq\left\{f_{b} \left(y_{i}\right):i\in\left[1,n\right]\right\}\), then, without loss of generality, assume that
\[f_{a}\left(y_{1}\right)<f_{b}\left(y_{1}\right)\text{ and }f_{a}\left(y_{n} \right)<f_{b}\left(y_{n}\right).\]
Then
\[\left(n+l+2\right)-f_{a}\left(y_{1}\right)>\left(n+l+2\right)-f_{b}\left(y_{1}\right)\]
and
\[\left(n+l+2\right)-f_{a}\left(y_{n}\right)>\left(n+l+2\right)-f_{b}\left(y_{n }\right).\]
This implies that \(\overline{f_{a}}\left(y_{1}\right)>\overline{f_{b}}\left(y_{1}\right)\), which clearly implies that \(\overline{f_{a}}\left(y_{n}\right)>\overline{f_{b}}\left(y_{n}\right)\). Since
\[\max\{\overline{f_{a}}\left(y_{i}\right):i\in\left[1,n\right]\}>\max\{ \overline{f_{b}}\left(y_{i}\right):i\in\left[1,n\right]\}\]
and
\[\min\{\overline{f_{a}}\left(y_{i}\right):i\in\left[1,n\right]\}>\min\{ \overline{f_{b}}\left(y_{i}\right):i\in\left[1,n\right]\},\]
it follows that \(\phi\left(f_{a}\right)\neq\phi\left(f_{b}\right)\). Therefore, \(\phi\) is injective. It now remains to see that \(\phi\) is surjective. Let \(\overline{f}\in T_{2}\) and, without loss of generality, assume that \(\overline{f}\) has the property that
\[\overline{f}\left(x\right)>\overline{f}\left(y_{1}\right)>\cdots>\overline{f} \left(y_{n}\right),\]
that is,
\[\left(n+l+2\right)-\overline{f}\left(x\right)<\left(n+l+2\right)-\overline{f} \left(y_{1}\right)<\cdots<\left(n+l+2\right)-\overline{f}\left(y_{n}\right).\]
This implies that
\[\overline{\overline{f}}\left(x\right)<\overline{\overline{f}}\left(y_{1} \right)<\cdots<\overline{\overline{f}}\left(y_{n}\right).\]
Hence, \(\overline{\overline{f}}\in T_{1}\). Clearly, \(\phi\left(\overline{\overline{f}}\right)=\overline{f}\), and \(\phi\) is surjective. Therefore, \(\phi\) is bijective.
**Proposition 1**.: _Let \(f_{1}\) and \(f_{2}\) be two super edge-magic labelings of \(K_{1,n}\cup lK_{1}\) such that the valences of \(f_{1}\) and \(f_{2}\) are consecutive. Then the valences of \(\overline{f_{1}}\) and \(\overline{f_{2}}\) are consecutive._
Proof.: Assume that \(\operatorname{val}\left(f_{1}\right)\) and \(\operatorname{val}\left(f_{2}\right)\) are consecutive, and let \(\operatorname{val}\left(f_{1}\right)+1=\operatorname{val}\left(f_{2}\right)\). Then we have \(f_{1}\left(a\right)+f_{1}\left(b\right)+f_{1}\left(ab\right)+1=f_{2}\left(a \right)+f_{2}\left(b\right)+f_{2}\left(ab\right)\), where \(ab\in E\left(K_{1,n}\cup lK_{1}\right)\). If we let \(\omega=n+l+2\) and \(\rho=3n+2l+3\), then we have
\[-2\omega-\rho+f_{1}\left(a\right)+f_{1}\left(b\right)+f_{1}\left(ab\right)+1=-2 \omega-\rho+f_{2}\left(a\right)+f_{2}\left(b\right)+f_{2}\left(ab\right),\]
implying that
\[\left(\omega-f_{1}\left(a\right)\right)+\left(\omega-f_{1}\left(b \right)\right)+\left(\rho-f_{1}\left(ab\right)\right)-1\] \[= \left(\omega-f_{2}\left(a\right)\right)+\left(\omega-f_{2}\left(b \right)\right)+\left(\rho-f_{2}\left(ab\right)\right).\]
Hence, \(\overline{f_{1}}\left(a\right)+\overline{f_{1}}\left(b\right)+\overline{f_{1 }}\left(ab\right)-1=\overline{f_{2}}\left(a\right)+\overline{f_{2}}\left(b \right)+\overline{f_{2}}\left(ab\right)\). Thus, \(\operatorname{val}\left(\overline{f_{1}}\right)-1=\operatorname{val}\left( \overline{f_{2}}\right)\). Consequently, the valences of \(\overline{f_{1}}\) and \(\overline{f_{2}}\) are consecutive.
Next, consider the graph \(K_{1,n}\cup lK_{1}\) together with its associated set \(T_{1}\). Define the set \(S\left(T_{1}\right)\) as \(S\left(T_{1}\right)=\{\ x\in\mathbb{N}:x=\operatorname{val}\left(f\right) \text{ and }f\in T_{1}\ \}\). Also, consider the set \(T_{2}\) associated to \(K_{1,n}\cup lK_{1}\) and, similarly, define the set \(S\left(T_{2}\right)\) as \(S\left(T_{2}\right)=\{\ x\in\mathbb{N}:x=\operatorname{val}\left(f\right) \text{ and }f\in T_{2}\ \}\). With these definitions in hand, we have the following result.
**Theorem 3.3**.: _The set \(S\left(T_{1}\right)\) consists of consecutive integers._
Proof.: Let \(f\in T_{1}\), and consider \(\left(l+1\right)\) cases, depending on the possibilities for \(S_{1}^{f}\).
**Case 1.** Let \(S_{1}^{f}=\left\{1\right\}\). In this case, \(\min\left(S_{2}^{f}\right)=2\) and hence \(3\) is the low characteristic of \(f\).
**Case 2.** Let \(S_{1}^{f}=\left\{1,2\right\}\). In this case, \(\min\left(S_{2}^{f}\right)=3\) and hence the low characteristic of \(f\) is \(4\) or \(5\).
**Case 3.** Let \(S_{1}^{f}=\left\{1,2,3\right\}\). In this case, \(\min\left(S_{2}^{f}\right)=4\) and hence the low characteristic of \(f\) is \(5\), \(6\) or \(7\).
**Case 4.** Let \(S_{1}^{f}=\left\{1,2,3,4\right\}\). In this case, \(\min\left(S_{2}^{f}\right)=5\) and hence the low characteristic of \(f\) is \(6\), \(7\), \(8\) or \(9\).
\[\vdots\]
**Case \(\left(l+1\right)\).** Let \(S_{1}^{f}=\left\{1,2,\ldots,l+1\right\}\). In this case, \(\min\left(S_{2}^{f}\right)=l+2\) and hence the low characteristic of \(f\) is \(l+3\), \(l+4\),..., or \(2l+3\).
This means that the labelings of \(T_{1}\) have low characteristics \(3\), \(4\),..., \(2l+3\). Therefore, we conclude by Fact 2 that the set \(S\left(T_{1}\right)\) is a set of consecutive integers.
The following result is an immediate consequence of the preceding theorem and proposition.
**Corollary 3.1**.: _The set \(S\left(T_{2}\right)\) consists of consecutive integers._
Next, we will concentrate on the elements of \(S\left(T_{2}\right)\). We know that \(S\left(T_{2}\right)\) contains the valences of the complementary labelings of the super edge-magic labelings of \(T_{1}\). We have seen above that the low characteristics are \(3,4,\ldots,2l+3\). Thus, the high characteristics of the elements of \(T_{2}\) are \(\left(2n+2l+4\right)-3,\left(2n+2l+4\right)-4,\ldots,\left(2n+2l+4\right)- \left(2l+3\right)\). From these, we deduce that the minimum among all the low characteristics of these labelings is
\[\left(2n+2l+4\right)-\left(2l+3\right)=n+2.\]
Therefore, the only requirement needed for \(K_{1,n}\cup lK_{1}\) to be perfect super edge-magic is \(2l+3\geq n+2\).
**Theorem 3.4**.: _For every positive integer \(n\),_
\[\mu_{p}\left(K_{1,n}\right)<+\infty.\]
Proof.: Let \(G=K_{1,n}\cup lK_{1}\), and assume that \(G\) is perfect super edge-magic. Then it is trivial to observe that \(K_{1,n}\cup l^{\prime}K_{1}\) is perfect super edge-magic for any \(l^{\prime}\) with \(l^{\prime}\geq l\). Since \(G\) is perfect super edge-magic, it follows that the minimum valence of a super edge-magic labeling (and also the smallest possible valence for an edge-magic labeling of \(G\)) is computed as follows. First, observe that \(\left|V\left(G\right)\right|=n+l+1\) and \(\left|E\left(G\right)\right|=n\). Next, observe that the super edge-magic labeling with smallest valence has the following properties.
1. The vertex of \(G\) of degree \(n\) is labeled \(1\).
2. The labels of the vertices other than isolated vertices and edges are all the numbers in the set \(\left[2,2n\left(2n+1\right)\right]\).
With this knowledge in hand, we can compute the minimum possible valence \(\operatorname{val}_{\min}\left(G\right)\) of \(G\) as follows:
\[\operatorname{val}_{\min}\left(G\right)=\frac{n+\sum_{i=2}^{2n+1}i}{n}=2n+4.\]
Thus, the minimum possible valence of any edge-magic labeling of \(G\) is \(2n+4\). On the other hand, the maximum valence \(\operatorname{val}_{\max}\left(G\right)\) of \(G\) is given by
\[\operatorname{val}_{\max}\left(G\right)=\frac{\left(n-1\right)\left(n+l+1 \right)+\sum_{i=l+1}^{2n+l+1}i}{n}=3n+3l+3.\]
Hence, the maximum possible labeling of \(G\) is \(3n+3l+3\). Since \(G\) is perfect super edge-magic by assumption, it follows that for every \(\alpha\in[2n+4,3n+3l+3]\), there exists a super edge-magic labeling of \(G\) that has valence \(\alpha\). At this point, define the graph \(G\) with
\[V\left(G\right)=\left\{u_{i}:i\in[1,l]\right\}\cup\left\{v_{i}:i\in[n+l+1,2n+ l+1]\right.\]
and \(E\left(G\right)=\left\{v_{2n+l+1}v_{i}:i\in[n+l+1,2n+l]\right\}\), and consider the following \(l\) edge-magic labelings \(f_{0},f_{1},\ldots,f_{l-1}\). For each \(k\in[0,l-1]\), let \(f_{k}\left(v_{i}\right)=i\), where \(i\in[1,2n+l+1]\). Also, let \(f_{0}\left(v_{2n+l+1}v_{n+l+i}\right)=n+l+1-i\) for each \(i\in[1,n]\), and the labels not used go on the set of vertices \(\left\{u_{i}:i\in[1,l]\right\}\). Now, the labels on the edges assigned by each edge-magic labeling \(f_{k}\) (\(k\in[1,l]\)) are defined as \(f_{k}\left(ab\right)=f_{0}\left(ab\right)-k\) for \(ab\in E\left(G\right)\). Once again, the labels assigned by \(f_{k}\) to the vertices in the set \(\left\{u_{i}:i\in[1,l]\right\}\) are the labels not used for the rest of vertices and edges. These edge-magic labelings \(f_{0},f_{1},\ldots,f_{l-1}\) produce the valences \(4n+3l+2,4n+3l+1,\ldots,4n+2l+2\) (see the following example for labelings \(f_{0},f_{1},f_{2}\) depicted in Figure 6).
Finally, recall that all valences from \(2n+4\) to \(3n+3l+3\) are attained by super edge-magic labelings. Also, all valences from \(4n+2l+2\) to \(4n+3l+2\) are attained by edge-magic labelings. Therefore, if we take \(l\) large enough so that \(3n+3l+3\geq 4n+2l+2\). We conclude that \(G\) is perfect edge-magic and hence \(\mu_{p}\left(K_{1,n}\right)<+\infty\).
## 4. Conclusions and new research trends
The main goal of this paper is to study the valences of the edge-magic and super edge-magic labelings of graphs. This study started with the paper by Godbold and Slater [12] in which they conjectured that the cycles other than \(C_{5}\) are perfect edge-magic. This conjecture remains open, but in [20], and independently in [19] some substantial progress was made. For further information on this problem, the
Figure 6. Example for labelings \(f_{0},f_{1},f_{2}\) with minimum induced edge sums
interested reader may consult also [15] and [21]. In this paper, we have introduced the concepts of perfect edge-magic deficiency and perfect super edge-magic deficiency of graphs, and we have studied these concepts in relation to the star \(K_{1,n}\). In addition, we have presented results on the cardinality of the super edge-magic set of graphs with certain order, size and girth.
For future work, it is interesting to notice the following. Consider the cycle \(C_{3}\). It is clear that the super edge-magic interval for \(C_{3}\) is [9, 9].
Since \(C_{3}\) is super edge-magic, it follows that \(C_{3}\) is also perfect super edge-magic. Thus, \(\mu_{p}^{s}\left(G\right)=0\).
At this point, consider the graph \(C_{3}\cup K_{1}\). Since \(C_{3}=K_{3}\), it follows that there exist four non-isomorphic bijections of the form \(f:V\left(C_{3}\cup K_{1}\right)\rightarrow[1,4]\) (see Figure 7). From these four bijections, it is clear that only bijections \(A\) and \(B\) can be extended to a super edge-magic labeling. In case of \(A\), the labeling has valence \(10\) and in case of \(B\) the labeling has valence \(12\). Thus, there is not any super edge-magic labeling of \(C_{3}\cup K_{1}\) with valence \(11\), implying that \(C_{3}\cup K_{1}\) is not perfect super edge-magic.
From this, we can see that, on the contrary of when we deal with super edge-magic deficiency, that a graph \(G\) has perfect super edge-magic deficiency \(t\) so that \(G\cup tK_{1}\) is perfect super edge-magic and \(C_{3}\cup t^{\prime}K_{1}\) is not perfect super edge-magic if \(t^{\prime}<t\), it is not necessarily true that \(G\cup t^{\prime\prime}K_{1}\) is also perfect super edge-magic for \(t^{\prime\prime}>t\). This suggests the definition of strong perfect super edge-magic deficiency. The _strong perfect super edge-magic deficiency_ of a graph \(G\) is the minimum non-negative integer \(t\) such that \(G\cup t^{\prime\prime}K_{1}\) is perfect super edge-magic for all \(t^{\prime\prime}\geq t\). If such \(t\) does not exist, then the strong super edge-magic deficiency of \(G\) is defined to be \(+\infty\).
We suspect that something similar comes about in the case of perfect edge-magic deficiency and that similar concepts can be introduced in this case; however, we do not have examples at this point to support this claim. Other problems that we feel that would be interesting are problems of the following type.
**Problem 1**.: _For which real numbers \(x\)\((0<x<1)\), there exists a sequence of graphs \((G_{1}^{x},G_{2}^{x},\dots)\) such that \(\text{lim}_{n\rightarrow+\infty}\frac{|\tau_{G_{x}^{x}}|}{|\lambda_{G_{n}^{x} }|}=x\)?_
**Problem 2**.: _For which real numbers \(x\)\((0<x<1)\), there exists a sequence of graphs \((H_{1}^{x},H_{2}^{x},\dots)\) such that \(\text{lim}_{n\rightarrow+\infty}\frac{|\sigma_{H_{x}^{x}}|}{|I_{H_{x}^{x}}|}=x\)?_
In summary, we consider that the continuation of the ideas established in this paper constitute very interesting new trends for developing further research.
## Acknowledgement
The authors acknowledge Susana Clara Lopez Masip for her careful reading of this work and for her continuous support during the competition of this paper.
|
2310.08957 | Towards a compact soliton microcomb fully referenced on atomic reference | A fully stabilized soliton microcomb is critical for many applications of
optical frequency comb based on microresonators. However, the current
approaches for full frequency stabilization require either external
acousto-optic or electro-optic devices or auxiliary lasers and multiple
phase-locked loops, which compromises the convenience of the system. This study
explores a compact atomic referenced fully stabilized soliton microcomb that
directly uses a rubidium atomic optical frequency reference as the pump source,
and complements the repetition rate (7.3 GHz) of the soliton microcomb was
phase-locked to an atomic-clock-stabilized radio frequency (RF) reference by
mechanically tuning the resonance of the optical resonator. The results
demonstrate that the stability of the comb line (0.66 THz away from the pump
line) is consistent with that of the Rb87 optical reference, attaining a level
of approximately 4 Hz @100 s, corresponding to the frequency stability of 2E-14
@100 s. Furthermore,the frequency reproducibility of the comb line was
evaluated over six days and it was discovered that the standard deviation (SD)
of the frequency of the comb line is 10 kHz, resulting in a corresponding
absolute deviation uncertainty of 1.3E-10, which is technically limited by the
locking range of the soliton repetition rate. The proposed method gives a
low-power and compact solution for fully stabilized soliton micorcombs. | Mingfei Qu, Dou Li, Chenhong Li, Kangqi Liu, Weihang Zhu, Yuan Wei, Pengfei Wang, Songbai Kang | 2023-10-13T09:05:24Z | http://arxiv.org/abs/2310.08957v1 | # Towards a compact soliton microcomb fully referenced on atomic reference
###### Abstract
A fully stabilized soliton microcomb is critical for many applications of optical frequency comb based on microresonators. However, the current approaches for full frequency stabilization require either external acousto-optic or electro-optic devices or auxiliary lasers and multiple phase-locked loops, which compromises the convenience of the system. This study explores a compact atomic referenced fully stabilized soliton microcomb that directly uses a rubidium atomic optical frequency reference as the pump source, and complements the repetition rate (\(\sim\)7.3 GHz) of the soliton microcomb was phase-locked to an atomic-clock-stabilized radio frequency (RF) reference by mechanically tuning the resonance of the optical resonator. The results demonstrate that the stability of the comb line (\(\sim\)0.66 THz away from the pump line) is consistent with that of the \(Rb^{87}\) optical reference, attaining a level of approximately 4 Hz @100 s, corresponding to the frequency stability of \(\sim\)2\(\times\)10\({}^{-14}\) @100 s. Furthermore,the frequency reproducibility of the comb line was evaluated over six days and it was discovered that the standard deviation (SD) of the frequency of the comb line is 10 kHz, resulting in a corresponding absolute deviation uncertainty of \(\sim\)1.3\(\times\)10\({}^{-10}\), which is technically limited by the locking range of the soliton repetition rate. The proposed method gives a low-power and compact solution for fully stabilized soliton microcombs.
**Usage:** Secondary publications and information retrieval purposes.
## I Introduction
Soliton microcombs based on whispering gallery mode microresonators (WGMRs) have recently emerged as a low-power integrable solution for low-noise frequency comb applications. Solitons result from the dual-balance between the nonlinearity and dispersion of the resonators, as well as between the parametric gain and cavity loss [1]. They exhibit smooth and highly coherent envelope spectra, rendering them highly versatile in various fields, including optical ranging [2], low-phase-noise microwave generation [3], and dual-comb spectroscopy [4]. Among the numerous applications of soliton microcombs, obtaining a fully frequency stabilized microcomb laser source is crucial, particularly in fields such as optical atomic clocks [5] and optical frequency synthesis [6].
For the typical scheme, simultaneous stabilization of the carrier-envelope offset frequency (\(f_{ceo}\)) and repetition rate (\(f_{rep}\)) was achieved utilizing supercontinuum spectra and the \(f\)-2\(f\) self-referencing technique [7]. The soliton microcombs can be fully stabilized equivalently manner by locking the pump laser frequency (\(f_{p}\)) and repetition rate (\(f_{rep}\)), becasue the pump laser is among the frequency components of the microcomb. This is a compact full stabilization scheme that has been investigated in several studies [8; 9; 10]. Soliton microcombs based on optical microresonators exhibit ultra-small size and low-power consumption. However, the successful execution of a fully stabilized soliton microcomb typically requires the use of electro-optic and acousto-optic devices (EOM, AOM) [8; 9; 10] or an auxiliary laser and high-bandwidth optical phase locking loop [6; 7; 8]. This compromises the device's SWaP-C (size, weight and power, cost) advantage and increases the system complexity, which hinders the practical application of the microcomb as a "real" compact device.
Here, we demonstrate a compact scheme for a fully stabilized atom-referenced soliton microcomb. The proposed method utilizes a homemade \(MgF_{2}\) crystalline WGMR as a platform for generating soliton microcombs. In addition, the pump laser is directly locked to the rubidium atomic transition (5S-5D) [11]. The resonance mode frequency of the WGMR is purposely detuned through both thermal and mechanical means [12; 13] to initiate solitons, and maintain a stable soliton state for an extended period by utilizing an intracavity auxiliary mode to compensate for the thermal effects [14; 15]. Finally, a mechanically actuated method crystalline WGMR is used to stabilize the \(f_{rep}\) of the soliton to the radio frequency reference (H-MASER). Compared to approaches proposed in previous studies, the suggested method employs a laser directly interrogated to the \(Rb^{87}\) atomic frequency reference as the pump light for generating soliton microcombs, eliminating the necessity for an optical frequency phase-locking-loop [16]. Therefore, the dynamic coupling of \(f_{p}\) and \(f_{rep}\) with the parameters such as the pumping laser power and pumping resonance detuning [8] is prevented. Furthermore, the proposed method does not require any additional optoelectronic devices or auxiliary lasers for fully stabilizing the soliton microcombs. Consequently,
it presents a miniaturized solution for attaining full stability of the soliton microcomb and has the potential to be extended to other types of WGMRs-based soliton microcombs.
## II Experimental platform of soliton generation
Precision grinding and polishing technologies were employed to fabricate Z-cut \(MgF_{2}\) crystals for the WGMRs. The cavity's diameter is approximately 9 mm (FSR \(\sim\)7.3GHz), and it boasts a load-Q factor of \(\sim\)2\(\times\)10\({}^{9}\). The resonance frequency of the WGMR was detuned to trigger the soliton using a PZT glued via epoxy on the top of the resonator (depicted in Fig.1 (a)) for fast frequency detuning. At the same time, LED irradiation is used here to carry out a large-range (\(\sim\)10 GHz) of coarse tuning of the resonant frequency, which is used to retrieve the appropriate soliton mode. When a voltage is applied in the stretching direction of the PZT, it results in the PZT stretching along the Z axis. Owing to the Poisson effect, the force (perpendicular to the Z axis) causes mechanical deformation of the microresonator, resulting in a change the resonance frequency of the WGMR. Fig.1 (b) illustrates the result of the tuning efficiency of the resonator resonance frequency induced by 0-to-10 V voltage triangle-wave scan at a rate of 0.2 Hz. The resonance frequency was linearly adjusted within a range of 200 MHz and the extracted frequency linear tuning efficiency is approximately 20 MHz/V, and the maximum PZT-driven range can reached \(\sim\)3 GHz. The response rate of the PZT-driven resonance mode frequency is depicted in Fig.1 (c). Here, the resonator thermal self-stability method [17] is employed to determine the PZT-driven response rate \(S_{21}(\omega)\) of the resonance frequency [12] (where \(S_{21}\) is the frequency response of the electrical to optical signal transduction of the PZT, \(\omega\) is the modulation frequency). The mechanical resonance point of the system indicates that the driving bandwidth is approximately 40 kHz. The thermal effect of the cavity compensated for the amplitude of the response below 100 Hz.
Figure 1: Principle of piezoelectric control of the resonance frequencies of crystalline WGMRs. (a) Homemade \(MgF_{2}\) crystalline WGMR with a diameter of approximately 9 mm, corresponding to a free spectral range (FSR) of approximately 7.3 GHz. A piezoelectric transducer (PZT) is adhered to the upper surface of the WGMR, with the vibration direction aligned along the Z axis (indicated by the white arrow). (b) Observed resonance frequency shift induced by a 10 V triangular voltage scan at a rate of 0.2 Hz. (c) Electrical-to-optical signal transduction \(S_{21}(\omega)\) of the PZT actuator; thermal self-locking reduces the response amplitude at low frequencies (gray area). The arrow marks the mechanical resonance mode frequency at 40 kHz. (d) Transmission spectrum of two closely spaced resonances at low probe beam power. The loaded quality factor of the soliton generation mode is \(\sim\)2\(\times\)10\({}^{9}\), whereas that of the adjacent auxiliary mode is \(\sim\)4\(\times\)10\({}^{8}\). (e) Transmission of the two closely spaced resonances at a pump beam power of 130 mW, displaying a typical soliton step with a duration of milliseconds.

To achieve stable operation of solitons without relying on active control techniques (e.g., PDH locking [8], power stabilization [16], or auxiliary-laser thermal compensation [18; 19]), a nearby auxiliary mode resonance of the WGMR was employed to compensate for the thermal effect. Typically, the auxiliary mode must be specially designed to produce appropriate inter-mode interactions in single-mode on-chip resonators [15; 18]. However, in a millimeter-sized crystalline WGMR, the dense resonant mode spectrum provides several different inter-mode interactions that can supply the auxiliary mode for the soliton without deliberate design. Fig.1 (d) depicts the transmission spectrum of the WGMR soliton mode (Q \(\sim\)2\(\times\)10\({}^{9}\)) and the auxiliary mode under low probe beam power. The auxiliary mode has a loaded quality factor of \(\sim\)4\(\times\)10\({}^{8}\) (five times the linewidth of the soliton mode). It effectively extends the soliton step and improves the conversion efficiency of the soliton microcomb [19]. In this experiment, we scanned the resonance frequency by controlling the voltage applied to the PZT at a pump power of \(\sim\)130 mW. Fig.1 (e) illustrates the dynamics of soliton generation: a typical soliton step with a duration of milliseconds. The step time of the single-soliton state was significantly prolonged owing to the thermal compensation from the auxiliary mode, enabling us to obtain a long-term stable soliton microcomb without any additional optoelectronic devices or locking techniques. In the experiment, we acquired a single-soliton state through manual tuning. The conversion efficiency of the soliton comb reaches approximately 10%, considerably higher than that without the auxiliary mode.
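As a quick consistency check on the "five times the linewidth" statement, the quoted quality factors can be converted into linewidths via \(\Delta\nu=\nu/Q\). The sketch below assumes an optical carrier near the 1556 nm pump (i.e., \(\sim\)192.7 THz); this inferred carrier is our assumption, not a value stated for this measurement.

```python
# Linewidth check: delta_nu = nu / Q for the two resonances above.
# The ~192.7 THz carrier is inferred from the 1556 nm pump wavelength.
C = 299_792_458.0                 # speed of light, m/s
NU = C / 1556e-9                  # optical carrier frequency, ~192.7 THz

Q_SOLITON = 2e9                   # loaded Q of the soliton mode
Q_AUX = 4e8                       # loaded Q of the auxiliary mode

lw_soliton = NU / Q_SOLITON       # ~96 kHz
lw_aux = NU / Q_AUX               # ~482 kHz
print(f"soliton mode: {lw_soliton/1e3:.0f} kHz, "
      f"auxiliary mode: {lw_aux/1e3:.0f} kHz "
      f"(ratio {lw_aux/lw_soliton:.0f}x)")   # ratio ~5x, as stated
```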
## III Full stabilization of soliton microcomb
Figure 2: Schematic diagram of the fully stabilized atom-referenced soliton microcomb. (a) Experimental setup for generating and stabilizing the soliton. The components are: continuous-wave (CW) pump laser stabilized to the \(Rb^{87}\) optical transition, erbium-doped fiber amplifier (EDFA), fiber polarization controller (FPC), optical notch filter (ONF), optical spectrum analyzer (OSA), photodetector (PD), electrical spectrum analyzer (ESA), local oscillator (LO), proportional-integral-derivative (PID) controller, and frequency counter. (b) Optical spectrum of the soliton microcomb measured by the OSA after filtering out the pump light. The inset shows the radio-frequency signal of the soliton repetition rate at \(\sim\)7.3 GHz.

To realize a fully stabilized optical frequency comb, both \(f_{ceo}\) and \(f_{rep}\) must be independently controlled. A unique characteristic of the soliton microcomb is that the pump laser constitutes one of its teeth. The frequency of each optical component (\(f_{n}\)) of the soliton microcomb can be expressed as \(f_{n}=f_{p}+nf_{rep}\) (where \(n\) is the integer number of FSRs away from \(f_{p}\)). Therefore, full stabilization can be achieved by controlling \(f_{p}\) and \(f_{rep}\). The configuration for stabilizing the soliton microcomb is depicted in Fig.2 (a). To lock \(f_{p}\), the 1556 nm pump laser is frequency-doubled to 778 nm using a PPLN crystal and then used to probe the rubidium 5S-5D two-photon atomic transition within a millimeter-scale \(Rb^{87}\) vapor cell; the error signal is fed back to the laser to achieve locking (Fig.2 (a), green box). Such a two-photon optical frequency reference has demonstrated stability similar to that of an active hydrogen atomic clock in a MEMS vapor cell [20]. The frequency-locked laser is coupled directly to the optical microresonator through a tapered fiber as the pump source. The single-soliton state is triggered by detuning the resonator mode frequency and is stably sustained with the assistance of the auxiliary mode. Notably, in most previous studies, the soliton was generated by scanning the pump laser frequency; therefore, an extra optical phase-locked loop is required when the pump laser is locked to an optical frequency reference (atoms or an ultra-stable cavity) [6; 7; 8]. Fig.2 (b) shows a single-soliton spectrum with a smooth \(sech^{2}\) spectral envelope after filtering out the pump light (the 3 dB bandwidth of the spectrum is approximately 20 nm). The RF spectrum of the repetition-frequency signal (inset of Fig.2 (b)) has a signal-to-noise ratio of more than 50 dB, indicating that the soliton microcomb is highly coherent.
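Using the relation \(f_{n}=f_{p}+nf_{rep}\) above, the absolute frequency of any comb tooth follows directly from the two locked quantities. A minimal sketch (assuming the nominal 1556 nm pump wavelength and \(\sim\)7.3 GHz repetition rate; both are the nominal values quoted in the text):

```python
# Comb-tooth frequency f_n = f_p + n * f_rep (nominal values from the text).
C = 299_792_458.0         # speed of light, m/s
F_PUMP = C / 1556e-9      # pump tooth (n = 0), ~192.7 THz
F_REP = 7.3e9             # repetition rate, ~7.3 GHz

def tooth_frequency(n: int) -> float:
    """Absolute frequency (Hz) of the n-th comb tooth away from the pump."""
    return F_PUMP + n * F_REP

# The 91st tooth discussed below sits 91 * 7.3 GHz ~ 0.66 THz from the pump.
offset = tooth_frequency(91) - F_PUMP
print(f"offset of 91st tooth: {offset/1e12:.2f} THz")   # -> 0.66 THz
```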
Figure 3: (a) Allan deviation of the repetition frequency for a stabilized single-soliton state: pump frequency free running (black), pump frequency referenced to the \(Rb^{87}\) optical transition (red), and repetition frequency phase-locked to the RF reference (H-maser) (blue). The gray trace shows the expected in-loop noise stability of the phase-locked loop. (b) Measured phase noise of the beat signal in the soliton free-running and stabilized states. The locking bandwidth of the PZT actuator is 1 kHz.

Figure 4: (a) Coherence-transfer measurement from the optical reference (pump beam) to the 91st optical component of the microcomb. The solid black lines represent the optical components of the soliton microcomb, the red lines represent the fiber comb modes referenced to the ultra-stable cavity, and the green line is the optical reference laser locked to the ultra-stable cavity. The inset shows the time-domain frequency data. (b) Resulting Allan deviations of the ultra-stable cavity laser frequency (green), the pump laser frequency (blue), and the 91st tooth of the soliton microcomb (black).

The \(f_{rep}\) of the soliton microcomb was actively phase-locked to an RF atomic reference (H-maser) (yellow box in Fig.2 (a)). The in-loop noise of \(f_{rep}\) was measured using counter #3. Fig.3 (a) illustrates the Allan deviations of the soliton microcomb's \(f_{rep}\) in the free-running state (black trace), after the pump laser is referenced to the \(Rb^{87}\) optical transition frequency (red trace), and in the fully stabilized state (blue trace). The free-running stability of \(f_{rep}\) is maintained at a level of 10 Hz (\(\sim\)1\(\times\)10\({}^{-8}\)) from milliseconds to hundreds of seconds with the assistance of the auxiliary mode. The thermo-optic noise resulting from the phase noise of the pump beam is suppressed when the pump laser is locked to the \(Rb^{87}\) optical frequency reference; thus, the stability of \(f_{rep}\) improves to the \(\sim\)1\(\times\)10\({}^{-9}\) level around an averaging time of 1 s. On a time scale of hundreds of seconds, the thermal expansion of the resonator driven by environmental temperature fluctuations dominates the instability, reaching a level comparable to that without pump-beam frequency locking. Once phase-locked to the H-maser (yellow box in Fig.2 (a)), the in-loop frequency noise of \(f_{rep}\) (blue trace, detected at counter #3) is sufficiently suppressed, down to the level of 1\(\times\)10\({}^{-12}\) at 100 s. Because of the \(\Lambda\)-type averaging of the 53210A counters utilized, the in-loop frequency-noise data show a \(\sim\)\(\tau^{-1/2}\) dependence, whereas for an ideal phase-locked system the in-loop frequency noise should fall as \(\tau^{-1}\) (gray dotted line in Fig.3 (a)). Fig.3 (b) illustrates the typical phase-noise spectra of \(f_{rep}\) in the free-running and locked states, indicating a locking bandwidth of approximately 1 kHz. The soliton comb's \(f_{rep}\) can be effectively controlled by the resonator-pump detuning via the soliton self-frequency shift (SSFS) response [8]. However, the SSFS is negligible for millimeter-scale crystalline WGMRs. In this experiment, the bandwidth of the locking loop was limited by the thermo-optic time constant of the auxiliary mode [8].
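The Allan deviations quoted above follow the standard two-sample estimator \(\sigma_{y}^{2}(\tau)=\frac{1}{2}\langle(\bar{y}_{i+1}-\bar{y}_{i})^{2}\rangle\). Below is a minimal, self-contained sketch of this estimator (our own helper, not tied to any specific counter or analysis package); the synthetic white-frequency-noise input reproduces the \(\tau^{-1/2}\) slope discussed above.

```python
import numpy as np

def allan_deviation(y: np.ndarray, m: int) -> float:
    """Non-overlapping Allan deviation of fractional-frequency data y,
    averaged over m consecutive samples (tau = m * sample interval)."""
    n = len(y) // m
    yb = y[: n * m].reshape(n, m).mean(axis=1)         # averaged bins
    return float(np.sqrt(0.5 * np.mean(np.diff(yb) ** 2)))

# White frequency noise gives the sigma_y ~ tau^(-1/2) slope seen on the
# counter, while an ideal phase-locked loop would approach tau^(-1).
rng = np.random.default_rng(0)
y = 1e-12 * rng.standard_normal(100_000)               # synthetic 1 s samples
for m in (1, 10, 100):
    print(f"tau = {m:>3d} s : sigma_y = {allan_deviation(y, m):.2e}")
```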
The stability of the optical components of a fully stabilized microcomb is crucial for practical applications. Using our approach, the stability of the 91st comb tooth of the stabilized soliton microcomb, located \(\sim\)0.66 THz away from the pump line, was measured to demonstrate the highly coherent transfer from the atomic optical reference (the pump beam) to all other optical components of the microcomb, as depicted in Fig.4 (a). Here, the optical reference was a cavity-stabilized ultra-stable laser. The beat note (\(f_{beat1}\)) between the atomic optical reference and a fiber comb referenced to the same ultra-stable cavity was measured. The stabilities of the ultra-stable laser and the referenced fiber comb modes were well below the 1\(\times\)10\({}^{-14}\) level on short time scales but degraded beyond 10 s owing to the drift of the ultra-stable cavity (green trace in Fig.4 (b)). The stability of the 91st comb line (black trace) was \(\sigma_{y}(\tau)=1\times 10^{-13}\tau^{-1/2}\) (0.1-30 s), and it maintained a frequency level of 2\(\times\)10\({}^{-14}\) (approximately 4 Hz) from 30 s to 100 s, consistent with the performance of the \(Rb^{87}\) optical reference (blue trace in Fig.4 (b)). These results indicate that all the microcomb mode frequencies are as stable as the atomic optical reference, attaining a level below 10 Hz.
In addition to frequency stability, repeatability is a crucial factor for stabilized soliton microcombs. The \(Rb^{87}\) two-photon optical reference and the soliton microcomb were restarted and restabilized each day over six days, and the absolute frequency of the optical tooth (including the \(Rb^{87}\) optical reference) was counted against a fiber comb referenced to an H-maser to assess the day-to-day repeatability of the 91st tooth of the stabilized comb. Fig.5 presents the day-to-day absolute frequency results of the system. The maximum daily frequency variation of the comb tooth was \(\sim\)25 kHz (between day 2 and day 6), and the standard deviation (SD) of the frequency deviation was 10 kHz. The fractional uncertainty corresponding to this absolute deviation is \(\sim\)1.3\(\times\)10\({}^{-10}\). The \(Rb^{87}\) optical reference itself exhibits good repeatability of \(\sim\)2\(\times\)10\({}^{-13}\) (standard deviation) over the six-day turn-on/turn-off measurements. We found that the repeatability of \(f_{rep}\) after triggering and before locking was poor owing to environmental perturbations and variations in the pump laser power; its SD was approximately 100 Hz (\(\sim\)1.4\(\times\)10\({}^{-8}\)). This deviation far exceeded the locking range when the PZT was used as the only actuator. Therefore, we chose to adjust the absolute frequency of the RF reference to achieve the locking of \(f_{rep}\), which degraded the daily repeatability of \(f_{rep}\). This technical issue of the limited locking range can be addressed by improving the stability of the pump beam power or the environmental temperature, and/or by actively and precisely tuning the resonator's temperature to increase the locking range.
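The repeatability figures above reduce to simple statistics over the daily frequency records. The sketch below uses hypothetical daily offsets, chosen only to reproduce a \(\sim\)25 kHz spread of the same magnitude as reported; they are not the measured data.

```python
import numpy as np

# Hypothetical daily frequency offsets (Hz) of the 91st comb tooth,
# chosen only to mimic the ~25 kHz maximum spread reported in the text.
daily_offsets_hz = np.array([0.0, 12e3, -5e3, 4e3, 8e3, -13e3])

F_TOOTH = 192.7e12 + 0.66e12         # absolute tooth frequency, ~193.4 THz

spread = daily_offsets_hz.max() - daily_offsets_hz.min()   # ~25 kHz
sd = daily_offsets_hz.std(ddof=1)                          # sample SD, ~9 kHz
fractional = spread / F_TOOTH                              # ~1.3e-10

print(f"max day-to-day spread:  {spread/1e3:.0f} kHz")
print(f"standard deviation:     {sd/1e3:.1f} kHz")
print(f"fractional uncertainty: {fractional:.1e}")
```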
## IV Conclusion
We have explored a solution for a compact, fully stabilized, atom-referenced soliton microcomb. The optical tooth of the stabilized microcomb \(\sim\)0.66 THz away from the pump line demonstrated an out-of-loop stability of \(\sigma_{y}(\tau)=1\times 10^{-13}\tau^{-1/2}\) (0.1-30 s) with a floor of \(\sim\)2\(\times\)10\({}^{-14}\) at 100 s, consistent with the stability performance of the pump beam, and a day-to-day repeatability of \(\sim\)10 kHz, which is currently technically limited by the locking range. To the best of our knowledge, these are the best reported stability and accuracy results for an atom-referenced Kerr microcomb. If the locking range is extended, for instance by actively tuning the resonator temperature as discussed above, the repeatability is expected to improve further.

Figure 5: Measurement of frequency repeatability. The upper panel shows the absolute frequency of the 91st comb tooth measured after the soliton microcomb was fully stabilized each day; the inset shows the distribution histogram of the counter-sampled frequency data. The lower panel presents the absolute frequency of the pump light locked daily to the \(Rb^{87}\) two-photon transition. The value in parentheses is the standard deviation corresponding to the 1 s stability.